Tag Archives: Computing/Software

Beyond Bitcoin: China’s Surveillance Cash

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/software/beyond-bitcoin-chinas-surveillance-cash

Of all the technological revolutions we’ll live through in this next decade, none will be more fundamental or pervasive than the transition to digital cash. Money touches nearly everything we do, and although all those swipes and taps and PIN-entry moments may make it seem as though cash is already digital, all of that tech merely eases access to our bank accounts. Cash remains stubbornly physical. But that’s soon going to change.

As with so much connected to digital payments, the Chinese got there first. Prompted by the June 2019 public announcement of Facebook’s Libra (now Diem)—the social media giant’s private form of digital cash—the People’s Bank of China unveiled Digital Currency/Electronic Payments, or DCEP. Having rolled out the new currency to tens of millions of users who now hold digital yuan in electronic wallets, the bank expects that by the time Beijing hosts the 2022 Winter Olympics, DCEP will be in widespread use across what some say is the world’s biggest economy. Indeed, the bank confidently predicts that by the end of the decade digital cash will displace nearly all of China’s banknotes.

Unlike the anarchic and highly sought-after Bitcoin—whose creation marked the genesis of digital cash—DCEP gives up all its secrets. The People’s Bank records every DCEP transaction of any size across the entire Chinese economy, enabling a form of economic surveillance that was impossible to conceive of before the advent of digital cash but is now baked into the design of this new kind of money.

China may have been first, but nearly every other major national economy has its central bankers researching their own forms of digital cash. That makes this an excellent moment to consider the tensions at the intersection of money, technology, and society.

Nearly every country tracks the movement of large sums of money in an effort to thwart terrorism and tax evasion. But most nations have been content to preserve the anonymity of cash when the amount conveyed falls below some threshold. Will that still be the case in 2030, or will our money watch us as we spend it? Just because we can use the technology to create an indelible digital record of our every transaction, should we? And if we do, who gets to see that ledger?

Digital cash also means that high-tech items will rapidly become bearers of value. We’ll have more than just smartphone-based wallets: Our automobiles will pay for their own bridge tolls or the watts they gulp to charge their batteries. It also means that such a car can be paid directly if it returns some of those electrons to the grid at times of peak demand. Within this decade, most of our devices could become fully transactional, not just in exchanging data, but also as autonomous financial entities.

Pervasive digital cash will demand a new ecosystem of software services to manage it. Here we run headlong into a fundamental challenge: You can claw back a mistaken or fraudulent credit card transaction with a charge-back, but a cash transfer is forever, whether that’s done by exchanging bills or bytes. That means someone’s buggy code will cost someone else real money. Ever-greater attention will have to be paid to testing, before the code managing these transactions is deployed. Youthful cries of “move fast and break things” ring hollow in a more mature connected world where software plays such a central role in the economy.

This article appears in the February 2021 print issue as “Surveillance Cash.”

Why Aren’t COVID Tracing Apps More Widely Used?

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/why-arent-covid-tracing-apps-more-widely-used

As the COVID-19 pandemic began to sweep around the globe in early 2020, many governments quickly mobilized to launch contact tracing apps to track the spread of the virus. If enough people downloaded and used the apps, it would be much easier to identify people who had potentially been exposed. In theory, contact tracing apps could play a critical role in stemming the pandemic.

In reality, adoption of contact tracing apps by citizens was largely sporadic and unenthusiastic. A trio of researchers in Australia decided to explore why contact tracing apps weren’t more widely adopted. Their results, published on 23 December in IEEE Software, emphasize the importance of social factors such as trust and transparency.

Muneera Bano is a senior lecturer of software engineering at Deakin University, in Melbourne. Bano and her co-authors study human aspects of technology adoption. “Coming from a socio-technical research background, we were intrigued initially to study the contact tracing apps when the Australian Government launched the CovidSafe app in April 2020,” explains Bano. “There was a clear resistance from many citizens in Australia in downloading the app, citing concerns regarding trust and privacy.”

To better understand the satisfaction—or dissatisfaction—of app users, the researchers analyzed data from the Apple and Google app stores. At first, they looked at average star ratings and download numbers, and they conducted a sentiment analysis of app reviews.
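As a rough illustration of this kind of review mining (my own sketch with invented reviews, not the authors’ actual pipeline), the snippet below scores a few hypothetical app-store reviews with NLTK’s off-the-shelf VADER analyzer and averages the results per app.

```python
# A minimal sentiment-scoring sketch, assuming NLTK is installed.
# The reviews are invented examples, not data from the study.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

reviews = {
    "CovidSafe": [
        "Drains my battery and Bluetooth keeps dropping out.",
        "Easy to install, but I have no idea whether it actually works.",
    ],
    "Corona-Warn-App": [
        "Open source and transparent, which convinced me to install it.",
        "Keeps crashing after the latest update.",
    ],
}

sia = SentimentIntensityAnalyzer()
for app, texts in reviews.items():
    # VADER's "compound" score runs from -1 (very negative) to +1 (very positive).
    scores = [sia.polarity_scores(t)["compound"] for t in texts]
    print(f"{app}: mean sentiment = {sum(scores) / len(scores):+.2f}")
```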

However, just because a person downloads an app doesn’t guarantee that they will use it. What’s more, Bano’s team found that sentiment scores—which are often indicative of an app’s popularity, success, and adoption—were not an effective means for capturing the success of COVID-19 contact tracing apps.

“We started to dig deeper into the reviews to analyze the voices of users for particular requirements of these apps. More or less all the apps had issues related to the Bluetooth functionality, battery consumption, reliability and usefulness during pandemic.”

For example, apps that relied on Bluetooth for tracing had issues related to range, proximity, signal strength, and connectivity. A significant number of users also expressed frustration over battery drainage. Some efforts have been made to address this issue; for example, Singapore launched an updated version of its TraceTogether app that allows it to operate with Bluetooth while running in the background, with the goal of improving battery life.

But technical issues were just one reason for the lack of adoption. Bano emphasizes: “The major issues around the apps were social in nature, [related to] trust, transparency, security, and privacy.”

In particular, the researchers found that resistance to downloading and using the apps was high in countries with a voluntary adoption model and low levels of trust in government, such as Australia, the United Kingdom, and Germany.

“We observed slight improvement only in the case of Germany because the government made sincere efforts to increase trust. This was achieved by increasing transparency during ‘Corona-Warn-App’ development by making it open source from the outset and by involving a number of reputable organizations,” says Bano. “However, even as the German officials were referring to their contact tracing app as the ‘best app’ in the world, Germany was struggling to avoid the second wave of COVID-19 at the time we were analyzing the data, in October 2020.”

In some cases, people were hesitant to adopt the apps even when governments and app developers took measures to improve trust and address privacy concerns. For example, a Canadian contact tracing app called COVID Alert is open source, requires no identifiable information from users, and deletes all data after 14 days. Nevertheless, a survey of Canadians found that two-thirds would not download any contact tracing app because it was still “too invasive.” (The survey covered tracing apps in general and was not specific to the COVID Alert app.)

Bano plans to continue studying how politics and culture influence the adoption of these apps in different countries around the world. She and her colleagues are interested in exploring how contact tracing apps can be made more inclusive for diverse groups of users in multi-cultural countries.

It’s Too Easy to Hide Bias in Deep-Learning Systems

Post Syndicated from Matthew Hutson original https://spectrum.ieee.org/computing/software/its-too-easy-to-hide-bias-in-deeplearning-systems

If you’re on Facebook, click on “Why am I seeing this ad?” The answer will look something like “[Advertiser] wants to reach people who may be similar to their customers” or “[Advertiser] is trying to reach people ages 18 and older” or “[Advertiser] is trying to reach people whose primary location is the United States.” Oh, you’ll also see “There could also be more factors not listed here.” Such explanations started appearing on Facebook in response to complaints about the platform’s ad-placing artificial intelligence (AI) system. For many people, it was their first encounter with the growing trend of explainable AI, or XAI. 

But something about those explanations didn’t sit right with Oana Goga, a researcher at the Grenoble Informatics Laboratory, in France. So she and her colleagues coded up AdAnalyst, a browser extension that automatically collects Facebook’s ad explanations. Goga’s team also became advertisers themselves. That allowed them to target ads to the volunteers they had running AdAnalyst. The result: “The explanations were often incomplete and sometimes misleading,” says Alan Mislove, one of Goga’s collaborators at Northeastern University, in Boston.

When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. “You can select people who are interested in football, and they live in Cote d’Azur, and they were at this college, and they also like drinking,” Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that’s because Facebook doesn’t want to appear creepy; the company declined to comment for this article, so it’s hard to be sure.

Google and Twitter ads include similar explanations. All three platforms are probably hoping to allay users’ suspicions about the mysterious advertising algorithms they use with this gesture toward transparency, while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see—the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. In this case, what’s being revealed is why the algorithm chose a particular ad to show you.

The world around us is increasingly choreographed by such algorithms. They decide what advertisements, news, and movie recommendations you see. They also help to make far more weighty decisions, determining who gets loans, jobs, or parole. And in the not-too-distant future, they may decide what medical treatment you’ll receive or how your car will navigate the streets. People want explanations for those decisions. Transparency allows developers to debug their software, end users to trust it, and regulators to make sure it’s safe and fair.

The problem is that these automated systems are becoming so frighteningly complex that it’s often very difficult to figure out why they make certain decisions. So researchers have developed algorithms for understanding these decision-making automatons, forming the new subfield of explainable AI.

In 2017, the Defense Advanced Research Projects Agency launched a US $75 million XAI project. Since then, new laws have sprung up requiring such transparency, most notably Europe’s General Data Protection Regulation, which stipulates that when organizations use personal data for “automated decision-making, including profiling,” they must disclose “meaningful information about the logic involved.” One motivation for such rules is a concern that black-box systems may be hiding evidence of illegal, or perhaps just unsavory, discriminatory practices.

As a result, XAI systems are much in demand. And better policing of decision-making algorithms would certainly be a good thing. But even if explanations are widely required, some researchers worry that systems for automated decision-making may appear to be fair when they really aren’t fair at all.

For example, a system that judges loan applications might tell you that it based its decision on your income and age, when in fact it was your race that mattered most. Such bias might arise from correlations in the data used to train the AI, but protected attributes like race must be excluded from decision-making algorithms lest they perpetuate the unfair practices of the past.

The challenge is how to root out such unfair forms of discrimination. While it’s easy to exclude information about an applicant’s race or gender or religion, that’s often not enough. Research has shown, for example, that job applicants with names that are common among African Americans receive fewer callbacks, even when they possess the same qualifications as someone else.

A computerized résumé-screening tool might well exhibit the same kind of racial bias, even if applicants were never presented with checkboxes for race. The system may still be racially biased; it just won’t “admit” to how it really works, and will instead provide an explanation that’s more palatable.

Regardless of whether the algorithm explicitly uses protected characteristics such as race, explanations can be specifically engineered to hide problematic forms of discrimination. Some AI researchers describe this kind of duplicity as a form of “fairwashing”: presenting a possibly unfair algorithm as being fair.

 Whether deceptive systems of this kind are common or rare is unclear. They could be out there already but well hidden, or maybe the incentive for using them just isn’t great enough. No one really knows. What’s apparent, though, is that the application of more and more sophisticated forms of AI is going to make it increasingly hard to identify such threats.

No company would want to be perceived as perpetuating antiquated thinking or deep-rooted societal injustices. So a company might hesitate to share exactly how its decision-making algorithm works to avoid being accused of unjust discrimination. Companies might also hesitate to provide explanations for decisions rendered because that information would make it easier for outsiders to reverse engineer their proprietary systems. Cynthia Rudin, a computer scientist at Duke University, in Durham, N.C., who studies interpretable machine learning, says that the “explanations for credit scores are ridiculously unsatisfactory.” She believes that credit-rating agencies obscure their rationales intentionally. “They’re not going to tell you exactly how they compute that thing. That’s their secret sauce, right?”

And there’s another reason to be cagey. Once people have reverse engineered your decision-making system, they can more easily game it. Indeed, a huge industry called “search engine optimization” has been built around doing just that: altering Web pages superficially so that they rise to the top of search rankings.

Why then are some companies that use decision-making AI so keen to provide explanations? Umang Bhatt, a computer scientist at the University of Cambridge, and his collaborators interviewed 50 scientists, engineers, and executives at 30 organizations to find out. They learned that some executives had asked their data scientists to incorporate explainability tools just so the company could claim to be using transparent AI. The data scientists weren’t told whom this was for, what kind of explanations were needed, or why the company was intent on being open. “Essentially, higher-ups enjoyed the rhetoric of explainability,” Bhatt says, “while data scientists scrambled to figure out how to implement it.”

The explanations such data scientists produce come in all shapes and sizes, but most fall into one of two categories: explanations for how an AI-based system operates in general and explanations for particular decisions. These are called, respectively, global and local explanations. Both can be manipulated.

Ulrich Aïvodji at the Université du Québec, in Montreal, and his colleagues showed how global explanations can be doctored to look better. They used an algorithm they called (appropriately enough for such fairwashing) LaundryML to examine a machine-learning system whose inner workings were too intricate for a person to readily discern. The researchers applied LaundryML to two challenges often used to study XAI. The first task was to predict whether someone’s income is greater than $50,000 (perhaps making the person a good loan candidate), based on 14 personal attributes. The second task was to predict whether a criminal will re-offend within two years of being released from prison, based on 12 attributes.

Unlike the algorithms typically applied to generate explanations, LaundryML includes certain tests of fairness, to make sure the explanation—a simplified version of the original system—doesn’t prioritize such factors as gender or race to predict income and recidivism. Using LaundryML, these researchers were able to come up with simple rule lists that appeared much fairer than the original biased system but gave largely the same results. The worry is that companies could proffer such rule lists as explanations to argue that their decision-making systems are fair.
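The trade-off at the heart of that result can be shown with a toy example. The sketch below is my own illustration (synthetic data and scikit-learn, not LaundryML itself): a deliberately biased “black box” is imitated by a shallow surrogate tree that never sees the sensitive attribute, and the script prints the surrogate’s fidelity to the black box alongside each model’s demographic-parity gap, the two quantities a fairwashing search plays against each other.

```python
# Toy illustration of the fidelity-versus-fairness tension behind "fairwashing".
# Synthetic data and ordinary scikit-learn models; not the LaundryML algorithm.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
s = rng.integers(0, 2, n)                    # sensitive attribute (protected group)
income = rng.normal(50 + 10 * s, 15, n)      # proxy feature correlated with s
age = rng.normal(40, 12, n)

# A deliberately biased "black box": its decision leans directly on s.
black_box = (income + 20 * s + rng.normal(0, 5, n) > 65).astype(int)

def parity_gap(pred, s):
    """Absolute difference in approval rates between the two groups."""
    return abs(pred[s == 1].mean() - pred[s == 0].mean())

# Surrogate "explanation": a shallow rule list (here a depth-2 tree) trained
# without access to s, fitted to imitate the black box's outputs.
X_no_s = np.column_stack([income, age])
surrogate = DecisionTreeClassifier(max_depth=2).fit(X_no_s, black_box)
mimic = surrogate.predict(X_no_s)

print(f"surrogate fidelity to black box: {(mimic == black_box).mean():.2%}")
print(f"black-box parity gap: {parity_gap(black_box, s):.2f}")
print(f"surrogate parity gap: {parity_gap(mimic, s):.2f}")
```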

Another way to explain the overall operations of a machine-learning system is to present a sampling of its decisions. Last February, Kazuto Fukuchi, a researcher at the Riken Center for Advanced Intelligence Project, in Japan, and two colleagues described a way to select a subset of previous decisions such that the sample would look representative to an auditor who was trying to judge whether the system was unjust. But the craftily selected sample met certain fairness criteria that the overall set of decisions did not.

Organizations need to come up with explanations for individual decisions more often than they need to explain how their systems work in general. One technique relies on something XAI researchers call attention, which reflects the relationship between parts of the input to a decision-making system (say, single words in a résumé) and the output (whether the applicant appears qualified). As the name implies, attention values are thought to indicate how much the final judgment depends on certain attributes. But Zachary Lipton of Carnegie Mellon and his colleagues have cast doubt on the whole concept of attention.

These researchers trained various neural networks to read short biographies of physicians and predict which of these people specialized in surgery. The investigators made sure the networks would not allocate attention to words signifying the person’s gender. An explanation that considers only attention would then make it seem that these networks were not discriminating based on gender. But oddly, if words like “Ms.” were removed from the biographies, accuracy suffered, revealing that the networks were, in fact, still using gender to predict the person’s specialty.
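The spirit of that check can be reproduced with a simple ablation test (toy synthetic bios below, not the study’s corpus or models): train a text classifier, strip the gendered tokens, and see whether accuracy drops. If it does, the model was leaning on gender, whatever its attention values suggest.

```python
# Toy ablation test: does accuracy drop when gendered words are removed?
# Synthetic bios with an artificial gender-label correlation, for illustration only.
import re
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
GENDER_WORDS = {"mr", "ms", "he", "she"}

def make_bio(label):
    # Gender terms correlate strongly with the label; the rest of the text is
    # only weakly informative. That is the shortcut we want to detect.
    pronoun = ("Mr", "He") if (label == 1) == (rng.random() < 0.9) else ("Ms", "She")
    verb = "operates" if label == 1 and rng.random() < 0.6 else "practices"
    return f"{pronoun[0]} Lee {verb} at a city hospital. {pronoun[1]} trained for years."

y = rng.integers(0, 2, 2000)
bios = [make_bio(label) for label in y]

vec = CountVectorizer()
X = vec.fit_transform(bios)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def strip_gender(text):
    return " ".join(w for w in re.findall(r"\w+", text.lower()) if w not in GENDER_WORDS)

masked = vec.transform([strip_gender(b) for b in bios])
print(f"accuracy with gendered words:    {clf.score(X, y):.2f}")       # scored on the
print(f"accuracy without gendered words: {clf.score(masked, y):.2f}")  # training set, for brevity
```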

“What did the attention tell us in the first place?” Lipton asks. The lack of clarity about what the attention metric actually means opens space for deception, he argues.

Johannes Schneider at the University of Liechtenstein and others recently described a system that examines a decision it made, then finds a plausible justification for an altered (incorrect) decision. Classifying Internet Movie Database (IMDb) film reviews as positive or negative, a faithful model categorized one review as positive, explaining itself by highlighting words like “enjoyable” and “appreciated.” But Schneider’s system could label the same review as negative and point to words that seem scolding when taken out of context.

Another way of explaining an automated decision is to use a technique that researchers call input perturbation. If you want to understand which inputs caused a system to approve or deny a loan, you can create several copies of the loan application with the inputs modified in various ways. Maybe one version ascribes a different gender to the applicant, while another indicates slightly different income. If you submit all of these applications and record the judgments, you can figure out which inputs have influence.
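The probe itself is only a few lines of code. In the sketch below, `loan_model` is a made-up stand-in for an opaque lender’s system (not any real product); the loop perturbs one field at a time and counts how often the decision flips, which quickly exposes the hidden weight given to gender.

```python
# Input-perturbation probe: vary one field at a time and watch the decision.
# `loan_model` is a hypothetical black box invented for this illustration.
import random

def loan_model(app):
    # Secretly penalizes one group; the probe below can expose this.
    score = app["income"] / 1000 - app["age"] * 0.05
    if app["gender"] == "female":
        score -= 20
    return "approve" if score > 30 else "deny"

base = {"income": 48000, "age": 35, "gender": "female"}
perturbations = {
    "income": lambda a: {**a, "income": a["income"] + random.randint(-5000, 5000)},
    "age":    lambda a: {**a, "age": a["age"] + random.randint(-5, 5)},
    "gender": lambda a: {**a, "gender": "male" if a["gender"] == "female" else "female"},
}

baseline = loan_model(base)
for field, perturb in perturbations.items():
    flips = sum(loan_model(perturb(base)) != baseline for _ in range(1000))
    print(f"{field:>6}: decision changed in {flips / 10:.1f}% of perturbed copies")
```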

That could provide a reasonable explanation of how some otherwise mysterious decision-making systems work. But a group of researchers at Harvard University led by Himabindu Lakkaraju have developed a decision-making system that detects such probing and adjusts its output accordingly. When it is being tested, the system remains on its best behavior, ignoring off-limits factors like race or gender. At other times, it reverts to its inherently biased approach. Sophie Hilgard, one of the authors on that study, likens the use of such a scheme, which is so far just a theoretical concern, to what Volkswagen actually did to detect when a car was undergoing emissions tests, temporarily adjusting the engine parameters to make the exhaust cleaner than it would normally be.
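That scheme can be caricatured in a few lines. The check for “probe-like” inputs below is a crude heuristic of my own (round numbers only), meant just to show the shape of the deception, not how the Harvard system actually detects audits.

```python
# Caricature of an audit-detecting decision system: behave fairly when the
# input looks like a synthetic probe, otherwise fall back on the biased rule.
def looks_like_probe(app):
    # Invented heuristic: perturbed audit copies often carry suspiciously
    # round values. Real detection methods would be far subtler.
    return app["income"] % 1000 == 0 and app["age"] % 5 == 0

def biased_score(app):
    penalty = 20 if app["gender"] == "female" else 0
    return app["income"] / 1000 - penalty

def deceptive_model(app):
    if looks_like_probe(app):
        return "approve" if app["income"] / 1000 > 30 else "deny"   # fair facade
    return "approve" if biased_score(app) > 30 else "deny"          # real behavior

print(deceptive_model({"income": 45000, "age": 35, "gender": "female"}))  # probe-like -> approve
print(deceptive_model({"income": 45123, "age": 37, "gender": "female"}))  # organic -> deny
```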

Another way of explaining a judgment is to output a simple decision tree: a list of if-then rules. The tree doesn’t summarize the whole algorithm, though; instead it includes only the factors used to make the one decision in question. In 2019, Erwan Le Merrer and Gilles Trédan at the French National Center for Scientific Research described a method that constructs these trees in a deceptive way, so that they could explain a credit rating in seemingly objective terms, while hiding the system’s reliance on the applicant’s gender, age, and immigration status.
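For reference, this is what such a rule-list explanation looks like when generated honestly: a shallow tree fitted to synthetic credit data (invented for this sketch) and printed as if-then rules. The deception Le Merrer and Trédan describe lies in how the tree is constructed, not in the output format.

```python
# Printing a credit decision as if-then rules with a shallow tree (toy data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 2000
income = rng.normal(45, 15, n)            # thousands per year, synthetic
debt_ratio = rng.uniform(0, 0.8, n)
approved = ((income > 40) & (debt_ratio < 0.4)).astype(int)

X = np.column_stack([income, debt_ratio])
tree = DecisionTreeClassifier(max_depth=2).fit(X, approved)
print(export_text(tree, feature_names=["income_k", "debt_ratio"]))
```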

Whether any of these deceptions have or ever will be deployed is an open question. Perhaps some degree of deception is already common, as in the case for the algorithms that explain how advertisements are targeted. Schneider of the University of Liechtenstein says that the deceptions in place now might not be so flagrant, “just a little bit misguiding.” What’s more, he points out, current laws requiring explanations aren’t hard to satisfy. “If you need to provide an explanation, no one tells you what it should look like.”

Despite the possibility of trickery in XAI, Duke’s Rudin takes a hard line on what to do about the potential problem: She argues that we shouldn’t depend on any decision-making system that requires an explanation. Instead of explainable AI, she advocates for interpretable AI—algorithms that are inherently transparent. “People really like their black boxes,” she says. “For every data set I’ve ever seen, you could get an interpretable [system] that was as accurate as the black box.” Explanations, meanwhile, she says, can induce more trust than is warranted: “You’re saying, ‘Oh, I can use this black box because I can explain it. Therefore, it’s okay. It’s safe to use.’ ”

What about the notion that transparency makes these systems easier to game? Rudin doesn’t buy it. If you can game them, they’re just poor systems, she asserts. With product ratings, for example, you want transparency. When ratings algorithms are left opaque, because of their complexity or a need for secrecy, everyone suffers: “Manufacturers try to design a good car, but they don’t know what good quality means,” she says. And the ability to keep intellectual property private isn’t required for AI to advance, at least for high-stakes applications, she adds. A few companies might lose interest if forced to be transparent with their algorithms, but there’d be no shortage of others to fill the void.

Lipton, of Carnegie Mellon, disagrees with Rudin. He says that deep neural networks—the blackest of black boxes—are still required for optimal performance on many tasks, especially those used for image and voice recognition. So the need for XAI is here to stay. But he says that the possibility of deceptive XAI points to a larger problem: Explanations can be misleading even when they are not manipulated.

Ultimately, human beings have to evaluate the tools they use. If an algorithm highlights factors that we would ourselves consider during decision-making, we might judge its criteria to be acceptable, even if we didn’t gain additional insight and even if the explanation doesn’t tell the whole story. There’s no single theoretical or practical way to measure the quality of an explanation. “That sort of conceptual murkiness provides a real opportunity to mislead,” Lipton says, even if we humans are just misleading ourselves.

In some cases, any attempt at interpretation may be futile. The hope that we’ll understand what some complex AI system is doing reflects anthropomorphism, Lipton argues, whereas these systems should really be considered alien intelligences—or at least abstruse mathematical functions—whose inner workings are inherently beyond our grasp. Ask how a system thinks, and “there are only wrong answers,” he says.

And yet explanations are valuable for debugging and enforcing fairness, even if they’re incomplete or misleading. To borrow an aphorism sometimes used to describe statistical models: All explanations are wrong—including simple ones explaining how AI black boxes work—but some are useful.

This article appears in the February 2021 print issue as “Lyin’ AIs.”

Deep Learning at the Speed of Light

Post Syndicated from David Schneider original https://spectrum.ieee.org/computing/software/deep-learning-at-the-speed-of-light

In 2011, Marc Andreessen, general partner of venture capital firm Andreessen Horowitz, wrote an influential article in The Wall Street Journal titled “Why Software Is Eating the World.” A decade later, it’s deep learning that’s eating the world.

Deep learning, which is to say artificial neural networks with many hidden layers, is regularly stunning us with solutions to real-world problems. And it is doing that in more and more realms, including natural-language processing, fraud detection, image recognition, and autonomous driving. Indeed, these neural networks are getting better by the day.

But these advances come at an enormous price in the computing resources and energy they consume. So it’s no wonder that engineers and computer scientists are making huge efforts to figure out ways to train and run deep neural networks more efficiently.

An ambitious new strategy that’s coming to the fore this year is to perform many of the required mathematical calculations using photons rather than electrons. In particular, one company, Lightmatter, will begin marketing late this year a neural-network accelerator chip that calculates with light. It will be a refinement of the prototype Mars chip that the company showed off at the virtual Hot Chips conference last August.

While the development of a commercial optical accelerator for deep learning is a remarkable accomplishment, the general idea of computing with light is not new. Engineers regularly resorted to this tactic in the 1960s and ’70s, when electronic digital computers were too feeble to perform the complex calculations needed to process synthetic-aperture radar data. So they processed the data in the analog domain, using light.

Because of the subsequent Moore’s Law gains in what could be done with digital electronics, optical computing never really caught on, despite the ascendancy of light as a vehicle for data communications. But all that may be about to change: Moore’s Law may be nearing an end, just as the computing demands of deep learning are exploding.

There aren’t many ways to deal with this problem. Deep-learning researchers may develop more efficient algorithms, sure, but it’s hard to imagine those gains will be sufficient. “I challenge you to lock a bunch of theorists in a room and have them come up with a better algorithm every 18 months,” says Nicholas Harris, CEO of Lightmatter. That’s why he and his colleagues are bent on “developing a new compute technology that doesn’t rely on the transistor.”

So what then does it rely on?

The fundamental component in Lightmatter’s chip is a Mach-Zehnder interferometer. This optical device was jointly invented by Ludwig Mach and Ludwig Zehnder in the 1890s. But only recently have such optical devices been miniaturized to the point where large numbers of them can be integrated onto a chip and used to perform the matrix multiplications involved in neural-network calculations.
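As a rough numerical sketch (component conventions vary between papers and vendors; this is assembled from textbook 50:50 beam-splitter and phase-shifter matrices, not Lightmatter’s design), the snippet below composes a single interferometer’s 2-by-2 transfer matrix and shows that tuning one phase shifter steers how input light is split between the two output ports. A mesh of such devices, set to the right phases, performs the programmable matrix-vector arithmetic at the core of a neural-network layer.

```python
# One Mach-Zehnder interferometer as a programmable 2x2 matrix (a sketch;
# sign and phase conventions differ across the literature).
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)      # 50:50 beam splitter

def phase(theta):
    return np.diag([np.exp(1j * theta), 1.0])        # phase shifter on one arm

def mzi(theta, phi):
    # beam splitter -> internal phase -> beam splitter -> output phase
    return phase(phi) @ BS @ phase(theta) @ BS

U = mzi(theta=np.pi / 3, phi=0.0)
print("unitary?", np.allclose(U.conj().T @ U, np.eye(2)))   # energy is conserved

x = np.array([1.0, 0.0])                  # light enters the top port only
for theta in (0.0, np.pi / 2, np.pi):
    y = mzi(theta, 0.0) @ x
    print(f"theta={theta:.2f}  output powers: {np.round(np.abs(y)**2, 3)}")
```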

Keren Bergman, a professor of electrical engineering and the director of the Lightwave Research Laboratory at Columbia University, in New York City, explains that these feats have become possible only in the last few years because of the maturing of the manufacturing ecosystem for integrated photonics, needed to make photonic chips for communications. “What you would do on a bench 30 years ago, now they can put it all on a chip,” she says.

Processing analog signals carried by light slashes energy costs and boosts the speed of calculations, but the precision can’t match what’s possible in the digital domain. “We have an 8-bit-equivalent system,” says Harris. This limits his company’s chip to neural-network inference calculations—the ones that are carried out after the network has been trained. Harris and his colleagues hope their technology might one day be applied to training neural networks, too, but training demands more precision than their optical processor can now provide.

Lightmatter is not alone in the quest to harness light for neural-network calculations. Other startups working along these lines include Fathom Computing, LightIntelligence, LightOn, Luminous, and Optalysis. One of these, Luminous, hopes to apply optical computing to spiking neural networks, which take advantage of the way the neurons of the brain process information—perhaps accounting for why the human brain can do the remarkable things it does using just a dozen or so watts.

Luminous expects to develop practical systems sometime between 2022 and 2025. So we’ll have to wait a few years yet to see what pans out with its approach. But many are excited about the prospects, including Bill Gates, one of the company’s high-profile investors.

It’s clear, though, that the computing resources being dedicated to artificial-intelligence systems can’t keep growing at the current rate, doubling every three to four months. Engineers are now keen to harness integrated photonics to address this challenge with a new class of computing machines that are dramatically different from conventional electronic chips yet are now practical to manufacture. Bergman boasts: “We have the ability to make devices that in the past could only be imagined.”

This article appears in the January 2021 print issue as “Deep Learning at the Speed of Light.”

Virtual event: How AWS Marketplace innovators enhance security for a remote workforce

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/virtual-event-how-aws-marketplace-innovators-enhance-security-for-a-remote-workforce

Enhance security

Dispersed workforces require changes in security parameters and requirements for connecting business-critical resources. In this virtual event, remote workforce security thought leaders, strategists, and technologists will discuss key innovations enabling AWS customers to transform their security for a remote and hybrid workforce.

IBM Makes Encryption Paradox Practical

Post Syndicated from Dan Garisto original https://spectrum.ieee.org/tech-talk/computing/software/ibm-makes-cryptographic-paradox-practical

How do you access the contents of a safe without ever opening its lock or otherwise getting inside? This riddle may seem confounding, but its digital equivalent is now so solvable that it’s becoming a business plan. 

IBM is the latest innovator to tackle the well-studied cryptographic technique called fully homomorphic encryption (FHE), which allows for the processing of encrypted files without ever needing to decrypt them first. Earlier this month, in fact, Big Blue introduced an online demo for companies to try out with their own confidential data. IBM’s FHE protocol is inefficient, but it’s still workable enough to give users a chance to take it for a spin.

Today’s public cloud services, for all their popularity, nevertheless typically present a tacit tradeoff between security and utility. To secure data, it must stay encrypted; to process data, it must first be decrypted. Even something as simple as a search function has required data owners to relinquish security to providers whom they may not trust.

Yet with a workable and reasonably efficient FHE system, even the most heavily encrypted data can still be securely processed. A customer could, for instance, upload their encrypted genetic data to a website, have their genealogy matched and sent back to them—all without the company ever knowing anything about their DNA or family tree. 

At the beginning of 2020, IBM reported the results of a test with a Brazilian bank, which showed that FHE could be used for a task as complex as machine learning. Using transaction data from Banco Bradesco, IBM trained two models—one with FHE and one with unencrypted data—to make predictions such as when customers would need loans.

Even though the data was encrypted, the FHE scheme made predictions with accuracy equal to the unencrypted model. Other companies, such as Microsoft and Google, have also invested in the technology and developed open-source toolkits that allow users to try out FHE. These software libraries, however, are difficult to implement for anyone but a cryptographer, a problem IBM hopes to remedy with its new service.

“This announcement right now is really about making that first level very consumable for the people [who] are maybe not quite as crypto-savvy,” said Michael Osborne, a security researcher at IBM.

One of the problems with bringing FHE to market is that it must be tailor-made for each situation. What works for Banco Bradesco can’t necessarily be transferred seamlessly over to Bank of America, for example.

“It’s not a generic service,” said Christiane Peters, a senior cryptographic researcher at IBM. “You have to package it up. And that’s where we hope from the clients that they guide us a little bit.”

It is not clear whether IBM’s scheme for FHE is any better than that of its competitors. However, by offering a service to clients, the company may have taken the lead in tackling some of the first practical implementations of the technology, which has been in development for years.

Since the 1970s, cryptographers had considered what it would mean to process encrypted data, but no one was sure whether such an encryption scheme could exist even in theory. In 2009, Craig Gentry, then a Stanford graduate student, proved in his PhD dissertation that FHE was possible.

Over the past decade, algorithmic advances have improved the efficiency of FHE by a factor of about a billion. The technique is still anywhere from 100 to a million times slower than traditional data processing—depending on the data and the processing task. But in some cases, Osborne says, FHE could still be attractive.

One way to understand a key principle behind FHE is to consider ways in which an adversary might break it. Suppose Alice wants to put her grocery list on the cloud, but she’s concerned about her privacy. If Alice encrypts items on her list by shifting one letter forward, she can encode APPLES as BQQMFT. This is easily broken, so Alice adds noise, in the form of a random letter. APPLES instead becomes BQQZMFT. This makes it much, much harder for the attacker to guess the grocery items because they have to account for noise. Alice must strike a balance: too much noise and operations take too much time; too little noise and the list is unsecured. Gentry’s 2009 breakthrough was to introduce a specific, manageable amount of noise.
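A few lines of code make the toy scheme concrete. This is only the article’s illustrative shift-plus-noise cipher, not an actual FHE construction, which rests on lattice mathematics rather than letter shifting.

```python
# Toy version of Alice's scheme: shift each letter forward, then hide the
# result behind one randomly inserted "noise" letter. Illustrative only.
import random
import string

def shift(word, k=1):
    return "".join(string.ascii_uppercase[(ord(c) - 65 + k) % 26] for c in word)

def encrypt(word):
    ciphertext = shift(word)                        # APPLES -> BQQMFT
    pos = random.randrange(len(ciphertext) + 1)
    noise = random.choice(string.ascii_uppercase)   # e.g. BQQMFT -> BQQZMFT
    return ciphertext[:pos] + noise + ciphertext[pos:], pos

ct, noise_pos = encrypt("APPLES")
print(ct)                                           # something like 'BQQZMFT'
# Alice, who knows where the noise went, strips it and shifts back:
print(shift(ct[:noise_pos] + ct[noise_pos + 1:], -1))   # 'APPLES'
```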

While FHE may be of interest to many individual consumers interested in data privacy, its early corporate adopters are mainly limited to the finance and healthcare sectors, according to Peters. 

FHE’s applications may be increasing with time, though. In a data-rich, privacy-poor world, it’s not hard to recognize the appeal of a novel technology that lets people have their secret cake and eat it too.

Revisiting 2020’s Most Popular Blog Posts

Post Syndicated from Evan Ackerman original https://spectrum.ieee.org/computing/software/revisiting-2020s-most-popular-blog-posts

Spectrum first began publishing an online edition in 1996. And in the quarter century since, our website has tried to serve IEEE members as well as the larger worldwide base of tech-savvy readers across the Internet. In 2020, four of Spectrum’s top 10 blog posts were about COVID-19; another four were about robots. (One was about both.) Two discussed programming languages, another popular item on our site. Here we revisit five of those favorite postings from the past year, updating readers on new developments, among them promising COVID-19 tests and therapeutics, no-code programming, and an incredibly versatile robotic third leg. All of which, if the tremendous challenges of the past year offer any guidance, could be a useful survival kit for enduring whatever 2021 has in store.

  • COVID-19 Study: Quell the “Bradykinin Storm”

    Precisely how the novel coronavirus causes COVID-19 may still be a mystery. But one year into the pandemic, it’s no longer quite a mystery wrapped inside an enigma. This was the upshot of a landmark coronavirus study from July conducted by a team of American scientists using the Summit supercomputer at the Oak Ridge National Laboratory, in Tennessee. Their genetic-data mining paper, published in the journal eLife, concluded that one lesser-studied biomolecule arguably lies at the heart of how the SARS-CoV-2 virus causes COVID-19.

    Bradykinin is a peptide that regulates blood pressure and causes blood vessels to become permeable. The Oak Ridge study concluded that the novel coronavirus effectively hacks the body’s bradykinin system, leading to a sort of molecular landslide. Put simply, a “bradykinin storm” overdilates blood vessels in the lungs, leading to fluid buildup, congestion, and difficulty breathing. And because an overabundance of bradykinin can trigger heart, kidney, neurological, and circulatory problems, the bradykinin hypothesis may lead to yet more coronavirus treatments.

    Daniel Jacobson, Oak Ridge chief scientist for computational systems biology, says his team’s eLife study has been partly vindicated in the months since publication. Their paper highlighted a dozen compounds they said could be effective for some COVID-19 patients. Three of those drugs in particular have since proved, in early clinical trials, to show significant promise: Icatibant (a bradykinin blocker), calcifediol (a vitamin D analogue that targets a pathway related to bradykinin), and dexamethasone (a steroid that blocks signaling from bradykinin receptors).

    “Our focus is on getting the work out in ways that are going to help people,” Jacobson says. “We’re excited about these other data points that keep confirming the model.”

    The above is an update to a blog post (2020’s second most popular) that originally appeared on 2 August at spectrum.ieee.org/covidcode-aug2020

  • Third Leg Lends a Hand

    Need an extra hand? How about an extra foot? Roboticists in Canada, from the Université de Sherbrooke, in Quebec, have been developing supernumerary robotic limbs that are designed to explore what humans can do with three arms, or even three legs. The robotic limbs are similar in weight to human limbs, and are strong and fast thanks to magnetorheological clutches that feed pressurized water through a hydrostatic transmission system. This system, coupled to a power source inside a backpack, keeps the limb’s inertia low while also providing high torque.

    Mounted at a user’s hips, a supernumerary robotic arm can do things like hold tools, pick apples, play badminton, and even smash through a wall, all while under the remote control of a nearby human. The supernumerary robotic leg is more autonomous, able to assist with several different human gaits at a brisk walk and add as much as 84 watts of power. The leg could also be used to assist with balance, acting as a sort of hands-free cane. It can even move quickly enough to prevent a fall—far more quickly than a biological leg. Adding a second robotic leg opposite the first suggests even more possibilities, including a human-robot quadruped gait, which would be a completely new kind of motion.

    Eventually, the researchers hope to generalize these extra robotic limbs so that a single limb could function as either an arm, a leg, or perhaps even a tail, depending on what you need it to do. Their latest work was presented in October at the 2020 International Conference on Intelligent Robots and Systems (IROS), cosponsored by IEEE and the Robotics Society of Japan.

    The above is an update to a blog post (2020’s fifth most popular) that originally appeared on 4 June at spectrum.ieee.org/thirdarm-jun2020

  • The Hello Robot Arm Offers a Leg Up

    Last summer was a challenging time to launch a new robotics company. But Hello Robot, which announced its new mobile manipulator this past July, has been working hard to provide its robot (called Stretch) to everyone who wants one. Over the last six months, Hello Robot, based in Martinez, Calif., has shipped dozens of the US $17,950 robots to customers, which have included an even mix of academia and industry.

    One of these early adopters of Stretch is Microsoft, which used the robot as part of a company-wide hackathon last summer. A Microsoft developer, Sidh, has cerebral palsy, and while Sidh has no trouble writing code with his toes, there are some everyday tasks—like getting a drink of water—that he regularly needs help with. Sidh started a hackathon team with Microsoft employees and interns to solve this problem with Stretch. Although most of the team knew very little about robotics, over just three days of remote work they were able to program Stretch to operate semiautonomously through voice control. Now Stretch can manipulate objects (including cups of water) at Sidh’s request. It’s still just a prototype, but Microsoft has already made the code open source, so that others can benefit from the work. Sidh is still working with Stretch to teach it to be even more useful.

    In the past, Hello Robot cofounder Charlie Kemp’s robot of choice was a $400,000, 227-kilogram robot called PR2. Stretch offers many of the same mobile manipulation capabilities. But its friendly size and much lower cost mean that people who before might not have considered buying a robot are now giving Stretch a serious look.

    The above is an update to a blog post (2020’s sixth most popular) that originally appeared on 14 July at spectrum.ieee.org/hellorobot-jul2020

  • At-Home COVID-19 Test Hits Snags

    When last we heard from the maverick biotech entrepreneur Jonathan Rothberg, he’d just invented a rapid diagnostic test for COVID-19 that was as accurate as today’s best lab tests but easy enough for regular people to use in their own homes. Rothberg had pivoted one of his companies, the synthetic biology startup Homodeus, to develop a home test kit. During the first months of the pandemic, he worked with academic and clinical collaborators to test his team’s designs. In March, he optimistically projected a ready date of “weeks to months.” By late August, when The New Yorker published an article about his crash project, he spoke of getting the tests “out there by Thanksgiving.”

    Unfortunately, the so-called Detect kits haven’t yet made it to doctors’ offices or drugstore shelves. As of press time, Rothberg hoped to receive emergency use authorization from the U.S. Food and Drug Administration in late December, which would enable Homodeus to distribute the kits to health professionals. The kit could then be approved for consumers early in 2021.

    The Homodeus team got slowed down by their insistence on simplicity and scalability, Rothberg tells IEEE Spectrum. As they finalized the prototype, they also secured their supply chains. Once they receive FDA approval they’ll be able to “deliver upwards of 10 million tests per month,” Rothberg says.

    The above is an update to a blog post (2020’s eighth most popular) that originally appeared on 13 March at spectrum.ieee.org/covidtest-mar2020

  • Toward a World Without Code

    No-code development—building software without writing code—gained momentum in 2020 as a result of the COVID-19 pandemic. Governments and organizations needed swift action for a fast-moving crisis. They turned to no-code platforms to rapidly develop and deploy essential software, including a COVID-19 management hub that allowed New York City and Washington, D.C., to deliver critical services to residents; a loan-processing system for a bank so it could receive Paycheck Protection Program applications from small businesses; and a workforce safety solution to aid the return of employees to their workplaces.

    Tech companies capitalized on this trend too. In June 2020, Amazon Web Services released its no-code tool, Honeycode, in beta. A month later, Microsoft launched Project Oakdale, a built-in low-code data platform for Microsoft Teams. With Project Oakdale, users can create custom data tables, apps, and bots within the chat and videoconferencing platform using Power Apps, Microsoft’s no-code software.

    The no-code movement is also reaching the frontiers of artificial intelligence. Popular no-code machine-learning platforms include Apple’s Create ML, Google’s AutoML, Obviously AI, and Teachable Machine. These platforms make it easier for those with little to no coding expertise to train and deploy machine-learning models, as well as quickly categorize, extract, and analyze data.

    No-code development is set to go mainstream over the coming years, with the market research company Forrester predicting the emergence of hybrid teams of business users and software developers building apps together using no-code platforms. As the trends noted above take root in both the public and private sectors, there is little doubt today that—to modify an old programmer’s maxim—the future increasingly will be written in no-code.

    The above is an update to a blog post (2020’s most popular) that originally appeared on 11 March at spectrum.ieee.org/nocode-mar2020

This article appears in the January 2021 print issue as “2020’s Most Popular Blog Posts.”

Quantum Computers Will Speed Up the Internet’s Most Important Algorithm

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/computing/software/quantum-computers-will-speed-up-the-internets-most-important-algorithm

The fast Fourier transform (FFT) is the unsung digital workhorse of modern life. It’s a clever mathematical shortcut that makes possible the many signals in our device-connected world. Every minute of every video stream, for instance, entails computing some hundreds of FFTs. The FFT’s importance to practically every data-processing application in the digital age explains why some researchers have begun exploring how quantum computing can run the FFT algorithm more efficiently still.
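For readers who want to see the workhorse in action, here is a minimal example (NumPy, purely for illustration): the FFT of one second of a sampled two-tone signal picks out the component frequencies in a small fraction of the work a naive discrete Fourier transform would need.

```python
# Minimal FFT demo: recover the frequencies hidden in a sampled signal.
import numpy as np

fs = 1000                                     # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)                   # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)                # FFT for real-valued input
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))                          # -> [50.0, 120.0]
```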

“The fast Fourier transform is an important algorithm that’s had lots of applications in the classical world,” says Ian Walmsley, physicist at Imperial College London. “It also has many applications in the quantum domain. [So] it’s important to figure out effective ways to be able to implement it.”

The first proposed killer app for quantum computers—finding a number’s prime factors—was discovered by mathematician Peter Shor at AT&T Bell Laboratories in 1994. Shor’s algorithm scales up its factorization of numbers more efficiently and rapidly than any classical computer anyone could ever design. And at the heart of Shor’s phenomenal quantum engine is a subroutine called—you guessed it—the quantum Fourier transform (QFT).

Here is where the terminology gets a little out of hand. There is the QFT at the center of Shor’s algorithm, and then there is the QFFT—the quantum fast Fourier transform. They represent different computations that produce different results, although both are based on the same core mathematical concept, known as the discrete Fourier transform.

The QFT is poised to find technological applications first, though neither appears destined to become the new FFT. Instead, QFT and QFFT seem more likely to power a new generation of quantum applications.

The quantum circuit for QFFT is just one part of a much bigger puzzle that, once complete, will lay the foundation for future quantum algorithms, according to researchers at the Tokyo University of Science. The QFFT algorithm would process a single stream of data at the same speed as a classical FFT. However, the QFFT’s strength comes not from processing a single stream of data on its own but rather multiple data streams at once. The quantum paradox that makes this possible, called superposition, allows a single group of quantum bits (qubits) to encode multiple states of information simultaneously. So, by representing multiple streams of data, the QFFT appears poised to deliver faster performance and to enable power-saving information processing.

The Tokyo researchers’ quantum-circuit design uses qubits efficiently without producing so-called garbage bits, which can interfere with quantum computations. One of their next big steps involves developing quantum random-access memory for preprocessing large amounts of data. They laid out their QFFT blueprints in a recent issue of the journal Quantum Information Processing.

“QFFT and our arithmetic operations in the paper demonstrate their power only when used as subroutines in combination with other parts,” says Ryo Asaka, a physics graduate student at Tokyo University of Science and lead author on the study.

Greg Kuperberg, a mathematician at the University of California, Davis, says the Japanese group’s work provides a scaffolding for future quantum algorithms. However, he adds, “it’s not destined by itself to be a magical solution to anything. It’s trundling out the equipment for somebody else’s magic show.”

It is also unclear how well the proposed QFFT would perform when running on a quantum computer under real-world constraints, says Imperial’s Walmsley. But he suggested it might benefit from running on one kind of quantum computer versus another (for example, a magneto-optical trap versus nitrogen vacancies in diamond) and could eventually become a specialized coprocessor in a quantum-classical hybrid computing system.

University of Warsaw physicist Magdalena Stobińska, a main coordinator for the European Commission’s AppQInfo project—which will train young researchers in quantum information processing starting in 2021—notes that one main topic involves developing new quantum algorithms such as the QFFT.

“The real value of this work lies in proposing a different data encoding for computing the [FFT] on quantum hardware,” she says, “and showing that such out-of-box thinking can lead to new classes of quantum algorithms.”

This article appears in the January 2021 print issue as “A Quantum Speedup for the Fast Fourier Transform.”

Full-Wave EM Simulations: Electrically Large Antenna Placement and RCS Scenarios

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/fullwave-em-simulations-electrically-large-antenna-placement-and-rcs-scenarios

Handling various complex simulation scenarios with a single simulation method is a rather challenging task for any software suite.

We will show you how our software, based on Method-of-Moments, can analyze several scenarios including complicated and electrically large models (for instance, antenna placement and RCS) using desktop workstations. 

App Aims To Reduce Deaths From Opioid Overdose

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/computing/software/unityphilly-app-opioid-overdose

If administered promptly enough, naloxone can chemically reverse an opioid overdose and save a person’s life. However, timing is critical: quicker administration of the medication can not only save a life but also reduce the chances that brain damage will occur.

In exploring new ways to administer naloxone faster, a team of researchers has harnessed an effective, community-based approach. It involves an app for volunteers, who receive an alert when another app user nearby indicates that an overdose is occurring and naloxone is needed. The volunteers then have the opportunity to respond to the request.

A recent pilot study in Philadelphia shows that the approach has the potential to save lives. The results were published November 17 in IEEE Pervasive Computing.

“This concept had worked with other medical emergencies like cardiac arrest and anaphylaxis, so we thought we could have success with transferring it to opioid overdoses,” says Gabriela Marcu, an assistant professor at the University of Michigan’s School of Information, who was involved in the study.

The app, called UnityPhilly, is meant for bystanders who are in the presence of someone overdosing. If that bystander doesn’t have naloxone, they can use UnityPhilly to send out an alert with a single push of a button to nearby volunteers who also have the app. Simultaneously, a separate automated call is sent to 911 to initiate an emergency response.
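The dispatch step at the heart of such an app can be sketched generically (this is a hypothetical illustration, not UnityPhilly’s code): measure the distance from the reported overdose to each registered volunteer and notify everyone within a chosen radius, while emergency services are contacted in parallel.

```python
# Generic sketch of proximity-based alert dispatch (not UnityPhilly's code).
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

volunteers = [  # hypothetical registered users carrying naloxone
    {"name": "Volunteer A", "lat": 39.9857, "lon": -75.1244},
    {"name": "Volunteer B", "lat": 39.9601, "lon": -75.1709},
]

def dispatch(alert_lat, alert_lon, radius_km=1.0):
    nearby = [v["name"] for v in volunteers
              if haversine_km(alert_lat, alert_lon, v["lat"], v["lon"]) <= radius_km]
    print("calling 911 ...")                  # the automated emergency call goes out in parallel
    print("notifying nearby volunteers:", nearby)

dispatch(39.9900, -75.1200)                   # a single button press triggers both paths
```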

Marcu’s team chose to pilot the app in Philadelphia because the city has been hit particularly hard by the opioid crisis, with 46.8 per 100,000 people dying from overdoses each year. They piloted the UnityPhilly app in the neighborhood of Kensington between March 2019 and February 2020.

In total, 112 participants were involved in the study, half of whom self-identified as active opioid users. Over the one-year period, 291 suspected overdose alerts were reported.

About 30% of alerts were false alarms, cancelled within two minutes of the alert being sent. Among the remaining genuine alerts, at least one dose of naloxone was administered by a study participant in 36.6% of cases. Of these instances when naloxone was administered, 96% resulted in a successful reversal of the overdose. This means that a total of 71 out of 291 cases resulted in a successful reversal.

Marcu notes that there are many advantages to the UnityPhilly approach. “It has been designed with the community, and it’s driven entirely by the community. It’s neighbors helping neighbors,” she explains.

One reported drawback of the approach with the current version of UnityPhilly is that volunteers see only a location on a map, with no context about what kind of building or environment they will be entering to deliver the naloxone dose. To address this, Marcu says her team is interested in refining the user experience and enhancing how app users can communicate with one another before, during, and after they respond to an overdose.

“What’s interesting is that so many users still remained motivated to incorporate this app into their efforts, and their desire to help others drove adoption and acceptance of the app in spite of the imperfect user experience,” says Marcu. “So we look forward to continuing our work with the community on this app… Next, we plan on rolling it out city-wide in Philadelphia.”

Why the Web Spreads Information and Misinformation Equally Well

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/software/why-the-web-spreads-information-and-misinformation-equally-well

“A lie gets halfway around the world while the truth is putting on its shoes.” That’s a great line, but who originally said it? Was it Mark Twain, always good for an epigram, or the oft-quoted Winston Churchill? According to The New York Times, it’s an adaptation of something written three centuries ago by famed satirist Jonathan Swift: “Falsehood flies, and the Truth comes limping after it.”

“Truth is the first casualty of war.” (Classical playwright Aeschylus or California statesman Hiram Johnson?) Given truth’s obvious vulnerabilities, we should be doing more to protect it when we send it out to do battle. But having constructed a technological apparatus that disseminates information instantaneously and globally without regard to its veracity, we shouldn’t be surprised that this apparatus has left us drowning in lies.

“First we shape our tools, thereafter our tools shape us.” (Marshall McLuhan or Father John Culkin?) Our copy-and-paste notions of truth and factuality likely have their roots in the early Web, which was intended initially only to link resources stored across a heterogeneous range of computers at CERN, the European Organization for Nuclear Research. The particle physicists’ tool soon helped everyone to share information about everything. But the Web’s hyperlinks inevitably create an impenetrable thicket of pointers, from one resource to another to another to another until it becomes nearly impossible to discern an ultimate source of truth. The Web created a global, hyperlinked document space, paradoxically making the truth more obscure than ever.

Two decades before the Web, hypertext pioneer Ted Nelson offered another model: Rather than just reference sources, include them. More subtle than a simple copy-and-paste operation, Nelson’s approach allows one document to embed the content of another via a link to a portion of the source document. In such a system nothing needs to be copied. The referring document “transcludes” a portion of the material found in the source document.

Transclusion allows for the creation of hypertext documents that are themselves the assembly of other hypertext documents that are themselves the assembly of still more hypertext documents. While any document can contain original content, it simultaneously serves as a window onto other documents, allowing viewers to reach into and through the references, all the way back to their primary sources. Transclusion could have created a Web built on a set of unimpeachable, universally accepted sources of information.
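A toy sketch makes the idea tangible. The document store, the reference format, and the render function below are inventions for illustration; they are not Nelson's actual Xanadu design or any real system, but they show how a document can reach through to its source rather than copy it.

```python
# A document is either a plain string or a list of parts, where each part is
# plain text or a (doc_id, start, end) reference into another document.
documents = {
    "swift-1710": "Falsehood flies, and the Truth comes limping after it.",
}

documents["essay"] = [
    "As Swift put it: \"",
    ("swift-1710", 0, 16),   # transcludes "Falsehood flies," -- never copied
    "\" -- and the Web proves him right daily.",
]

def render(doc_id):
    doc = documents[doc_id]
    if isinstance(doc, str):
        return doc
    out = []
    for part in doc:
        if isinstance(part, str):
            out.append(part)
        else:
            src, start, end = part
            out.append(render(src)[start:end])   # reach through to the source
    return "".join(out)

print(render("essay"))
```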

Served up by content-management systems that algorithmically compose documents from multiple sources, the modern Web gradually converged with Nelson’s vision for transclusion—with one key difference: The Web offers no single source of truth, nor any ultimate reference to a set of trusted sources. Instead, everything points to everything else (or to itself), which tends to make the Web appear to be fuller and more authoritative than it really is. That helps explain why conspiracy theories like QAnon are so difficult to root out.

It would only take a few subtle changes to nudge the Web away from the shifting sands of links and plant it firmly in the real world of universally accepted facts. The nature of these authoritative sources will be fought over, naturally, as fierce rivals battle it out to set the terms for defining the truth. Yet where we can build consensus, humanity would possess “a truth universally acknowledged”—to borrow a line that we can all agree belongs to Jane Austen.

This article appears in the December 2020 print issue as “The Web’s Lurking Lies.”

Simulation-Driven Design of a Hyperloop Capsule Motor

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/simulationdriven-design-of-a-hyperloop-capsule-motor

The Hyperloop transportation system consists of a constrained, low-pressure environment, usually a tube or tunnel, that houses a dedicated rail for mechanically guiding energy-autonomous vehicles (called capsules or pods) carrying a given payload. Hyperloop capsules are expected to be self-propelled and can use the tube's rail for guidance, magnetic levitation, and propulsion. With an average speed on the order of two to three times that of high-speed electric trains and a maximum speed on the order of the speed of sound, the Hyperloop is expected to achieve average energy consumption in the range of 30–90 Wh/passenger/km and CO2 emissions in the range of 5–20 g CO2/passenger/km. A key aspect of achieving this performance is the optimal design of the capsule's propulsion. A promising solution is the double-sided linear induction motor (DSLIM), whose high-speed performance is affected by material properties and geometrical factors.
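To put those per-kilometer figures in concrete terms, here is a quick back-of-envelope calculation for a hypothetical 500-kilometer trip, using the midpoints of the quoted ranges; the trip length and midpoint values are assumptions made purely for illustration.

```python
# Back-of-envelope check of the quoted figures for a hypothetical 500 km trip.
distance_km = 500
energy_wh_per_pax_km = 60       # midpoint of 30-90 Wh/passenger/km
co2_g_per_pax_km = 12.5         # midpoint of 5-20 g CO2/passenger/km

energy_kwh = distance_km * energy_wh_per_pax_km / 1000
co2_kg = distance_km * co2_g_per_pax_km / 1000
print(f"{energy_kwh:.0f} kWh and {co2_kg:.1f} kg CO2 per passenger")
# -> about 30 kWh and 6.2 kg of CO2 per passenger for the 500 km trip
```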

In this webinar, we describe how to model a DSLIM using the COMSOL Multiphysics® software to provide an accurate estimate of the thrust exerted by the motor. Furthermore, we illustrate how to carry out a simulation-driven optimization to find the best motor configuration in terms of maximum speed. The results of the simulations are compared with measurements carried out on an experimental test bench developed at the Swiss Federal Institute of Technology, Lausanne, in the context of the EPFLoop team's participation in the 2019 SpaceX Hyperloop pod competition.

Say Kaixo!

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/computing/software/say-kaixo

Organizers of a European Union–supported software sharing platform for language technologies are planting seeds for applications that could debut on it with some eye-catching results: We might see the sprouting of a Basque-speaking, Alexa-style home language assistant, for instance.

A first-release version of the platform, called the European Language Grid, is already being used to distribute and gain visibility for language and translation tools from some of the hundreds of European firms trading in language technology. Many of the tools support communication in languages spoken by relatively few people, such as Irish Gaelic, Maltese, and Latvian.

If it seems global technology giants such as Google or Amazon could deliver these kinds of tools, maybe that’s right. But they may not dedicate the time and ensure the polish that a dedicated niche developer might. Besides, supporters of the initiative say, Europe should take care of its own digital infrastructure. Getting linguistic architectures to work easily and freely is a key interest on a continent that is trying to hold together a strained economic and social union straddling dozens of mother tongues.

The Language Grid is meant to create a broad marketplace for language technology in Europe, says Georg Rehm, a principal researcher at the German Research Center for Artificial Intelligence (DFKI).

The Grid is a scalable web platform that allows access to data sets and tools that are docked behind the platform’s interface. The base infrastructure is operated on a Kubernetes cluster—a set of node machines that run containerized applications built by service providers. It’s all hosted by the cloud provider SysEleven in Berlin. Users can access data and tools in the Docker containers without needing to install anything locally. Grid organizers recently picked 10 early-stage projects that can be supported by the platform, boosting them with small research grants. Another open call for projects is running through October and November. Results are likely in early January 2021.
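As a rough illustration of what "no local installation" means for a developer, the sketch below calls a Grid-hosted tool as a plain HTTP service. The endpoint URL, payload shape, and authentication scheme are hypothetical, not the platform's actual API.

```python
import requests

# Hypothetical service URL and request format -- not the real Grid API.
resp = requests.post(
    "https://grid.example.eu/services/basque-tts",
    json={"text": "Kaixo, mundua!"},                 # "Hello, world!" in Basque
    headers={"Authorization": "Bearer <access-token>"},
    timeout=30,
)
resp.raise_for_status()
print(resp.headers.get("Content-Type"))              # e.g. audio returned by the tool
```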

“Our technologies and services will be more visible to a broader market we would otherwise not be able to reach,” says Igor Leturia Azkarate, speech technologies manager at Elhuyar Fundazioa, a non-governmental organization promoting the everyday use of Basque, especially in science and technology. “We hope it will help other speakers of minority languages be aware of the possibilities, and that they will take advantage of our work.”

Azkarate and his colleagues are adapting Basque language text-to-speech and speech recognition tools to work within Mycroft AI, a Python-based open-source software voice assistant. The goal is to make a home assistant speaker, an Alexa-like device, that operates natively in Basque. Right now, the big home assistants operate in the world’s dozen or so most widely spoken languages. But rather than obliging users to go into Spanish or English—or wait for an as-yet-undeveloped Basque front-end facsimile or halfway solution that might still leave a user with a Julio Iglesias playlist on their hands rather than some Iñigo Muguruza—Azkarate’s after something better. Once the Elhuyar team adapts its Basque tools, they’ll be accessible on the Language Grid for others to use or experiment with.

Another early-stage project is coming from Jörg Tiedemann at the University of Helsinki, who is working with colleagues to develop open translation models for the Grid. These models use deep neural networks—layered software architectures that implement complex mathematical functions—to map text into numeric representations. Using data sets to train the models to find the best ways to solve problems takes a lot of computing power and is expensive. Making the models available for re-use will help developers build tools for low-density languages. “Minority languages get too little attention because they are not commercially interesting,” Tiedemann says. “This gap needs to be closed.”
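For a sense of what re-using such an open model looks like in practice, here is a minimal sketch that assumes the Hugging Face transformers library and one of the publicly released OPUS-MT models from Tiedemann's group; whether these particular models are the ones contributed to the Grid is an assumption.

```python
# Requires: pip install transformers sentencepiece torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-fi"   # English -> Finnish, one of many OPUS-MT pairs
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

inputs = tokenizer(["Minority languages deserve good tools."],
                   return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```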

Andrejs Vasiļjevs, chief executive of the language technology company Tilde, got his start because of a scarcity of digital tools in his native Latvia. In the late 1980s, he was studying computer science in Riga; in those days Latvia was part of the Soviet Union, with personal computing a limited realm. As the Union collapsed, PCs came in and people wanted to use them to start independent newspapers and magazines. But because there were no Latvian keyboards nor any Latvian fonts, it was not possible to write in Latvian. Vasiļjevs got to work on the problem and started Tilde in 1991 with a business partner, Uldis Dzenis.

Three decades later, Tilde is still making tools to spur communication—but now in machine translation, speech synthesis, and speech recognition. A Tilde translation engine currently runs underneath Germany’s EU presidency website; it provides on-the-fly translations of source documents from German, French, and English originals into all of the other 21 official EU languages. The Riga-based developer already has several datasets and models on the Language Grid for potential clients to test out, including a machine translation engine for English to Bulgarian and back, and a text-to-speech model for Latvian in a child’s voice. “We want to integrate our key services into the European Language Grid,” Vasiļjevs says. “It makes for more exposure to the market.”

The Battle for Videogame Culture Isn’t Playstation vs Xbox

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/software/the-battle-for-videogame-culture-isnt-playstation-vs-xbox

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

About ten years ago, I wanted to write an article about why so many rock climbers were scientists and engineers. One person I was eager to talk with was Willie Crowther, who, while employed at BBN in the early 1970s, was one of the founders of the early Internet—there’s even a famous picture of Internet pioneers that includes him—but was also a pioneering rock climber in New England and the Shawangunk mountains of the Hudson Valley. Searching the web for an active email address for him, I kept coming up with a person with the same name who was also a computer programmer and who wrote one of the first—maybe the first—adventure-style computer game. I eventually figured out that there was only one Willie Crowther, who had done all three things—worked on the Arpanet, rock climbed, and wrote a seminal computer game.

November is a big month for the millions of people who devote their time and money to computer games. Within a two-day period, Sony will be releasing its fifth-generation PlayStation, and its main competitor, Microsoft’s newest Xbox, comes out as well.

So it’s a good month to look at the culture of gaming and how it reflects the broader culture; how it reinforces it; and how it could potentially be a force for freeing us from some of the worse angels of our nature—or for trapping us further into them.

I’m not sure I can imagine someone better qualified to talk about this than Megan Condis. She’s a professor at Texas Tech University and the author of the 2018 book, Gaming Masculinity: Trolls, Fake Geeks, and the Gendered Battle for Online Culture.

Megan, welcome to the podcast.

Megan Condis Thank you so much for having me.

Steven Cherry The origins of gaming are pretty well represented by rock-climbing Internet-pioneer Willie Crowther: white male engineers of the 1950s and 60s who had leisure time and unique access to the mainframe computers of the era. Megan, you say there are consequences and reverberations of that set of attributes even half a century later.

Megan Condis Yeah. So one of the things to think about with videogames is a lot of times we think about them as these technical objects. But I also like to think about them as stories or as texts. And it maybe is sort of obvious to say this, but people create the types of texts and the types of stories that they would like to see in the world and that appeal to them. And so when you have a medium whose origins are so narrow in terms of who was able to have access to the tools that were needed to create this particular kind of text, then it sets a certain kind of expectation about what type of stories this tool should be used to tell, the type of stories that appeal to, as you said, the straight white male engineers who had access to computer technology in the 70s.

And so even as new generations of gamers start to encounter the pastime and start to get interested in development and start to want to create their own stories within the media, there’s this pressure that exists in terms of what types of genres of story are expected—or what types of communities you imagine yourself to be creating for—that remains in place. Like there’s these pressures that say we expect gamers to be members of these certain demographics. And so, of course, if you want your game to be successful, then you should create for those demographics, at least within [the] triple-A version of the industry where we’re risking millions of dollars on creating a product in hopes that it’s going to recoup its costs.

We can see that even in the early origins of gaming in the 70s and 80s, there were exceptions to these rules. So there were women developers, people of color, queer people who are developing games. But oftentimes the communities in which they were rooted held them up as the exceptions to the rule. Or, these creators were making games for outsiders to the community or they were trying to bring people into the gamer community. All of which are descriptions that take for granted who the gamer community is or who we expect gamers to be.

Steven Cherry I was really surprised to learn that among consumers of videogames, African-Americans are currently overrepresented.

Megan Condis Yeah, I think that’s interesting. If we kind of break down that number, you might ask questions about like, well, what machines are different demographics using in order to game? So who’s gaming on PC versus who’s gaming on console versus who’s gaming on mobile? But yeah, I think if you look at just games as a whole, you’re playing an interactive game on a digital device, it’s a lot more diverse of a population than we might think. And yet when you look at images of gamers in advertisements or in the media, like if you’re watching a movie about gamers like Steven Spielberg’s Ready Player One. It’s this one particular image of the nerdy white guy who lives in his parent’s basement, who comes to stand in for what we expect gamers to look like. Even though, you know, depending on the context or depending on the type of game or depending on the type of hardware, different groups filter into gaming culture in different ways.

Steven Cherry Yeah, you’ve written and talked a lot about Ready Player One. I need to give a heads up for listeners: There are going to be some spoilers here. The book is about 10 years old and the movie almost three. This is a story that’s set in a gaming universe. A universe in which gaming is both an escape from reality and a way of creating real-life success.

Megan Condis I think Ready Player One is interesting because it’s this fantasy version of our world today. Our world today is extremely gamified in the sense that, if you go to school, a lot of times kids are learning via digital gaming apps. If you get a job, a lot of times the training that you go through takes the form of gaming apps. But also, you know, even things that aren’t expressly presented to us in gaming contexts often are based in the same architecture of surveillance and measurement. And in order to succeed within this particular context, you have to assemble enough points or enough reputation or likes on social media or whatever. So there’s so many different ways in which we engage with these digital gamified contexts. And being good at strategically engaging with those systems, being a gamer who is able to kind of hack those systems, is how you achieve success today. And so Ready Player One takes what our world already is, this world in which we’re surrounded by these overlapping contexts of digital surveillance and gamification, and says, “But what if the games in which we were embroiled were fun games? What if they were games that let us engage with pop culture or games that let us express our skill at manipulating a controller, as opposed to the kinds of reputation management systems where you’re just curating your online presence or you’re maximizing the efficiency of your CV or whatever it happens to be?”

And so it’s this fantasy of we’re all gamers in a sense, by which I mean we all have to play the games that corporations and governments and institutions have set up for us. And navigating those games is how we make it in the world. But Ready Player One offers us the fantasy of what if those games were actually fun and what if we could, by participating in those games, be praised for the kinds of knowledge of the kinds of skill that we would enjoy cultivating, as opposed to the types of knowledge or skill that institutions require of us.

Steven Cherry In Ready Player One there’s a bunch of ways in which the real lives of the gamers become game-like. For example, the ways in which the hero Wade pursues the woman he loves—he reenacts scenes from rom-com movies such as the classic Cameron Crowe teen movie Say Anything—in a way very similar to the way the game at the center of the novel requires the competitors to reenact scenes from movies like the 1983 classic teen adventure story, War Games. Maybe nothing represents the fluidity between real life and gaming, and the fluidity across meta-levels for gamers, better than Gamergate. Can you just remind us what Gamergate was about?

Megan Condis Ah, man, that’s hard. So I say that’s difficult because Gamergate means a lot of different things to a lot of different people. And so to summarize it is necessarily to take a perspective on it. And I’m sure there will be people who will object to the perspective that I take on it. And that’s … I’m OK with that. But to me, what Gamergate was about … It started off as a kind of interpersonal conflict between a guy who felt like he had been slighted by his girlfriend and who kind of wanted to recruit the Internet onto his cause and get them to take his side in their interpersonal conflict. But then it ended up spiraling out from there into becoming a story about how women are treated within gaming culture and particularly within the press that covers gaming culture.

So there was this feeling that gamers were getting denigrated in the press or looked down upon by the press. They were being dismissed as these geeky guys who were unsuccessful at getting the girl or who were unsuccessful according to the traditional measures of masculinity. And so it became this cause of fighting back against these people in the media who are giving gamers a bad name and who are trashing what it means to be a gamer. And I think because the origins of this squabble began in this interpersonal conflict—and because a lot of the journalists who were being targeted as people who are saying bad stuff about gamers were women—it ended up becoming this battle of “feminism is the thing that is giving gamers a bad name.” “Feminists are calling gamer culture sexist.” “They’re saying that if you are a gamer, you are by necessity hating women. And so we want to push back against that.”

And ironically, unfortunately, the manner in which a lot of the participants in Gamergate pushed back against that was to attack female journalists using the fact that they are women as their mode of attack. So gendered attacks, sexual harassment online, doxing people, threatening people, and just a lot of hatred toward the idea that women would have something to say about this culture that they considered to be a safe space to be a guy and to do masculine things, like Donald Trump’s proverbial locker room talk. They felt like their locker room was being invaded and they were being told, you’re not allowed to have this kind of discourse in this space.

Steven Cherry It’s funny, I was going to ask you if you thought Gamergate in any way presaged the broader culture wars that we’re seeing in real life, especially in politics.

Megan Condis I do think Gamergate was a preview of some of the issues that were to come. So from 2014 to today, we see a lot of different venues in which this figure appears: the male who feels like his place in the world has been taken away from him, that he doesn’t have the same opportunities as he used to have. But also, I think Gamergate was a precursor to the rise of the alt-right in the culture wars, in the sense that there were outlets such as Breitbart.com or various alt-right affiliated writers that very intentionally waded into the Gamergate debate and tried to stoke those fires, spread the hashtag, spread the kind of terminology or ideology that they wanted to spread within those circles as—I’m going to use the word recruitment, or at least as an onboarding mechanism—to try to introduce a group of people who probably before 2014 didn’t think of themselves as particularly political to the political implications of what their hobby could mean, and to how their feelings of being erased might be useful in recruiting them into a particular … not a political party, per se, but a political ideology.

Steven Cherry Getting back to the movies for a moment, your point that gaming and the broader mindsets of, and about, computer programmers spill over into other aspects of our culture made me think of the operating system in the 2013 Spike Jonze movie Her, which is given an active mind and personality and which (or who) the protagonist Theodore inevitably falls in love with. (Sorry, another spoiler.) Are chatbots becoming another overlap between virtual life and real life?

Megan Condis Hmm. So when I think about a chatbot, at least in the current iteration of a chatbot, I’m thinking about a machine that’s designed to provide the aesthetic of a conversation. And a lot of times I think the way that we like to engage with chatbots is to try to find the limits of what they can understand. I’m thinking about, for example, Microsoft’s chatbot Tay that was introduced on Twitter, and the big selling point of Tay was that the more that you talked with her, the more that she would learn and the better that she would be able to respond. And so people decided to turn engagement with Tay into a game that was designed to see how far they could push the limits of this chatbot, to see if there were any boundaries that had been built into her software. And unfortunately, what they discovered was that the programmers who created Tay had not installed any protections to get her to filter out any content. And so she was taught by the Internet—people who were playing this game with her—to use a lot of racist and sexist language. And so the game was … I don’t think the game was we’re going to turn a robot into a racist or sexist. I think the game was: what are the limits of the system that has been presented to me? And do those limits—the linguistic limits that were programmed into this robot—match the kind of social limits that are generally agreed upon in society? Was this robot built with the social contract already installed into it? Or could we create a new version of the social contract by teaching this robot that this language that usually would be considered unacceptable or rude is okay? And what they discovered is that actually, they could do it.

Steven Cherry So the game was to get the chatbot to acquire characteristics that Microsoft hadn’t intended. But that’s a wide universe of potential characteristics. Do you think there’s any significance to the fact that these people went straight for racism and sexism?

Megan Condis I’m not sure. It also could be Tay—the chatbot—was personified, was given this … You know, she was given a gender, she was given a face, she was turned into a human person. And so rather than being just the disembodied Alexa or the chatbot that pops up that says, can I help you with your purchase when you’re on a Web site. Because it was this bot that was personified as a teenage girl, maybe it becomes more interesting or more provocative to have the, quote-unquote face of this racist, sexist language be this teenage girl’s face. So, yeah, I’m not really sure about that. But that’s something interesting to think about.

Steven Cherry In a 2018 talk you said that gamers are ready to take over the world. Is that more true today or less?

Megan Condis I think it is more true in the sense that, as I kind of alluded to earlier, I think game developers took over the world 10 years ago. I think, you know, even people who whose job doesn’t say in their job description “I am a game developer” are often creating systems that we use to manage the world that are at root games or at least, you know, gamified. They have a set of rules. You act within a structured system according to those rules. Your success or loss-condition is governed by those rules. And I think what is happening more and more is that gamers—so game players who have been living within these gamified systems for the last decade—are starting to realize the power that they might be able to wield within those systems and the ways in which they’ve been trained now for a long time to think about navigating these systems in terms of efficiency and in terms of strategy. And they’re now starting to think, okay, rather than getting really good at navigating these systems and the intended ways, what if we were able to find some of the unintended, unexpected ways to navigate that system? And what if we were able to take our skills at breaking a system down and finding the most efficient pathways through that system—what if we could turn that to our own advantage or to the collective advantage of the users.

And the Tay example then, the Tay chatbot example, is an example of people doing just that, not necessarily towards a productive end or towards like a revolutionary end, but just for fun. Let’s see if they haven’t thought of all the ways in which we could break the system. But more and more, I think gamers are starting to come together and think about the ways in which rather than just playing for the sake of play, what if we were to play for our own purposes? And what if we were able to have some say in the way in which these gamified systems were developed rather than just existing within those systems and trying our best to succeed within those systems?

Steven Cherry We’re currently living in something of an alternative reality with new rules for day-to-day life. I’m referring, of course, to the Coronavirus pandemic. Has it directly affected the gaming world and gamers?

Megan Condis So I think it’s really interesting. A couple of years ago, the World Health Organization had put out this notice that they were going to be investigating addictive gaming behavior. And there was this big outcry within the gaming community that the World Health Organization was pathologizing gaming, that they were stigmatizing gaming and saying that it was an unhealthy thing to engage with.

And when the coronavirus hit, the WHO ended up releasing a statement that talked about how, when people were in quarantine and when they were isolated, gaming could be a crucial means of self-care, a really important way for people to be able to have social interaction besides just watching TV or passively consuming media. It would be a way for people to be able to reach out and talk and engage with others even though they were stuck at home.

And so I think what the Coronavirus has done is it has forced a lot of institutions that maybe were wanting to dismiss gaming as frivolous or as escapism—as not real—to recognize that the social interactions you have in a virtual world are real, that they can be productive and supportive, and that they can be useful in keeping people’s mental health up and can be great as self-care.

But then the flip side of that, of course, becomes … that also means that the negative social interactions that you have in the virtual worlds are also real. And so that, you know, that raises some questions about moderation practices and safety online, especially for young kids. You know, if young kids are going online and they’re having negative social interactions with trolls or people who are acting abusive to them online, then is that the same as being bullied in their classroom face to face? Is that something that we do have to worry about in addition to the sort of productive, positive relationships and friendships that they could be forming online?

Steven Cherry So that’s twice it’s come up that people used to—and maybe still do—look down on games and gamers; the first was the question of whether the press looked down on them in Gamergate. Do people in academia look down on professors who focus on games and gaming culture?

Megan Condis Whoa. Okay. So I’m not tenured yet. So, as an object of study, I think academia is very welcoming towards looking at games as this object that’s worthy of study, if only because it’s so omnipresent. Most articles and books about gaming open with this paragraph that says there are so many millions of gaming consoles in households across America and the gaming industry makes so many millions of dollars or whatever. So, you know, I think academia is very open towards looking at games as an object of study.

Over the past 15 years, academia has gotten a lot better about being willing to entertain different methodologies of looking at games. So 15, 20 years ago, yes, let’s study video games. We’re gonna study them in terms of media psychology and are going to study them in terms of the effects of video games on the development of brains and stuff like that. But over the course of time, as people got more familiar with video games and more comfortable with video games, academia started to become more open to, well, maybe we could apply humanities-oriented methodologies, maybe we could close-read video games in the same way we would give a novel or a painting or statue close attention as an art object. Or maybe video game cultures—and fan cultures generally—might be worthy of study in the same way that other types of communities or other types of relationships or organizations are considered worthy of study.

Steven Cherry Ironically, Willie Crowther wrote his adventure game for his young daughters as something they could play while visiting him after he and his wife divorced. He’s quoted as saying that his adventure game was deliberately written in a way that would not be intimidating to non-computer people—using natural language commands, for example. You write your own games, I gather mainly as teaching tools. Do you think if you wrote a commercial game, it would be hard to navigate your way through the stereotypes of gaming and the expectations of gamers?

Megan Condis Oftentimes, it’s not necessarily in the writing of a game that the process becomes difficult. It’s more in terms of the marketing of the game, because in the indie gaming scene, there’s tons of people who are writing extremely personal stories, who are writing games that engage with political topics and culturally specific topics and that are narrowcasting towards this really specific audience. And that’s OK when you’re directly marketing your game to people through Kickstarter or Patreon or what have you and you’re able to directly communicate with your audience. I think that one of the problems with commercial games is there’s this expectation in the video game industry, just like the film industry and television industry, that you’re going to need to target your game towards an audience that’s considered safe, that can be relied upon not just to purchase the first game in this series, but the 10th game in the series down the line. And up until extremely recently, the videogame industry had placed its bet on the young teenage male and said, this is going to be the audience that we’re gonna develop as our most reliable audience.

And so we don’t want to take risks in marketing games towards other people, even though we know in our own studies that other types of people are playing this game. We just don’t want to risk reaching out and marketing to those other people because we don’t want to alienate our core. And I think in the last five years, the video game industry has realized that that audience is pretty saturated. They are extremely reliable, but the industry has kind of topped out in terms of how many people in that target demographic it hasn’t already reached.

Steven Cherry The movie industry is risk-averse in many of the same ways, especially with respect to audience… And so are there equivalents of independent movies in the game world?

Megan Condis Absolutely. So it’s kind of a loose category, just like in the film industry, where… what makes you an indie film? There are lots of debates around that. But I think a quick and easy definition would be unaffiliated developers who, either as individuals or in small teams, create game projects that aren’t released through the studio system or that you wouldn’t necessarily be able to buy as a physical disk at your local GameStop, but rather are distributed on the Internet.

And so there’s a lot of indie games that get released through Steam for PC or that even get distributed via crowdfunding. So they will go out and find their audience before they even begin the process of developing in order to ensure that they have a sufficient wellspring of people to draw from in order to fund their game. But yeah, I think that the indie scene is a really exciting place for looking at how diversity is being improved within gaming culture.

And it’s also a great—I don’t know if this is the right word—like a great stable that the AAA industry can now pull from. So you see someone who created a really successful indie game that addresses some of these questions of diversity and inclusion, and then you have a big company like an EA or Ubisoft who says, you know, we really want to reach out to that demographic. We can look to the indie scene and see here are some developers who have already made relationships with these demographics that we’re hoping to court. We can pull this person up and hire them into our system in order to try to pursue those same demographics with our AAA games.

Steven Cherry We’ve seen that in the movie world, too, where people go from independent director to Star Wars director.

Megan Condis For sure.

Steven Cherry Final question. Are there cultural differences between the PlayStation and the Xbox and in any event, which device’s release are you more looking forward to?

Megan Condis Ooh. It’s one of those things where there are definitely fans of the PlayStation versus the Xbox, and they would say, “It’s totally different and we have this totally different culture.” But I think someone looking in from the outside would say, hey, they’re very similar. If you have—I don’t know, any fan culture, right—fans of Star Wars might say we like Empire Strikes Back better than Return of the Jedi. But if you’re not already in that conversation, it just all looks the same to you. So for myself, I mean, I’ve gone back and forth. I … back when the PlayStation 1 and 2 were out, I was definitely diehard PlayStation.

And then I ended up switching over to the Xbox for the previous generation. But right now, I’ve been playing a lot of PlayStation exclusive titles; Horizon Zero Dawn was a big favorite of mine. It’s just now finally starting to migrate over to other consoles. Based on the last couple of years, I would say that immediately following release, I would be excited about the PlayStation 5. But, you know, it just always depends on which ecosystem is able to land the games that you’re interested in. And the nice thing about being an adult … So when I was a kid, it was Nintendo versus Sega. And when you’re a kid, your parents are like, I’m only going to buy you one. So you have to pick one. And then you have to make sure and always argue for yourself, like, I picked the right one, it’s got all the best games, because you’re a kid and you can’t go buy both. But the nice thing about being an adult is, well, if the Xbox does come out with something that I really want to go after, I don’t have to go call my mom and beg her to get me the other console. I can actually get both of them if I really want. Now that I’m saying that out loud, that’s very privileged, too, right? So I’m very grateful for that.

Steven Cherry So it seems, though, that there isn’t the same sort of lock-in to a platform that we’ve always seen with personal computers, Mac versus Windows, or phones, iPhone versus Android. Even in the car world, people are starting to be locked into a platform—somebody who has driven a Prius for 10 years is so used to the Prius interface, they’re going to get another one. But that doesn’t seem to be the case in the game world.

Megan Condis Well, I would say console exclusives, yeah, that idea that if you want to play Final Fantasy, it’s PlayStation or nothing, right, that idea is still kicking around in gamer culture. I think it meets a lot more resistance from gamers today than it used to. And it seems to me like usually what happens is, if a game is going to be exclusive to one console or another, it often stays exclusive the first year or two after release and then after that it will migrate to other consoles. So usually if you wait long enough, you can get a chance to play some of these games that maybe initially were kept away from you, but still, a lot of times that means you missed the critical discourse around a game, like you didn’t get to participate in the initial moment of reaction. The same with spoilers, which we talked about earlier for films: you can get spoiled for games, too. And if you don’t get to play it right when the game comes out, sometimes you feel like you missed out on being a part of that critical mass.

Steven Cherry Well, Megan, games provide a refuge from a fractious world, even as they reflect and even reinforce it. And maybe this episode can provide some refuge from a confusing world, even as we try to understand it better. Thank you for your research and thanks for joining us today.

Megan Condis Thank you so much. It was really fun. I hope I was able to be helpful.

Steven Cherry We’ve been speaking with Megan Condis, a professor of game studies at Texas Tech University and the author of Gaming Masculinity, published by the University of Iowa Press in 2018, about the manifold ways gaming culture influences our broader culture.

This interview was recorded October 12th, 2020.

Our thanks to Miles of Gotham Podcast Studio for audio engineering; our music is by Chad Crouch. Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Microsoft’s Flight Simulator 2020 Blurs the Line Between Tools and Toys

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/software/microsofts-flight-simulator-2020-blurs-the-line-between-tools-and-toys

My nephew recently sent me an email about our latest shared obsession: Microsoft’s Flight Simulator 2020. “Flying a Cessna 152 in this game feels exactly like flying one in real life,” he wrote. And he should know. Growing up next to a small regional airport, he saw private aircraft flying over his home every day, and he learned to fly as soon as his feet were long enough to reach the rudder pedals.

While he relaxes with the game, my experiences with it have been more stressful. Covered with sweat as I carefully adjusted ailerons, trim, and throttle, I worked my way through the how-to-fly lessons, emerging exhausted. “It’s just a simulation,” I keep telling myself. “It doesn’t matter how often I crash.” But the game is so realistic that crashing scares the daylights out of me. So while playing I remain hypervigilant, gripping the controls so tightly that it hurts.

All that realism begins with the cockpit instruments and controls, but it extends well beyond. Players fly over terrain and buildings streamed from Microsoft’s Bing Maps data set, and the inclusion of real-time air traffic control information means players need to avoid the flight paths of actual aircraft. For a final touch of realism, the game also integrates real-time meteorological data, generating simulated weather conditions that mirror those of the real world.

Flight Simulator 2020 purposely mixes a simulation of something imaginary with a visualization of actual conditions. Does that still qualify as a game? I’d argue it’s something new, which is possible only now because of the confluence of fast networks, big data, and cheap but incredibly powerful hardware. Seeing YouTube videos of Flight Simulator 2020 users flying their simulated aircraft into Hurricane Laura, I wonder whether an upgrade to the game will one day allow players to pilot real drones into a future storm’s eye wall.

If that prospect seems far-fetched, consider what’s going on hundreds of kilometers higher up. That’s the realm of Saber Astronautics, whose software can be used to visualize—and manage—the immense number of objects orbiting Earth.

Before Saber, space-mission controllers squinted at numbers on a display screen to judge whether there was a danger from space debris. Now they can work with a visualization that blends observational data with computational simulations, just as Flight Simulator 2020 does. That makes it far easier to track threats and gently nudge the orbits of satellites before they run into a piece of space flotsam, which could turn the kind of cascading collisions of orbital space junk depicted in the film Gravity into a real-life catastrophe.

We’ve now got the data, the networks, and the software to create a unified simulation of Earth, from its surface all the way into space. That could be valuable for entertainment, sure—but also for much more. Weaving together data from weather sensors, telescopes, aircraft traffic control, and satellite tracking could transform the iconic “Whole Earth” image photographed by a NASA satellite in 1967 into a dynamic model, one that could be used for entertaining simulations or to depict goings-on in the real world. It would provide enormous opportunities to explore, to play, and to learn.

It’s often said that what can’t be measured can’t be managed. We need to manage everything from the ground to outer space, for our well-being and for the planet’s. At last, we now have a class of tools—ones that look a lot like toys—to help us do that.

This article appears in the November 2020 print issue as “When Games get Real.”

The Devil is in the Data: Overhauling the Educational Approach to AI’s Ethical Challenge

Post Syndicated from NYU Tandon School of Engineering original https://spectrum.ieee.org/computing/software/the-devil-is-in-the-data-overhauling-the-educational-approach-to-ais-ethical-challenge

The evolution and wider use of artificial intelligence (AI) in our society is creating an ethical crisis in computer science like nothing the field has ever faced before. 

“This crisis is in large part the product of our misplaced trust in AI, in which we hope that whatever technology we denote by this term will solve the kinds of societal problems that an engineering artifact simply cannot solve,” says Julia Stoyanovich, an Assistant Professor in the Department of Computer Science and Engineering at the NYU Tandon School of Engineering, and the Center for Data Science at New York University. “These problems require human discretion and judgement, and a human must be held accountable for any mistakes.”

Stoyanovich believes the strikingly good performance of machine learning (ML) algorithms on tasks ranging from game playing, to perception, to medical diagnosis, and the fact that it is often hard to understand why these algorithms do so well and why they sometimes fail, is surely part of the issue. But Stoyanovich is concerned that even simple rule-based algorithms such as score-based rankers — which compute a score for each job applicant, sort applicants by that score, and then suggest interviewing the top-scoring three — can have discriminatory results. “The devil is in the data,” says Stoyanovich.
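A toy sketch makes it easy to see how such a ranker can go wrong. The applicants, the scoring weights, and the penalty for employment gaps below are invented for illustration and are not drawn from any real hiring system; the point is only that a facially neutral rule can encode societal bias.

```python
# Illustrative only: a toy score-based ranker of the kind described above.
applicants = [
    {"name": "A", "test": 88, "years_gap": 0},
    {"name": "B", "test": 91, "years_gap": 3},   # e.g., a caregiving gap
    {"name": "C", "test": 84, "years_gap": 0},
    {"name": "D", "test": 90, "years_gap": 2},
]

def score(a):
    # A simple linear rule: test score minus a penalty per year out of work.
    # The penalty looks "neutral" but can stand in for a socially biased feature.
    return a["test"] - 5 * a["years_gap"]

top_three = sorted(applicants, key=score, reverse=True)[:3]
print([a["name"] for a in top_three])   # ['A', 'C', 'D']: B drops despite the highest test score
```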

As an illustration of this point, in a comic book that Stoyanovich produced with Falaah Arif Khan entitled “Mirror, Mirror,” it is made clear that when we ask AI to move beyond games like chess or Go, in which the rules are the same irrespective of a player’s gender, race, or disability status, and look for it to perform tasks that allocate resources or predict social outcomes, such as deciding who gets a job or a loan, or which sidewalks in a city should be fixed first, we quickly discover that embedded in the data are social, political, and cultural biases that distort results.

In addition to societal bias in the data, technical systems can introduce additional skew as a result of their design or operation. Stoyanovich explains that if, for example, a job application form has two options for sex, ‘male’ and ‘female,’ a female applicant may choose to leave this field blank for fear of discrimination. An applicant who identifies as non-binary will also probably leave the field blank. But if the system works under the assumption that sex is binary and post-processes the data, then the missing values will be filled in. The most common method for this is to set the field to the value that occurs most frequently in the data, which will likely be ‘male’. This introduces systematic skew in the data distribution, and will make errors more likely for these individuals.

This example illustrates that technical bias can arise from an incomplete or incorrect choice of data representation. “It’s been documented that data quality issues often disproportionately affect members of historically disadvantaged groups, and we risk compounding technical bias due to data representation with pre-existing societal bias for such groups,” adds Stoyanovich.
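A few lines of code make the mechanics of that skew concrete. The records and field values below are invented for illustration; they simply act out the mode-imputation scenario described above.

```python
from collections import Counter

# Illustrative only: mode imputation of a "sex" field, as in the scenario above.
records = [
    {"sex": "male"}, {"sex": "male"}, {"sex": "female"},
    {"sex": None},   # left blank out of caution, or by a non-binary applicant
    {"sex": None},
]

observed = [r["sex"] for r in records if r["sex"] is not None]
mode = Counter(observed).most_common(1)[0][0]    # -> "male"

for r in records:
    if r["sex"] is None:
        r["sex"] = mode    # every missing value silently becomes "male"

print(records)   # the imputed records now skew toward the majority value
```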

This raises a host of questions, according to Stoyanovich, such as: How do we identify ethical issues in our technical systems? What types of “bias bugs” can be resolved with the help of technology? And what are some cases where a technical solution simply won’t do? As challenging as these questions are, Stoyanovich maintains we must find a way to reflect them in how we teach computer science and data science to the next generation of practitioners.

“Virtually all of the departments or centers at Tandon do research and collaborations involving AI in some way, whether artificial neural networks, various other kinds of machine learning, computer vision and other sensors, data modeling, AI-driven hardware, etc.,” says Jelena Kovačević, Dean of the NYU Tandon School of Engineering. “As we rely more and more on AI in everyday life, our curricula are embracing not only the stunning possibilities in technology, but the serious responsibilities and social consequences of its applications.”

Stoyanovich quickly realized as she looked at this issue as a pedagogical problem that professors who were teaching the ethics courses for computer science students were not computer scientists themselves, but instead came from humanities backgrounds. There were also very few people who had expertise in both computer science and the humanities, a fact that is exacerbated by the “publish or perish” motto that keeps professors siloed in their own areas of expertise.

“While it is important to incentivize technical students to do more writing and critical thinking, we should also keep in mind that computer scientists are engineers.  We want to take conceptual ideas and build them into systems,” says Stoyanovich.  “Thoughtfully, carefully, and responsibly, but build we must!”

But if computer scientists need to take on this educational responsibility, Stoyanovich believes that they will have to come to terms with the reality that computer science is in fact limited by the constraints of the real world, like any other engineering discipline.

“My generation of computer scientists was always led to think that we were only limited by the speed of light. Whatever we can imagine, we can create,” she explains. “These days we are coming to better understand how what we do impacts society and we have to impart that understanding to our students.”

Kovačević echoes this cultural shift in how we must start to approach the teaching of AI. Kovačević notes that computer science education at the collegiate level typically keeps the tiller set on skill development and exploration of the technological scope of computer science — and on an unspoken cultural norm in the field that since anything is possible, anything is acceptable. “While exploration is critical, awareness of consequences must be, as well,” she adds.

Once the first hurdle of understanding that computer science has constraints in the real world is met, Stoyanovich argues that we will next have to confront the specious idea that AI is the tool that will lead humanity into some kind of utopia.

“We need to better understand that whatever an AI program tells us is not true by default,” says Stoyanovich. “Companies claim they are fixing bias in the data they present into these AI programs, but it’s not that easy to fix thousands of years of injustice embedded in this data.”

In order to include these fundamentally different approaches to AI and how it is taught, Stoyanovich has created a new course at NYU Tandon entitled Responsible Data Science. This course has now become a requirement for students getting a BA degree in data science at NYU. Later, she would like to see the course become a requirement for graduate degrees as well. In the course, students are taught both “what we can do with data” and, at the same time, “what we shouldn’t do.”

Stoyanovich has also found it exciting to engage students in conversations surrounding AI regulation.  “Right now, for computer science students there are a lot of opportunities to engage with policy makers on these issues and to get involved in some really interesting research,” says Stoyanovich. “It’s becoming clear that the pathway to seeing results in this area is not limited to engaging industry but also extends to working with policy makers, who will appreciate your input.”

In these efforts towards engagement, Stoyanovich and NYU are establishing the Center for Responsible AI, to which IEEE-USA offered its full support last year. One of the projects the Center for Responsible AI is currently engaged in is a new law in New York City to amend its administrative code in relation to the sale of automated employment decision tools.

“It is important to emphasize that the purpose of the Center for Responsible AI is to serve as more than a colloquium for critical analysis of AI and its interface with society, but as an active change agent,” says Kovačević. “What that means for pedagogy is that we teach students to think not just about their skill sets, but their roles in shaping how artificial intelligence amplifies human nature, and that may include bias.”

Stoyanovich notes: “I encourage the students taking Responsible Data Science to go to the hearings of the NYC Committee on Technology.  This keeps the students more engaged with the material, and also gives them a chance to offer their technical expertise.”

The Subtle Effects of Blood Circulation Can Be Used to Detect Deep Fakes

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/computing/software/blook-circulation-can-be-used-to-detect-deep-fakes

You probably have etched in your mind the first time you saw a synthetic video of someone that looked good enough to convince you it was real. For me, that moment came in 2014, after seeing a commercial for Dove Chocolate that resurrected the actress Audrey Hepburn, who died in 1993.

Awe about what image-processing technology could accomplish changed to fear, though, a few years later, after I viewed a video that Jordan Peele and Buzzfeed had produced with the help of AI. The clip depicted Barack Obama saying things he never actually said. That video went viral, helping to alert the world to the danger of faked videos, which have become increasingly easy to create using deep learning.

Dubbed deep-fakes, these videos can be used for various nefarious purposes, perhaps most troublingly for political disinformation. For this reason, Facebook and some other social-media networks prohibit such fake videos on their platforms. But enforcing such prohibitions isn’t straightforward.

Facebook, for one, is working hard to develop software that can detect deep fakes. But those efforts will no doubt just motivate the development of software for creating even better fakes that can pass muster with the available detection tools. That cat-and-mouse game will probably continue for the foreseeable future. Still, some recent research promises to give the upper hand to the fake-detecting cats, at least for the time being.

This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled “FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals,” the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye.

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
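For the curious, here is a minimal sketch of pulling a green-channel PPG signal out of a face video, assuming OpenCV, NumPy, and SciPy are installed. The fixed face box, the single-region average, and the input file name are simplifications and assumptions; FakeCatcher itself works with far richer spatial and temporal PPG features.

```python
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

cap = cv2.VideoCapture("face_clip.mp4")            # hypothetical input file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

greens, box = [], None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if box is None:                                # detect the face once, then hold it fixed
        faces = detector.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if len(faces) == 0:
            continue
        box = faces[0]
    x, y, w, h = box
    greens.append(frame[y:y + h, x:x + w, 1].mean())   # mean green intensity over the face
cap.release()

# Band-pass to plausible heart rates (0.7-4 Hz, i.e. roughly 42-240 bpm).
b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
ppg = filtfilt(b, a, np.array(greens) - np.mean(greens))

freqs = np.fft.rfftfreq(len(ppg), d=1.0 / fps)
peak_bpm = 60 * freqs[np.argmax(np.abs(np.fft.rfft(ppg)))]
print(f"Dominant periodicity: about {peak_bpm:.0f} beats per minute")
```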

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.

That newer work, posted to the arXiv pre-print server on 26 August, was titled “How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals.” In it, the researchers showed that they can distinguish with greater than 90 percent accuracy whether a video was real, or which of four different deep-fake generators (DeepFakes, Face2Face, FaceSwap or NeuralTex) was used to create a bogus video.

Will a newer generation of deep-fake generators someday be able to outwit this physiology-based approach to detection? No doubt, that will eventually happen. But for the moment, knowing that there’s a promising new means available to thwart fraudulent videos warms my deep fake–revealing heart.

Making Absentee Ballot Voting Easier

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/computing/software/making-absentee-ballot-voting-easier

While there is a controversy raging over the legality and security of states like California, Hawaii, Nevada, New Jersey, and Vermont, among others, deciding to send out vote-by-mail (VBM) ballots to every registered voter, there is little controversy over voters applying for absentee ballots from their local election officials. U.S. Attorney General William Barr, for example, who is against the mass mailing of ballots, says, “absentee ballots are fine.” The problem is that applying for an absentee ballot is not always easy or secure, often requiring what might be seen as intrusive, irrelevant, or duplicate personal information to prove voter identity.

For example, Virginia’s Board of Elections online voter portal requires that a person’s driver’s license information and full social security number be provided as proof of identity, whereas the only legally required information to request a ballot is the last four digits of the voter’s social security number. In fact, if you don’t provide your driver’s license number (or don’t have one), you can’t request an absentee ballot through the portal. This is strange, considering that a mailed-in paper absentee ballot application requires only the last four digits of the social security number as an identifier. That basic information, along with the person’s name and address, is sufficient for local officials to determine whether someone is a registered voter. Registering online for an absentee ballot cries out to be streamlined.

Two students attending Thomas Jefferson High School for Science and Technology in Fairfax, Virginia, stepped up to meet this need, not only for Virginia but possibly for other states in the future. Senior Raunak Daga and junior Sumanth Ratna set out to develop an online app called eAbsentee that makes applying for an absentee ballot easy and accessible to everyone. “With us, it’s five clicks, a one-page form that can be done from your phone,” says Raunak.

The app asks for your name, address, the last four digits of your social security number, email, and phone number, as well as a legal attestation that the information you are submitting is truthful, and you are done. Immediately afterwards, Raunak says, “Both the election registrar and applicant receive a confirmation email,” which helps ensure the security of the process. State election boards will often process the electronic application within a day of receipt. Only first-time voters will need to submit a copy of a valid ID with their absentee ballot or ballot application. In Virginia, absentee ballots were sent out beginning this past Friday, the 18th of September.
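For readers curious what a one-page application like this looks like under the hood, here is a hypothetical sketch written with Flask. The route, field names, validation rules, and email stub are my own illustrative choices; none of this is taken from the eAbsentee code base.

```python
# A hypothetical one-page absentee-ballot application endpoint (illustrative only).
from flask import Flask, request

app = Flask(__name__)

REQUIRED_FIELDS = ["name", "address", "ssn_last4", "email", "phone", "attestation"]

@app.route("/apply", methods=["POST"])
def apply_for_ballot():
    form = request.form
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return {"error": f"missing fields: {missing}"}, 400
    if not (form["ssn_last4"].isdigit() and len(form["ssn_last4"]) == 4):
        return {"error": "ssn_last4 must be exactly four digits"}, 400

    # In a real deployment, this is where the application would be forwarded
    # to the local registrar and confirmation emails sent to both parties.
    send_confirmation_stub(form["email"])
    return {"status": "application submitted"}, 200

def send_confirmation_stub(address: str) -> None:
    print(f"Would email a confirmation to {address} and to the registrar.")

if __name__ == "__main__":
    app.run()
```

The appeal of this kind of design is exactly what the students describe: a single short form, validated server-side, with confirmations that leave a record for both voter and registrar.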

Completely online absentee ballot applications were first approved by the Virginia Board of Elections in 2015, after Bill Howell, then the Republican speaker of the Virginia House of Delegates, requested that the board clarify a new state law allowing electronic signatures on absentee ballot requests. Soon afterwards, the state created its online application form.

Shortly afterwards, Raunak, while working for the nonprofit Vote Absentee Virginia, saw how puzzled voters were when they tried to use the state’s portal. He thought the absentee ballot request process could be simplified, so he enlisted fellow student and friend Sumanth. Together, they spent the summer of 2019 developing the app. eAbsentee was officially deployed in September 2019 and was used in last year’s state elections. Some 750 voters used the app to request their absentee ballots.

This year, with Covid-19 making people wary of going to the polls in person and a presidential election stoking voter interest, there is greater motivation for using the app. As of this week, nearly 8,000 Virginia voters have used eAbsentee. That number will likely continue to grow, as several other changes to Virginia election regulations have been made this year. The first change is that an absentee voter no longer needs a pre-approved excuse to request a ballot, as in the past. The second is that the envelope provided for the return of the absentee ballot includes prepaid postage. The third is that the envelopes sent to and from the voter and the local board of elections will carry bar codes that allow both the voter and the board to track their transit through the mail. Raunak and Sumanth told me they were deeply involved with Vote Absentee Virginia in the first two of those efforts to make absentee voting easier.

Raunak and Sumanth, who are both planning to pursue college degrees in computer science or data science, deployed the app on PythonAnywhere, which aids portability. “We built the application from the start with the intention that it be easily deployable on multiple platforms and in multiple locations,” Sumanth says. “Another person could very easily deploy our project in another state, which has been a goal since the beginning.”

If you are a Virginia voter thinking of absentee voting, you might consider using eAbsentee. Those of you in other states, well, maybe keep an eye out for it in the next election cycle.

Open-Source Vote-Auditing Software Can Boost Voter Confidence

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/opensource-voteauditing-software-can-boost-voter-confidence

Election experts were already concerned about the security and accuracy of the 2020 U.S. presidential election. Now, with the ongoing COVID-19 pandemic and the new risk it creates for in-person voting—not to mention the debate about whether mail-in ballots lead to voter fraud—the amount of anxiety around the 2020 election is unprecedented.

“Elections are massively complicated, and they are run by the most OCD individuals, who are process oriented and love color coding,” says Monica Childers, a product manager with the nonprofit organization VotingWorks. “And in a massively complex system, the more you change things, especially at the last minute, the more you introduce the potential for chaos.” But that’s just what election officials are being forced to do.

Most of the conversation around election security focuses on the security of voting machines and preventing interference. But it’s equally important to prove that ballots were correctly counted. If a party or candidate cries foul, states will have to audit their votes to prove there were no miscounts.

VotingWorks has built an open-source vote-auditing software tool called Arlo, and the organization has teamed up with the U.S. Cybersecurity and Infrastructure Security Agency to help states adopt the tool. Arlo helps election officials conduct a risk-limiting audit [PDF], which ensures that the reported results match the actual results. And because it’s open source, all aspects of the software are available for inspection.

There are actually several ways to audit votes. You’re probably most familiar with recounts, a process dictated by law that orders a complete recounting of ballots if an election is very close. But full recounts are rare. More often, election officials will audit the ballots tabulated by a single machine, or verify the ballots cast in a few precincts. However, those techniques don’t give a representative sample of how an entire state may have voted.

This is where a risk-limiting audit excels. The audit takes a random sample of the ballots from across the area undergoing the audit and outlines precisely how the officials should proceed. This includes giving explicit instructions for choosing the ballots at random (pick the fourth box on shelf A and then select the 44th ballot down, for example). It also explains how to document a “chain of custody” for the selected ballots so that it’s clear which auditors handled which ballots.
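To show how such explicit retrieval instructions can be generated reproducibly, here is a minimal sketch that draws a sample from a hypothetical ballot manifest using a publicly announced seed. The manifest contents, the seed format, and the use of Python’s built-in generator are my own assumptions for illustration; this is not Arlo’s actual sampling code.

```python
# Reproducible ballot selection from a manifest and a public seed (illustrative sketch).
import random

def draw_ballots(manifest, seed, sample_size):
    """manifest: {container label: number of ballots it holds}.
    Returns (container, position-within-container) retrieval instructions."""
    # Flatten the manifest into one entry per physical ballot.
    ballots = [(container, pos + 1)
               for container, count in sorted(manifest.items())
               for pos in range(count)]
    rng = random.Random(seed)          # deterministic given the public seed
    return sorted(rng.sample(ballots, sample_size))

manifest = {"Shelf A, Box 4": 250, "Shelf B, Box 1": 310, "Shelf C, Box 2": 198}
for container, position in draw_ballots(manifest, seed="20201103-dice-roll", sample_size=5):
    print(f"Pull ballot #{position} from {container}")
```

Because the draw depends only on the manifest and the seed, any observer who runs the same procedure gets the same retrieval list, which is what makes the selection auditable in its own right.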

The random-number generator that Arlo uses to select the ballots is published online. Anyone can use the tool to select the same ballots to audit and compare their results. The software provides the data-entry system for the teams of auditors entering the ballot results. Arlo will also indicate how likely it is that the entire election was reported correctly.
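The statistic behind “how likely it is that the election was reported correctly” can also be sketched. The rule below is the BRAVO ballot-polling method from the risk-limiting-audit literature, simplified to a two-candidate contest; Arlo supports audits of this general kind, but the bookkeeping here is a simplified illustration, not its implementation.

```python
# Simplified BRAVO-style ballot-polling audit for a two-candidate contest.
def bravo_audit(sampled_votes, reported_winner_share, risk_limit=0.05):
    """sampled_votes: sequence of 'winner' / 'loser' from hand-inspected ballots.
    Returns True if the sample confirms the reported outcome at the risk limit."""
    assert reported_winner_share > 0.5, "only meaningful if a winner was reported"
    test_statistic = 1.0
    for vote in sampled_votes:
        if vote == "winner":
            test_statistic *= 2 * reported_winner_share
        elif vote == "loser":
            test_statistic *= 2 * (1 - reported_winner_share)
        if test_statistic >= 1 / risk_limit:
            return True   # strong evidence the reported winner really won
    return False          # keep sampling, or escalate to a full hand count

# Example: a reported 60/40 split; the audit confirms once enough sampled
# ballots for the reported winner have been inspected.
print(bravo_audit(["winner"] * 30 + ["loser"] * 12, reported_winner_share=0.60))
```

The attraction of this approach is that a comfortable margin can be confirmed with a small hand-inspected sample, while a razor-thin or wrongly reported result forces the audit toward a full recount.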

The technology may not be fancy, but the documentation and the attention to a replicable process are. And that’s what matters most for validating the results of a contested election.

Arlo has been tested in elections in Michigan, Ohio, Pennsylvania, and a few other states. The software isn’t the only way a state or election official can conduct a risk-limiting audit, but it does make the process easier. Childers says Colorado took almost 10 years to set up risk-limiting audits. VotingWorks, using Arlo and its own staff, has helped several states set up these processes in less than a year.

The upcoming U.S. election is dominated by partisanship, but risk-limiting audits have been embraced by both parties. So far, it seems everyone agrees that if your vote gets counted, the government needs to count it correctly.

This article appears in the October 2020 print issue as “Making Sure Votes Count.”

Interactive: The Top Programming Languages 2020

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/static/interactive-the-top-programming-languages-2020