Tag Archives: computing

Microsoft’s Flight Simulator 2020 Blurs the Line Between Tools and Toys

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/software/microsofts-flight-simulator-2020-blurs-the-line-between-tools-and-toys

My nephew recently sent me an email about our latest shared obsession: Microsoft’s Flight Simulator 2020. “Flying a Cessna 152 in this game feels exactly like flying one in real life,” he wrote. And he should know. Growing up next to a small regional airport, he saw private aircraft flying over his home every day, and he learned to fly as soon as his feet were long enough to reach the rudder pedals.

While he relaxes with the game, my experiences with it have been more stressful. Covered with sweat as I carefully adjusted ailerons, trim, and throttle, I worked my way through the how-to-fly lessons, emerging exhausted. “It’s just a simulation,” I keep telling myself. “It doesn’t matter how often I crash.” But the game is so realistic that crashing scares the daylights out of me. So while playing I remain hypervigilant, gripping the controls so tightly that it hurts.

All that realism begins with the cockpit instruments and controls, but it extends well beyond. Players fly over terrain and buildings streamed from Microsoft’s Bing Maps data set, and the inclusion of real-time air traffic control information means players need to avoid the flight paths of actual aircraft. For a final touch of realism, the game also integrates real-time meteorological data, generating simulated weather conditions that mirror those of the real world.

Flight Simulator 2020 purposely mixes a simulation of something imaginary with a visualization of actual conditions. Does that still qualify as a game? I’d argue it’s something new, which is possible only now because of the confluence of fast networks, big data, and cheap but incredibly powerful hardware. Seeing YouTube videos of Flight Simulator 2020 users flying their simulated aircraft into Hurricane Laura, I wonder whether an upgrade to the game will one day allow players to pilot real drones into a future storm’s eye wall.

If that prospect seems far-fetched, consider what’s going on hundreds of kilometers higher up. That’s the realm of Saber Astronautics, whose software can be used to visualize—and manage—the immense number of objects orbiting Earth.

Before Saber, space-mission controllers squinted at numbers on a display screen to judge whether there was a danger from space debris. Now they can work with a visualization that blends observational data with computational simulations, just as Flight Simulator 2020 does. That makes it far easier to track threats and gently nudge the orbits of satellites before they run into a piece of space flotsam, which could turn the kind of cascading collisions of orbital space junk depicted in the film Gravity into a real-life catastrophe.

We’ve now got the data, the networks, and the software to create a unified simulation of Earth, from its surface all the way into space. That could be valuable for entertainment, sure—but also for much more. Weaving together data from weather sensors, telescopes, aircraft traffic control, and satellite tracking could transform the iconic “Whole Earth” image photographed by a NASA satellite in 1967 into a dynamic model, one that could be used for entertaining simulations or to depict goings-on in the real world. It would provide enormous opportunities to explore, to play, and to learn.

It’s often said that what can’t be measured can’t be managed. We need to manage everything from the ground to outer space, for our well-being and for the planet’s. At last, we now have a class of tools—ones that look a lot like toys—to help us do that.

This article appears in the November 2020 print issue as “When Games get Real.”

Measuring Progress in the ‘Noisy’ Era of Quantum Computing

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/computing/hardware/measuring-progress-in-the-noisy-era-of-quantum-computing

Measuring the progress of quantum computers can prove tricky in the era of “noisy” quantum computing technology. One concept, known as “quantum volume,” has become a favored measure among companies such as IBM and Honeywell. But not every company or researcher agrees on its usefulness as a yardstick in quantum computing.

Minsk’s Teetering Tech Scene

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/computing/it/minsks-teetering-tech-scene

Nearly every day, cops in black riot gear move in phalanxes through the streets, swinging batons to clear thousands of chanting protesters, cracking heads, and throwing people to the ground. An often tone-deaf president jails people on political grounds and refuses to leave office in the wake of a disputed election.

It’s a scene from Belarus, a landlocked former Soviet republic where, for the past couple of months, public outcry and the state’s authoritarian response have kept the nation on a razor’s edge.

Also hanging in the balance is the robust Belarusian digital scene, which has flourished over recent years, accounts for perhaps five to six percent of the nation’s economy, and has provided a steady engine for growth. This, in a place which may be better known for mining potash.

Belarus is led by President Alexander Lukashenko, who came to power in 1994. On 9 August, Lukashenko stood for his sixth term in office: This time, as the announced results in his favor topped 80 percent, the vote was widely seen as fixed and people took to the streets of Minsk by the thousands to call for new elections. They met harsh police response.

In the weeks since, the capital’s coders came under increasing pressure and then dialed up pressure of their own. State authorities arrested four employees at sales-process software developer PandaDoc, including the Minsk office director, who, after more than a month in jail, was released on Oct. 11. Belarusian mass protesters organized their response using digital tools. Open letters calling for new elections and the release of political prisoners circulated among tech industry executives, with one gaining 2,500 signatures. That missive came with a warning that conditions could deteriorate to the point where the industry would no longer function.

That would be a huge loss. More than 54,000 IT specialists and 1,500 tech companies called Belarus home as of 2019. Product companies span a broad swath of programming: natural language processing and computational linguistics; augmented reality; mobile technologies and consumer apps; and gaming. A Medellín, Colombia–based news service that covers startups even did a roundup of the machine learning and artificial intelligence development going on in Minsk. According to the report, this activity is worth some $3 billion a year in sales and revenue, with Minsk-built mobile apps drawing in more than a billion users.

Belarus’s tech trade has become vital to the structure of the local economy and its future. The sector showed double-digit growth over the past 10 years, says Dimitar Bogov, regional economist for the European Bank for Reconstruction and Development. “After manufacturing and agriculture, ICT is the biggest sector,” he says. “It is the most important. It is the source of growth during the last several years.”

Though it may seem surprising that the marshy Slavic plains of Belarus would bear digital fruit, it makes sense that computing found roots here. During the mid-1950s, the Soviet Union’s council of ministers wanted to ramp up computer production in the country, with Minsk selected as one of the hubs. It would produce as much as 70 percent of all computers built in the Soviet Union.

Lukashenko’s government itself had a hand in spurring digital growth in recent years by opening a High Tech Park—both a large incubator building in Minsk and a federal legal framework in the country—fertilized by tax breaks and a reduction in red tape. The scene hummed along from just after the turn of the century through the aughts: By 2012, IHS Markit, a business intelligence company that uses in-house digital development as part of its secret sauce, could snap up semantic search engine developers working in a Minsk coding factory by the dozen.

Eight years later, that team is still working in Belarus, but no longer in a brick warehouse adorned by a Lenin mural. They are in a glass office pod complex, neighbors to home furnishing corporates and the national franchise operations for McDonald’s. And despite the global economic downturn wrought by COVID-19, the tech sector in Belarus is even showing growth in 2020, Bogov says. “It grew by more than eight percent. This is less than in previous years, but it is still impressive to show growth during the pandemic.”

But a shadow hangs over all that now. Reports by media outlets including the Wall Street Journal, BBC, and Bloomberg have cited the PandaDoc chief executive and other tech sources as saying the whole sector could shut down.

Though—so far—there is no evidence of a mass exodus, there are reports of some techies leaving Belarus. There are protests every week, but people also go back to work, in a tense and somewhat murky standoff.

“I talk a lot to people in Belarusian IT. It looks like everyone is outraged,” says Sergei Guriev, a political economist at Paris’s Sciences Po Institute. “Even people who do not speak out support the opposition quietly with resources and technology.” Yuri Gurski, chief executive of the co-founding firm Palta and VC investor Haxus, announced he would help employees of the companies Haxus invests in—including the makers of image editors Fabby and Fabby Look, and the ovulation tracker app Flo—to temporarily relocate outside of Belarus if they fear for life and health.

But Youheni Preiherman, a Minsk-based analyst and director of the Minsk Dialogue Council on International Relations, hears a lot of uncertainty. “Some people ask their managers to let them go for the time being until the situation calms down a bit,” he says. “Some companies, on the contrary, are now saying no, we want to make sure we stay—we feel some change is in the air and we want to be here.”

Meanwhile, the Digital Transformation Ministry in Ukraine is already looking to snap up talent sitting on the fence. Former Bloomberg Moscow bureau chief James Brook reported on his Ukraine Business News site that in September, Ukraine retained the Belarusian lawyer who developed the Minsk Hi-Tech Park concept to do the same there. The Ukrainians are sweetening the pot by opening a Web portal to help Belarusian IT specialists wanting to make the move.

The standoff in Belarus could move into a deliberative state with brokered talks over a new constitution and eventual exit for Lukashenko, but analysts say it could also be prone to fast and unexpected moves—for good or for ill. The future direction for Belarus is being written by the week. But with AI engineers and augmented reality developers who had been content in Minsk no longer sure whether to stay or go, the outcome will be about more than just who runs the government. And the results will resound for years to come.

The Problem of Old Code and Older Coders

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/it/the-problem-of-old-code-and-older-coders

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

The coronavirus pandemic has exposed any number of weaknesses in our technologies, business models, medical systems, media, and more. Perhaps none is more exposed than what my guest today calls, “The Hidden World of Legacy IT.” If you remember last April’s infamous call for volunteer COBOL programmers by the governor of New Jersey, when his state’s unemployment and disability benefits systems needed to be updated, that turned out to be just the tip of a ubiquitous multi-trillion-dollar iceberg—yes, trillion with ‘t’—of outdated systems. Some of them are even more important to us than getting out unemployment checks—though that’s pretty important in its own right. Water treatment plants, telephone exchanges, power grids, and air traffic control are just a few of the systems controlled by antiquated code.

In 2005, Bob Charette wrote a seminal article, entitled “Why Software Fails.” Now, fifteen years later, he strikes a similar nerve with another cover story that shines a light at the vast and largely hidden problem of legacy IT. Bob is a 30-year veteran of IT consulting, a fellow IEEE Spectrum contributing editor, and I’m happy to say my good friend as well as my guest today. He joins us by Skype.

Bob, welcome to the podcast.

Bob Charette Thank you, Steven.

Steven Cherry Bob, legacy software, like a middle child or your knee’s meniscus, isn’t something we think about much until there’s a problem. You note that we know more about government systems because the costs are a matter of public record. But are the problems of the corporate world just as bad?

Bob Charette Yes. There’s really not a lot of difference between what’s happening in government and what’s happening in industry. As you mentioned, government is more visible because you have auditors who are looking at failures and [are] publishing reports. But there’s been major problems in airlines and banks and insurance companies—just about every industry that has IT has a problem with legacy systems in one way or another.

Steven Cherry Bob, the numbers are staggering. In the past 10 years, at least $2.5 trillion has been spent trying to replace legacy IT systems, of which some seven hundred and twenty billion dollars was utterly wasted on failed replacement efforts. And that’s before the last of the COBOL generation retires. Just how big a problem is this?

Bob Charette That’s a really good question. The size of the problem really is unknown. We have no clear count of the number of systems that are legacy in government, where we should be able to have a pretty good idea. We have really no insight into what’s happening in industry. The only thing that we do know is that we’re spending trillions of dollars annually in terms of operations and maintenance of these systems, and as you mentioned, we’re spending hundreds of billions per year in trying to modernize them, with large numbers failing. This is one of the things that, when I was doing the research, if you try to find some authoritative number, there just isn’t any there at all.

In fact, a recent report by the Social Security Administration’s Inspector General basically said that even they could not figure out how many systems were actually legacy in Social Security. And in fact, the Social Security Administration didn’t know itself either.

Steven Cherry So some of that is record-keeping problems, some of that is secrecy, especially on the corporate side. A little bit of that might be definitional. So maybe we should step back and ask what the philosophers call the the ti esti [τὸ τί ἐστι] question—the what-is question. Does everybody agree on what legacy IT is? What counts as legacy?

Bob Charette No. And that’s another problem. What happens is there’s different definitions in different organizations and in different government agencies, even in the US government. And no one has a standard definition. About the closest that we come to is that it’s a system that does not meet the business need for some reason. Now, I want to make it very clear: The definition doesn’t say that it has to be old, or past a certain point in time. Nor does it mean that it’s COBOL. There are systems that have been built and are less than 10 years old that are considered legacy because they no longer meet the business need. So the idea is that there’s lots of reasons why it may not meet the business needs—there may be obsolescent hardware, the software may not be usable or feasible to be improved. There may be bugs in the system that just can’t be fixed at any reasonable cost. So there’s a lot of reasons why a system may be termed legacy, but there’s really no general definition that everybody agrees with.

Steven Cherry Bob, states like your Virginia and New York and every state in the Union keep building new roads, seemingly without a thought. A few years ago, a Bloomberg article noted that every mile of fresh new road will one day become a mile of crumbling old road that needs additional attention. Less than half of all road budgets go to maintenance. A Texas study found that the 40-year cost to maintain a $120 million highway was about $800 million. Do we see the same thing in IT? Do we keep building new systems, seemingly without a second thought that we’re going to have to maintain them?

Bob Charette Yes, and for good reason. When we build a system and it actually works, it works usually for a fairly long time. There’s kind of an irony and a paradox. The irony is that the longer these systems live, the harder they are to replace. Paradoxically, because they’re so important, they also don’t receive any attention in terms of spend. Typically, for every dollar that’s spent on developing a system, there’s somewhere between eight and 10 dollars that’s being spent to maintain it over its life. But very few systems actually are retired before their time. Almost every system that I know of, of any size tends to last a lot longer than what the designers ever intended.

Steven Cherry The Bloomberg article noted that disproportionate spending by states on road expansion, at the expense of regular repair—again, less than half of state road budgets are spent on maintenance—has left many roads in poor condition. IT spends a lot of money on maintenance, but a GAO study found that a big part of IT budgets goes to operations and maintenance at the expense of modernization or replacement. And in fact, that ratio is getting higher, so that less and less money is available for upgrades.

Bob Charette Well, there’s two factors at play. One is, it’s easier to build new systems, so there’s money to build new systems, and that’s what we constantly do. So we’re building new IT systems over time, which has, again, proliferated the number of systems that we need to maintain. So as we build more systems, we’re going to eat up more of our funding so that when it comes time to actually modernize these, there’s less money available. The other aspect is, as we build these systems, we don’t build them as standalone systems. These systems are interconnected with others. And so when you interconnect lots of different systems, you’re not maintaining just an individual system—you’re maintaining this system of systems. And that becomes more costly. Because the systems are interconnected, and because they are very costly to replace, we tend to hold onto these systems longer. And so what happens is that the more systems that you build and interconnect, the harder it is later to replace them, because the cost of replacement is huge. And the probability of failure is also huge.

Steven Cherry Finally—and I promise to get off the highway comparison after this—there seems to be a point at which roads, even when well maintained, need to be reconstructed, not just maintained and repaved. And that point for roads is typically the 40-year mark. Are we seeing something like that in IT?

Bob Charette Well, we’re starting to see a radical change. I think that one of the real changes in IT development and maintenance and support has been the notion of what’s called DevOps, this notion of having development and operations being merged into one.

Since the beginning almost of IT systems development, we’ve thought about it as kind of being in two phases. There is the development phase, and then there was a separate maintenance phase. And a maintenance phase could last anywhere from 8, 10, some systems now are 20, 30, 40 years old. The idea now is to say when you develop it, you have to think about how you’re going to support it and therefore, development and maintenance are rolled into one. It’s kind of this idea that software is never done and therefore, hopefully in the future, this problem of legacy in many ways will go away. We’ll still have to worry about at some point where you can’t really support it anymore. But we should have a lot fewer failures, at least in the operational side. And our costs hopefully will also go down.

Steven Cherry So we can have the best of intentions, but we build roads and bridges and tunnels to last for 40 or 50 years, and then seventy-five years later, we realize we still need them and will for the foreseeable future. Are we still going to need COBOL programmers in 2030? 2050? 2100?

Bob Charette Probably. There’s so much coded in COBOL. And a lot of those systems work extremely well. And it’s not the software so much that is the problem. It’s the hardware obsolescence. I can easily foresee COBOL systems being around for another 30, 40, maybe even 50 years. And even then I may be underestimating the longevity of these systems. The same is true in the military, where an aircraft like the B-52, which was supposed to have about a 20- to 25-year life, is now on track to be one hundred years old, with everything in the aircraft replaced over a period of time.

There is research being done by DARPA and others to look at how to extend systems and possibly have a system be around for 100 years. And you can do that if you’re extremely clever in how you design it. And also have this idea of how I’m going to constantly upgrade and constantly repair the system and make it easy to move both the data and the hardware. And so I think, again, we’re starting to see the realization that IT, which at one time—again, systems designers were always thinking about 10 years is great, twenty years is fantastic—that maybe now these systems, core systems, may be around for one hundred years.

Steven Cherry Age and complexity have another consequence: Unlike roads, there’s a cybersecurity aspect to all of this as well.

Bob Charette Yeah, that’s probably the biggest weakness that occurs in new systems, as well as with legacy systems. Legacy systems were never really built with security in mind. And in fact, one of the common complaints even today with new systems is that security isn’t built in; it’s bolted on afterwards, which makes it extremely difficult.

I think security has really come to the fore, especially in the last two or three years where we’ve had this … In fact last year we had over 100 government systems in the United States—local, state and federal systems—that were subject to ransomware attacks and successful ransomware attacks because the attackers focused in on legacy systems, because they were not as well maintained in terms of their security practices as well as the ability to be made secure. So I think security is going to be an ongoing issue into the foreseeable future.

Steven Cherry The distinction between development and operations brings to mind another one, and that is we think of executable software and data as very separate things. That’s the basis of computing architectures ever since John von Neumann. But legacy IT has a problem with data as well as software, doesn’t it?

Bob Charette One of the areas that we didn’t get to explore very deeply in the story, mostly because of space limitations, is the problem of data. Data is one of the most difficult things to move from one system to another. In the story, we talked about a Navy system, a payroll system … The Navy was trying to consolidate 55 systems into one, and they used dozens of programming languages. They have multiple databases. The formats are different. How the data is accessed—what business processes, how they use the data—is different. And when you try to think about how you’re going to move all that information, you have to make sure that the information is relevant and correct. We want to make sure we don’t have dirty data. Those things all need to come together so that when we move to a new system, the data actually is what we want. And in fact, if you take a look at the IRS, the IRS has 60-year-old systems and the reason they have 60-year-old systems is because they have 60-year-old data on millions of companies and millions of—or hundreds of millions of—taxpayers, and trying to move that data to new systems and make sure that you don’t lose it and you don’t corrupt it has been a decades-long problem that they’ve been trying to solve.

Steven Cherry Making sure you don’t lose individuals or duplicate individuals across databases when you merge them.

Bob Charette One of the worst things that you can do is have not only duplicate data, but have data that actually is incorrect and then you just move that incorrect data into a new system.

Steven Cherry Well, Bob, as I said, you did it before with “Why Software Fails” and you’ve done it again with this detailed investigation. Thanks for publishing “The Hidden World of Legacy IT,” and thanks for joining us today.

Bob Charette My pleasure, Steven.

We’ve been speaking with IT consultant Bob Charette about the enormous and still-growing problem of legacy IT systems.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

 

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Protect Electronic Components with Dam & Fill

Post Syndicated from MasterBond original https://spectrum.ieee.org/computing/hardware/protect-electronic-components-with-dam-and-fill

Watch now to see the dam and fill method.

Sensitive electronic components mounted on a circuit board often require protection from exposure to harsh environments. While there are several ways to accomplish this, the dam and fill method offers many benefits. Dam-and-filling entails dispensing the damming material around the area to be encapsulated, thereby restricting the flow of the fill from spreading to other parts of the board.

By using a damming compound such as Supreme 3HTND-2DM-1, you can create a barrier around the component. Then, with a flowable filler material like EP3UF-1, the component can be completely covered for protection.

To start, apply the damming compound, Supreme 3HTND-2DM-1, around the component. Supreme 3HTND-2DM-1 is readily dispensed to create a structurally sound barrier. This material will not run and will cure in place in 5-10 minutes at 300°F, in essence forming a dam.

After the damming compound has been applied and cured, a filling compound such as EP3UF-1 is dispensed to fill the area inside the dam and cover the component to be protected. EP3UF-1 is a specialized, low-viscosity, one-part system with a filler that has ultra-small particle sizes, which enables it to flow even in tiny spaces. This system cures in 10-15 minutes at 300°F and features low shrinkage and high dimensional stability once cured.

Both Supreme 3HTND-2DM-1 and EP3UF-1 are thermally conductive, electrically insulating compounds and are available for use in syringes for automated or manual dispensing.

Despite being a two step process, dam and fill offers the following advantages over glob topping:

  • Flow of the filling compound is controlled and restricted
  • It can be applied to larger sections of the board
  • Filling compound flows better than a glob top, allowing better protection underneath and around the component

The Devil is in the Data: Overhauling the Educational Approach to AI’s Ethical Challenge

Post Syndicated from NYU Tandon School of Engineering original https://spectrum.ieee.org/computing/software/the-devil-is-in-the-data-overhauling-the-educational-approach-to-ais-ethical-challenge

The evolution and wider use of artificial intelligence (AI) in our society is creating an ethical crisis in computer science like nothing the field has ever faced before. 

“This crisis is in large part the product of our misplaced trust in AI in which we hope that whatever technology we denote by this term will solve the kinds of societal problems that an engineering artifact simply cannot solve,” says Julia Stoyanovich,  an Assistant Professor in the Department of Computer Science and Engineering at the NYU Tandon School of Engineering, and the Center for Data Science at New York University. “These problems require human discretion and judgement, and where a human must be held accountable for any mistakes.”

Stoyanovich believes the strikingly good performance of machine learning (ML) algorithms on tasks ranging from game playing, to perception, to medical diagnosis, and the fact that it is often hard to understand why these algorithms do so well and why they sometimes fail, is surely part of the issue. But Stoyanovich is concerned that it is also true that simple rule-based algorithms such as score-based rankers — that compute a score for each job applicant, sort applicants on their score, and then suggest to interview the top-scoring three — can have discriminatory results. “The devil is in the data,” says Stoyanovich.

As an illustration of this point, in a comic book that Stoyanovich produced with Falaah Arif Khan entitled “Mirror, Mirror,” it is made clear that when we ask AI to move beyond games like chess or Go, in which the rules are the same irrespective of a player’s gender, race, or disability status, and instead ask it to perform tasks that allocate resources or predict social outcomes, such as deciding who gets a job or a loan, or which sidewalks in a city should be fixed first, we quickly discover that embedded in the data are social, political, and cultural biases that distort results.

In addition to societal bias in the data, technical systems can introduce additional skew as a result of their design or operation. Stoyanovich explains that if, for example, a job application form has two options for sex, ‘male’ and ‘female,’ a female applicant may choose to leave this field blank for fear of discrimination. An applicant who identifies as non-binary will also probably leave the field blank. But if the system works under the assumption that sex is binary and post-processes the data, then the missing values will be filled in. The most common method for this is to set the field to the value that occurs most frequently in the data, which will likely be ‘male’. This introduces systematic skew in the data distribution, and will make errors more likely for these individuals.
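
To make the mechanism concrete, here is a minimal sketch (not from the article) of the kind of mode imputation Stoyanovich describes, assuming pandas as the tool; the toy applicant data is invented for illustration:

```python
# Minimal sketch of how filling missing values with the most frequent category
# can skew a dataset. The applicant data is invented; pandas is an assumed tool.
import pandas as pd

# Toy applicant data: some applicants leave the binary "sex" field blank.
applicants = pd.DataFrame({
    "applicant_id": [1, 2, 3, 4, 5, 6],
    "sex": ["male", "male", "female", None, "male", None],
})

# Mode imputation: replace every missing value with the most frequent one.
most_frequent = applicants["sex"].mode()[0]          # "male" in this toy data
imputed = applicants["sex"].fillna(most_frequent)

print(imputed.tolist())
# ['male', 'male', 'female', 'male', 'male', 'male']
# The applicants who left the field blank (possibly women or non-binary people
# avoiding disclosure) are now all recorded as 'male', systematically shifting
# the data distribution for exactly those groups.
```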

This example illustrates that technical bias can arise from an incomplete or incorrect choice of data representation. “It’s been documented that data quality issues often disproportionately affect members of historically disadvantaged groups, and we risk compounding technical bias due to data representation with pre-existing societal bias for such groups,” adds Stoyanovich.

This raises a host of questions, according to Stoyanovich, such as: How do we identify ethical issues in our technical systems? What types of “bias bugs” can be resolved with the help of technology? And what are some cases where a technical solution simply won’t do? As challenging as these questions are, Stoyanovich maintains we must find a way to reflect them in how we teach computer science and data science to the next generation of practitioners.

“Virtually all of the departments or centers at Tandon do research and collaborations involving AI in some way, whether artificial neural networks, various other kinds of machine learning, computer vision and other sensors, data modeling, AI-driven hardware, etc.,” says Jelena Kovačević, Dean of the NYU Tandon School of Engineering. “As we rely more and more on AI in everyday life, our curricula are embracing not only the stunning possibilities in technology, but the serious responsibilities and social consequences of its applications.”

As she looked at this issue as a pedagogical problem, Stoyanovich quickly realized that the professors teaching ethics courses for computer science students were not computer scientists themselves, but instead came from humanities backgrounds. There were also very few people with expertise in both computer science and the humanities, a gap that is exacerbated by the “publish or perish” imperative that keeps professors siloed in their own areas of expertise.

“While it is important to incentivize technical students to do more writing and critical thinking, we should also keep in mind that computer scientists are engineers.  We want to take conceptual ideas and build them into systems,” says Stoyanovich.  “Thoughtfully, carefully, and responsibly, but build we must!”

But if computer scientists need to take on this educational responsibility, Stoyanovich believes that they will have to come to terms with the reality that computer science is in fact limited by the constraints of the real world, like any other engineering discipline.

“My generation of computer scientists was always led to think that we were only limited by the speed of light. Whatever we can imagine, we can create,” she explains. “These days we are coming to better understand how what we do impacts society and we have to impart that understanding to our students.”

Kovačević echoes this cultural shift in how we must approach the teaching of AI. She notes that computer science education at the collegiate level typically keeps the tiller set on skill development and exploration of the technological scope of computer science — along with an unspoken cultural norm in the field that since anything is possible, anything is acceptable. “While exploration is critical, awareness of consequences must be, as well,” she adds.

Once the first hurdle of understanding that computer science has constraints in the real world is met, Stoyanovich argues that we will next have to confront the specious idea that AI is the tool that will lead humanity into some kind of utopia.

“We need to better understand that whatever an AI program tells us is not true by default,” says Stoyanovich. “Companies claim they are fixing bias in the data they present into these AI programs, but it’s not that easy to fix thousands of years of injustice embedded in this data.”

In order to include these fundamentally different approaches to AI and how it is taught, Stoyanovich has created a new course at NYU Tandon entitled Responsible Data Science. This course has now become a requirement for students getting a BA degree in data science at NYU. Later, she would like to see the course become a requirement for graduate degrees as well. In the course, students are taught both “what we can do with data” and, at the same time, “what we shouldn’t do.”

Stoyanovich has also found it exciting to engage students in conversations surrounding AI regulation.  “Right now, for computer science students there are a lot of opportunities to engage with policy makers on these issues and to get involved in some really interesting research,” says Stoyanovich. “It’s becoming clear that the pathway to seeing results in this area is not limited to engaging industry but also extends to working with policy makers, who will appreciate your input.”

In these efforts towards engagement, Stoyanovich and NYU are establishing the Center for Responsible AI, to which IEEE-USA offered its full support last year. One of the projects the Center for Responsible AI is currently engaged in is a new law in New York City to amend its administrative code in relation to the sale of automated employment decision tools.

“It is important to emphasize that the purpose of the Center for Responsible AI is to serve as more than a colloquium for critical analysis of AI and its interface with society, but as an active change agent,” says Kovačević. “What that means for pedagogy is that we teach students to think not just about their skill sets, but their roles in shaping how artificial intelligence amplifies human nature, and that may include bias.”

Stoyanovich notes: “I encourage the students taking Responsible Data Science to go to the hearings of the NYC Committee on Technology.  This keeps the students more engaged with the material, and also gives them a chance to offer their technical expertise.”

IBM Envisions the Road to Quantum Computing Like an Apollo Mission

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/computing/hardware/ibms-envisons-the-road-to-quantum-computing-like-an-apollo-mission

At its virtual Quantum Computing Summit last week, IBM laid out its roadmap for the future of quantum computing. To illustrate the enormity of the task ahead of them, Jay Gambetta, IBM Fellow and VP, Quantum Computing, drew parallels between the Apollo missions and the next generation of Big Blue’s quantum computers. 

In a post published on the IBM Research blog, Gambetta said:  “…like the Moon landing, we have an ultimate objective to access a realm beyond what’s possible on classical computers: we want to build a large-scale quantum computer.”

Lofty aspirations can lead to landing humankind on the moon and possibly enabling quantum computers to help solve our biggest challenges such as administering healthcare and managing natural resources. But it was clear from Gambetta’s presentation that it is going to require a number of steps to achieve IBM’s ultimate aim: a 1,121-qubit processor named Condor.

For IBM, it’s been a process that started in the mid-2000s with its initial research into superconducting qubits, which are on its roadmap through at least 2023.

This reliance on superconducting qubits stands in contrast to Intel’s roadmap which depends on silicon spin qubits. It would appear that IBM is not as keen as Intel to have qubits resemble a transistor.

However, there is one big issue for quantum computers based on superconducting qubits: they require extremely cold temperatures of about 20 millikelvin (-273 degrees C). In addition, as the number of superconducting qubits increases, the refrigeration system needs to expand as well. With an eye toward reaching its 1,121-qubit processor by 2023, IBM is currently building an enormous “super fridge” dubbed Goldeneye that will be 3 meters tall and 2 meters wide.

Upon reaching the 1,121-qubit threshold, IBM believes it could lay the groundwork for an entirely new era in quantum computing in which it will become possible to scale to error-corrected, interconnected, 1-million-plus-qubit quantum computers.

“At 1,121 qubits, we expect to start demonstrating real error correction which will result in the circuits having higher fidelity than without error correction,” said Gambetta.

Gambetta says that Big Blue engineers will need to overcome a number of technical challenges to get to 1,121 qubits. Back in early September, IBM made available its 65-qubit Hummingbird processor, a step up from its 27-qubit Falcon processor, which had run a quantum circuit long enough for IBM to declare it had reached a quantum volume of 64. (Quantum volume is a measurement of how many physical qubits there are, how connected they are, and how error prone they may be.)
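
For readers who want something more precise than that parenthetical, quantum volume is, roughly speaking, set by the largest “square” random circuit (equal width and depth) a device can run reliably. A sketch of the usual notation, paraphrased from IBM’s published definition rather than taken from this article:

```latex
% Sketch of the standard quantum-volume definition (paraphrased, not quoted
% from the article):
\[
  \log_2 V_Q \;=\; \max_{m \le n} \, \min\bigl(m,\, d(m)\bigr)
\]
% Here n is the number of qubits on the device, m is the width of a random
% "model" circuit, and d(m) is the largest depth at which width-m circuits
% still pass the benchmark's heavy-output test. A quantum volume of 64 thus
% corresponds to reliably running circuits that are 6 qubits wide and 6
% layers deep.
```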

Another issue is the so-called “fan-out” problems that result when you scale up the number of qubits on a quantum chip. As the qubits increase, you need to add multiple control wires for each qubit.

The issue has become such a concern that quantum computer scientists have adopted Rent’s Rule, which the semiconductor industry defined back in the mid-1960s. E.F. Rent, a scientist at IBM in the 1960s, observed that there was a relationship between the number of external signal connections to a logic block and the number of logic gates in the logic block. Quantum scientists have adopted the terminology to describe their own challenge with the wiring of qubits.
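
For background, Rent’s Rule is conventionally written as the power law below; this is the standard textbook form, not something quoted from IBM’s roadmap materials:

```latex
% Rent's rule in its standard form (general background, not from IBM):
\[
  T \;=\; t \, g^{\,p}
\]
% T - number of external signal connections (terminals) on a logic block
% g - number of logic gates inside the block
% t - average number of terminals per gate
% p - the "Rent exponent," typically around 0.5 to 0.8 for conventional
%     logic; the closer p is to 1, the faster wiring demand grows as the
%     block scales up, which is the scaling worry quantum engineers are
%     borrowing the rule to describe.
```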

IBM plans to address these issues next year when it introduces its 127-qubit Quantum Eagle processor, which will feature through-silicon vias and multi-level wiring to enable the “fan-out” of a large density of classical control signals while protecting the qubits in order to maintain high coherence times.

“Quantum computing will face things similar to the Rent Rule, but with the multi-level wiring and multiplex readout—things we presented in our roadmap which we are already working on—we are demonstrating a path to solving how we scale up quantum processors,” said Gambetta.

It is with this Eagle processor that IBM will introduce concurrent real-time classical computing capabilities that will enable the introduction of a broader family of quantum circuits and codes.

It’s in IBM’s next release, in 2022—the 433-qubit Quantum Osprey system—that some pretty big technical challenges will start looming.

“Getting to a 433-qubit machine will require increased density of cryo-infrastructure and controls and cryo-flex cables,” said Gambetta. “It has never been done, so the challenge is to make sure we are designing with a million-qubit system in mind. To this end, we have already begun fundamental feasibility tests.”

After Osprey comes the Condor and its 1,121-qubit processor. As Gambetta noted in his post: “We think of Condor as an inflection point, a milestone that marks our ability to implement error correction and scale up our devices, while simultaneously complex enough to explore potential Quantum Advantages—problems that we can solve more efficiently on a quantum computer than on the world’s best supercomputers.”

The Subtle Effects of Blood Circulation Can Be Used to Detect Deep Fakes

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/computing/software/blook-circulation-can-be-used-to-detect-deep-fakes

You probably have etched in your mind the first time you saw a synthetic video of someone that looked good enough to convince you it was real. For me, that moment came in 2014, after seeing a commercial for Dove Chocolate that resurrected the actress Audrey Hepburn, who died in 1993.

Awe about what image-processing technology could accomplish changed to fear, though, a few years later, after I viewed a video that Jordan Peele and Buzzfeed had produced with the help of AI. The clip depicted Barack Obama saying things he never actually said. That video went viral, helping to alert the world to the danger of faked videos, which have become increasingly easy to create using deep learning.

Dubbed deep-fakes, these videos can be used for various nefarious purposes, perhaps most troublingly for political disinformation. For this reason, Facebook and some other social-media networks prohibit such fake videos on their platforms. But enforcing such prohibitions isn’t straightforward.

Facebook, for one, is working hard to develop software that can detect deep fakes. But those efforts will no doubt just motivate the development of software for creating even better fakes that can pass muster with the available detection tools. That cat-and-mouse game will probably continue for the foreseeable future. Still, some recent research promises to give the upper hand to the fake-detecting cats, at least for the time being.

This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled “FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals,” the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye.

In particular, video of a person’s face contains subtle shifts in color that result from pulses in blood circulation. You might imagine that these changes would be too minute to detect merely from a video, but viewing videos that have been enhanced to exaggerate these color shifts will quickly disabuse you of that notion. This phenomenon forms the basis of a technique called photoplethysmography, or PPG for short, which can be used, for example, to monitor newborns without having to attach anything to their very sensitive skin.
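
A minimal sketch of that idea follows: track the average green-channel brightness of a face region frame by frame, then band-pass filter it around plausible heart rates. It assumes numpy and scipy and invented helper names, and is an illustration of video-based PPG in general, not the FakeCatcher pipeline:

```python
# Sketch of video-based photoplethysmography: the tiny color shifts caused by
# blood-volume pulses show up as a periodic wobble in the average green channel.
# Illustrative only; not the FakeCatcher code.
import numpy as np
from scipy.signal import butter, filtfilt

def ppg_signal(face_frames: np.ndarray, fps: float) -> np.ndarray:
    """face_frames: array of shape (n_frames, height, width, 3) holding RGB
    crops of the same face region; fps: video frame rate."""
    # Mean green-channel value per frame; pulses modulate it slightly.
    green = face_frames[:, :, :, 1].mean(axis=(1, 2))
    # Keep 0.7-4 Hz (roughly 42-240 beats per minute) and drop the baseline.
    b, a = butter(N=3, Wn=[0.7, 4.0], btype="bandpass", fs=fps)
    return filtfilt(b, a, green - green.mean())

# Example with synthetic frames: a faint 1.2 Hz (72 bpm) flicker buried in noise.
fps = 30.0
t = np.arange(10 * int(fps)) / fps
frames = np.random.rand(t.size, 8, 8, 3) * 5.0
frames[:, :, :, 1] += 0.5 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
pulse = ppg_signal(frames, fps)
print(pulse.shape)  # (300,) -- a cleaned-up waveform whose peaks track the pulse
```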

Deep fakes don’t lack such circulation-induced shifts in color, but they don’t recreate them with high fidelity. The researchers at SUNY and Intel found that “biological signals are not coherently preserved in different synthetic facial parts” and that “synthetic content does not contain frames with stable PPG.” Translation: Deep fakes can’t convincingly mimic how your pulse shows up in your face.

The inconsistencies in PPG signals found in deep fakes provided these researchers with the basis for a deep-learning system of their own, dubbed FakeCatcher, which can categorize videos of a person’s face as either real or fake with greater than 90 percent accuracy. And these same three researchers followed this study with another demonstrating that this approach can be applied not only to revealing that a video is fake, but also to show what software was used to create it.

That newer work, posted to the arXiv pre-print server on 26 August, was titled “How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals.” In it, the researchers showed that they can distinguish with greater than 90 percent accuracy whether the video was real, or which of four different deep-fake generators (DeepFakes, Face2Face, FaceSwap or NeuralTex) was used to create a bogus video.

Will a newer generation of deep-fake generators someday be able to outwit this physiology-based approach to detection? No doubt, that will eventually happen. But for the moment, knowing that there’s a promising new means available to thwart fraudulent videos warms my deep fake–revealing heart.

Making Absentee Ballot Voting Easier

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/computing/software/making-absentee-ballot-voting-easier

While there is a controversy raging over the legality and security of states like California, Hawaii, Nevada, New Jersey, and Vermont, among others, deciding to send out vote-by-mail (VBM) ballots to every registered voter, there is little controversy over voters applying for absentee ballots from their local election officials. U.S. Attorney General William Barr, for example, who is against the mass mailing of ballots, says, “absentee ballots are fine.” The problem is that applying for an absentee ballot is not always easy or secure, often requiring what might be seen as intrusive, irrelevant, or duplicate personal information to prove voter identity.

For example, Virginia’s Board of Elections online voter portal requires that a person’s driver’s license information and full social security number be provided as proof of identity, whereas the only legally required information to request a ballot is the last four digits of the voter’s social security number. In fact, if you don’t provide your driver’s license number (or don’t have one), you can’t request an absentee ballot. This is strange, considering that a mailed-in paper absentee ballot application requires only the last four digits of the social security number as an identifier. This basic information, along with the person’s name and address, is sufficient for local officials to determine whether the applicant is a registered voter or not. Registering online for an absentee ballot cries out to be streamlined.

Two students attending Thomas Jefferson High School for Science and Technology in Fairfax, Virginia, stepped up to meet this need, not only for Virginia but possibly for other states in the future. Senior Raunak Daga and junior Sumanth Ratna set out to develop an online app called eAbsentee that makes the process of applying for an absentee ballot easy and accessible to everyone. “With us, it’s five clicks, a one-page form that can be done from your phone,” says Raunak.

The app asks for your name, address, last four digits of your social security number, email and phone number, as well as a legal attestation of truthfulness of the information you are submitting, and you are done. Immediately afterwards, Raunak says, “Both the election registrar and applicant receive a confirmation email,” which helps ensure security of the process. State election boards will often process the electronic application within a day of receipt.  Only first-time voters will need to submit a copy of a valid ID with their absentee ballot or ballot application. In Virginia, absentee ballots were sent out beginning this past Friday, the 18th of September.
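
As a rough sketch of what such a one-page request form might look like as a small web app, here is a hypothetical Flask handler; the route, field names, and validation are assumptions for illustration, not the actual eAbsentee code:

```python
# Hypothetical sketch of a one-page absentee-ballot request form like the one
# described above. Route, field names, and handling are illustrative only.
from flask import Flask, request

app = Flask(__name__)

REQUIRED_FIELDS = ["name", "address", "ssn_last4", "email", "phone", "attestation"]

@app.route("/request-ballot", methods=["POST"])
def request_ballot():
    form = request.form
    missing = [f for f in REQUIRED_FIELDS if not form.get(f)]
    if missing:
        return {"error": f"missing fields: {', '.join(missing)}"}, 400
    if len(form["ssn_last4"]) != 4 or not form["ssn_last4"].isdigit():
        return {"error": "ssn_last4 must be exactly four digits"}, 400
    # In a real deployment, this is where the application would be forwarded to
    # the local election registrar and confirmation emails sent to both parties.
    return {"status": "application received", "applicant": form["name"]}, 200

if __name__ == "__main__":
    app.run()
```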

Completely online absentee ballot applications were first approved by the Virginia Board of Elections in 2015, after Bill Howell, then the Republican Speaker of the Virginia House of Delegates, requested the board clarify a new state law allowing electronic signatures on absentee ballot requests. Soon afterwards, the state created its online application form.

Shortly afterwards, Raunak, while working for the nonprofit Vote Absentee Virginia, saw how puzzled voters were while trying to use the state’s portal. He thought the absentee ballot request process could be simplified, so he enlisted fellow student and friend Sumanth. Together, they spent the summer of 2019 developing the app. eAbsentee was officially deployed in September 2019 and was used in last year’s state elections. Some 750 voters used the app to request their absentee ballots.

This year, with Covid-19 making people wary of going to the polls in person, and a presidential election stoking voter interest, there is greater motivation for using the app. As of this week, nearly 8,000 Virginia voters have used eAbsentee. That number will likely continue to grow, as several other changes to Virginia election regulations have been made this year. The first change is that an absentee voter doesn’t have to have a pre-approved excuse for requesting a ballot as in the past. A second is that the envelope provided for the return of the absentee ballot includes prepaid postage. The third is that the envelopes sent to and from the voter and the local board of elections will have bar codes to allow both the voter and board to track their transit through the mail. Raunak and Sumanth told me they were deeply involved with Vote Absentee Virginia in the first two efforts to make absentee voting easier.

Raunak and Sumanth, who are both planning to pursue college degrees in computer science or data science, deployed the app on PythonAnywhere which aids portability. “We built the application from the start with the intention that it be easily deployable on multiple platforms and in multiple locations,” Sumanth says. “Another person could very easily deploy our project in another state, which has been a goal since the beginning.”

If you are a Virginia voter thinking of absentee voting, you might consider using eAbsentee. Those of you in other states, well, maybe keep an eye out for it in the next election cycle.

How Estonia’s Management of Legacy IT Has Helped It Weather the Pandemic

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/computing/it/estonia-manages-legacy-it

In “The Hidden World of IT Legacy Systems,” I describe how the Covid-19 pandemic spotlighted the problems legacy IT systems pose for companies, and how they especially affect governments. A recent Bloomberg News story that discussed how government legacy IT systems in Japan are holding back that country’s economic recovery further illustrates the magnitude of the problem.

Japanese economist Yukio Noguchi is quoted in the story as warning that the country is “behind the world by at least 20 years” in administrative technology. The poor shape of Japan’s governmental IT hinders the country’s global competitiveness by restraining private sector technological advances.

This helps explain why, despite being the world’s third-largest economy, Japan is now ranked only 23rd out of 63 countries when it comes to digital competitiveness as measured by the IMD World Competitiveness Center, a Swiss business school. In addition, the frayed IT infrastructure has diminished the potential benefits from the government’s $2.2 trillion pandemic fiscal stimulus, the Bloomberg story states.

On the other hand, Estonia’s IT systems have weathered the pandemic well. According to an article from the World Economic Forum, during the country’s pandemic lockdown, its online government services continued to be readily available. Further, its schools experienced little difficulty supporting digital learning, remote working seems to have been a non-issue, and its health information systems were quickly reconfigured to provide information about newly diagnosed Covid-19 cases in near real time; contact tracers were updated with new case information five times a day, for example.

Why have Estonia’s IT systems been able to handle the pandemic so well? In the words of David Eaves from the Harvard Kennedy School of Government, by being both “lucky and good.” Eaves says Estonia was “lucky” because, after the Baltic country won back its independence from Russia in August 1991 and the last Russian troops left in 1994, the country of 1.3 million people found itself desperately poor. The Russians, Eaves notes, “took everything,” leaving essentially a clean-slate IT and telecommunications environment.

Eaves states Estonia was “good” in that its political leadership was savvy enough to recognize how important modern technology was not only to its future economy but its political stability and independence. Eaves said being poor meant that the country’s leadership could not “afford to make bad decisions,” like richer countries. Estonia began by modernizing its telecommunications infrastructure—mobile first because it was easiest and cheapest—followed by a fiber-optic backbone, and then beginning in 2001, public Wi-Fi areas, which were set up across the country.

The Estonian government realized early on that world-class communications, along with universal access to the Internet, were key to quickly modernizing industry as well as government. Estonia embarked on a program to digitalize government operations and planned the extensive use of the Internet to allow its citizens to communicate and interact with the government. Eaves states that Estonia’s political leadership also understood that to do so successfully required the creation of the necessary legal infrastructure. For example, this meant ensuring the protection of individual privacy, safeguarding personal information, and providing total transparency regarding how personal data would be used. Without these, Estonians would not trust a totally wired government or society, especially after its unpleasant experience of being part of the Soviet Union.

Estonia also wanted to avoid being encumbered with old technologies and the IT system maintenance costs they bring, so it decided to adopt an “eliminate legacy IT” principle. In other words, for systems in the public sector, the government decreed that “IT solutions of material importance must never be older than 13 years.” Once a system reaches ten years of age, it must be reviewed and then replaced before it becomes a teenager. While the 13-year period seems arbitrary, it serves the purpose of a forcing function to ensure existing systems don’t fall into the prevailing twilight world of technology maintenance.

Estonia proudly proclaims that along with 99 percent of its public services being online 24/7, “E-services are only impossible for marriages, divorces and real-estate transactions—you still have to get out of the house for those.” Everything else, from voting to filing taxes, can easily be done online securely and quickly.

This is possible because since 2007, there has been an information “once only” policy, where the government cannot ask citizens to re-enter the same information twice. If your personal information was collected by the census bureau, the hospital will not ask for the same information again, for example. This policy meant different government ministries had to figure out how to share and protect citizen information, which had an added benefit of making their operations very efficient. Estonia’s online tax system is reported to allow people to file their taxes in as little as three minutes.

Scaling up Estonia’s e-Governance ecosystem [PDF] to larger countries might not be easy. However, there is still much to be learned about what an e-government approach can achieve, and which IT legacy modernization strategies might be quickly implemented, Eaves argues.

Yet, even in Estonia, there are a few dark clouds forming in the distance over its IT systems. The government’s Chief Information Officer Siim Sikkut has repeatedly warned that while there has been funding available to build new online capabilities, the country’s IT infrastructure has been chronically under-funded for several years. Sikkut argues that it may be time to start shifting the balance of funding away from new IT initiatives to supporting and replacing existing systems. A September 2019 Organization for Economic Cooperation and Development report indicates that Estonia needs to spend approximately 1.5% of the state budget on its digitalization efforts, but is only currently spending around 1.1% to 1.3%.

Finding more IT modernization funding to make up the shortfall may not be easy, given the pandemic's economic impact on Estonia and the government's other competing e-governance objectives. Despite its e-governance prowess, Estonia is, surprisingly, ranked only 29th by the IMD in global competitiveness, a position the government wants to improve, which will require even more funding of new IT initiatives.

It will be interesting to see over the long run whether Estonia can find the funding for both new IT initiatives and IT modernization, or whether it will fund the former over the latter and end up stumbling into the legacy IT trap, as so many other countries have.

Open-Source Vote-Auditing Software Can Boost Voter Confidence

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/computing/software/opensource-voteauditing-software-can-boost-voter-confidence

Election experts were already concerned about the security and accuracy of the 2020 U.S. presidential election. Now, with the ongoing COVID-19 pandemic and the new risk it creates for in-person voting—not to mention the debate about whether mail-in ballots lead to voter fraud—the amount of anxiety around the 2020 election is unprecedented.

“Elections are massively complicated, and they are run by the most OCD individuals, who are process oriented and love color coding,” says Monica Childers, a product manager with the nonprofit organization VotingWorks. “And in a massively complex system, the more you change things, especially at the last minute, the more you introduce the potential for chaos.” But that’s just what election officials are being forced to do.

Most of the conversation around election security focuses on the security of voting machines and preventing interference. But it’s equally important to prove that ballots were correctly counted. If a party or candidate cries foul, states will have to audit their votes to prove there were no miscounts.

VotingWorks has built an open-source vote-auditing software tool called Arlo, and the organization has teamed up with the U.S. Cybersecurity and Infrastructure Security Agency to help states adopt the tool. Arlo helps election officials conduct a risk-limiting audit [PDF], which ensures that the reported results match the actual results. And because it’s open source, all aspects of the software are available for inspection.

There are actually several ways to audit votes. You’re probably most familiar with recounts, a process dictated by law that orders a complete recounting of ballots if an election is very close. But full recounts are rare. More often, election officials will audit the ballots tabulated by a single machine, or verify the ballots cast in a few precincts. However, those techniques don’t give a representative sample of how an entire state may have voted.

This is where a risk-limiting audit excels. The audit takes a random sample of the ballots from across the area undergoing the audit and outlines precisely how the officials should proceed. This includes giving explicit instructions for choosing the ballots at random (pick the fourth box on shelf A and then select the 44th ballot down, for example). It also explains how to document a “chain of custody” for the selected ballots so that it’s clear which auditors handled which ballots.

The random-number generator that Arlo uses to select the ballots is published online. Anyone can use the tool to select the same ballots to audit and compare their results. The software provides the data-entry system for the teams of auditors entering the ballot results. Arlo will also indicate how likely it is that the entire election was reported correctly.
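
To see why a published random-number generator matters, here is a minimal sketch of seed-based ballot selection in Python. It is not Arlo's actual sampler (Arlo's algorithm and parameters are documented in its own open-source repository); the seed, manifest format, and sample size below are made up for illustration. The point it demonstrates is the one described above: anyone who knows the public seed can reproduce exactly the same sample and check the auditors' work.

```python
import hashlib
import random

def draw_sample(ballot_manifest, public_seed, sample_size):
    """Deterministically select ballots to audit from a public seed.

    ballot_manifest: list of (batch_id, number_of_ballots) tuples
    public_seed:     string announced publicly, e.g. digits from dice rolls
    sample_size:     how many ballots to pull for the audit
    """
    # Expand the manifest into individual ballot positions, e.g. ("Box A", 44).
    ballots = [(batch, i + 1) for batch, count in ballot_manifest
               for i in range(count)]
    # Seed a PRNG from a hash of the public seed so anyone can re-run the draw.
    rng = random.Random(hashlib.sha256(public_seed.encode()).hexdigest())
    return rng.sample(ballots, sample_size)

manifest = [("Box A, Shelf 1", 150), ("Box B, Shelf 1", 200), ("Box C, Shelf 2", 95)]
for batch, position in draw_sample(manifest, "07413 55892 21064", sample_size=5):
    print("Audit ballot #%d in %s" % (position, batch))
```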

The technology may not be fancy, but the documentation and the attention to a replicable process are. And that is what matters most for validating the results of a contested election.

Arlo has been tested in elections in Michigan, Ohio, Pennsylvania, and a few other states. The software isn’t the only way a state or election official can conduct a risk-limiting audit, but it does make the process easier. Childers says Colorado took almost 10 years to set up risk-limiting audits. VotingWorks has been using Arlo and its staff to help several states set up these processes, which has taken less than a year.

The upcoming U.S. election is dominated by partisanship, but risk-limiting audits have been embraced by both parties. So far, it seems everyone agrees that if your vote gets counted, the government needs to count it correctly.

This article appears in the October 2020 print issue as “Making Sure Votes Count.”

Make IoT Devices Certifiably Safe—and Secure

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/networks/make-iot-devices-certifiably-safeand-secure

After unboxing a new gadget, few people stop to consider how things could go horribly wrong when it’s plugged into the wall: A shorted wire could, for example, quickly produce a fire. We trust that our machines will not fail so catastrophically—a trust developed through more than a century of certification processes.

To display the coveted "approved" logo from a certification agency—for example, UL (from a U.S. organization formerly known as Underwriters Laboratories), CE (Conformité Européenne) in Europe, or Australia's Regulatory Compliance Mark—the maker of the device has to pull a production unit off the manufacturing line and send it to the testing laboratory to be poked and prodded. As someone who's been through that process, I can attest that it's slow, detailed, expensive—and entirely necessary. Many retailers won't sell uncertified devices to the public. And for good reason: They could be dangerous.

Sure, certification carries certain costs for both the manufacturer and consumer, but it prevents much larger expenses. It’s now considered so essential that the biggest question these days isn’t whether an electrical product is certified; it’s whether the certification mark is authentic.

Certification assures us we can plug something in without worry that it will electrocute somebody or burn down the house. That’s necessary but, in today’s thoroughly connected era, insufficient. The consequences of plugging a compromised device into a home network are not as catastrophic as shock or fire, but they are still bad—and they’ve gone largely unappreciated.

We need to change our thinking. We need to become far more circumspect when we plug a new device into our networks, asking ourselves if its maker has given as much thought to cybersecurity as to basic electrical safety.

The answer to that question will almost invariably be no. A recent report detailing a security test of home Wi-Fi routers by Germany's Fraunhofer Institute FKIE showed that every unit tested had substantial security flaws, even when upgraded to the latest firmware.

Although security researchers plead with the public to keep the software on their connected devices up-to-date, it appears even that sort of digital hypervigilance isn’t enough. Nor should this burden rest on the consumer’s shoulders. After all, manufacturers don’t expect consumers to do periodic maintenance on their blenders and electric toothbrushes to prevent them from catching fire or causing an electric shock.

The number of connected devices within our homes has grown by an order of magnitude over the last decade, enlarging the attack surfaces available to cyber-miscreants. At some point in the not-too-distant future, the risks will outweigh the benefits. Consumers will then lose their appetites for using such devices at all.

How could we prevent this impending security catastrophe? We can copy what worked once before, crafting a certification process for connected devices, one that tests and prods them and certifies only those that can resist—and stay ahead of—the black hats. A manufacturer does that by designing a device that can be easily and quickly updated—so easily that it can perform important updates unattended. Success here will mean that connected devices will cost more to design, and prices will rise for consumers. But security is never cheap. And the costs of poor security are so much higher.

This article appears in the xDATEx print issue as “Certifiably Secure IoT.”

Co-designing electronics and microfluidics for a cooling boost

Post Syndicated from Prachi Patel original https://spectrum.ieee.org/tech-talk/computing/hardware/codesigning-electronics-and-microfluidics-for-a-cooling-boost

The heat generated by today's densely packed electronics is a costly resource drain. To keep systems at the right temperature for optimal computational performance, data center cooling in the United States consumes as much energy and water as all the residents of the city of Philadelphia. Now, by integrating liquid cooling channels directly into semiconductor chips, researchers hope to reduce that drain, at least for power electronics devices, making them smaller, cheaper, and less energy-intensive.

Traditionally, the electronics and the heat management system are designed and made separately, says Elison Matioli, an electrical engineering professor at École Polytechnique Fédérale de Lausanne in Switzerland. That introduces a fundamental obstacle to improving cooling efficiency since heat has to propagate relatively long distances through multiple materials for removal. In today’s processors, for instance, thermal materials syphon heat away from the chip to a bulky, air-cooled copper heat sink.

For a more energy-efficient solution, Matioli and his colleagues have developed a low-cost process to put a 3D network of microfluidic cooling channels directly into a semiconductor chip. Liquids remove heat better than air, and the idea is to put coolant micrometers away from chip hot spots.

But unlike previously reported microfluidic cooling techniques, he says, “we design the electronics and the cooling together from the beginning.” So the microchannels are right underneath the active region of each transistor device, where it heats up the most, which increases cooling performance by a factor of 50. They reported their co-design concept in the journal Nature today.

Researchers first proposed microchannel cooling back in 1981, and startups such as Cooligy have pursued the idea for processors. But the semiconductor industry is moving from planar devices to 3D ones and towards future chips with stacked multi-layer architectures, which makes cooling channels impractical. “This type of embedded cooling solution is not meant for modern processors and chips, like the CPU,” says Tiwei Wei, who studies electronic cooling solutions at Interuniversity Microelectronics Centre and KU Leuven in Belgium.  Instead, this cooling technology makes the most sense for power electronics, he says.

Power electronics circuits manage and convert electrical energy, and are used widely in computers, data centers, solar panels, and electric vehicles, among other things. They use large-area discrete devices made from wide-bandgap semiconductors like gallium nitride. The power density of these devices has gone up dramatically over the years, which means they have to be “hooked to a massive heat sink,” Matioli says.

First Photonic Quantum Computer on the Cloud

Post Syndicated from Charles Q. Choi original https://spectrum.ieee.org/tech-talk/computing/hardware/photonic-quantum

Quantum computers based on photons may possess key advantages over those based on electrons. To benefit from those advantages, quantum computing startup Xanadu has, for the first time, made a photonic quantum computer publicly available over the cloud.

Whereas classical computers switch transistors either on or off to symbolize data as ones and zeroes, quantum computers use quantum bits or “qubits” that, because of the surreal nature of quantum physics, can be in a state known as superposition where they can act as both 1 and 0. This essentially lets each qubit perform two calculations at once.

If two qubits are quantum-mechanically linked, or entangled, they can help perform 2^2 or four calculations simultaneously; three qubits, 2^3 or eight calculations; and so on. In principle, a quantum computer with 300 qubits could perform more calculations in an instant than there are atoms in the visible universe.
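
That last comparison is easy to check with exact integer arithmetic. The short sketch below assumes the commonly cited rough figure of about 10^80 atoms in the observable universe (an order-of-magnitude estimate, not a precise count):

```python
# A register of n entangled qubits is described by 2**n amplitudes.
states_300_qubits = 2 ** 300
atoms_in_universe = 10 ** 80   # rough, commonly cited estimate

print("2^300 is approximately %.2e" % states_300_qubits)            # about 2.04e+90
print("Ratio to the atom count: %.0e" % (states_300_qubits / atoms_in_universe))
```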

Inside the Hidden World of Legacy IT Systems

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/computing/it/inside-hidden-world-legacy-it-systems

“Fix the damn unemployment system!”

This past spring, tens of millions of Americans lost their jobs due to lockdowns aimed at slowing the spread of the SARS-CoV-2 virus. And untold numbers of the newly jobless waited weeks for their unemployment benefit claims to be processed, while others anxiously watched their bank accounts for an extra US $600 weekly payment from the federal government.

Delays in processing unemployment claims in 19 states—Alaska, Arizona, Colorado, Connecticut, Hawaii, Iowa, Kansas, Kentucky, New Jersey, New York, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Texas, Vermont, Virginia, and Wisconsin—are attributed to problems with antiquated and incompatible state and federal unemployment IT systems. Most of those systems date from the 1980s, and some go back even further.

Things were so bad in New Jersey that Governor Phil Murphy pleaded in a press conference for volunteer COBOL programmers to step up to fix the state's Disability Automated Benefits System. A clearly exasperated Murphy said that when the pandemic passed, there would be a post mortem focused on the question of "how the heck did we get here when we literally needed cobalt [sic] programmers?"

Similar problems have emerged at the federal level. As part of the federal government’s pandemic relief plan, eligible U.S. taxpayers were to receive $1,200 payments from the Internal Revenue Service. However, it took up to 20 weeks to send out all the payments because the IRS computer systems are even older than the states’ unemployment systems, some dating back almost 60 years.

As the legendary investor Warren Buffett once said, “It’s only when the tide goes out that you learn who’s been swimming naked.” The pandemic has acted as a powerful outgoing tide that has exposed government’s dependence on aging legacy IT systems.

But governments aren’t the only ones struggling under the weight of antiquated IT. It is equally easy to find airlines, banks, insurance companies, and other commercial entities that continue to rely on old IT, contending with software or hardware that is no longer supported by the supplier or has defects that are too costly to repair. These systems are prone to outages and errors, vulnerable to cyber­intrusions, and progressively more expensive and difficult to maintain.

Since 2010, corporations and governments worldwide have spent an estimated $35 trillion on IT products and services. Of this amount, about three-quarters went toward operating and maintaining existing IT systems. And at least $2.5 trillion was spent on trying to replace legacy IT systems, of which some $720 billion was wasted on failed replacement efforts.

But it's astonishing how seldom people notice these IT systems, even with companies and public institutions spending hundreds of billions of dollars every year on them. From the time we get up until we go to bed, we interact, often unknowingly, with dozens of IT systems. Our voice-activated digital assistants read the headlines to us before we hop into our cars loaded with embedded processors, some of which help us drive, others of which entertain us as we guzzle coffee brewed by our own robotic baristas. Infrastructure like wastewater treatment plants, power grids, air traffic control, telecommunications services, and government administration depends on hundreds of thousands of unseen IT systems that form another, hidden infrastructure. Commercial organizations rely on IT systems to manage payroll, order supplies, and approve cashless sales, to name but three of thousands of automated tasks necessary to the smooth functioning of a modern economy. Though these systems run practically every aspect of our lives, we don't give them a second thought because, for the most part, they function. It doesn't even occur to us that IT is something that needs constant attention to be kept in working order.

In his landmark study The Shock of the Old: Technology and Global History Since 1900 (Oxford University Press, 2007), British historian David Edgerton claims that although maintenance and repair are central to our relationship with technology, they are “matters we would rather not think about.” As a result, technology maintenance “has lived in a twilight world, hardly visible in the formal accounts societies make of themselves.”

Indeed, the very invisibility of legacy IT is a kind of testament to how successful these systems are. Except, of course, when they’re not.

There’s no formal definition of “legacy system,” but it’s commonly understood to mean a critical system that is out of date in some way. It may be unable to support future business operations; the vendors that supplied the application, operating system, or hardware may no longer be in business or support their products; the system architecture may be fragile or complex and therefore unsuitable for upgrades or fixes; or the finer details of how the system works are no longer understood.

To modernize a computing system or not is a question that bedevils nearly every organization. Given the many problems caused by legacy IT systems, you’d think that modernization would be a no-brainer. But that decision isn’t nearly as straightforward as it appears. Some legacy IT systems end up that way because they work just fine over a long period. Others stagger along because the organization either doesn’t want to or can’t afford to take on the cost and risk associated with modernization.

Obviously, a legacy system that’s critical to day-to-day operations cannot be replaced or enhanced without major disruption. And so even though that system contributes mightily to the organization’s operations, management tends to ignore it and defer modernization. On most days, nothing goes catastrophically wrong, and so the legacy system remains in place.

This “kick the can” approach is understandable. Most IT systems, whether new or modernized, are expensive affairs that go live late and over budget, assuming they don’t fail partially or completely. These situations are not career-enhancing experiences, as many former chief information officers and program managers can attest. Therefore, once an IT system is finally operating reliably, there’s little motivation to plan for its eventual retirement.

What management does demand, however, is for any new IT system to provide a return on investment and to cost as little as possible for as long as possible. Such demands often lead to years of underinvestment in routine maintenance. Of course, those same executives who approved the investment in the new system probably won’t be with the organization a decade later, when that system has legacy status.

Similarly, the developers of the system, who understand in detail how it operates and what its limitations are, may well have moved on to other projects or organizations. For especially long-lived IT systems, most of the developers have likely retired. Over time, the system becomes part of the routine of its users’ daily life, like the office elevator. So long as it works, no one pays much attention to it, and eventually it recedes into the organization’s operational shadows.

Thus does an IT system quietly age into legacy status.

Millions of people every month experience the frustrations and inconveniences of decrepit legacy IT.

U.K. bank customers know this frustration only too well. According to the U.K. Financial Conduct Authority, the nation’s banks reported nearly 600 IT operational and security incidents between October 2017 and September 2018, an increase of 187 percent from a year earlier. Government regulators point to the banks’ reliance on decades-old IT systems as a recurring cause for the incidents.

Airline passengers are equally exasperated. Over the past several years, U.S. air carriers have experienced on average nearly one IT-related outage per month, many of them attributable to legacy IT. Some have lasted days and caused the delay or cancellation of thousands of flights.

Poorly maintained legacy IT systems are also prone to cybersecurity breaches. At the credit reporting agency Equifax, the complexity of its legacy systems contributed to a failure to patch a critical vulnerability in the company’s Automated Consumer Interview System, a custom-built portal developed in the 1970s to handle consumer disputes. This failure led, in 2017, to the loss of 146 million individuals’ sensitive personal information.

Aging IT systems also open the door to crippling ransomware attacks. In this type of attack, a cyberintruder hacks into an IT system and encrypts all of the system data until a ransom is paid. In the past two years, ransomware attacks have been launched against the cities of Atlanta and Baltimore as well as the Florida municipalities of Riviera Beach and Lake City. The latter two agreed to pay their attackers $600,000 and $500,000, respectively. Dozens of state and local governments, as well as school systems and hospitals, have experienced ransomware attacks.

Even if they don’t suffer an embarrassing and costly failure, organizations still have to contend with the steadily climbing operational and maintenance costs of legacy IT. For instance, a recent U.S. Government Accountability Office report found that of the $90 billion the U.S. government spent on IT in fiscal year 2019, nearly 80 percent went toward operation and maintenance of existing systems. Furthermore, of the 7,000 federal IT investments the GAO examined in detail, it found that 5,233 allocated all their funding to operation and maintenance, leaving no monies to modernize. From fiscal year 2010 to 2017, the amount spent on IT modernization dropped by $7.3 billion, while operation and maintenance spending rose by 9 percent. Tony Salvaggio, founder and CEO of CAI, an international firm that specializes in supporting IT systems for government and commercial firms, notes that ever-growing IT legacy costs will continue to eat government’s IT modernization “seed corn.”

While not all operational and maintenance costs can be attributed to legacy IT, the GAO noted that the rise in spending is likely due to supporting obsolete computing hardware—for example, two-thirds of the Internal Revenue Service’s hardware is beyond its useful life—as well as “maintaining applications and systems that use older programming languages, since programmers knowledgeable in these older languages are becoming increasingly rare and thus more expensive.”

Take COBOL, a programming language that dates to 1959. Computer science departments stopped teaching COBOL some decades ago. And yet the U.S. Social Security Administration reportedly still runs some 60 million lines of COBOL. The IRS has nearly as much COBOL programming, along with 20 million lines of assembly code. And, according to a 2016 GAO report, the departments of Commerce, Defense, Treasury, Health and Human Services, and Veterans Affairs are still "using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago."

Given the vast amount of outdated software that’s still in use, the cost of maintaining it will likely keep climbing not only for government, but for commercial organizations, too.

The first step in fixing a massive problem is to admit you have one. At least some governments and companies are finally starting to do just that. In December 2017, for example, President Trump signed the Modernizing Government Technology Act into law. It allows federal agencies and departments to apply for funds from a $150 million Technology Modernization Fund to accelerate the modernization of their IT systems. The Congressional Budget Office originally indicated the need was closer to $1.8 billion per year, but politicians’ concerns over whether the money would be well spent resulted in a significant reduction in funding.

Part of the modernization push by governments in the United States and abroad has been to provide more effective administrative controls, increase the reliability and speed of delivering benefits, and improve customer service. In the commercial sector, by contrast, IT modernization is being driven more by competitive pressures and the availability of newer computing technologies like cloud computing and machine learning.

“Everyone understands now that IT drives organization innovation,” Salvaggio told IEEE Spectrum. He believes that the capabilities these new technologies will create over the next few years are “going to blow up 30 to 40 percent of [existing] business models.” Companies saddled with legacy IT systems won’t be able to compete on the expected rapid delivery of improved features or customer service, and therefore “are going to find themselves forced into a box canyon, unable to get out,” Salvaggio says.

This is already happening in the banking industry. Existing firms are having a difficult time competing with new businesses that are spending most of their IT budgets on creating new offerings instead of supporting legacy systems. For example, Starling Bank in the United Kingdom, which began operations in 2014, offers only mobile banking. It uses Amazon Web Services to host its services and spent a mere £18 million ($24 million) to create its infrastructure. In comparison, the U.K.’s TSB bank, a traditional full-service bank founded in 1810, spent £417 million ($546 million) moving to a new banking platform in 2018.

Starling maintains all its own code and does an average of one software release per day. It can do this because it doesn’t have the intricate connections to myriad legacy IT systems, where every new software release carries a measurable risk of operational failure, according to the U.K.’s bank regulators. Simpler systems mean fewer and shorter IT-related outages. Starling has had only one major outage since it opened, whereas each of the three largest U.K. banks has had at least a dozen apiece over the same period.

Modernization creates its own problems. Take the migration of legacy data to a new system. When TSB moved to its new IT platform in 2018, some 1.9 million online and mobile customers discovered they were locked out of their accounts for nearly two weeks. And modernizing one legacy system often means having to upgrade other interconnecting systems, which may also be legacy. At the IRS, for instance, the original master tax file systems installed in the 1960s have become buried under layers of more modern, interconnected systems, each of which made it harder to replace the preceding system. The agency has been trying to modernize its interconnected legacy tax systems since 1968 at a cumulative cost of at least $20 billion in today’s money, so far with very little success. It plans to spend up to another $2.7 billion on modernization over the next five years.

Another common issue is that legacy systems have duplicate functions. The U.S. Navy is in the process of installing its $167 million Navy Pay and Personnel system, which aims to consolidate 223 applications residing in 55 separate IT systems, including 10 that are more than 30 years old and a few that are more than 50 years old. The disparate systems used 21 programming languages, executing on nine operating systems ranging across 73 data centers and networks.

Such massive duplication and data silos sound ridiculous, but they are shockingly common. Here’s one way it often happens: The government issues a new mandate that includes a requirement for some type of automation, and the policy comes with fresh funding to implement it. Rather than upgrade an existing system, which would be disruptive, the department or agency finds it easier to just create a new IT system, even if some or most of the new system duplicates what the existing system is doing. The result is that different units within the same organization end up deploying IT systems with overlapping functions.

“The shortage of thinking about systems engineering” along with the lack of coordinating IT developments to avoid duplication have long plagued government and corporations alike, Salvaggio says.

The best way to deal with legacy IT is to never let IT become legacy. Growing recognition of legacy IT systems’ many costs has sparked a rethinking of the role of software maintenance. One new approach was recently articulated in Software Is Never Done, a May 2019 report from the U.S. Defense Innovation Board. It argues that software should be viewed “as an enduring capability that must be supported and continuously improved throughout its life cycle.” This includes being able to test, integrate, and deliver improvements to software systems within short periods of time and on an ongoing basis.

Here’s what that means in practice. Currently, software development, operations, and support are considered separate activities. But if you fuse those activities into a single integrated activity—employing what is called DevOps—the operational system is then always “under development,” continuously and incrementally being improved, tested, and deployed, sometimes many times a day.
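
As a minimal sketch of what "always under development" can look like, the hypothetical script below mimics the core gate of such a pipeline: every change is automatically tested and deployed only if the tests pass. Real pipelines are normally expressed in a CI system's own configuration format, and the commands here (a pytest suite and an echo stand-in for deployment) are assumptions for illustration only.

```python
import subprocess
import sys

def run(step_name, command):
    """Run one pipeline stage and stop the whole pipeline on failure."""
    print("==> " + step_name)
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        sys.exit("Pipeline stopped: '%s' failed" % step_name)

# Every pushed change flows through the same automated gate.
run("unit tests", "python -m pytest -q")                 # hypothetical test suite
run("integration tests", "python -m pytest -q tests/integration")
run("deploy", "echo 'deploying build to staging'")        # stand-in for a real deploy step
print("Change is live; the system stays continuously under development.")
```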

DevOps is just one way to keep core IT systems from turning into legacy systems. The U.S. Defense Advanced Research Projects Agency has been exploring another, potentially more effective way, recognizing the longevity of IT systems once implemented.

Since 2015, DARPA has funded research aimed at making software that will be viable for more than 100 years. The Building Resource Adaptive Software Systems (BRASS) program is trying to figure out how to build "long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate," according to program manager Sandeep Neema.

Creating such timeless systems will require a “start from scratch” approach to software design that doesn’t make assumptions about how an IT system should be designed, coded, or maintained. That will entail identifying the logical (libraries, data formats, structures) and physical resources (processing, storage, energy) a software program needs for execution. Such analyses could use advanced AI techniques that discover and make visible an application’s operations and interactions with other applications and systems. By doing so, changes to resources or interactions with other systems, which account for many system failures or inefficient operations, can be actively managed before problems occur. Developers will also need to create a capability, again possibly using AI, to monitor and repair all elements of the execution environment in which the application resides.
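
As a toy illustration of that kind of introspection (and nothing more; it bears no relation to the actual BRASS tooling), the sketch below uses only Python's standard library to list one running process's logical dependencies and a couple of its physical resource demands. The `resource` module it relies on is Unix-only.

```python
import json
import platform
import resource   # Unix-only standard-library module
import sys

def resource_profile():
    """Snapshot the logical and physical resources this process depends on."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "logical": {
            "python": platform.python_version(),
            "loaded_modules": sorted(m for m in sys.modules if "." not in m),
        },
        "physical": {
            "cpu_seconds": usage.ru_utime + usage.ru_stime,
            "peak_memory": usage.ru_maxrss,   # kilobytes on Linux, bytes on macOS
        },
    }

print(json.dumps(resource_profile(), indent=2))
```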

The goal is to be able to update or upgrade applications without the need for extensive intervention by a human programmer, Neema told Spectrum, thereby “buying down the cost of maintenance.”

The BRASS program has funded nine projects, each of which represents different aspects of what a resource-adaptive software system will need to do. Some of the projects involve UAVs, mobile robots, and high-performance computing. The final results of the effort are expected later this year, when the technologies will be released to open-source repositories, industry, and the Defense Department.

Neema says no one should expect BRASS to deliver “a general-purpose software repair capability.” A more realistic outcome is an approach that can work within specific data, software, and system parameters to help the maintainers who oversee those systems to become more efficient and effective. He of course hopes that private companies and other government organizations will build on the BRASS program’s results.

The COVID-19 pandemic has exposed the debilitating consequences of relying on antiquated IT systems for essential services. Unfortunately, that dependence, along with legacy IT’s enormous and increasing costs, will still be with us long after the pandemic has ended. For the U.S. government alone, even a concerted and well-executed effort would take decades to replace the thousands of existing legacy systems. Over that time, current IT systems will also become legacy and themselves require replacement. Given the budgetary impacts of the pandemic, even less money for legacy system modernization may be available in the future across all government sectors.

The problems associated with legacy systems will only worsen as the Internet of Things, with its billions of interconnected computing devices, matures. These devices are already being connected to legacy IT, which will make it even more difficult to replace and modernize those systems. And eventually the IoT devices will become legacy. Just as with legacy systems today, those devices likely won’t be replaced as long as they continue to work, even if they are no longer supported. The potential cybersecurity risk of vast numbers of obsolete but still operating IoT devices is a huge unknown. Already, many IoT devices have been deployed without basic cybersecurity built into them, and this shortsightedness is taking a toll. Cybersecurity concerns compelled the U.S. Food and Drug Administration to recall implantable pacemakers and insulin pumps and the National Security Agency to warn about IoT-enabled smart furniture, among other things of the Internet.

Now imagine a not-too-distant future where hundreds of millions or even billions of legacy IoT devices are deeply embedded into government and commercial offices, schools, hospitals, factories, homes, and even people. Further imagine that their cybersecurity or technical flaws are not being fixed and remain connected to legacy IT systems that themselves are barely supported. In such a world, the pervasive dependence upon increasing numbers of interconnected, obsolete systems will have created something far grimmer and murkier than Edgerton’s twilight world.

This article appears in the September 2020 print issue as “The Hidden World of Legacy IT.”

About the Author

As a risk consultant for businesses and a slew of three-lettered U.S. government agencies, Contributing Editor Robert N. Charette has seen more than his share of languishing legacy IT systems. As a civilian, he’s also been a casualty of a legacy system gone berserk. A few years ago, his bank’s IT system, which he later found out was being upgraded, made an error that was most definitely not in his favor.

He’d gone to an ATM to withdraw some weekend cash. The machine told him that his account was overdrawn. Puzzled, because he knew he had sufficient funds in his account to cover the withdrawal, he had to wait until Monday to contact the bank for an explanation. When he called, the customer service representative insisted that he was indeed overdrawn. This was an understatement, considering that the size of the alleged overdraft might have caused a person less versed in software debacles to have a stroke.

“ ‘You know, you’re overdrawn by [US] $1,229,200,’ ” Charette recalls being told. “I was like, well, that’s interesting, because I don’t have that much money in my bank account.”

The customer service rep then acknowledged it could be an error caused by a computer glitch during a recent systems upgrade. Two days later he received the letter pictured above from his bank, apparently triggered by a check he had written for $55.80. Charette notes that it wasn’t the million-dollar-plus overdraft that triggered the letter, just that last double-nickel drop in the bucket.

The bank never did send a letter apologizing for the inconvenience or explaining the problem, which he believes likely affected other customers. And like so many of the failed legacy-system upgrades—some costing billions, which Charette describes here—it never made the news, either.

Anti-Phishing Testers Put Themselves on the Hook

Post Syndicated from Robert Charette original https://spectrum.ieee.org/riskfactor/computing/it/antiphishing-testers-put-themselves-on-the-hook

Do you want to break into computer networks or steal money from people's bank accounts without doing all the tedious hard work of defeating security systems directly? Then phishing is for you: a convincing email can be all that's required to have victims serve up their passwords or personal information on a platter. With so many people working from home and doing business online thanks to Covid-19, this year is proving to be a phisher's paradise, with a myriad of new opportunities to scam the unsuspecting. Solicitations from fake charities, along with emails purporting to be from government organizations like state unemployment agencies, health agencies, and tax collection agencies, are flooding into people's inboxes.

It's not just individuals who have to worry. Because so many organizations have shifted their employees to remote work, cybercrime targeting "has shown a significant target shift from individuals and small businesses to major corporations, governments and critical infrastructure," according to Interpol. In response, IT departments have ramped up their efforts to stop staffers from giving away the store. But sometimes these efforts have caused unexpected collateral damage.

The urgency IT departments feel is understandable. Interpol predicts that phishing attacks—which already made up 59 percent of Covid-related threats reported to it by member countries—will be ramped up even more in the coming months. And the nature of the threat is evolving: for example, false invitations to videoconference meetings are a phisher’s new favorite for trying to steal network credentials, says the Wall Street Journal.

The human element is central to phishing, so government agencies and corporations have increased their employee phishing-training, including the use of phishing tests. These tests use mock phishing emails and websites, often using the same information contained in real phishing emails as a template, to see how their employees respond. When done well “for educational purposes and not as a punitive ‘gotcha’ exercise, employees can improve their ability to spot” and properly report phishing attacks, states Ryan Hardesty, President of PhishingBox, an anti-phishing training and testing company.

But, Hardesty acknowledges, a delicate balance is required to make the phishing lure attractive without causing knock-on problems. This can be seen in two incidents, in 2009 and again in 2014, involving U.S. federal employees who contributed to the federal government's Thrift Savings Plan (TSP), its version of a 401(k) plan.

The employees received emails, ostensibly from the TSP, claiming that their accounts were at risk unless they submitted their personal account information to a designated website. However, both times the emails were actually part of phishing security tests conducted by different government agencies.

The first time, it was the U.S. Department of Justice that sent out the email; the second time, it was a U.S. Army command. In both cases, multiple employees who received the phishing test emails forwarded them to friends and family in numerous government agencies, which caused widespread concern, if not panic. Furthermore, in both cases, the people involved in the phishing tests did not let the TSP know what they had done, either before or after. Indeed, if they'd given advance warning, TSP lawyers would have immediately sent them cease and desist letters. And the lack of candor afterwards meant that it took weeks of effort by furious TSP officials and multiple government agencies to unravel what happened, determine who was responsible, and put an end to it.

The 2014 U.S. Army phishing test was especially successful in stoking fears because TSP had suffered a breach exposing the personal information of 123,000 members in 2012. The email claimed that TSP accounts had been breached again and members needed to change their passwords.

Other similar incidents have occurred, including one in 2015 in which a Belgian regional government phishing exercise used a supposed booking from the French-Belgian high-speed train operator Thalys as bait (Thalys was not amused), and another in 2018 in which the Michigan Democratic Party conducted a phishing exercise involving a highly sensitive voter database without informing the Democratic National Committee, which, funnily enough, was not amused either.

Mark Henderson, a security specialist with the Internal Revenue Service's Online Fraud Detection and Prevention department, told me that the problem of phishing email tests "going awry" seems to be proliferating. The IRS, for example, has seen an uptick in reports of phishing emails purporting to be from the IRS or Department of the Treasury that are not actual phishing attacks but mock attacks from organizations conducting internal phishing tests. On top of being illegal—Henderson points out that phishing emails are prohibited from using the IRS name, logo, or insignia in a manner that suggests association with or endorsement by the IRS—they can cause undue distress to those being tested, as well as increase the administrative workload for the IRS and Department of the Treasury and so divert attention from real threats.

While there aren't publicly available statistics on phishing exercises that create collateral damage, I suspect that many other U.S. government organizations, such as the Centers for Disease Control and Prevention, the Department of Homeland Security, and the Food and Drug Administration, are experiencing the same problem. State and local governments are almost certainly dealing with phishing test spillover effects, as are governmental organizations abroad.

Admittedly, it is highly tempting to use Covid-19-related issues as phishing lures, because these issues are on everyone's mind. If you are a U.S. taxpayer still awaiting your economic impact payment, any email that looks like it might be from the IRS will immediately get your attention. (Note: No one from the IRS will reach out to you by phone, email, mail, or in person asking for any kind of information to complete your economic impact payment.)

The security industry has not come to a consensus on the sensitivities regarding pandemic-related bait. Cofense, an anti-phishing company, declared in March that it decided to remove all Covid-19-themed phishing templates from its repositories, and called on the anti-phishing industry to do the same. However, other anti-phishing companies took issue with that request. Perry Carpenter, KnowBe4’s strategy officer, wrote that with the rapid acceleration of phishing attacks they were seeing, phishing security testing needed to be ramped up. In fact, Carpenter argued that “not conducting phishing training during this time amounts to negligence.”

There are no accepted industry standards yet for conducting phishing exercises, although the U.K. National Cyber Security Centre has published an abundance of practical guidance on what to do and what not to do in conducting these sorts of tests. It has also published very useful information on how organizations can reduce phishing attacks, which is really the first line of defense.

Hardesty, too, believes that phishing security tests should continue, but only if they are well considered in light of Covid-19 sensibilities and have an objective of education, not shaming. He makes the appropriate rules of engagement clear to PhishingBox clients, such as not using the IRS as bait in their phishing exercises. Most clients are careful, but when they're not, it can spark a call from the IRS. As Hardesty puts it, "You never want a call from the IRS concerning a phishing exercise that originated on your platform."

What Intel Is Planning for The Future of Quantum Computing: Hot Qubits, Cold Control Chips, and Rapid Testing

Post Syndicated from Samuel K. Moore original https://spectrum.ieee.org/tech-talk/computing/hardware/intels-quantum-computing-plans-hot-qubits-cold-control-chips-and-rapid-testing

Quantum computing may have shown its “supremacy” over classical computing a little over a year ago, but it still has a long way to go. Intel’s director of quantum hardware, Jim Clarke, says that quantum computing will really have arrived when it can do something unique that can change our lives, calling that point “quantum practicality.” Clarke talked to IEEE Spectrum about how he intends to get silicon-based quantum computers there:

IEEE Spectrum: Intel seems to have shifted focus from quantum computers that rely on superconducting qubits to ones with silicon spin qubits. Why do you think silicon has the best chance of leading to a useful quantum computer?

Jim Clarke: It’s simple for us… Silicon spin qubits look exactly like a transistor. … The infrastructure is there from a tool fabrication perspective. We know how to make these transistors. So if you can take a technology like quantum computing and map it to such a ubiquitous technology, then the prospect for developing a quantum computer is much clearer.

I would concede that today silicon spin-qubits are not the most advanced quantum computing technology out there. There has been a lot of progress in the last year with superconducting and ion trap qubits.

But there are a few more things: A silicon spin qubit is the size of a transistor—which is to say it is roughly 1 million times smaller than a superconducting qubit. So if you take a relatively large superconducting chip, and you say “how do I get to a useful number of qubits, say 1000 or a million qubits?” all of a sudden you’re dealing with a form factor that is … intimidating.

We’re currently making server chips with billions and billions of transistors on them. So if our spin qubit is about the size of a transistor, from a form-factor and energy perspective, we would expect it to scale much better.

Spectrum: What are silicon spin-qubits and how do they differ from competing technology, such as superconducting qubits and ion trap systems?

Clarke: In an ion trap you are basically using a laser to manipulate a metal ion through its excited states where the population density of two excited states represents the zero and one of the qubit. In a superconducting circuit, you are creating the electrical version of a nonlinear LC (inductor-capacitor) oscillator circuit, and you’re using the two lowest energy levels of that oscillator circuit as the zero and one of your qubit. You use a microwave pulse to manipulate between the zero and one state.

We do something similar with the spin qubit, but it’s a little different. You turn on a transistor, and you have a flow of electrons from one side to another. In a silicon spin qubit, you essentially trap a single electron in your transistor, and then you put the whole thing in a magnetic field [using a superconducting electromagnet in a refrigerator]. This orients the electron to either spin up or spin down. We are essentially using its spin state as the zero and one of the qubit.

That would be an individual qubit. Then with very good control, we can get two separated electrons in close proximity and control the amount of interaction between them. And that serves as our two-qubit interaction.

So we’re basically taking a transistor, operating at the single electron level, getting it in very close proximity to what would amount to another transistor, and then we’re controlling the electrons.

Spectrum: Does the proximity between adjacent qubits limit how the system can scale?

Clarke: I’m going to answer that in two ways. First, the interaction distance between two electrons to provide a two-qubit gate is not asking too much of our process. We make smaller devices every day at Intel. There are other problems, but that’s not one of them.

Typically, these qubits operate on a sort of a nearest neighbor interaction. So you might have a two-dimensional grid of qubits, and you would essentially only have interactions between one of its nearest neighbors. And then you would build up [from there]. That qubit would then have interactions with its nearest neighbors and so forth. And then once you develop an entangled system, that’s how you would get a fully entangled 2D grid. [Entanglement is a condition necessary for certain quantum computations.]
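
A short sketch makes the nearest-neighbor constraint concrete: in an N-by-N grid, each qubit couples directly only to the qubits immediately beside it, so operations between distant qubits must be composed from chains of neighboring interactions. The layout below is generic and illustrative, not a description of any particular Intel device.

```python
def nearest_neighbor_pairs(n):
    """Return the allowed two-qubit couplings for an n-by-n grid of qubits."""
    pairs = []
    for row in range(n):
        for col in range(n):
            if col + 1 < n:                      # horizontal neighbor
                pairs.append(((row, col), (row, col + 1)))
            if row + 1 < n:                      # vertical neighbor
                pairs.append(((row, col), (row + 1, col)))
    return pairs

grid = nearest_neighbor_pairs(3)
print(len(grid), "direct couplings in a 3x3 grid")   # 12
print(grid[:4])
```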

Spectrum: What are some of the difficult issues right now with silicon spin qubits?

Clarke: By highlighting the challenges of this technology, I’m not saying that this is any harder than other technologies. I’m prefacing this, because certainly some of the things that I read in the literature would suggest that  qubits are straightforward to fabricate or scale. Regardless of the qubit technology, they’re all difficult.

With a spin qubit, we take a transistor that normally has a current of electrons going through it, and we operate it at the single-electron level. This is the equivalent of taking a single electron, placing it into a sea of several hundred thousand silicon atoms, and still being able to manipulate whether it's spin up or spin down.

So we essentially have a small amount of silicon, which we'll call the channel of our transistor, and we're controlling a single electron within that piece of silicon. The challenge is that silicon, even a single crystal, may not be as clean as we need it. Some of the defects—these defects can be extra bonds, they can be charge defects, they can be dislocations in the silicon—can all impact that single electron that we're studying. This is really a materials issue that we're trying to solve.

Spectrum: Just briefly, what is coherence time and what’s its importance to computing?

Clarke: The coherence time is the window during which information is maintained in the qubit. So, in the case of a silicon spin qubit, it’s how long before that electron loses its orientation, and randomly scrambles the spin state. It’s the operating window for a qubit.

Now, all of the qubit types have what amounts to coherence times. Some are better than others. The coherence times for spin qubits, depending on the type of coherence time measurement, can be on the order of milliseconds, which is pretty compelling compared to other technologies.

What needs to happen [to compensate for brief coherence times] is that we need to develop an error correction technique. That’s a complex way of saying we’re going to put together a bunch of real qubits and have them function as one very good logical qubit.
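
The classical intuition behind that idea is a repetition code: encode one logical bit in several noisy physical copies and recover it by majority vote. Real quantum error correction is far subtler, since qubits cannot simply be copied and read out this way, so the sketch below (with an assumed per-copy error probability) is only an illustration of why many imperfect physical qubits can behave like one much better logical qubit.

```python
import random

def logical_error_rate(physical_error, copies=3, trials=100_000):
    """Estimate how often a majority vote over noisy copies returns the wrong bit."""
    errors = 0
    for _ in range(trials):
        flips = sum(random.random() < physical_error for _ in range(copies))
        if flips > copies // 2:          # majority of copies corrupted
            errors += 1
    return errors / trials

p = 0.05   # assumed per-copy error probability, chosen only for illustration
for copies in (1, 3, 5, 7):
    print("%d copies -> logical error rate ~ %.4f" % (copies, logical_error_rate(p, copies)))
```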

Spectrum: How close is that kind of error correction?

Clarke: It was one of the four items I wrote about earlier that really need to happen for us to realize a quantum computer. The first is we need better qubits. The second is we need better interconnects. The third is we need better control. And the fourth is we need error correction. We still need improvements on the first three before we're really going to get, in a fully scalable manner, to error correction.

You will see groups starting to do little bits of error correction on just a few qubits. But we need better qubits and we need a more efficient way of wiring them up and controlling them before you’re really going to see fully fault-tolerant quantum computing.

Spectrum: One of the improvements to qubits recently was the development of “hot” silicon qubits. Can you explain their significance?

Clarke: Part of it equates to control.

Right now you have a chip at the bottom of a dilution refrigerator, and then, for every qubit, you have several wires that go from there all the way outside of the fridge. And these are not small wires; they're coax cables. And so from a form factor perspective and a power perspective—each of these wires dissipates power—you really have a scaling problem.

One of the things that Intel is doing is that we are developing control chips. We have a control chip called Horse Ridge that’s a conventional CMOS chip that we can place in the fridge in close proximity to our qubit chip. Today that control chip sits at 4 kelvins and our qubit chip is at 10 millikelvins and we still have to have wires between those two stages in the fridge.

Now, imagine if we can operate our qubit slightly warmer. And by slightly warmer, I mean maybe 1 kelvin. All of a sudden, the cooling capacity of our fridge becomes much greater. The cooling capacity of our fridge at 10 millikelvins is roughly a milliwatt. That's not a lot of power. At 1 kelvin, it's probably a couple of watts. So, if we can operate at higher temperatures, we can then place control electronics in very close proximity to our qubit chip.

By having hot qubits we can co-integrate our control with our qubits, and we begin to solve some of the wiring issues that we’re seeing in today’s early quantum computers.

Spectrum: Are hot qubits structurally the same as regular silicon spin qubits?

Clarke: Within silicon spin qubits there are several different types of materials. Some are what I would call silicon MOS-type qubits—very similar to today's transistor materials. In other silicon spin qubits, you have silicon that's buried below a layer of silicon germanium. We'll call that a buried-channel device. Each has its benefits and challenges.

We've done a lot of work with TU Delft on a certain type of [silicon MOS] material system, which is a little different from what most in the community are studying [and lets us] operate the system at a slightly higher temperature.

I loved the quantum supremacy work. I really did. It’s good for our community. But it’s a contrived problem, on a brute force system, where the wiring is a mess (or at least complex).

What we’re trying to do with the hot qubits and with the Horse Ridge chip is put us on a path to scaling that will get us to a useful quantum computer that will change your life or mine. We’ll call that quantum practicality.

Spectrum: What do you think you’re going to work on next most intensely?

Clarke: In other words, “What keeps Jim up at night?”

There are a few things. The first is time-to-information. Across most of the community, we use these dilution refrigerators. And the standard way [to perform an experiment] is: You fabricate a chip; you put it in a dilution refrigerator; it cools down over the course of several days; you experiment with it over the course of several weeks; then you warm it back up and put another chip in.

Compare that to what we do for transistors: We take a 300-millimeter wafer, put it on a probe station, and after two hours we have thousands and thousands of data points across the wafer that tells us something about our yield, our uniformity, and our performance.

That doesn't really exist in quantum computing. So we asked, "Is there a way—at slightly higher temperatures—to combine a probe station with a dilution refrigerator?" Over the last two years, Intel has been working with two companies in Finland [Bluefors Oy and Afore Oy] to develop what we call the cryoprober. And this is just coming online now. We've been doing an impressive job of installing this massive piece of equipment in the complete absence of field engineers from Finland due to the coronavirus.

What this will do is speed up our time-to-information by a factor of up to 10,000. So instead of wire bonding a single sample, putting it in the fridge, and taking a week, or even a few days, to study it, we're going to be able to put a 300-millimeter wafer into this unit and, over the course of an evening, step and scan. So we're going to get a tremendous increase in throughput. I would say a 100X improvement. My engineers would say 10,000. I'll leave that as a challenge for them to impress me beyond the 100.

Here’s the other thing that keeps me up at night. Prior to starting the Intel quantum computing program, I was in charge of interconnect research in Intel’s Components Research Group. (This is the wiring on chips.) So, I’m a little less concerned with the wiring into and out of the fridge than I am just about the wiring on the chip.

I’ll give an example:  An Intel server chip has probably north of 10 billion transistors on a single chip. Yet the number of wires coming off that chip is a couple of thousand. A quantum computing chip has more wires coming off the chip than there are qubits. This was certainly the case for the Google [quantum supremacy] work last year. This was certainly the case for the Tangle Lake chip that Intel manufactured in 2018, and it’s the case with our spin qubit chips we make now.

So we’ve got to find a way to make the interconnects more elegant. We can’t have more wires coming off the chip than we have devices on the chip. It’s ineffective.

This is something the conventional computing community discovered in the late 1960s with Rent’s Rule [which empirically relates the number of interconnects coming out of a block of logic circuitry to the number of gates in the block]. Last year we published a paper with Technical University Delft on the quantum equivalent of Rent’s Rule. And it talks about, amongst other things, the Horse Ridge control chip, the hot qubits, and multiplexing.
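
For context, Rent’s Rule is usually written as a simple power law. The form below is the classic empirical relationship for conventional logic, not the specific quantum variant derived in the Intel-Delft paper:

    T = t \cdot G^{p}

Here T is the number of terminals (external wires) on a block of circuitry, G is the number of gates or devices in the block, t is the average number of terminals per device, and the Rent exponent p, typically somewhere around 0.5 to 0.8 for conventional logic, captures how quickly wiring demand grows with block size. Today’s qubit chips, with at least one wire per qubit, effectively sit at a Rent exponent of 1 or more, which is exactly the scaling Clarke describes as untenable.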

We have to find a way to multiplex at low temperatures. And that will be hard. You can’t have a million-qubit quantum computer with two million coax cables coming out of the top of the fridge.

Spectrum: Doesn’t Horse Ridge do multiplexing?

Clarke: It has multiplexing. The second generation will have a little bit more. The form factor of the wires [in the new generation] is much smaller, because we can put it in closer proximity to the [quantum] chip.

So if you kind of combine everything I’ve talked about: a package that has a classical control chip (call it a future version of Horse Ridge) sitting right next to, and in the same package as, a quantum chip, both operating at a similar temperature and making use of very small interconnect wires and multiplexing. That would be the vision.

Spectrum: What’s that going to require?

Clarke: It’s going to require a few things. It’s going to require improvements in the operating temperature of the control chip. It’s probably going to require some novel implementations of the packaging so there isn’t a lot of thermal cross talk between the two chips. It’s probably going to require even greater cooling capacity from the dilution refrigerator. And it’s probably going to require some qubit topology that facilitates multiplexing.

Spectrum: Given the significant technical challenges you’ve talked about here, how optimistic are you about the future of quantum computing?

Clarke: At Intel, we’ve consistently maintained that we are early in the quantum race. Every major change in the semiconductor industry has happened on the decade timescale and I don’t believe quantum will be any different. While it’s important to not underestimate the technical challenges involved, the promise and potential are real. I’m excited to see and participate in the meaningful progress we’re making, not just within Intel but the industry as a whole. A computing shift of this magnitude will take technology leaders, scientific research communities, academia, and policy makers all coming together to drive advances in the field, and there is tremendous work already happening on that front across the quantum ecosystem today.

Three Ways to Hack a Printed Circuit Board

Post Syndicated from Samuel H. Russ original https://spectrum.ieee.org/computing/hardware/three-ways-to-hack-a-printed-circuit-board

In 2018, an article in Bloomberg ­Businessweek made the stupendous assertion that Chinese spy services had created back doors to servers built for Amazon, Apple, and others by inserting millimeter-size chips into circuit boards. 

This claim has been roundly and specifically refuted by the companies involved and by the U.S. Department of Homeland Security. Even so, the possibility of carrying out such a hack is quite real, and there have been more than a dozen documented examples of such system-level attacks.

We know much about malware and counterfeit ICs, but the vulnerabilities of the printed circuit board itself are only now starting to get the attention they deserve. We’ll take you on a tour of some of the best-known weak points in printed-circuit-board manufacturing. Fortunately, the means to shore up those points are relatively straightforward, and many of them simply amount to good engineering practice.

In order to understand how a circuit board can be hacked, it’s worth reviewing how boards are made. Printed circuit boards typically contain thousands of components. (Before they are populated with components, they are also known as printed wiring boards, or PWBs.) The purpose of the PCB is, of course, to provide the structural support to hold the components in place and to provide the wiring needed to connect signals and power to the components.

PCB designers start by creating two electronic documents, a schematic and a layout. The schematic describes all the components and how they are interconnected. The layout depicts the finished bare board and locates objects on the board, including both components and their labels, called reference designators. (The reference designator is extremely important—most of the assembly process, and much of the design and procurement process, is tied to reference designators.)

Not all of a PCB is taken up by components. Most boards include empty component footprints, called unpopulated components. This is because boards often contain extra circuitry for debugging and testing or because they are manufactured for several purposes, and therefore might have versions with more or fewer components.

Once the schematic and layout have been checked, the layout is converted to a set of files. The most common file format is called “Gerber,” or RS-274X. It consists of ASCII-formatted commands that direct shapes to appear on the board. A second ASCII-formatted file, called the drill file, shows where to place holes in the circuit board. The manufacturer then uses the files to create masks for etching, printing, and drilling the boards. Then the boards are tested.
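
To give a sense of how plain and editable these files are, here is a minimal sketch in Python that writes a tiny Gerber-style fragment: a format header, one aperture definition, a comment, and a single flashed pad. The output filename is a placeholder, and the fragment is purely illustrative; it is nowhere near a manufacturable design.

    # write_gerber_sketch.py -- illustrative only
    GERBER_LINES = [
        "%FSLAX26Y26*%",           # coordinate format: 2 integer digits, 6 decimal digits
        "%MOMM*%",                 # units: millimeters
        "%ADD10C,0.300000*%",      # define aperture D10: a 0.3-mm circle
        "G04 example pad flash*",  # G04 marks a human-readable comment
        "D10*",                    # select aperture D10
        "X10000000Y5000000D03*",   # flash the aperture at (10 mm, 5 mm)
        "M02*",                    # end of file
    ]

    with open("example.gbr", "w") as f:    # hypothetical output path
        f.write("\n".join(GERBER_LINES) + "\n")

Every command is ordinary ASCII text, which is exactly why an altered file can look entirely legitimate to casual inspection.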

Next, “pick and place” machines put surface-mount components where they belong on the board, and the PCBs pass through an oven that melts all the solder at once. Through-hole components are placed, often by hand, and the boards pass over a machine that applies solder to all of the through-hole pins. It’s intricate work: An eight-pin, four-resistor network can cover just 2 millimeters by 1.3 mm, and some component footprints are as small as 0.25 mm by 0.13 mm. The boards are then inspected, tested, repaired as needed, and assembled further into working products.

Attacks can be made at every one of these design steps. In the first type of attack, extra components are added to the schematic. This attack is arguably the hardest to detect because the schematic is usually regarded as the most accurate reflection of the designer’s intent and thus carries the weight of authority.

A variation on this theme involves adding an innocuous component to the schematic, then using a maliciously altered version of the component in production. This type of attack, in which seemingly legitimate components have hardware Trojans, is outside the scope of this article, but it should nevertheless be taken very seriously.

In either case, the countermeasure is to review the schematic carefully, something that should be done in any case. One important safeguard is to run it by employees from other design groups, using their “fresh eyes” to spot an extraneous component.

In a second type of attack, extra components can be added to the layout. This is a straightforward process, but because there are specific process checks to compare the layout to the schematic, it is harder to get away with: At a minimum, a layout technician would have to falsify the results of the comparison. And combating this form of attack is simple: Have an engineer—or, better, a group of engineers—observe the layout-to-schematic comparison step and sign off on it.

In a third type of attack, the Gerber and drill files can be altered. There are three important points about the Gerber and drill files from a security perspective: First, they’re ASCII-formatted, and therefore editable in very common text-editing tools; second, they’re human-readable; and third, they contain no built-in cryptographic protections, such as signatures or checksums. Since a complete set of Gerber files can run to hundreds of thousands of lines, this is a very efficient mode of attack, one that is easily missed.

In one example, an attacker could insert what appears to be an electrostatic-discharge diode. The design files for this example circuit are made up of 16 Gerber and drill files. Of the 16 files, nine would need altering; of those nine, seven would vary in a total of 79 lines, and the remaining two would need changes in about 300 lines each. Those two files specify the power and ground planes. A more skilled attack, such as one adding vertical connections called vias, would dramatically reduce the number of lines that needed rewriting.

Unprotected Gerber files are vulnerable to even a single bad actor who sneaks in at any point between the designing company and the production of the photolithographic masks. As the Gerber files are based on an industry standard, acquiring the knowledge to make the changes is relatively straightforward.

One might argue that standard cryptographic methods of protecting files would protect Gerber files, too. While it is clear that such protections would guard a Gerber file in transit, it is unclear whether those protections hold when the files reach their destination. The fabrication of circuit boards almost always occurs outside the company that designs them. And, while most third-party manufacturers are reputable companies, the steps they take to protect these files are usually not documented for their customers.

One way to protect files is to add a digital signature, cryptographic hash, or some other sort of authentication code to the internal contents of the file in the form of a comment. However, this protection is effective only if the mask-making process authenticates the file quite late in the process; ideally, the machines that create the photolithography masks should have the ability to authenticate a file. Alternatively, the machine could retain a cryptographic hash of the file that was actually used to create the mask, so that the manufacturer can audit the process. In either case, the mask-making machine would itself require secure handling.
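
As a rough illustration of that comment-based approach, the sketch below appends an HMAC-SHA256 authentication code to a Gerber file as an ordinary G04 comment and later verifies it. It is only a sketch under stated assumptions: the "G04 AUTH:" convention is invented for the example, key distribution and storage are out of scope, and it presumes the signed file ends with a newline so the comment lands on its own line.

    import hashlib
    import hmac

    TAG = "G04 AUTH:"   # assumed convention: the code rides along as a Gerber comment

    def sign_gerber(path, key: bytes):
        """Append an HMAC-SHA256 of the file's current contents as a trailing comment."""
        with open(path, "rb") as f:
            body = f.read()
        code = hmac.new(key, body, hashlib.sha256).hexdigest()
        with open(path, "ab") as f:
            f.write(f"{TAG}{code}*\n".encode())

    def verify_gerber(path, key: bytes) -> bool:
        """Recompute the HMAC over everything before the AUTH comment and compare."""
        with open(path, "rb") as f:
            lines = f.read().splitlines(keepends=True)
        if not lines:
            return False
        last = lines[-1].decode(errors="replace").strip()
        if not last.startswith(TAG):
            return False                  # unsigned file: treat as unverified
        stored = last[len(TAG):].rstrip("*")
        expected = hmac.new(key, b"".join(lines[:-1]), hashlib.sha256).hexdigest()
        return hmac.compare_digest(stored, expected)

For the scheme to mean anything, as the article notes, the verification has to happen as late as possible, ideally on the mask-making machine itself.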

If bad actors succeed at one of these three attacks, they can add an actual, physical component to the assembled circuit board. This can occur in three ways.

First, the extra component can be added in production. This is difficult because it requires altering the supply chain to add the component to the procurement process, programming the pick-and-place machine to place the part, and attaching a reel of parts to the machine. In other words, it would require the cooperation of several bad actors, a conspiracy that might indicate the work of a corporation or a state.

Second, the extra component can be added in the repair-and-rework area—a much easier target than production. It’s common for assembled circuit boards to require reworking by hand. For example, on a board with 2,000 components, the first-pass yield—the fraction of boards with zero defects—might be below 70 percent. The defective boards go to a technician who then adds or removes components by hand; a single technician could easily add dozens of surreptitious components per day. While not every board would have the extra component, the attack might still succeed, especially if there was a collaborator in the shipping area to ship the hacked boards to targeted customers. Note that succeeding at this attack (altered Gerber files, part inserted in repair, unit selectively shipped) requires only three people.

Third, a component can be added by hand to a board after production—in a warehouse, for instance. The fact that an in-transit attack is possible may require companies to inspect incoming boards to confirm that unpopulated parts remain unpopulated.

Knowing how to sabotage a PCB is only half the job. Attackers also have to know what the best targets are on a computer motherboard. They’ll try the data buses, specifically those with two things in common: low data rates and low pin counts. High-speed buses, such as SATA, M.2, and DDR, are so sensitive to data rates that the delay of an extra component would very likely keep them from working correctly. And a component with a smaller number of pins is simpler to sneak into a design; therefore, buses with low pin counts are easier targets. On a PC motherboard, there are three such buses.

The first is the System Management Bus (SMBus), which controls the voltage regulators and clock frequency on most PC motherboards. It’s based on the two-wire Inter-IC (I2C) standard created by Philips Semiconductor back in 1982. That standard has no encryption, and it allows a number of connected devices to directly access critical onboard components, such as the power supply, independently of the CPU.

A surreptitious component on an SMBus could enable two types of attacks against a system. It could change the voltage settings of a regulator and damage components. It could also interfere with communications between the processor and onboard sensors, either by impersonating another device or by intentionally interfering with incoming data.
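
One low-tech defensive check that follows from this is simply to enumerate what is answering on the bus and compare it with what the design says should be there. The sketch below is a minimal example, assuming a Linux machine that exposes its SMBus through the i2c-dev interface and the third-party smbus2 package; the bus number and the expected-address list are placeholders, and probing a live bus can perturb some devices, so treat it as an illustration rather than a production tool.

    # Requires: pip install smbus2 ; typically run as root on a board with i2c-dev loaded.
    from smbus2 import SMBus

    BUS_NUMBER = 1                        # placeholder: pick the adapter that maps to the SMBus
    EXPECTED = {0x2D, 0x48, 0x50, 0x51}   # hypothetical inventory of legitimate device addresses

    def scan_bus(bus_number=BUS_NUMBER):
        """Return the set of 7-bit addresses that acknowledge a simple read."""
        found = set()
        with SMBus(bus_number) as bus:
            for addr in range(0x03, 0x78):     # skip the reserved address ranges
                try:
                    bus.read_byte(addr)        # a present device will ACK and return a byte
                    found.add(addr)
                except OSError:
                    pass                       # no device (or no ACK) at this address
        return found

    if __name__ == "__main__":
        for addr in sorted(scan_bus() - EXPECTED):
            print(f"Unexpected SMBus device at 0x{addr:02X} -- investigate")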

The second target is the Serial Peripheral Interface (SPI) bus, a four-wire bus created by Motorola in the mid-1980s. It’s used by most modern flash-memory parts, and so is likely to be the bus on which the important code, such as the BIOS (Basic Input/Output System), is accessed.

A well-considered attack against the SPI bus could alter any portion of the data that is read from an attached memory chip. Modifications to the BIOS as it is being accessed could change hardware configurations done during the boot process, leaving a path open for malicious code.
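
A correspondingly simple integrity check on this front is to dump the SPI flash contents, for instance with an external programmer or a tool such as flashrom, and compare a hash of the image against a value you trust. Here is a minimal sketch, assuming you already have a dump file and a known-good digest from the board vendor or from a reference unit:

    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        """Stream the firmware image through SHA-256 so large dumps need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        dump_path, known_good = sys.argv[1], sys.argv[2]   # e.g. bios_dump.bin and a trusted hex digest
        actual = sha256_of(dump_path)
        print("OK" if actual == known_good.lower() else f"MISMATCH: {actual}")

The comparison only tells you that the image differs from the reference, not why, but it is cheap enough to run on every board you receive.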

The third target is the LPC (Low Pin Count) bus, and it’s particularly attractive because an attack can compromise the operation of the computer, provide remote access to power and other vital control functions, and compromise the security of the boot process. This bus carries seven mandatory signals and up to six optional signals; it is used to connect a computer’s CPU to legacy devices, such as serial and parallel ports, or to physical switches on the chassis, and in many modern PCs, its signals control the fans.

The LPC bus is such a vulnerable point because many servers use it to connect a separate management processor to the system. This processor, called the baseboard management controller (BMC), can perform basic housekeeping functions even if the main processor has crashed or the operating system has not been installed. It’s convenient because it permits remote control, repair, and diagnostics of server components. Most BMCs have a dedicated Ethernet port, and so an attack on a BMC can also result in network access.

The BMC also has a pass-through connection to the SPI bus, and many processors load their BIOS through that channel. This is a purposeful design decision, as it permits the BIOS to be patched remotely via the BMC.

Many motherboards also use the LPC bus to access hardware implementing the Trusted Platform Module (TPM) standard, which provides cryptographic keys and a range of other services to secure a computer and its software.

Start your search for surreptitious components with these buses. You can search them by machine: At the far end of the automation is a system developed by Mark M. Tehranipoor, director of the Florida Institute for Cybersecurity Research, in Gainesville. It uses optical scans, microscopy, X-ray tomography, and artificial intelligence to compare a PCB and its components with the intended design. Or you can do the search by hand, which consists of four rounds of checks. While these manual methods may take time, they don’t need to be done on every single board, and they require little technical skill.

In the first round, check the board for components that lack a reference designator. This is a bright red flag; there is no way that a board so hobbled could be manufactured in a normal production process. Finding such a component is a strong indication of an attack on the board layout files (that is, the Gerber and drill files), because that step is the likeliest place to add a component without adding a reference designator. Of course, a component without a reference designator is a major design mistake and worth catching under any circumstances.

In the second round of checks, make sure that every reference designator is found in the schematic, layout, and bill of materials. A bogus reference designator is another clear indication that someone has tampered with the board layout files.

In the third round, focus on the shape and size of the component footprints. For example, if a four-pin part is on the schematic and the layout or board has an eight-pin footprint, this is clear evidence of a hack.

The fourth round of checks should examine all the unpopulated parts of the board. Although placing components in an unpopulated spot may well be the result of a genuine mistake, it may also be a sign of sabotage, and so it needs to be checked for both reasons.
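
The first two rounds lend themselves to a simple script. The sketch below is one minimal way to do it in Python, assuming each of the three sources has been exported to CSV with a "RefDes" column; the filenames and column name are placeholders, since every CAD tool exports these lists differently.

    import csv

    def refdes_from_csv(path, column="RefDes"):
        """Collect the reference designators listed in one exported CSV file."""
        with open(path, newline="") as f:
            return {row[column].strip() for row in csv.DictReader(f) if row.get(column)}

    # Placeholder filenames: substitute whatever your CAD tool exports.
    schematic = refdes_from_csv("schematic_parts.csv")
    layout = refdes_from_csv("layout_parts.csv")
    bom = refdes_from_csv("bom.csv")

    for name, refs in (("layout", layout), ("bill of materials", bom)):
        extras = refs - schematic
        if extras:
            print(f"Reference designators in the {name} but not the schematic: {sorted(extras)}")

    missing = schematic - layout
    if missing:
        print(f"Schematic parts missing from the layout: {sorted(missing)}")

Anything the script flags still needs a human to decide whether it is sabotage or an ordinary documentation error.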

As you can see, modern motherboards, with their thousands of sometimes mote-size components, are quite vulnerable to subversion. Some of those exploits make it possible to gain access to vital system functions. Straightforward methods can detect and perhaps deter most of these attacks. As with malware, heightened sensitivity to the issue and well-planned scrutiny can make attacks unlikely and unsuccessful.

About the Author

Samuel H. Russ is an associate professor of electrical engineering at the University of South Alabama. Jacob Gatlin is a Ph.D. candidate at the university.

The CPU’s Silent Partner: The Coprocessor’s Role Is Often Unappreciated

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/computing/hardware/the-cpus-silent-partner-the-coprocessors-role-is-often-unappreciated

One reason the PC has endured for nearly 40 years is that its design was almost entirely open: No patents restricted reproduction of the fully documented hardware and firmware. When you bought a PC, IBM gave you everything you needed to manufacture your own clone. That openness seeded an explosion of PC compatibles, the foundation of the computing environment we enjoy today.

In one corner of the original PC’s motherboard, alongside the underpowered-but-epochal 8088 CPU, sat an empty socket. It awaited an upgrade it rarely received: an 8087 floating-point coprocessor.

Among the most complex chips of its day, the 8087 accelerated mathematical computations—in particular, the calculation of transcendental functions—by two orders of magnitude. While not something you’d need for a Lotus 1-2-3 spreadsheet, for early users of AutoCAD those functions were absolutely essential. With that chip popped into your PC, rendering detailed computer-aided-design (CAD) drawings no longer felt excruciatingly slow. That speed boost didn’t come cheap, though. One vendor sold the upgrade for US $295—almost $800 in today’s dollars.

Recently, I purchased a PC whose CPU runs a million times as fast as that venerable 8088 and uses a million times as much RAM. That computer cost me about as much as an original PC—but in 2020 dollars, it’s worth only a third as much. Yet the proportion of my spend that went into a top-of-the-line graphics processing unit (GPU) was the same as what I would have invested in an 8087 back in the day.

Although I rarely use CAD, I do write math-intensive code for virtual or augmented reality and videogrammetry. My new coprocessor—a hefty slice of Nvidia silicon—performs its computations at 200 million times the speed of its ancestor.

That kind of performance bump can only partially be attributed to Moore’s Law. Half or more of the speedup derives from the massive parallelism designed into modern GPUs, which are capable of simultaneously executing several thousand pixel-shader programs (to compute position, color, and other attributes when rendering objects).

Such massive parallelism has its direct analogue in another, simultaneous revolution: the advent of pervasive connectivity. Since the late 1990s, it’s been a mistake to conceive of a PC as a stand-alone device. Through the Web, each PC has been plugged into a coprocessor of a different sort: the millions of other PCs that are similarly connected.

The computing hardware we quickly grew to depend on was eventually refined into a smartphone, representing the essential parts of a PC, trimmed to accommodate a modest size and power budget. And smartphones are even better networked than early PCs were. So we shouldn’t think of the coprocessor in a smartphone as its GPU, which helps draw pretty pictures on the screen. The real coprocessor is the connected capacity of some 4 billion other smartphone-carrying people, each capable of sharing with and learning from one another through the Web or on various social-media platforms. It’s something that brings out both the best and worst in us.

We all now have powerful tools at our fingertips for connecting with others. When we plug ourselves into this global coprocessor, we can put our heads together to imagine, plan, and produce—or to conspire, harass, and thwart. We can search and destroy, or we can create and share. With the great power that this technology confers comes great responsibility. That’s something we should remind ourselves of every time we peer at our screens.

This article appears in the September 2020 print issue as “The CPU’s Silent Partner.”

Has the Summit Supercomputer Cracked COVID’s Code?

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/the-human-os/computing/hardware/has-the-summit-supercomputer-cracked-the-covid-code

A supercomputer-powered genetic study of COVID-19 patients has spawned a possible breakthrough into how the novel coronavirus causes disease—and points toward new potential therapies to treat its worst symptoms.

The genetic data-mining research uncovered a common pattern of gene activity in the lungs of symptomatic COVID-19 patients which, when compared to gene activity in healthy control populations, revealed a mechanism that appears to be a key weapon in the coronavirus’s arsenal.

The good news is there are already drugs—a few of which are already FDA-approved—aimed at some of these very same pathologies.

“We think we have a core mechanism that explains a lot of the symptoms where the virus ends up residing,” said Daniel Jacobson, chief scientist for computational systems biology at Oak Ridge National Labs in Oak Ridge, Tenn.

The mechanism, detailed in Jacobson’s group’s new paper in the journal eLife, centers on a compound the body produces to regulate blood pressure, called bradykinin. A healthy body produces small amounts of bradykinin to dilate blood vessels and make them more permeable, which typically lowers blood pressure.

However, Jacobson said, lung fluid samples from COVID-19 patients consistently revealed over-expression of genes that produce bradykinin, while also under-expressing genes that would inhibit or break down bradykinin.

In other words, the new finding predicts a hyper-abundance of bradykinin in a coronavirus patient’s body at the points of infection, which can have well-known and sometimes deadly consequences. As Jacobson’s paper notes, extreme bradykinin levels in various organs can lead to dry coughs, myalgia, fatigue, nausea, vomiting, diarrhea, anorexia, headaches, decreased cognitive function, arrhythmia, and sudden cardiac death, all of which have been associated with various manifestations of COVID-19.

The bradykinin genetic discovery ultimately came courtesy of Oak Ridge’s supercomputers Summit and Rhea, which crunched data sets representing some 17,000 genetic samples (from more than 1,000 patients) while comparing each of these samples to some 40,000 genes.

Summit, the world’s second fastest supercomputer as of June, ran some 2.5 billion correlation calculations across this data set. It took Summit one week to run these numbers, compared to months of compute time on a typical workstation or cluster.
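
To give a sense of the arithmetic involved, here is a minimal numpy sketch of the kind of all-against-all gene-expression correlation described here, with a small random matrix standing in for the real data; it is not the team’s actual pipeline. Scaled to roughly 40,000 genes, the resulting correlation matrix alone holds on the order of a billion coefficients, which is why the job landed on a supercomputer.

    import numpy as np

    rng = np.random.default_rng(0)
    n_genes, n_samples = 500, 200     # stand-in sizes; the study involved ~40,000 genes and ~17,000 samples
    expression = rng.normal(size=(n_genes, n_samples))

    # Pearson correlation of every gene's expression profile against every other gene's.
    corr = np.corrcoef(expression)    # rows are genes, so corr has shape (n_genes, n_genes)

    # Pull out the most strongly co-expressed pair, ignoring the trivial diagonal.
    np.fill_diagonal(corr, 0.0)
    i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    print(f"Strongest correlation: genes {i} and {j}, r = {corr[i, j]:.3f}")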

Jacobson said that the genetic bradykinin connection the team made may have rendered COVID-19 a little less mysterious. “Understanding some of these fundamental principles gives us places to start,” he said. “It’s not as much of a black box anymore. We think we have good indications of the mechanisms. So now how do we attack those mechanisms to have better therapeutic outcomes?”

One of the most persistent and deadly outcomes of extreme COVID disease involves the lungs of patients filling with fluid, forcing the patient to fight for every breath. There, too, the mechanism and genetic pathway the researchers have uncovered could possibly explain what’s going on.

Because bradykinin makes blood vessels more permeable, lung tissue gets inundated with fluid that begins to make it swell. “You have two interconnected pathways, and the virus can tilt the balance to these two pathways with a catastrophic outcome,” Jacobson said. “The bradykinin cascade goes out of control, and that allows fluid to leak out of the blood vessels, with immune cells infiltrating out. And you effectively have fluid pouring into your lungs.”

The presence of typically blood-borne immune cells in the lungs of some patients can, Jacobson said, also produce extreme inflammation and out-of-control immune responses, which have been observed in some coronavirus cases.

But another genetic tendency this work revealed was up-regulation in the production of hyaluronic acid. This compound is slimy to the touch. In fact, it’s the primary component in snail slime. And it has the remarkable property of being able to absorb 1000 times its own weight in water.

The team also discovered evidence of down-regulated genes in COVID patients that might otherwise have kept hyaluronic acid levels in check. So with fluid inundating the lungs and gels that absorb those fluids being over-produced as well, a coronavirus patient’s lung, Jacobson said, “fills up with a jello-like hydrogel.”

“One of the causes of death is people are basically suffocating,” Jacobson said. “And we may have found the mechanisms responsible for how this gets out of control, why all the fluid is leaking in, why you’re now producing all this hyaluronic acid—this gelatin-like substance—in your lung, and possibly why there are all these inflammatory responses.”

Jacobson’s group’s paper then highlights ten possible therapies developed for other conditions that might also address the coronavirus’s “bradykinin storm” problem. Potential therapies include compounds like icatibant, danazol, stanozolol, ecallantide, berinert, cinryze, and haegarda, all of whose predicted effect is to reduce bradykinin levels in a patient. Even Vitamin D, whose observed deficiency in COVID-19 patients is also explained by the group’s research, could play a role in future COVID-19 therapies.

None of which, it’s important to stress, has yet been tested in clinical trials. But, Jacobson said, they’re already in touch with groups who are considering testing these new findings and recommended therapies.

“We have to get this message out,” Jacobson said. “We have started to be contacted by people. But … clinical partners and funding agencies who will hopefully support this work is the next step that needs to happen.”