Tag Archives: Computing/IT

Why Electronic Health Records Haven’t Helped U.S. With Vaccinations

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/computing/it/ehr-pandemic-promises-not-kept


The Centers for Disease Control and Prevention (CDC) website states that if you have or suspect you have Covid-19, you should inform your doctor. But whom do you tell that you actually received your shot?

If you get the shot at a local hospital that is affiliated with your doctor, the vaccination information might show up in your electronic health record (EHR). However, if it doesn’t, or you get the shot through a pharmacy like CVS, your vaccination information may end up stranded on the little paper CDC vaccination card you receive with your shot. The onus will be on you to give your doctor your vaccination information, which will have to be manually (hopefully without error) entered into your EHR.

It was not supposed to be like this.

When President George W. Bush announced in his 2004 State of the Union address that he wanted every U.S. citizen to have an EHR by 2014, one of his motivations was to greatly improve medical care in a public health crisis such as a pandemic. The 2003 SARS-CoV-1 epidemic had highlighted the need for access to, interoperability of, and fusion of health information from a wide variety of governmental and private sources, so that policy makers could rapidly make the best public health decisions. As one Chinese doctor remarked in a candid 2004 lessons-learned article, in the spring of 2003 SARS had become “out of control due to incorrect information.”

Today, President Bush’s goal that nearly all Americans have access to electronic health records has been met, thanks in large part to the $35 billion in Congressionally mandated incentives for EHR adoption set out in the 2009 American Recovery and Reinvestment Act. Unfortunately, EHRs have created challenges in combating the current SARS-CoV-2 pandemic, and at least one doctor says they are proving to be an impediment.

In a recent National Public Radio interview, Dr. Bob Kocher, an adjunct professor at the Stanford University School of Medicine and a former government healthcare policy official in the Obama Administration, blamed EHRs in part for Americans’ difficulty in scheduling their vaccinations. A core issue, he states, is that most EHRs are not universally interoperable: the health and other relevant patient information they contain is not easily shared with other EHRs or with government public health IT systems. In addition, Kocher notes, key patient demographic information such as race and ethnicity may not be captured by an EHR, leaving public health officials unable to tell whether all communities are being equitably vaccinated.

As a result, the information public officials need to determine who has been vaccinated, and where scarce vaccine supplies need to be allocated next, is less accurate and timely than it needs to be. With Johnson & Johnson’s Covid-19 vaccine rolling out this week, the need for precise information increases to ensure the vaccine is allocated to where it is needed most. 

It is a bit ironic that EHRs have not been central in the fight against COVID-19, given how much President Bush was obsessed with preparing the U.S. against a pandemic. Even back in 2007, researchers were outlining how EHRs should be designed to support public health crises and the criticality of seamless data interoperability. However, the requirement for universal EHR interoperability was lost in the 2009 Congressional rush to get hospitals and individual healthcare providers to move from paper medical records to adopt EHRs. Consequently, EHR designs were primarily focused on supporting patient-centric healthcare and especially on aiding healthcare provider billing.

A myriad of other factors has exacerbated the interoperability situation. One is the sheer number of different EHR systems used across thousands of healthcare providers, each with its own unique way of capturing and storing health information. Even individual hospitals operate several different EHRs; the average hospital reportedly has some 16 disparate EHR vendors in use across its affiliated practices.

Another is the number of different federal, let alone state or local public health IT systems that data needs to flow among. For example, there are six different CDC software systems, including two brand new systems, that are used for vaccine administration and distribution alongside the scores of vaccine registry systems that the states are using. With retail pharmacies now authorized to give COVID-19 vaccinations, the number of IT systems involved keeps growing. To say the process involved in creating timely and accurate information for health policy decision makers is convoluted is being kind.

In addition, the data protocols used to link EHRs to government vaccine registries are a maddeningly tortuous mess that often impedes information sharing rather than fostering it. Some EHR vendors, like Cerner and Epic, have already successfully rolled out improvements to their EHR products to support state mass-vaccination efforts, but they are the exception rather than the rule.
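The registry-reporting protocols in question are largely HL7 version 2 interfaces, with the VXU^V04 “unsolicited vaccination update” message as the usual vehicle. As a hedged sketch only (not a conformant implementation; real interfaces require many more fields and segments such as ORC, RXR, and OBX, and every field value here is invented), an EHR-to-registry vaccination report looks roughly like this:

```python
# Drastically simplified HL7 v2.5.1 VXU^V04 message of the kind an EHR
# sends to a state immunization registry. All identifiers and values
# below are hypothetical.

def build_vxu(patient_id: str, last: str, first: str,
              dob: str, cvx_code: str, admin_date: str) -> str:
    """Assemble pipe-delimited HL7 v2 segments, joined by carriage returns."""
    msh = "|".join(["MSH", "^~\\&", "MyEHR", "MyClinic", "StateIIS",
                    "StateHD", admin_date, "", "VXU^V04", "MSG0001",
                    "P", "2.5.1"])
    pid = "|".join(["PID", "1", "", patient_id, "", f"{last}^{first}",
                    "", dob])
    # RXA carries the administered vaccine, identified by its CVX code
    rxa = "|".join(["RXA", "0", "1", admin_date, admin_date,
                    f"{cvx_code}^COVID-19 vaccine^CVX", "0.3", "mL"])
    return "\r".join([msh, pid, rxa])

msg = build_vxu("12345", "DOE", "JANE", "19800101", "208", "20210301")
```

Part of the tortuousness is that each state registry profiles these fields differently, so an interface that works in one jurisdiction often needs rework in the next.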

Furthermore, many EHR vendors (and healthcare providers) have not really been interested in sharing patient information, worrying that it would make it too easy for patients to move to a competitor. While the federal government has recently clamped down on information blocking, it is unlikely to go away anytime soon given the exceptions in the interoperability rules.

Yet another factor has been that funding for state and local public health departments, along with investments in health information systems, has been cut steadily since the 2008 recession. Many, if not most, existing state public health IT systems for allocating, scheduling, monitoring, and reporting vaccinations have proven unequal to the complex health-information logistics of a pandemic and have required significant upgrades or supplemental support. Even then, these supplemental efforts have been fraught with problems. And the federal government’s IT efforts to support state vaccination campaigns in the face of such obstacles have hardly been stellar, either.

Perhaps the only good news is that the importance of having seamless health IT systems, from health provider EHR systems to state and federal government public health IT systems, is now fully appreciated. Late last year, Congress authorized $500 million under the Coronavirus Aid, Relief, and Economic Security (CARES) Act to the CDC for its Public Health Data Modernization Initiative. The Initiative aims to “bring together state, tribal, local, and territorial (STLT) public health jurisdictions and our private and public sector partners to create modern, interoperable, and real-time public health data and surveillance systems that will protect the American public.” The Office of the National Coordinator for Health Information Technology will also receive $62 million to increase the interoperability of public health systems.

The $560 million will likely be just a small down payment, as many existing EHRs will need to be upgraded not only to better support a future pandemic or other public health crises, but to correct existing issues with EHRs such as poor operational safety and doctor burnout. Furthermore, state and local public health organizations along with their IT infrastructure will also have to be modernized, which will take years. This assumes the states are even willing to invest the funding required given all the other funding priorities caused by the pandemic. 

One last issue that will likely come to the fore is whether it is now time for a national patient identifier that could uniquely identify patient information. It would make achieving EHR interoperability easier, but creating one has been banned since 1998 over privacy and security concerns and, more recently, arguments that it isn’t necessary. While the U.S. House of Representatives voted to overturn the ban again last summer, the Senate has not followed suit. If senators do decide to overturn the ban and boost EHR interoperability efforts, let us hope they also keep in mind just how vulnerable healthcare providers are to cybersecurity attacks.

Systematic Cybersecurity Threat Analysis and Risk Assessment

Post Syndicated from IEEE Spectrum original https://spectrum.ieee.org/webinar/systematic-cybersecurity-threat-analysis-and-risk-assessment


The digital revolution has given transportation companies and consumers a host of new features and benefits, but it has also introduced significant complexity and unprecedented connectivity that leave vehicles vulnerable to cyberattack. To safeguard these systems, engineers must examine a dizzying array of components and interconnected systems to identify vulnerabilities. Traditional workflows and outdated tools will not be enough to ensure products meet the upcoming ISO 21434 and other applicable cybersecurity standards. Achieving the level of cybersecurity needed to protect supply chains and personal vehicles requires high-quality threat analysis.

Ansys medini for Cybersecurity helps secure in-vehicle systems and substantially improves time to market for critical security-related functions. Addressing the increasing market needs for systematic analysis and assessment of security threats to cyber-physical systems, medini for Cybersecurity starts early in the system design. Armed with this proven tool, engineers will replace outdated processes reliant on error-prone human analysis.

In this webinar, we dynamically demonstrate how systematic cybersecurity analysis enables engineers to mitigate vulnerabilities to hacking and cyberattacks.

  • Learn how to identify assets in the system and their important security attributes.
  • Discover new methods for systematically identifying system vulnerabilities that can be exploited to execute attacks.
  • Understand the consequences of a potentially successful attack.
  • Receive expert tips on how to estimate the potential of an attack.
  • Learn how to associate a risk with each threat.
  • Leverage new tools for avoiding overengineering and underestimation.

Presenter

Mario Winkler, Lead Product Manager

After finishing his studies in computer science at Humboldt University Berlin and working for the Fraunhofer Research Institute, Mario joined the medini team in 2001. Over the past 15 years he has gained expertise in functional safety, especially in the automotive domain, by helping various customers, both OEMs and suppliers, apply medini analyze to their functional-safety processes according to ISO 26262. As medini’s focus extended to cybersecurity, Mario took on the role of product manager to drive the development of medini analyze in that direction.

Minsk’s Teetering Tech Scene

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/tech-talk/computing/it/minsks-teetering-tech-scene

Nearly every day, cops in black riot gear move in phalanxes through the streets, swinging batons to clear thousands of chanting protesters, cracking heads, and throwing people to the ground. An often tone-deaf president jails people on political grounds and refuses to leave office in the wake of a disputed election.

It’s a scene from Belarus, a landlocked former Soviet republic where, for the past couple of months, public outcry and the state’s authoritarian response have kept the nation on a razor’s edge.

Also hanging in the balance is the robust Belarusian digital scene, which has flourished over recent years, accounts for perhaps five to six percent of the nation’s economy, and has provided a steady engine for growth. This, in a place which may be better known for mining potash.

Belarus is led by President Alexander Lukashenko, who came to power in 1994. On 9 August, Lukashenko stood for his sixth term in office: This time, as the announced results in his favor topped 80 percent, the vote was widely seen as fixed and people took to the streets of Minsk by the thousands to call for new elections. They met harsh police response.

In the weeks since, the capital’s coders came under increasing pressure, then dialed up pressure of their own. State authorities arrested four employees at sales-process software developer PandaDoc, including the Minsk office director, who after more than a month in jail was released on Oct. 11. Belarusian mass protesters organized their response using digital tools. Open letters calling for new elections and the release of political prisoners circulated among tech industry executives, with one gaining 2,500 signatures. That missive came with a warning that conditions could get to where the industry would no longer function.

That would be a huge loss. More than 54,000 IT specialists and 1,500 tech companies called Belarus home as of 2019. Product companies span a broad swath of programming: natural language processing and computational linguistics; augmented reality; mobile technologies and consumer apps; and gaming. A Medellín, Colombia–based news service that covers startups even did a roundup of the machine learning and artificial intelligence development going on in Minsk. According to the report, this activity is worth some $3 billion a year in sales and revenue, with Minsk-built mobile apps drawing in more than a billion users.

Belarus’s tech trade has become vital to the structure of the local economy and its future. The sector showed double-digit growth over the past 10 years, says Dimitar Bogov, regional economist for the European Bank for Reconstruction and Development. “After manufacturing and agriculture, ICT is the biggest sector,” he says. “It is the most important. It is the source of growth during the last several years.”

Though it may seem surprising that the marshy Slavic plains of Belarus would bear digital fruit, it makes sense that computing found roots here. During the mid-1950s, the Soviet Union’s council of ministers wanted to ramp up computer production in the country, with Minsk selected as one of the hubs. It would produce as much as 70 percent of all computers built in the Soviet Union.

Lukashenko’s government itself had a hand in spurring digital growth in recent years by opening a High Tech Park—both a large incubator building in Minsk and a federal legal framework in the country—fertilized by tax breaks and a reduction in red tape. The scene hummed along from just after the turn of the century through the aughts: By 2012, IHS Markit, a business intelligence company that uses in-house digital development as part of its secret sauce, could snap up semantic search engine developers working in a Minsk coding factory by the dozen.

Eight years later, that team is still working in Belarus, but no longer in a brick warehouse adorned by a Lenin mural. They are in a glass office pod complex, neighbors to home furnishing corporates and the national franchise operations for McDonald’s. And despite the global economic downturn wrought by COVID-19, the tech sector in Belarus is even showing growth in 2020, Bogov says. “It grew by more than eight percent. This is less than in previous years, but it is still impressive to show growth during the pandemic.”

But a shadow hangs over all that now. Reports by media outlets including the Wall Street Journal, BBC, and Bloomberg have cited the PandaDoc chief executive and other tech sources as saying the whole sector could shut down.

Though—so far—there is no evidence of a mass exodus, there are reports of some techies leaving Belarus. There are protests every week, but people also go back to work, in a tense and somewhat murky standoff.

“I talk a lot to people in Belarusian IT. It looks like everyone is outraged,” says Sergei Guriev, a political economist at Sciences Po in Paris. “Even people who do not speak out support the opposition quietly with resources and technology.” Yuri Gurski, chief executive of the co-founding firm Palta and the VC fund Haxus, announced he would help employees of the companies Haxus invests in—including the makers of the image editors Fabby and Fabby Look and the ovulation-tracker app Flo—temporarily relocate outside Belarus if they fear for their life and health.

But Youheni Preiherman, a Minsk-based analyst and director of the Minsk Dialogue Council on International Relations, hears a lot of uncertainty. “Some people ask their managers to let them go for the time being until the situation calms down a bit,” he says. “Some companies, on the contrary, are now saying no, we want to make sure we stay—we feel some change is in the air and we want to be here.”

Meanwhile, the Digital Transformation Ministry in Ukraine is already looking to snap up talent sitting on the fence. Former Bloomberg Moscow bureau chief James Brook reported on his Ukraine Business News site that in September, Ukraine retained the Belarusian lawyer who developed the Minsk Hi-Tech Park concept to do the same there. The Ukrainians are sweetening the pot by opening a Web portal to help Belarusian IT specialists wanting to make the move.

The standoff in Belarus could move into a deliberative state with brokered talks over a new constitution and eventual exit for Lukashenko, but analysts say it could also be prone to fast and unexpected moves—for good or for ill. The future direction for Belarus is being written by the week. But with AI engineers and augmented reality developers who had been content in Minsk no longer sure whether to stay or go, the outcome will be about more than just who runs the government. And the results will resound for years to come.

The Problem of Old Code and Older Coders

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/computing/it/the-problem-of-old-code-and-older-coders

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

The coronavirus pandemic has exposed any number of weaknesses in our technologies, business models, medical systems, media, and more. Perhaps none is more exposed than what my guest today calls, “The Hidden World of Legacy IT.” If you remember last April’s infamous call for volunteer COBOL programmers by the governor of New Jersey, when his state’s unemployment and disability benefits systems needed to be updated, that turned out to be just the tip of a ubiquitous multi-trillion-dollar iceberg—yes, trillion with ‘t’—of outdated systems. Some of them are even more important to us than getting out unemployment checks—though that’s pretty important in its own right. Water treatment plants, telephone exchanges, power grids, and air traffic control are just a few of the systems controlled by antiquated code.

In 2005, Bob Charette wrote a seminal article entitled “Why Software Fails.” Now, fifteen years later, he strikes a similar nerve with another cover story, one that shines a light on the vast and largely hidden problem of legacy IT. Bob is a 30-year veteran of IT consulting, a fellow IEEE Spectrum contributing editor, and, I’m happy to say, my good friend as well as my guest today. He joins us by Skype.

Bob, welcome to the podcast.

Bob Charette Thank you, Steven.

Steven Cherry Bob, legacy software, like a middle child or your knee’s meniscus, isn’t something we think about much until there’s a problem. You note that we know more about government systems because the costs are a matter of public record. But are the problems of the corporate world just as bad?

Bob Charette Yes. There’s really not a lot of difference between what’s happening in government and what’s happening in industry. As you mentioned, government is more visible because you have auditors who are looking at failures and [are] publishing reports. But there’s been major problems in airlines and banks and insurance companies—just about every industry that has IT has a problem with legacy systems in one way or another.

Steven Cherry Bob, the numbers are staggering. In the past 10 years, at least $2.5 trillion has been spent trying to replace legacy IT systems, of which some seven hundred and twenty billion dollars was utterly wasted on failed replacement efforts. And that’s before the last of the COBOL generation retires. Just how big a problem is this?

Bob Charette That’s a really good question. The size of the problem really is unknown. We have no clear count of the number of systems that are legacy in government, where we should be able to have a pretty good idea. We have really no insight into what’s happening in industry. The only thing that we do know is that we’re spending trillions of dollars annually on the operations and maintenance of these systems, and, as you mentioned, we’re spending hundreds of billions per year trying to modernize them, with large numbers failing. This is one of the things I found when doing the research: you try to find some authoritative number, and there just isn’t any there at all.

In fact, a recent report by the Social Security Administration’s Inspector General basically said that even they could not figure out how many systems were actually legacy in Social Security. And in fact, the Social Security Administration didn’t know itself either.

Steven Cherry So some of that is record-keeping problems, some of that is secrecy, especially on the corporate side. A little bit of that might be definitional. So maybe we should step back and ask what the philosophers call the ti esti [τὸ τί ἐστι] question—the what-is question. Does everybody agree on what legacy IT is? What counts as legacy?

Bob Charette No. And that’s another problem. What happens is there’s different definitions in different organizations and in different government agencies, even in the U.S. government. And no one has a standard definition. About the closest that we come to is that it’s a system that does not meet the business need for some reason. Now, I want to make it very clear: The definition doesn’t say that it has to be old, or past a certain point in time. Nor does it mean that it’s COBOL. There are systems that have been built and are less than 10 years old that are considered legacy because they no longer meet the business need. So the idea is that there’s lots of reasons why it may not meet the business needs—there may be obsolescent hardware, the software may not be usable or feasible to improve, there may be bugs in the system that just can’t be fixed at any reasonable cost. So there’s a lot of reasons why a system may be termed legacy, but there’s really no general definition that everybody agrees with.

Steven Cherry Bob, states like your Virginia and New York and every state in the Union keep building new roads, seemingly without a thought. A few years ago, a Bloomberg article noted that every mile of fresh new road will one day become a mile of crumbling old road that needs additional attention. Less than half of all road budgets go to maintenance. A Texas study found that the 40-year cost to maintain a $120 million highway was about $800 million. Do we see the same thing in IT? Do we keep building new systems, seemingly without a second thought that we’re going to have to maintain them?

Bob Charette Yes, and for good reason. When we build a system and it actually works, it works usually for a fairly long time. There’s kind of an irony and a paradox. The irony is that the longer these systems live, the harder they are to replace. Paradoxically, because they’re so important, they also don’t receive any attention in terms of spend. Typically, for every dollar that’s spent on developing a system, there’s somewhere between eight and 10 dollars spent to maintain it over its life. But very few systems actually are retired before their time. Almost every system that I know of, of any size, tends to last a lot longer than what the designers ever intended.

Steven Cherry The Bloomberg article noted that disproportionate spending by states on road expansion, at the expense of regular repair—again, less than half of state road budgets are spent on maintenance—has left many roads in poor condition. IT spent a lot of money on maintenance, but a GAO study found that a big part of IT budgets are for operations and maintenance at the expense of modernization or replacement. And in fact, that ratio is getting higher, that less and less money is available for upgrades.

Bob Charette Well, there’s two factors at play. One is, it’s easier to build new systems, so there’s money to build new systems, and that’s what we constantly do. So we’re building new IT systems over time, which has, again, proliferated the number of systems that we need to maintain. So as we build more systems, we’re going to eat up more of our funding, so that when it comes time to actually modernize these, there’s less money available. The other aspect is, as we build these systems, we don’t build them as standalone systems. These systems are interconnected with others. And so when you interconnect lots of different systems, you’re not maintaining just an individual system—you’re maintaining this system of systems. And that becomes more costly. Because the systems are interconnected, and because they are very costly to replace, we tend to hold onto these systems longer. And so what happens is that the more systems that you build and interconnect, the harder it is later to replace them, because the cost of replacement is huge. And the probability of failure is also huge.

Steven Cherry Finally—and I promise to get off the highway comparison after this—there seems to be a point at which roads, even when well maintained, need to be reconstructed, not just maintained and repaved. And that point for roads is typically the 40-year mark. Are we seeing something like that in IT?

Bob Charette Well, we’re starting to see a radical change. I think that one of the real changes in IT development and maintenance and support has been the notion of what’s called DevOps, this notion of having development and operations being merged into one.

Since the beginning almost of IT systems development, we’ve thought about it as kind of being in two phases. There is the development phase, and then there was a separate maintenance phase. And a maintenance phase could last anywhere from 8, 10, some systems now are 20, 30, 40 years old. The idea now is to say when you develop it, you have to think about how you’re going to support it and therefore, development and maintenance are rolled into one. It’s kind of this idea that software is never done and therefore, hopefully in the future, this problem of legacy in many ways will go away. We’ll still have to worry about at some point where you can’t really support it anymore. But we should have a lot fewer failures, at least in the operational side. And our costs hopefully will also go down.

Steven Cherry So we can have the best of intentions, but we build roads and bridges and tunnels to last for 40 or 50 years, and then seventy-five years later, we realize we still need them and will for the foreseeable future. Are we still going to need COBOL programmers in 2030? 2050? 2100?

Bob Charette Probably. There’s so much coded in COBOL. And a lot of it works extremely well. And it’s not the software so much that is the problem. It’s the hardware obsolescence. I can easily foresee COBOL systems being around for another 30, 40, maybe even 50 years. And even that may be underestimating the longevity of these systems. The same is true in the military, where aircraft like the B-52, which was supposed to have about a 20-to-25-year life, may end up a hundred years old, with everything in the aircraft replaced over a period of time.

There is research being done by DARPA and others to look at how to extend systems and possibly have a system be around for 100 years. And you can do that if you’re extremely clever in how you design it, and also have this idea of how I’m going to constantly upgrade and constantly repair the system and make it easy to move both the data and the hardware. And so I think, again, we’re starting to see the realization that IT—where, again, systems designers were always thinking 10 years is great, twenty years is fantastic—that these core systems may be around for one hundred years.

Steven Cherry Age and complexity have another consequence: Unlike roads, there’s a cybersecurity aspect to all of this as well.

Bob Charette Yeah, that’s probably the biggest weakness that occurs in new systems as well as with legacy systems. Legacy systems were never really built with security in mind. And in fact, one of the common complaints even today with new systems is that security isn’t built in; it’s bolted on afterwards, which makes it extremely difficult.

I think security has really come to the fore, especially in the last two or three years where we’ve had this … In fact last year we had over 100 government systems in the United States—local, state and federal systems—that were subject to ransomware attacks and successful ransomware attacks because the attackers focused in on legacy systems, because they were not as well maintained in terms of their security practices as well as the ability to be made secure. So I think security is going to be an ongoing issue into the foreseeable future.

Steven Cherry The distinction between development and operations brings to mind another one, and that is we think of executable software and data as very separate things. That’s the basis of computing architectures ever since John von Neumann. But legacy IT has a problem with data as well as software, doesn’t it?

Bob Charette One of the areas that we didn’t get to explore very deeply in the story, mostly because of space limitations, is the problem of data. Data is one of the most difficult things to move from one system to another. In the story, we talked about a Navy system, a payroll system. The Navy was trying to consolidate 55 systems into one, and those systems use dozens of programming languages. They have multiple databases. The formats are different. How the data is accessed—what business processes, how they use the data—is different. And when you try to think about how you’re going to move all that information, you have to make sure that the information is relevant, that it’s correct. We want to make sure we don’t have dirty data. Those things all need to come together so that when we move to a new system, the data actually is what we want. And in fact, if you take a look at the IRS, the IRS has 60-year-old systems, and the reason they have 60-year-old systems is because they have 60-year-old data on millions of companies and hundreds of millions of taxpayers. Trying to move that data to new systems and make sure that you don’t lose it and you don’t corrupt it has been a decades-long problem that they’ve been trying to solve.

Steven Cherry Making sure you don’t lose individuals or duplicate individuals across databases when you merge them.

Bob Charette One of the worst things that you can do is have not only duplicate data, but have data that actually is incorrect and then you just move that incorrect data into a new system.
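The deduplication problem Charette describes can be made concrete with a hedged sketch of one common tactic: build a normalized match key from the fields most likely to differ only in formatting and fold records that share it. Real master-patient-index software uses probabilistic matching across many more fields; the records and field names below are invented for illustration.

```python
# Sketch of merging patient records from two systems without creating
# duplicates. A naive merge would treat "Doe, Jane" and "jane doe" as
# different patients; normalizing name and date of birth avoids that.

def match_key(record: dict) -> tuple:
    """Normalize name and DOB so formatting differences don't split patients."""
    name = record["name"].strip().lower().replace(",", " ")
    name = " ".join(sorted(name.split()))          # order-insensitive tokens
    dob = record["dob"].replace("-", "").replace("/", "")
    return (name, dob)

def merge(*sources):
    """Group records from all sources by match key for later reconciliation."""
    merged = {}
    for source in sources:
        for rec in source:
            # Records sharing a key are candidates for the same patient;
            # in practice they would be queued for review, not auto-merged.
            merged.setdefault(match_key(rec), []).append(rec)
    return merged

system_a = [{"name": "Doe, Jane", "dob": "1980-01-01", "mrn": "A1"}]
system_b = [{"name": "jane doe",  "dob": "1980/01/01", "mrn": "B7"}]
result = merge(system_a, system_b)   # both records land under one key
```

The flip side of the danger Charette names is visible here too: an over-aggressive key (say, name alone) would silently fuse two genuinely different patients, which is why production systems keep humans in the loop.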

Steven Cherry Well, Bob, as I said, you did it before with why software fails and you’ve done it again with this detailed investigation. Thanks for publishing “The Hidden World of Legacy IT,” and thanks for joining us today.

Bob Charette My pleasure. Steven.

We’ve been speaking with IT consultant Bob Charette about the enormous and still-growing problem of legacy IT systems.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

How Estonia’s Management of Legacy IT Has Helped It Weather the Pandemic

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/computing/it/estonia-manages-legacy-it

In “The Hidden World of IT Legacy Systems,” I describe how the Covid-19 pandemic spotlighted the problems legacy IT systems pose for companies, and how they especially affect governments. A recent Bloomberg News story that discussed how government legacy IT systems in Japan are holding back that country’s economic recovery further illustrates the magnitude of the problem.

Japanese economist Yukio Noguchi is quoted in the story as warning that the country is “behind the world by at least 20 years” in administrative technology. The poor shape of Japan’s governmental IT hinders the country’s global competitiveness by restraining private sector technological advances.

This helps explain why, despite being the world’s third-largest economy, Japan is now ranked only 23rd out of 63 countries when it comes to digital competitiveness as measured by the IMD World Competitiveness Center, a Swiss business school. In addition, the frayed IT infrastructure has diminished the potential benefits from the government’s $2.2 trillion pandemic fiscal stimulus, the Bloomberg story states.

On the other hand, Estonia’s IT systems have weathered the pandemic well. According to an article from the World Economic Forum, during the country’s pandemic lockdown, its online government services continued to be readily available. Further, its schools experienced little difficulty supporting digital learning, remote working seems to have been a non-issue, and its health information systems were quickly reconfigured to provide information about newly diagnosed Covid-19 cases in near real time; contact tracers were updated with new case information five times a day, for example.

Why have Estonia’s IT systems been able to handle the pandemic so well? In the words of David Eaves from the Harvard Kennedy School of Government, by being both “lucky and good.” Eaves says Estonia was “lucky” because, after the Baltic country won back its independence from Russia in August 1991 and the last Russian troops left in 1994, the country of 1.3 million people found itself desperately poor. The Russians, Eaves notes, “took everything,” leaving essentially a clean-slate IT and telecommunications environment.

Eaves states Estonia was “good” in that its political leadership was savvy enough to recognize how important modern technology was not only to its future economy but also to its political stability and independence. Eaves said being poor meant that the country’s leadership could not “afford to make bad decisions” the way richer countries could. Estonia began by modernizing its telecommunications infrastructure—mobile first because it was easiest and cheapest—followed by a fiber-optic backbone and then, beginning in 2001, public Wi-Fi areas set up across the country.

The Estonian government realized early on that world-class communications, along with universal access to the Internet, were key to quickly modernizing industry as well as government. Estonia embarked on a program to digitalize government operations and planned the extensive use of the Internet to allow its citizens to communicate and interact with the government. Eaves states that Estonia’s political leadership also understood that to do so successfully required the creation of the necessary legal infrastructure. For example, this meant ensuring the protection of individual privacy, safeguarding personal information, and providing total transparency regarding how personal data would be used. Without these, Estonians would not trust a totally wired government or society, especially after their unpleasant experience of being part of the Soviet Union.

Estonia also wanted to avoid being encumbered with old technologies and their mounting maintenance costs, so it decided to adopt an “eliminate legacy IT” principle. In other words, for systems in the public sector, the government decreed that “IT solutions of material importance must never be older than 13-years-old.” Once a system reaches ten years of age, it must be reviewed and then replaced before it becomes a teenager. While the 13-year period seems arbitrary, it serves as a forcing function to ensure existing systems don’t fall into the prevailing twilight world of technology maintenance.

Estonia proudly proclaims that along with 99 percent of its public services being online 24/7, “E-services are only impossible for marriages, divorces and real-estate transactions—you still have to get out of the house for those.” Everything else, from voting to filing taxes, can easily be done online securely and quickly.

This is possible because since 2007, there has been an information “once only” policy, where the government cannot ask citizens to re-enter the same information twice. If your personal information was collected by the census bureau, the hospital will not ask for the same information again, for example. This policy meant different government ministries had to figure out how to share and protect citizen information, which had an added benefit of making their operations very efficient. Estonia’s online tax system is reported to allow people to file their taxes in as little as three minutes.
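The mechanics of a “once only” rule can be sketched in a few lines. The sketch below is purely illustrative—the names are invented, and Estonia’s actual data-exchange layer (X-Road) is far more elaborate, handling security, consent, and audit logging—but it shows the core idea: agencies consult a shared registry before ever re-asking the citizen.

```python
# Toy sketch of the "once only" principle. All names are hypothetical;
# Estonia's real X-Road infrastructure is far more elaborate.

registry = {}  # shared store of citizen data, keyed by (citizen, field)

def collect(citizen_id, field, ask_citizen):
    """Return the stored value if one exists; ask the citizen only once."""
    key = (citizen_id, field)
    if key not in registry:
        registry[key] = ask_citizen()  # the one and only time the citizen is asked
    return registry[key]

# First agency asks the citizen; a second agency reuses the stored answer.
asks = []
def ask():
    asks.append(1)
    return "Tallinn"

print(collect("EE-1", "address", ask))  # → Tallinn (citizen is asked)
print(collect("EE-1", "address", ask))  # → Tallinn (served from the registry)
print(len(asks))  # → 1: the citizen was asked exactly once
```

The efficiency benefit mentioned above falls out naturally: every agency after the first gets the data without any new collection step.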

Scaling up Estonia’s e-Governance ecosystem [PDF] to larger countries might not be easy. However, there is still much to be learned about what an e-government approach can achieve, and which IT legacy modernization strategies might be quickly implemented, Eaves argues.

Yet, even in Estonia, there are a few dark clouds forming in the distance over its IT systems. The government’s chief information officer, Siim Sikkut, has repeatedly warned that while there has been funding available to build new online capabilities, the country’s IT infrastructure has been chronically underfunded for several years. Sikkut argues that it may be time to start shifting the balance of funding away from new IT initiatives toward supporting and replacing existing systems. A September 2019 Organization for Economic Cooperation and Development report indicates that Estonia needs to spend approximately 1.5 percent of the state budget on its digitalization efforts but is currently spending only around 1.1 to 1.3 percent.

Finding more IT modernization funding to make up the shortfall may not be easy, given the pandemic’s economic impacts on Estonia and other competing government e-governance objectives. Given its seeming e-governance prowess, Estonia is surprisingly only ranked 29th by the IMD in global competitiveness, a position the government wants to rectify, which will require even more funding of new IT initiatives.

It will be interesting to see over the long run whether Estonia will be able to find the funding for both new IT initiatives and IT modernizations, or if it will choose to fund the former over the latter, and end up stumbling into the legacy IT system trap like so many other countries have.

Inside the Hidden World of Legacy IT Systems

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/computing/it/inside-hidden-world-legacy-it-systems

“Fix the damn unemployment system!”

This past spring, tens of millions of Americans lost their jobs due to lockdowns aimed at slowing the spread of the SARS-CoV-2 virus. And untold numbers of the newly jobless waited weeks for their unemployment benefit claims to be processed, while others anxiously watched their bank accounts for an extra US $600 weekly payment from the federal government.

Delays in processing unemployment claims in 19 states—Alaska, Arizona, Colorado, Connecticut, Hawaii, Iowa, Kansas, Kentucky, New Jersey, New York, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Texas, Vermont, Virginia, and Wisconsin—are attributed to problems with antiquated and incompatible state and federal unemployment IT systems. Most of those systems date from the 1980s, and some go back even further.

Things were so bad in New Jersey that Governor Phil Murphy pleaded in a press conference for volunteer COBOL programmers to step up to fix the state’s Disability Automated Benefits System. A clearly exasperated Murphy said that when the pandemic passed, there would be a post mortem focused on the question of “how the heck did we get here when we literally needed cobalt [sic] programmers?”

Similar problems have emerged at the federal level. As part of the federal government’s pandemic relief plan, eligible U.S. taxpayers were to receive $1,200 payments from the Internal Revenue Service. However, it took up to 20 weeks to send out all the payments because the IRS computer systems are even older than the states’ unemployment systems, some dating back almost 60 years.

As the legendary investor Warren Buffett once said, “It’s only when the tide goes out that you learn who’s been swimming naked.” The pandemic has acted as a powerful outgoing tide that has exposed government’s dependence on aging legacy IT systems.

But governments aren’t the only ones struggling under the weight of antiquated IT. It is equally easy to find airlines, banks, insurance companies, and other commercial entities that continue to rely on old IT, contending with software or hardware that is no longer supported by the supplier or has defects that are too costly to repair. These systems are prone to outages and errors, vulnerable to cyber­intrusions, and progressively more expensive and difficult to maintain.

Since 2010, corporations and governments worldwide have spent an estimated $35 trillion on IT products and services. Of this amount, about three-quarters went toward operating and maintaining existing IT systems. And at least $2.5 trillion was spent on trying to replace legacy IT systems, of which some $720 billion was wasted on failed replacement efforts.

But it’s astonishing how seldom people notice these IT systems, even with companies and public institutions spending hundreds of billions of dollars every year on them. From the time we get up until we go to bed, we interact, often unknowingly, with dozens of IT systems. Our voice-activated digital assistants read the headlines to us before we hop into our cars loaded with embedded processors, some of which help us drive, others of which entertain us as we guzzle coffee brewed by our own robotic baristas. Infrastructure like wastewater treatment plants, power grids, air traffic control, telecommunications services, and government administration depends on hundreds of thousands of unseen IT systems that form another, hidden infrastructure. Commercial organizations rely on IT systems to manage payroll, order supplies, and approve cashless sales, to name but three of thousands of automated tasks necessary to the smooth functioning of a modern economy. Though these systems run practically every aspect of our lives, we don’t give them a second thought because, for the most part, they function. It doesn’t even occur to us that IT is something that needs constant attention to be kept in working order.

In his landmark study The Shock of the Old: Technology and Global History Since 1900 (Oxford University Press, 2007), British historian David Edgerton claims that although maintenance and repair are central to our relationship with technology, they are “matters we would rather not think about.” As a result, technology maintenance “has lived in a twilight world, hardly visible in the formal accounts societies make of themselves.”

Indeed, the very invisibility of legacy IT is a kind of testament to how successful these systems are. Except, of course, when they’re not.

There’s no formal definition of “legacy system,” but it’s commonly understood to mean a critical system that is out of date in some way. It may be unable to support future business operations; the vendors that supplied the application, operating system, or hardware may no longer be in business or support their products; the system architecture may be fragile or complex and therefore unsuitable for upgrades or fixes; or the finer details of how the system works are no longer understood.

To modernize a computing system or not is a question that bedevils nearly every organization. Given the many problems caused by legacy IT systems, you’d think that modernization would be a no-brainer. But that decision isn’t nearly as straightforward as it appears. Some legacy IT systems end up that way because they work just fine over a long period. Others stagger along because the organization either doesn’t want to or can’t afford to take on the cost and risk associated with modernization.

Obviously, a legacy system that’s critical to day-to-day operations cannot be replaced or enhanced without major disruption. And so even though that system contributes mightily to the organization’s operations, management tends to ignore it and defer modernization. On most days, nothing goes catastrophically wrong, and so the legacy system remains in place.

This “kick the can” approach is understandable. Most IT systems, whether new or modernized, are expensive affairs that go live late and over budget, assuming they don’t fail partially or completely. These situations are not career-enhancing experiences, as many former chief information officers and program managers can attest. Therefore, once an IT system is finally operating reliably, there’s little motivation to plan for its eventual retirement.

What management does demand, however, is for any new IT system to provide a return on investment and to cost as little as possible for as long as possible. Such demands often lead to years of underinvestment in routine maintenance. Of course, those same executives who approved the investment in the new system probably won’t be with the organization a decade later, when that system has legacy status.

Similarly, the developers of the system, who understand in detail how it operates and what its limitations are, may well have moved on to other projects or organizations. For especially long-lived IT systems, most of the developers have likely retired. Over time, the system becomes part of the routine of its users’ daily life, like the office elevator. So long as it works, no one pays much attention to it, and eventually it recedes into the organization’s operational shadows.

Thus does an IT system quietly age into legacy status.

Millions of people every month experience the frustrations and inconveniences of decrepit legacy IT.

U.K. bank customers know this frustration only too well. According to the U.K. Financial Conduct Authority, the nation’s banks reported nearly 600 IT operational and security incidents between October 2017 and September 2018, an increase of 187 percent from a year earlier. Government regulators point to the banks’ reliance on decades-old IT systems as a recurring cause for the incidents.

Airline passengers are equally exasperated. Over the past several years, U.S. air carriers have experienced on average nearly one IT-related outage per month, many of them attributable to legacy IT. Some have lasted days and caused the delay or cancellation of thousands of flights.

Poorly maintained legacy IT systems are also prone to cybersecurity breaches. At the credit reporting agency Equifax, the complexity of its legacy systems contributed to a failure to patch a critical vulnerability in the company’s Automated Consumer Interview System, a custom-built portal developed in the 1970s to handle consumer disputes. This failure led, in 2017, to the loss of 146 million individuals’ sensitive personal information.

Aging IT systems also open the door to crippling ransomware attacks. In this type of attack, a cyberintruder hacks into an IT system and encrypts all of the system data until a ransom is paid. In the past two years, ransomware attacks have been launched against the cities of Atlanta and Baltimore as well as the Florida municipalities of Riviera Beach and Lake City. The latter two agreed to pay their attackers $600,000 and $500,000, respectively. Dozens of state and local governments, as well as school systems and hospitals, have experienced ransomware attacks.

Even if they don’t suffer an embarrassing and costly failure, organizations still have to contend with the steadily climbing operational and maintenance costs of legacy IT. For instance, a recent U.S. Government Accountability Office report found that of the $90 billion the U.S. government spent on IT in fiscal year 2019, nearly 80 percent went toward operation and maintenance of existing systems. Furthermore, of the 7,000 federal IT investments the GAO examined in detail, it found that 5,233 allocated all their funding to operation and maintenance, leaving no monies to modernize. From fiscal year 2010 to 2017, the amount spent on IT modernization dropped by $7.3 billion, while operation and maintenance spending rose by 9 percent. Tony Salvaggio, founder and CEO of CAI, an international firm that specializes in supporting IT systems for government and commercial firms, notes that ever-growing IT legacy costs will continue to eat government’s IT modernization “seed corn.”

While not all operational and maintenance costs can be attributed to legacy IT, the GAO noted that the rise in spending is likely due to supporting obsolete computing hardware—for example, two-thirds of the Internal Revenue Service’s hardware is beyond its useful life—as well as “maintaining applications and systems that use older programming languages, since programmers knowledgeable in these older languages are becoming increasingly rare and thus more expensive.”

Take COBOL, a programming language that dates to 1959. Computer science departments stopped teaching COBOL some decades ago. And yet the U.S. Social Security Administration reportedly still runs some 60 million lines of COBOL. The IRS has nearly as much COBOL programming, along with 20 million lines of assembly code. And, according to a 2016 GAO report, the departments of Commerce, Defense, Treasury, Health and Human Services, and Veterans Affairs are still “using 1980s and 1990s Microsoft operating systems that stopped being supported by the vendor more than a decade ago.”

Given the vast amount of outdated software that’s still in use, the cost of maintaining it will likely keep climbing not only for government, but for commercial organizations, too.

The first step in fixing a massive problem is to admit you have one. At least some governments and companies are finally starting to do just that. In December 2017, for example, President Trump signed the Modernizing Government Technology Act into law. It allows federal agencies and departments to apply for funds from a $150 million Technology Modernization Fund to accelerate the modernization of their IT systems. The Congressional Budget Office originally indicated the need was closer to $1.8 billion per year, but politicians’ concerns over whether the money would be well spent resulted in a significant reduction in funding.

Part of the modernization push by governments in the United States and abroad has been to provide more effective administrative controls, increase the reliability and speed of delivering benefits, and improve customer service. In the commercial sector, by contrast, IT modernization is being driven more by competitive pressures and the availability of newer computing technologies like cloud computing and machine learning.

“Everyone understands now that IT drives organization innovation,” Salvaggio told IEEE Spectrum. He believes that the capabilities these new technologies will create over the next few years are “going to blow up 30 to 40 percent of [existing] business models.” Companies saddled with legacy IT systems won’t be able to compete on the expected rapid delivery of improved features or customer service, and therefore “are going to find themselves forced into a box canyon, unable to get out,” Salvaggio says.

This is already happening in the banking industry. Existing firms are having a difficult time competing with new businesses that are spending most of their IT budgets on creating new offerings instead of supporting legacy systems. For example, Starling Bank in the United Kingdom, which began operations in 2014, offers only mobile banking. It uses Amazon Web Services to host its services and spent a mere £18 million ($24 million) to create its infrastructure. In comparison, the U.K.’s TSB bank, a traditional full-service bank founded in 1810, spent £417 million ($546 million) moving to a new banking platform in 2018.

Starling maintains all its own code and does an average of one software release per day. It can do this because it doesn’t have the intricate connections to myriad legacy IT systems, where every new software release carries a measurable risk of operational failure, according to the U.K.’s bank regulators. Simpler systems mean fewer and shorter IT-related outages. Starling has had only one major outage since it opened, whereas each of the three largest U.K. banks has had at least a dozen apiece over the same period.

Modernization creates its own problems. Take the migration of legacy data to a new system. When TSB moved to its new IT platform in 2018, some 1.9 million online and mobile customers discovered they were locked out of their accounts for nearly two weeks. And modernizing one legacy system often means having to upgrade other interconnecting systems, which may also be legacy. At the IRS, for instance, the original master tax file systems installed in the 1960s have become buried under layers of more modern, interconnected systems, each of which made it harder to replace the preceding system. The agency has been trying to modernize its interconnected legacy tax systems since 1968 at a cumulative cost of at least $20 billion in today’s money, so far with very little success. It plans to spend up to another $2.7 billion on modernization over the next five years.

Another common issue is that legacy systems have duplicate functions. The U.S. Navy is in the process of installing its $167 million Navy Pay and Personnel system, which aims to consolidate 223 applications residing in 55 separate IT systems, including 10 that are more than 30 years old and a few that are more than 50 years old. The disparate systems used 21 programming languages running on nine operating systems, spread across 73 data centers and networks.

Such massive duplication and data silos sound ridiculous, but they are shockingly common. Here’s one way it often happens: The government issues a new mandate that includes a requirement for some type of automation, and the policy comes with fresh funding to implement it. Rather than upgrade an existing system, which would be disruptive, the department or agency finds it easier to just create a new IT system, even if some or most of the new system duplicates what the existing system is doing. The result is that different units within the same organization end up deploying IT systems with overlapping functions.

“The shortage of thinking about systems engineering” along with the lack of coordinating IT developments to avoid duplication have long plagued government and corporations alike, Salvaggio says.

The best way to deal with legacy IT is to never let IT become legacy. Growing recognition of legacy IT systems’ many costs has sparked a rethinking of the role of software maintenance. One new approach was recently articulated in Software Is Never Done, a May 2019 report from the U.S. Defense Innovation Board. It argues that software should be viewed “as an enduring capability that must be supported and continuously improved throughout its life cycle.” This includes being able to test, integrate, and deliver improvements to software systems within short periods of time and on an ongoing basis.

Here’s what that means in practice. Currently, software development, operations, and support are considered separate activities. But if you fuse those activities into a single integrated activity—employing what is called DevOps—the operational system is then always “under development,” continuously and incrementally being improved, tested, and deployed, sometimes many times a day.
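As a rough illustration of the fused activity described above—with invented names, not any particular CI/CD toolchain—each change flows through automated tests and, if it passes, is deployed incrementally:

```python
# Hypothetical sketch of a DevOps-style pipeline: development, testing,
# and deployment fused into one continuous activity. The stage names are
# invented for illustration, not taken from any real CI/CD tool.

def passes_tests(change):
    # Gate: a change that fails any automated check never reaches production.
    return all(check() for check in change["checks"])

def deploy(change, production):
    # Small incremental release rather than a big-bang cutover.
    production.append(change["id"])

def pipeline(changes, production):
    # Every change flows through the same test-then-deploy loop,
    # potentially many times a day.
    for change in changes:
        if passes_tests(change):
            deploy(change, production)
    return production

prod = pipeline(
    [
        {"id": "fix-123", "checks": [lambda: True]},
        {"id": "feat-456", "checks": [lambda: False]},  # failing change is held back
    ],
    [],
)
print(prod)  # → ['fix-123']
```

The point of the sketch is the loop itself: because testing and deployment are one automated path rather than separate handoffs, the system is always “under development” in small, reversible steps.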

DevOps is just one way to keep core IT systems from turning into legacy systems. The U.S. Defense Advanced Research Projects Agency has been exploring another, potentially more effective way, one that acknowledges just how long IT systems live once implemented.

Since 2015, DARPA has funded research aimed at making software that will be viable for more than 100 years. The Building Resource Adaptive Software Systems (BRASS) program is trying to figure out how to build “long-lived software systems that can dynamically adapt to changes in the resources they depend upon and environments in which they operate,” according to program manager Sandeep Neema.

Creating such timeless systems will require a “start from scratch” approach to software design that doesn’t make assumptions about how an IT system should be designed, coded, or maintained. That will entail identifying the logical (libraries, data formats, structures) and physical resources (processing, storage, energy) a software program needs for execution. Such analyses could use advanced AI techniques that discover and make visible an application’s operations and interactions with other applications and systems. By doing so, changes to resources or interactions with other systems, which account for many system failures or inefficient operations, can be actively managed before problems occur. Developers will also need to create a capability, again possibly using AI, to monitor and repair all elements of the execution environment in which the application resides.
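To give a flavor of what “making an application’s resources visible” might mean in practice, here is a deliberately simple, hypothetical sketch that enumerates one kind of logical resource—the libraries a running Python program has loaded—which an adaptive system could snapshot and later diff to detect environmental change before it causes a failure:

```python
# Hypothetical illustration of resource discovery: enumerate the modules
# a running Python program has loaded. An adaptive system could record
# this snapshot and compare it against later runs to spot changes in the
# execution environment.
import sys

def loaded_dependencies():
    """Names of top-level modules currently loaded, as a sorted list."""
    return sorted({name.split(".")[0]
                   for name in sys.modules
                   if not name.startswith("_")})

snapshot = loaded_dependencies()
print("sys" in snapshot)  # → True: the interpreter itself is a dependency
```

A real BRASS-style system would go far beyond this—tracking data formats, storage, energy, and inter-system interactions—but the principle is the same: you cannot manage resources you have not first made visible.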

The goal is to be able to update or upgrade applications without the need for extensive intervention by a human programmer, Neema told Spectrum, thereby “buying down the cost of maintenance.”

The BRASS program has funded nine projects, each of which represents different aspects of what a resource-adaptive software system will need to do. Some of the projects involve UAVs, mobile robots, and high-performance computing. The final results of the effort are expected later this year, when the technologies will be released to open-source repositories, industry, and the Defense Department.

Neema says no one should expect BRASS to deliver “a general-purpose software repair capability.” A more realistic outcome is an approach that can work within specific data, software, and system parameters to help the maintainers who oversee those systems become more efficient and effective. He of course hopes that private companies and other government organizations will build on the BRASS program’s results.

The COVID-19 pandemic has exposed the debilitating consequences of relying on antiquated IT systems for essential services. Unfortunately, that dependence, along with legacy IT’s enormous and increasing costs, will still be with us long after the pandemic has ended. For the U.S. government alone, even a concerted and well-executed effort would take decades to replace the thousands of existing legacy systems. Over that time, current IT systems will also become legacy and themselves require replacement. Given the budgetary impacts of the pandemic, even less money for legacy system modernization may be available in the future across all government sectors.

The problems associated with legacy systems will only worsen as the Internet of Things, with its billions of interconnected computing devices, matures. These devices are already being connected to legacy IT, which will make it even more difficult to replace and modernize those systems. And eventually the IoT devices will become legacy. Just as with legacy systems today, those devices likely won’t be replaced as long as they continue to work, even if they are no longer supported. The potential cybersecurity risk of vast numbers of obsolete but still operating IoT devices is a huge unknown. Already, many IoT devices have been deployed without basic cybersecurity built into them, and this shortsightedness is taking a toll. Cybersecurity concerns compelled the U.S. Food and Drug Administration to recall implantable pacemakers and insulin pumps and the National Security Agency to warn about IoT-enabled smart furniture, among other Internet-connected things.

Now imagine a not-too-distant future where hundreds of millions or even billions of legacy IoT devices are deeply embedded into government and commercial offices, schools, hospitals, factories, homes, and even people. Further imagine that their cybersecurity or technical flaws are not being fixed and remain connected to legacy IT systems that themselves are barely supported. In such a world, the pervasive dependence upon increasing numbers of interconnected, obsolete systems will have created something far grimmer and murkier than Edgerton’s twilight world.

This article appears in the September 2020 print issue as “The Hidden World of Legacy IT.”

About the Author

As a risk consultant for businesses and a slew of three-lettered U.S. government agencies, Contributing Editor Robert N. Charette has seen more than his share of languishing legacy IT systems. As a civilian, he’s also been a casualty of a legacy system gone berserk. A few years ago, his bank’s IT system, which he later found out was being upgraded, made an error that was most definitely not in his favor.

He’d gone to an ATM to withdraw some weekend cash. The machine told him that his account was overdrawn. Puzzled, because he knew he had sufficient funds in his account to cover the withdrawal, he had to wait until Monday to contact the bank for an explanation. When he called, the customer service representative insisted that he was indeed overdrawn. This was an understatement, considering that the size of the alleged overdraft might have caused a person less versed in software debacles to have a stroke.

“ ‘You know, you’re overdrawn by [US] $1,229,200,’ ” Charette recalls being told. “I was like, well, that’s interesting, because I don’t have that much money in my bank account.”

The customer service rep then acknowledged it could be an error caused by a computer glitch during a recent systems upgrade. Two days later he received the letter pictured above from his bank, apparently triggered by a check he had written for $55.80. Charette notes that it wasn’t the million-dollar-plus overdraft that triggered the letter, just that last double-nickel drop in the bucket.

The bank never did send a letter apologizing for the inconvenience or explaining the problem, which he believes likely affected other customers. And like so many of the failed legacy-system upgrades—some costing billions, which Charette describes here—it never made the news, either.

Anti-Phishing Testers Put Themselves on the Hook

Post Syndicated from Robert Charette original https://spectrum.ieee.org/riskfactor/computing/it/antiphishing-testers-put-themselves-on-the-hook


Do you want to break into computer networks or steal money from people’s bank accounts without doing all the tedious hard work of defeating security systems directly? Then phishing is for you, where a convincing email can be all that’s required to have victims serve up their passwords or personal information on a platter. With so many people working from home and doing business online thanks to Covid-19, this year is proving to be a phisher’s paradise, with a myriad of new opportunities to scam the unsuspecting. Solicitations from fake charities, along with emails purporting to be from government organizations like state unemployment agencies, health agencies, and tax collection agencies are flooding into people’s inboxes.

It’s not just individuals who have to worry. Because so many organizations have shifted their employees to remote work, cybercrime targeting “has shown a significant target shift from individuals and small businesses to major corporations, governments and critical infrastructure,” according to Interpol. In response, IT departments have ramped up their efforts to stop staffers from giving away the store. But sometimes these efforts have caused unexpected collateral damage.

The urgency IT departments feel is understandable. Interpol predicts that phishing attacks—which already made up 59 percent of Covid-related threats reported to it by member countries—will be ramped up even more in the coming months. And the nature of the threat is evolving: for example, false invitations to videoconference meetings are a phisher’s new favorite for trying to steal network credentials, says the Wall Street Journal.

The human element is central to phishing, so government agencies and corporations have stepped up employee phishing training, including the use of phishing tests. These tests use mock phishing emails and websites, often modeled on real phishing emails, to see how employees respond. When done well, “for educational purposes and not as a punitive ‘gotcha’ exercise, employees can improve their ability to spot” and properly report phishing attacks, says Ryan Hardesty, president of PhishingBox, an anti-phishing training and testing company.
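Mechanically, a testing platform of the kind Hardesty describes boils down to per-recipient bookkeeping: each mock email carries a unique tracking token, so a click can be traced back to the tested employee for follow-up training. A minimal Python sketch of that bookkeeping, with illustrative function names and addresses (this is not PhishingBox’s actual API):

```python
import secrets

def build_campaign(recipients):
    """Assign each employee a unique, unguessable token for a sanctioned
    internal phishing test. The token-to-email map lets the security team
    follow up with training, not punishment."""
    campaign = {}
    for email in recipients:
        token = secrets.token_urlsafe(16)  # per-recipient tracking ID
        campaign[token] = email
    return campaign

def record_click(campaign, token):
    """Resolve a clicked tracking link back to the tested employee.
    Returns None for unknown tokens (e.g., a mangled or stale link)."""
    return campaign.get(token)

staff = ["alice@agency.example", "bob@agency.example"]
campaign = build_campaign(staff)
first_token = next(iter(campaign))
print(record_click(campaign, first_token))
```

Note that nothing in this scheme stops a recipient from forwarding the email, token and all, to friends in other agencies, which is exactly how the TSP tests described below spiraled out of control.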

But, Hardesty acknowledges, a delicate balance is required to make the phishing lure attractive without causing knock-on problems. This can be seen in two incidents, in 2009 and 2014, involving U.S. federal employees who contributed to the government’s Thrift Savings Plan (TSP), the government’s version of a 401(k) plan.

The employees received emails, ostensibly from the TSP, claiming that their accounts were at risk unless they submitted their personal account information to a designated website. However, both times the emails were actually part of phishing security tests conducted by different government agencies.

The first time, the email came from the U.S. Department of Justice; the second time, from a U.S. Army command. In both cases, multiple employees who received the phishing test emails forwarded them to friends and family in numerous government agencies, causing widespread concern if not panic. Furthermore, in both cases, the people running the phishing tests did not let the TSP know what they had done, either before or after. Indeed, had they given advance warning, TSP lawyers would have immediately sent cease-and-desist letters. And the lack of candor afterwards meant that it took weeks of effort by furious TSP officials and multiple government agencies to unravel what had happened, determine who was responsible, and put an end to it.

The 2014 U.S. Army phishing test was especially successful in stoking fears because TSP had suffered a breach exposing the personal information of 123,000 members in 2012. The email claimed that TSP accounts had been breached again and members needed to change their passwords.

Other similar incidents have occurred, including one in 2015, when a Belgian regional government phishing exercise used a supposed booking from the French-Belgian high-speed train operator Thalys as bait (Thalys was not amused), and another in 2018, when the Michigan Democratic Party conducted a phishing exercise involving a highly sensitive voter database without informing the Democratic National Committee (which, likewise, was not amused).

Mark Henderson, a security specialist with the Internal Revenue Service’s Online Fraud Detection and Prevention department, told me that the problem of phishing email tests “going awry” seems to be proliferating. The IRS, for example, has seen an uptick in reports of phishing emails purporting to be from the IRS or Department of the Treasury that are not actual phishing attacks but mock attacks from organizations conducting internal phishing tests. On top of being illegal (Henderson points out that phishing emails are prohibited from using the IRS name, logo, or insignia in a manner that suggests association with or endorsement by the IRS), such tests can cause undue distress to those being tested, increase the administrative workload for the IRS and the Department of the Treasury, and divert attention from real threats.

While there aren’t publicly available statistics on phishing exercises that create collateral damage, I suspect that many other U.S. government organizations, such as the Centers for Disease Control and Prevention, the Department of Homeland Security, and the Food and Drug Administration, are experiencing the same problem. State and local governments are almost certainly dealing with phishing-test spillover effects, as are governmental organizations abroad.

Admittedly, it is highly tempting to use Covid-19-related issues as phishing lures, since these issues are on everyone’s mind. If you are a U.S. taxpayer still awaiting your economic impact payment, any email that looks like it might be from the IRS will immediately get your attention. (Note: No one from the IRS will reach out to you by phone, email, mail, or in person asking for any kind of information to complete your economic impact payment.)

The security industry has not come to a consensus on the sensitivities regarding pandemic-related bait. Cofense, an anti-phishing company, declared in March that it decided to remove all Covid-19-themed phishing templates from its repositories, and called on the anti-phishing industry to do the same. However, other anti-phishing companies took issue with that request. Perry Carpenter, KnowBe4’s strategy officer, wrote that with the rapid acceleration of phishing attacks they were seeing, phishing security testing needed to be ramped up. In fact, Carpenter argued that “not conducting phishing training during this time amounts to negligence.”

There are as yet no accepted industry standards for conducting phishing exercises, although the U.K. National Cyber Security Centre has published an abundance of practical guidance on what to do and what not to do in these sorts of tests. It has also published very useful information on how organizations can reduce their exposure to phishing attacks, which is really the first line of defense.

Hardesty, too, believes that phishing security tests should continue, but only if they are well considered in light of Covid-19 sensibilities and aim to educate rather than shame. He makes the appropriate rules of engagement clear to PhishingBox clients, such as not using the IRS as bait in their phishing exercises. Most clients are careful, but when they’re not, it can spark a call from the IRS. As Hardesty puts it, “you never want a call from the IRS concerning a phishing exercise that originated on your platform.”

What Open Source Technology Can and Can’t Do to Fix Elections

Post Syndicated from Lucas Laursen original https://spectrum.ieee.org/tech-talk/computing/it/what-open-source-technology-can-cant-do-fix-elections

Los Angeles voters in last month’s primary election found themselves in a familiar situation: stuck in traffic. But they weren’t in the comfort of their cars on the freeway. Instead, due to mishaps with new ballot-marking devices, some voters stood in line for more than two hours and others didn’t get to vote at all.

Los Angeles County is one of a growing number of public authorities testing open source components for voting systems. It began work on its US $300 million Voting Solutions for All People (VSAP) project in 2009, announcing at the time that its source code would be open and belong to the county. Most American voting systems belong to one of a handful of certified commercial suppliers, which keep their source code close to their chests.

Countries Debate Openness of Future National IDs

Post Syndicated from Lucas Laursen original https://spectrum.ieee.org/tech-talk/computing/it/countries-debate-openness-of-future-national-ids

Kenya’s High Court ruled Thursday that a recent amendment requiring citizens to register for a national biometric digital identification system overreached on some counts, such as allowing for links to DNA or GPS records, and failed to guarantee sufficient inclusion of Kenyan residents. 

The ID system, called the National Integrated Identity Management System (NIIMS), was a homegrown answer to India’s pioneering Aadhaar system, which two years ago faced its own Supreme Court ruling that upheld some components while modifying others.

More than half of African countries are developing some form of biometric or digital national ID in response to major international calls to establish legal identification for the almost 1 billion people who now lack it. But this ID boom, also taking place outside Africa, often gets ahead of data protection laws, as occurred in Kenya. 

For countries that take the plunge, unscrupulous vendors can lock them into their products. For example, the Kenyan software was accessible only to government agencies and contractors such as Idemia, something Open Society Foundations senior managing legal officer Laura Bingham calls “concerning.” In contrast, an open-source outgrowth of Aadhaar called the Modular Open Source Identity Platform (MOSIP) is showing that there is another way to do it. 

“Our major contribution is to develop the identity platform in open source and modular,” says S. Rajagopalan, MOSIP technology chair and a researcher at the International Institute of Information Technology in Bangalore, so “a country can configure its own ID system by picking and choosing modules. For example, a country can have an ID system without biometrics.”

About 146 countries require citizens and residents to have a national ID of some kind, and at least 35 of those IDs include biometric information beyond just a photograph; dozens more countries are in talks with biometrics providers.

MOSIP has signed agreements with the governments of Morocco and the Philippines so far. Its open, modular approach on its own probably won’t solve all the security issues associated with early national ID ecosystems such as Aadhaar, Mozilla Foundation researchers wrote in a January 2020 white paper on open ID, but it could empower governments to expect more from the vendors that support future national IDs.

The Philippines, for example, is separating its ID device reader, biometrics recognition systems, and platform integration into three separate tenders. The device reader and biometrics system will integrate with MOSIP. But the separate operators and open source code of the underlying platform would make it easier for the government to replace any one vendor if necessary.

The model is attracting biometrics companies hoping to tap into a large international market for identification at a relatively lower cost. Electrical engineer Rahul Parthe, the co-founder and chairman of the first company to adapt an Automated Biometric Identification System (ABIS) to integrate with MOSIP, says that his company, TECH5, wants its identification technology to include as many people as possible. 

“We as TECH5 see this as a good opportunity, as our ABIS  platform completes the solution and ensures the usage of the technology in the intended way,” Parthe says. When enough governments are on board with open ID platforms, he adds, every tech vendor will be compelled to adapt.

A major incentive for vendors is knowing that their innovations can reach the kinds of scale offered by Aadhaar and the next generation of national IDs across many countries. TECH5, for example, has developed a more information-rich, scannable biometric barcode and smartphone software that would enable people to verify someone’s identity with biometrics even offline. That could empower people in the least connected places.

As it is, a lack of legal ID and restrictive government policies prevent some people from accessing food aid, healthcare, voting, and more. That is why half a dozen civil society groups took the Kenyan government to court over shortcomings they saw in NIIMS and the rushed process by which it was passed. While the court ruled that the passage of the overall law was valid, it agreed with some of the petitioners’ criticisms. Among other changes, the court ordered the government to enact new regulations to ensure that NIIMS does not exclude Kenyan residents due to paperwork or biometric irregularities.

However, the court avoided the question of whether NIIMS’ software should have been more open—one of the points raised by the petitioners—to allow for external scrutiny. Lawyers Hub, a legal tech group, tweeted that: “Design would have been a useful conversation to have [because] architecture determines security.”

Other countries working on their own biometric national IDs will have to pick up that conversation.

It will also take engineers, says Open Society Foundations’ Bingham, and they need to get involved from the start: “Engineers who develop technology don’t necessarily think about [inclusiveness] because they’re not necessarily involved that early in the process.”

AI and the Future of Work: What to look out for

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/computing/it/ai-and-the-future-of-work-what-to-look-out-for

The robots have come for our jobs. This is the fear that artificial intelligence increasingly stokes with both the tech and policy elite and the public at large. But how worried should we really be?

To consider what impact AI will have on employment, a conference at MIT titled The Future of Work is convening this week, bringing together some leading thinkers and analysts. In advance of the conference, IEEE Spectrum talked with R. David Edelman, director of MIT’s Project on Technology, Economy & National Security, about his take on AI’s coming role.

Edelman says he’s lately seen AI-related worry (mixed with economic anxiety) ramp up in much the same way that cybersecurity worries began ratcheting up ten or fifteen years ago.

“Increasingly, issues like the implications of AI on the future of work are table stakes for understanding our economic future and our ability to deliver prosperity for Americans,” he says. “That’s why you’re seeing broad interest, not just from the Department of Labor, not just from the Council of Economic Advisors, but across the government and, in turn, across society.”

Before coming to MIT, Edelman worked in the White House from 2010-’17 under a number of titles, including Special Assistant to the President on Economic and Technology Policy. Edelman also organizes a related conference in the spring at MIT, the MIT AI Policy Congress.

At this week’s Future of Work conference, though, Edelman says he’ll be keeping his ears open for a number of issues that he thinks are not quite on everyone’s radar yet, but may be soon.

For starters, Edelman says, mainstream conversations pay too little attention to the boundary between AI-controlled systems and human-controlled ones.

“We need to figure out when people are comfortable handing decisions over to robots, and when they’re not,” he says. “There is a yawning gap between the percentage of Americans who are willing to turn their lives over to autopilot on a plane, and the percentage of Americans willing to turn their lives over to autopilot on a Tesla.”

To be clear, Edelman is not saying such fears are unfounded, only that public discussion of self-driving or self-piloting systems is very either/or: Either a system is seen as 100 percent reliable in all situations, or it’s seen as immature and never to be used.

Second, not enough attention has yet been devoted to the question of metrics we can put in place to understand when an AI system has earned public trust and when it has not.

AI systems are, Edelman points out, only as reliable as the data that trained them. So questions about racial and socioeconomic bias in, for instance, AI hiring algorithms are entirely appropriate.

“Claims about AI-driven hiring are careening ever more quickly forward,” Edelman says. “There seems to be a disconnect. I’m eager to know: Are we in a place where we need to pump the brakes on AI-influenced hiring? Or do we have some models in a technical or legal context that can give us the confidence we lack today that these systems won’t create a second source of bias?”

A third area that Edelman says deserves more media and policy attention is the question of which industries AI threatens most. While there has been discussion about the jobs squarely in AI’s crosshairs, less discussed, he says, is the bias inherent in the question itself.

A 2017 study by Yale University and the University of Oxford’s Future of Humanity Institute surveyed AI experts for their predictions about, in part, the gravest threats AI poses to jobs and economic prosperity. Edelman points out that the industry professionals surveyed all tipped their hands a bit in the survey: The very last profession AI researchers said would ever be automated was—surprise, surprise—AI researchers.

“Everyone believes that their job will be the last job to be automated, because it’s too complex for machines to possibly master,” Edelman says.

“It’s time we make sure we’re appropriately challenging this consensus that the only sort of preparation we need to do is for the lowest-wage and lowest-skilled jobs,” he says. “Because it may well be that what we think of as good middle-income jobs, maybe even requiring some education, might be displaced or have major skills within them displaced.”

Last is the belief that AI’s effect on industries will be to eliminate jobs, and only to eliminate jobs, when, Edelman says, the evidence suggests the threat is more nuanced.

AI may indeed eliminate some categories of jobs but may also spawn hybrid jobs that incorporate the new technology into an old format. As was the case with the rollout of electricity at the turn of the 20th century, new fields of study spring up too. Electrical engineers weren’t really needed before electricity became something more than a parlor curiosity, after all. Could AI engineering one day be a field unto its own? (With, surely, its own categories of jobs and academic fields of study and professional membership organizations?)

“We should be doing the hard and technical and often untechnical and unglamorous work of designing systems to earn our trust,” he says. “Humanity has been spectacularly unsuccessful in placing technological genies back in bottles. … We’re at the vanguard of a revolution in teaching technology how to play nice with humans. But that’s gonna be a lot of work. Because it’s got a lot to learn.”

Using Ethernet-Based Synchronization on the USRP™ N3xx Devices

Post Syndicated from National Instruments original https://spectrum.ieee.org/computing/it/using-ethernetbased-synchronization-on-the-usrp-n3xx-devices

The USRP N3xx product family supports three different methods of baseband synchronization: external clock and time reference, GPSDO module, and Ethernet-based timing protocol

USRP N3xx Synchronization Options

The USRP N3xx product family supports three different methods of baseband synchronization: external clock and time reference, GPSDO module, and Ethernet-based timing protocol. Using an external clock and time reference source, such as the CDA-2990 accessory, offers a precise and convenient method of baseband synchronization for high channel count systems where devices are located near each other, such as in a rackmount configuration. Using the GPSDO module enables synchronization when the devices are physically separated by large distances such as in small cell, RF sensor, TDOA, and distributed testbed applications. However, the GPSDO method typically has more skew than the other two methods and requires line of sight to satellites. Therefore, indoor, urban, or hostile environments restrict the use of GPSDO. Ethernet-based synchronization enables precise baseband synchronization over large distances in GPS-denied environments. However, this method consumes one of the SFP+ ports of the USRP N3xx devices and therefore reduces the number of connectors available for IQ streaming. This application note provides instructions for synchronizing multiple USRP N3xx devices using the Ethernet-based method.

Ethernet-Based Synchronization Overview

The USRP N3xx product family supports Ethernet-based synchronization using an open source protocol known as White Rabbit. White Rabbit is a fully deterministic Ethernet-based network protocol for general purpose data transfer and synchronization[1]. This project is supported by a collaboration of academic and industry experts such as CERN and GSI Helmholtz Centre for Heavy Ion Research.

White Rabbit is an extension of the IEEE 1588 Precision Time Protocol (PTP) standard, which distributes time references over Ethernet networks. In addition, White Rabbit uses Synchronous Ethernet (SyncE) to distribute a common clock reference over the network across the Ethernet physical layer to ensure frequency syntonization between all nodes. This combination of SyncE and PTP, in addition to further measurements, provides sub-nanosecond synchronization over distances of up to 10 km. The White Rabbit extension of the IEEE 1588-2008 standard is in the final stages of becoming generalized as the IEEE 1588 High Accuracy profile[2].
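The PTP side of this combination estimates each slave’s clock offset from the four timestamps of a two-way message exchange; White Rabbit then refines the result with SyncE frequency transfer, hardware timestamping, and phase measurement. A minimal sketch of just the IEEE 1588 offset-and-delay calculation (the White Rabbit refinements are omitted):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic IEEE 1588 two-way time transfer.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)

    Assumes a symmetric path delay; asymmetry is the main residual
    error term that White Rabbit's calibration works to remove.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Example: slave runs 150 ns ahead of the master over a 500 ns link.
offset, delay = ptp_offset_and_delay(0.0, 650e-9, 1000e-9, 1350e-9)
```

With those timestamps, the function recovers a 150 ns offset and a 500 ns delay; the slave then steers its clock by the negated offset.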

The USRP N3xx product family implements the White Rabbit protocol using a combination of the FPGA and dedicated clocking resources. Because the USRP N3xx operates as a slave node, a White Rabbit master node is required in the network. Seven Solutions provides White Rabbit hardware that works with the USRP N3xx devices to create synchronous clock and time references that are precisely aligned across all devices in the network. See the “Required Accessories” section for details on the required external hardware. The USRP N3xx devices do not support IQ sample streaming over this protocol; therefore, only one of the SFP+ ports is available for streaming when using White Rabbit synchronization.

For more information on the White Rabbit project, visit the links below:

White Rabbit documentation:

Standardization as IEEE1588 High Accuracy:

Required Accessories

White Rabbit synchronization utilizes specific optical SFP transceivers and single mode fiber optic cables to achieve precise time alignment, as documented on the project website. The USRP N3xx was tested to work as a White Rabbit slave using the AXGE-1254-0531 SFP transceiver marked in blue, the AXGE-3454-0531 SFP transceiver marked in purple, and a G652 type single mode fiber optic cable.

Seven Solutions is a provider of White Rabbit equipment, including the WR-LEN and the White Rabbit Switch (WRS). The USRP N3xx was tested to work with both the WR-LEN and the WRS products. All accessories required for White Rabbit operation can be purchased directly from the Seven Solutions website. The AXGE SFP transceivers and fiber optic cables are only listed on the website as part of the “KIT WR-LEN” product, but they can also be purchased individually by contacting Seven Solutions.

For more information on White Rabbit accessories, visit the links below:

White Rabbit SFP wiki:

Seven Solutions WR-LEN:

Seven Solutions KIT WR-LEN:

Seven Solutions WRS:

System Configuration

The White Rabbit feature of the USRP N3xx product family is based on standard networking technology, so many system topologies are possible. However, the USRP N3xx device works only as a downstream slave node and must receive its synchronization reference from an upstream master node. This section shows examples of typical configurations used to synchronize a network of multiple USRP N3xx devices.

Figure 1 shows a WRS operating as the master node connected to several USRP N3xx devices. Note that a master SFP port requires the purple SFP transceiver mentioned in the previous section, and a slave SFP port requires the blue SFP transceiver. The USRP N3xx devices use the SFP+ 0 port for White Rabbit and the SFP+ 1 port for IQ streaming. This port configuration requires the White Rabbit “WX” FPGA bitfile.

Download all FPGA images for the version of the USRP Hardware Driver (UHD) installed on the host PC by running the following command in a terminal:

uhd_images_downloader

Load the WX bitfile by running:

uhd_image_loader --args "type=n3xx,addr=ni-n3xx-<DEVICE_SERIAL>" --fpga-path "<UHD_INSTALL_DIRECTORY>/share/uhd/images/usrp_n310_fpga_WX.bit"

Using the UHD API, configure the USRP application to use “internal” clock source and “sfp0” time source:

usrp->set_clock_source("internal");

usrp->set_time_source("sfp0");

The White Rabbit IP running on the FPGA disciplines the internal VCXO of the USRP N3xx to the clock reference from the upstream master node in the network. See the USRP N3xx block diagram for reference.

The WRS/WR-LEN device needs to be configured as a master on the ports connected to the USRP N3xx modules. Users can make this configuration with the WR-GUI application provided by Seven Solutions, or with a serial console connection to the WRS/WR-LEN device. See the WRS/WR-LEN manual for detailed instructions. After White Rabbit lock is achieved, the standard USRP N3xx synchronization process completes and the devices are ready for use.

In addition to operating as a master, the WRS and WR-LEN devices can operate as a grandmaster by receiving clock and time references from an external source. This feature is useful for situations where the entire White Rabbit network needs to be disciplined to GPS or other high accuracy synchronization equipment such as a rubidium source. See the WRS/WR-LEN documentation for more information on grandmaster mode.

Synchronization Example

This section provides an example measurement of the timing alignment between multiple USRP N3xx devices synchronized using White Rabbit, with varying fiber cable lengths. As shown in Figure 3, a White Rabbit Switch in master mode is connected to one USRP N3xx device using a 5 km spool of fiber, and to another USRP N3xx device using 1 m of fiber. The synchronization performance was measured by probing the exported PPS signal, which is in the sample clock domain on both USRP N3xx devices, thereby demonstrating sample clock and timestamp alignment. The time difference between each PPS edge was measured with an oscilloscope at room temperature in a laboratory environment. As shown in Figure 4, the resulting measurement shows about 222 ps of skew between the two USRP N3xx devices, confirming White Rabbit’s sub-nanosecond synchronization over long distances.

The frequency accuracy of the internal oscillator of each USRP N3xx slave node is derived from the frequency accuracy of the upstream master node, in a manner similar to disciplining to an external clock reference source connected to the REF IN port. By connecting a high accuracy frequency source such as a rubidium reference to the master White Rabbit device in grandmaster mode, all USRP N3xx devices in the White Rabbit network would inherit this frequency accuracy.

References

NI Announces Test UE Offering for 5G Lab and Field Trials

Post Syndicated from National Instruments original https://spectrum.ieee.org/computing/it/ni-announces-test-ue-offering-for-5g-lab-and-field-trials

Release 15 compliant 5G New Radio non-standalone system can help customers deliver 5G commercial wireless to market faster


NI (Nasdaq: NATI), the provider of platform-based systems that help engineers and scientists solve the world’s greatest engineering challenges, today announced a real-time 5G New Radio (NR) test UE offering. The NI Test UE offering features a fully 3GPP Release 15 non-standalone (NSA) compliant system capable of emulating the full operation of end-user devices or user equipment (UE).

With the 5G commercial rollout this year, engineers must validate the design and functionality of 5G NR infrastructure equipment before productization and release. Based on the rugged PXI Express platform, the NI Test UE offering helps customers test prototypes in the lab and in the field to evaluate them on service operators’ networks. In addition, customers can perform InterOperability Device Testing (IoDT), which is a critical part of the commercialization process to ensure that network equipment works with UE from any vendor and vice versa. The NI Test UE offering can also be used to perform benchmark testing to evaluate the full capabilities of commercial and precommercial micro-cell, small-cell and macro-cell 5G NR gNodeB equipment.

Spirent has worked with NI to add 5G NR support to its existing portfolio of products. “As 5G was picking up steam, we looked to find a world-class 5G NR platform that would outperform the market today and continue to do so as the 5G market matures,” said Clarke Ryan, senior director of Product Development at Spirent. “As a leader in SDR-based radios since 2011, NI was the natural choice to ensure we have the best radio with the best testing capabilities to stay ahead of the curve for our customers.”

The NI Test UE offering provides a flexible system for evaluating 5G technology. Customers can use the SDR front ends to select the sub-6 GHz frequency of their choice. The system scales up to one 100 MHz bandwidth component carrier and can be configured for up to 4×2 MIMO to achieve a maximum throughput of 2.3 Gb/s. The 5G NR Release 15 software includes complete protocol stack software that can connect with a 5G gNodeB while providing real-time diagnostic information. Customers can log diagnostic information to a disk for post-test analysis and debugging and can view it on the software front panel for a real-time visualization of the link’s performance.
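For context, the quoted maximum throughput is consistent with the approximate peak-data-rate formula in 3GPP TS 38.306. The parameter values below (four spatial layers, 256-QAM, 30 kHz subcarrier spacing, 14 percent downlink overhead) are my illustrative assumptions, not NI’s published configuration:

```python
def nr_peak_rate(layers, mod_order, n_prb, mu, overhead,
                 scaling=1.0, r_max=948 / 1024):
    """Approximate 5G NR peak data rate in bit/s, per the single-carrier
    form of the 3GPP TS 38.306 formula."""
    # Average OFDM symbol duration at numerology mu (14 symbols per slot).
    symbol_duration = 1e-3 / (14 * 2**mu)
    resource_elements_per_symbol = n_prb * 12  # 12 subcarriers per PRB
    return (layers * mod_order * scaling * r_max
            * resource_elements_per_symbol / symbol_duration
            * (1 - overhead))

# A 100 MHz carrier at 30 kHz subcarrier spacing (mu=1) spans 273 PRBs;
# 0.14 is the nominal FR1 downlink overhead.
rate = nr_peak_rate(layers=4, mod_order=8, n_prb=273, mu=1, overhead=0.14)
print(f"{rate / 1e9:.2f} Gb/s")  # prints "2.34 Gb/s"
```

Under these assumptions the formula lands at roughly 2.3 Gb/s, matching the figure NI cites for the offering.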

“The industry is on the cusp of 5G commercial deployments and mobile operators need to ensure that their infrastructure is 5G enabled in a virtualized, programmable, open and cost-efficient way,” said Neeraj Patel, Vice President and General Manager, Software and Services, Radisys. “NI is leveraging our first-to-market 5G Software Suite as the engine for its Test UE offering. Our complete 5G source code solution for UE, gNB and 5GCN represents a disruptive end-to-end enabling technology for customers to build 5G NR solutions. By powering such first to market test applications together with NI and Spirent, we are accelerating 5G commercialization that will change how the world connects.”

Find more information about the NI Test UE offering for 5G NR at ni.com/5g-test-ue.

About NI

NI (ni.com) develops high-performance automated test and automated measurement systems to help you solve your engineering challenges now and into the future. Our open, software-defined platform uses modular hardware and an expansive ecosystem to help you turn powerful possibilities into real solutions.