Tag Archives: Telecom/Security

Bellingcat Crowdsources Spycraft, Scales Up Sleuthing

Post Syndicated from Mark Harris original https://spectrum.ieee.org/tech-talk/telecom/security/qa-the-founder-of-bellingcat-on-how-social-media-enables-new-forms-of-investigation

Spies, gumshoes and hard-driving investigative reporters—no matter how amazing they may be at their jobs—all suffer one inescapable drawback: They don’t scale. 

By contrast, consider the case of Eliot Higgins. In 2012, this business administrator began blogging about videos and other social media feeds he was following on the conflicts in Libya and Syria. By systematically and minutely analyzing the locations, people, and weapons he saw, Higgins was able to identify the use of cluster bombs and chemical weapons, and eventually uncover a weapons smuggling operation.

In 2014, he used a Kickstarter campaign to launch Bellingcat, an online platform for citizen journalists to verify and publish research that taps into the hive mind approach to worldwide, Internet-era collaboration and open-source sleuthing. Whereas traditional investigative journalism—commissioned and published by newspapers or foundations or blogs—is only as expansive and wide-ranging as the team of reporters assigned to any given project, Bellingcat is more like an online meritocracy of ideas à la Wikipedia: Start a thread or page or investigation or project, and if it yields something good, some will read while others will contribute. Get enough people involved, and things can even start to snowball. 

Bellingcat has gone on to conduct important investigations into the downing of the MH17 airliner over Ukraine, atrocities in Cameroon and Tigray, and the poisonings of political rivals by the Russian government. The organization has collaborated with human rights agencies, the International Criminal Court, and traditional news outlets, including the BBC and The New York Times. Bellingcat now has 22 staff and a host of volunteers.

Many of the tools Bellingcat uses in its investigations are available online, for anyone to use or improve upon, including software to determine the time of day from shadows in photographs, identify the locations of Instagram posts, or find cloud-free areas in satellite imagery.
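The shadow technique rests on simple trigonometry: an object's height and the length of its shadow pin down the sun's elevation angle, which in turn narrows the possible times of day. Here is a minimal sketch of that first step (the function name and figures are illustrative, not Bellingcat's actual tool):

```python
import math

def sun_elevation_from_shadow(object_height_m, shadow_length_m):
    """Sun elevation angle (degrees) implied by a shadow.

    tan(elevation) = object height / shadow length, so an object
    casting a shadow equal to its own height puts the sun at 45 degrees.
    """
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

# A 2 m fence post casting a 3.46 m shadow implies roughly 30 degrees.
print(round(sun_elevation_from_shadow(2.0, 3.46), 1))  # 30.0
```

Converting that elevation back into a clock time then requires a solar-position model for the candidate date and location, which is what tools of this kind automate.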

The following is a condensed version of a phone interview IEEE Spectrum conducted with Higgins in June.

Spectrum: What’s the story behind the name “Bellingcat”?

Higgins: I wanted part of the website to be dedicated to showing people how to do open source investigations, and a friend suggested the fable of “Belling the Cat.” This is about a group of mice who are afraid of a ferocious cat, so they come up with the idea of putting a bell around its neck to protect them. We’re teaching people how to bell the cat.

Spectrum: It’s interesting that with their phones and digital activity inadvertently documenting their movements, some of the “cats” you investigate are now belling themselves.

Higgins: As smartphone technology has become more available, people are recording and sharing every aspect of their lives. They give away a huge amount of information, everything from their day-to-day activities to war crimes and some of the most horrific acts you can imagine. Some of that is done on purpose, and sometimes it’s just accidental or incidental. But because that’s all online, it’s all information that we can use to piece together what happened around a wide variety of events.

Spectrum: So how does a typical investigation come together?

Higgins: We break it down into three steps: identify, verify and amplify. Identify is finding information, such as videos and photographs from a specific incident. We then verify that by figuring out if we can confirm details. We don’t have any special high-end tools—it’s all stuff that’s free or very cheap for anyone to access. Google is probably one of the most useful tools, just having a search engine that can help you trawl massive amounts of material. Satellite imagery from Google Earth is a very important part of the geolocation verification process, as is Google Street View.
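Part of that geolocation verification is purely mechanical: once a landmark visible in the footage has been matched on satellite imagery, you can test whether a post's claimed coordinates fall within a plausible distance of it. A sketch using the standard haversine formula (the tolerance value is an arbitrary illustration, not a Bellingcat standard):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def geotags_agree(claimed, confirmed, tolerance_km=1.0):
    """True if a post's claimed location sits within tolerance of the
    location confirmed from satellite imagery."""
    return haversine_km(*claimed, *confirmed) <= tolerance_km

# A geotag in central Paris does not agree with a landmark in London.
print(geotags_agree((48.8566, 2.3522), (51.5074, -0.1278)))  # False
```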

Amplification might be a blog on Bellingcat or a collaboration with a media outlet or NGO. One of the reasons that these organizations work with us is that we use stuff that’s on the internet already. They don’t have to worry whether this information is actually correct. There are concerns that Wikileaks, for example, is being sent stuff by intelligence organizations. But with us, we can show the social media posts or the videos that make our point, and then tell them how we analyzed it.

For Bellingcat, building an audience isn’t just about reaching more people, it’s about involving more people in the investigation. Getting more eyeballs on the things we’re trying to discover is often something that’s quite important.

Spectrum: How does that audience engagement work in practice?

Higgins: Europol had a Stop Child Abuse campaign where it was asking members of the public to identify objects from abuse imagery. We amplified that to our audience and just through sharing these object images, children have been rescued and perpetrators arrested. It doesn’t have to be a particularly complex task, but if you can get half a million people working on it, you often get a correct answer. It’s about a community built around Bellingcat, not just Bellingcat doing stuff itself.

Spectrum: Talk me through one of your recent investigations, that uncovered sensitive details of security procedures at US nuclear bases.

Higgins: One of our contributors saw on social media a photo of what appeared to be a nuclear weapon at a foreign military base. As he was doing keyword searches on terms related to nuclear weapons, he started getting results from flashcard apps [smartphone apps people use to prepare themselves for school and work tests].

He discovered that people were using these apps to save quizzes about the security at nuclear bases, seemingly unaware that they were discoverable on Google. As he searched for the names of more bases believed to have nuclear weapons, he found more and more profiles, and more and more information about the storage of nuclear devices.

Open source investigation is often like this—choosing the right words to find the thing that you’re looking for and then digging through all the information.

Spectrum: But then you had to decide how much of that information to make public.

Higgins: Right. We published only a very small selection that was older and out of date, and gave them a good chance to take precautions, like changing their protocols and passwords.

One of the paradoxes of our work is that we want as much information available online for our investigations but we also recognize that it can pose a risk to society. It’s interesting but also probably quite a bad thing that it is all out there.

Spectrum: How much do you worry about misinformation, such as fake social media profiles, misleading geocoding, or other information created with the intention of driving investigators in the wrong direction?

Higgins: We often deal with actors who will put out fake information, and part of the verification process is looking out for that. Russia put out massive amounts of falsified images during our MH17 investigation, for example, all of which we could check and debunk using open source investigation techniques. Anything that we’re using or publishing is triangulated with other open source evidence, or sources independent from the original.

So if a really important video appears on a brand new YouTube account, you are immediately suspicious. And it’s actually very hard to make a convincing fake. It’s one thing to post a single fake image but it’s not just the thing you’re posting, it’s also the social media account itself. Where and when did it originate? Who is it following, who follows it? If something’s fake, it doesn’t exist in the network that genuine information exists in.
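The network check Higgins describes can be reduced to a handful of red-flag heuristics. The fields and thresholds below are invented for illustration; real verification weighs many more signals and always triangulates with other evidence:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Account:
    created: date
    followers: set = field(default_factory=set)  # names of accounts following this one
    following: set = field(default_factory=set)  # names of accounts this one follows

def suspicion_signals(account, posted_on, known_genuine):
    """Red flags suggesting a source account may not be authentic."""
    signals = []
    if (posted_on - account.created).days < 7:
        signals.append("account created only days before posting")
    if not account.followers & known_genuine:
        signals.append("no followers from the established network")
    if not account.following:
        signals.append("follows no other accounts")
    return signals

# All three red flags fire for a brand-new, unconnected account.
fresh = Account(created=date(2021, 6, 1))
print(suspicion_signals(fresh, date(2021, 6, 3), known_genuine={"reporter_a"}))
```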

When Russia presented satellite imagery relating to MH17, we were able to show it was actually taken six weeks before they claimed, based on various details we found on [publicly available] satellite imagery.

Spectrum: I’m interested in whether the cat ever listens to its bell. Does Russian intelligence, for instance, learn from your investigations, or are they making the same mistakes over and over again?

Higgins: Open-source investigation is so new that even if one nation starts figuring this stuff out and [takes] steps to stop us, the rest of the world isn’t so aware. And no matter what restrictions there are, there’s always something else to investigate.

There are a million worthy things we could be looking into, any day of the week. That’s partly why Bellingcat does so much training and involves such a big audience. The more people who are doing it, the more stuff gets investigated.

And because so much of open source investigation is done online, you don’t have to be in the same country you’re writing about. Most of the work that has been done on Syria, for example, is by people who are not Syrian and who don’t live in Syria.

There’s a sense of internationalism, where people in the UK and Germany support people in distant countries. They’re on the ground gathering the evidence, and we’re the ones piecing it together and analyzing it.

Spectrum: What’s your take on digital technologies like social media now? Are they a force for good that can create positive change, or a malign influence where misinformation spreads and polarizes communities?

Higgins: There’s good and bad in all of it. Some of the work we’re doing now is focused on teaching students and schools to do open source investigations on issues in their area and working with local media.

I’ve been helping a community in the UK that rescues stolen dogs by using license plate analysis from security camera footage. Often the plates are too blurred to read but we’ve developed technology as part of our investigations into things like murders, to help them track down these missing dogs.

It’s not Syrian war crimes or Russian assassinations but having your pet stolen is a big issue. We’re doing tech development as well, creating new tech platforms for volunteers to organize on, and new ways for evidence to be gathered for justice and accountability.

If you can teach people on a large scale and start building communities around that, you’ll also build resilience to people being drawn into conspiracy theories, where they think they’re finding answers but they’re just getting a false sense of power.

What our students are finding is a way to do the work themselves and discover something for themselves, and that’s having an impact which has measurable results.

2021 Cybersecurity and IT Failures Roundup

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/telecom/security/2021-cybersecurity-roundup

The pandemic year just passed once again demonstrates that IT-related failures are universally unprejudiced. Companies large and small, sectors private and public, reputations stellar and scorned: none are exempt. Herewith, the failures, interruptions, crimes and other IT-related setbacks that made the news in 2020.

Aviation

Automakers

Cloud Computing

Communications

Cybercrime

Financial Institutions and Markets

Government IT

Health IT

Policing

Rail Transport


Aviation: The Year Without Airline Grinches (Almost)

Over the past several years, airline flight delays and cancellations [PDF] related to IT issues have averaged about one per month. The year 2020 kicked off with “technical issues” affecting British Airways’ computerized check-in at London’s Heathrow Airport, which caused more than 100 flight cancellations and delayed numerous others. The outage impacted at least 10,000 passengers’ travel plans over two days in February. Then in March, as Covid-19-related government travel bans started to take hold, Delta Air Lines reported “intermittent technical difficulties” for bookings and ticket changes.

Once the travel bans firmly took hold and flying was trimmed back to a minimum, however, no major IT outage was reported after Delta’s. I suspect this hiatus will not last long, as airline flight schedules start returning closer to some semblance of “normal,” perhaps (here’s hoping!) later this year.

Probably the biggest airline IT-related news of the year is the U.S. Federal Aviation Administration’s announcement that the Boeing 737 Max 8 aircraft can resume passenger service once a number of changes [PDF] are made. This may take up to a year to complete for all 450 aircraft that were grounded. The FAA Airworthiness Directive requires “installing new flight control computer (FCC) software, revising the existing [Airplane Flight Manual] to incorporate new and revised flight crew procedures, installing new MAX display system (MDS) software, changing the horizontal stabilizer trim wire routing installations, completing an angle of attack (AOA) sensor system test, and performing an operational readiness flight.” While both Brazil’s and the European Union’s civil aviation authorities have given their approval for the 737 Max to return to flight, some others, like Canada’s, may mandate that additional requirements be met.

Boeing’s myriad problems with the Max 8’s software (implicated in the crashes of both Lion Air Flight JT610 and Ethiopian Airlines Flight 302) can be reviewed in both the June FAA Inspector General’s report as well as the final report of the U.S. House Committee on Transportation and Infrastructure investigation. (See also software executive and airplane enthusiast Gregory Travis’s comprehensive 2019 analysis of the 737 Max fiasco for Spectrum.) Whether Boeing or the airlines can convince the public to board the Max remains to be seen, even with American Airlines beginning flights with the aircraft in late December.

Automakers: Software Potholes

Software and electronic-related recalls show no signs of slowing from their 2019 record levels. The year started off with GM issuing a second software recall to remedy problems caused by its first software recall issued in December 2019. The original recall and its software fix were aimed at correcting an error that could disable 463,995 2019 Chevrolet Silverado, GMC Sierra and Cadillac CT6 vehicles’ electronic stability control or antilock brake systems without warnings appearing on the dashboard. Unfortunately, the update was flawed. If an owner remotely started their vehicle using GM’s OnStar app, the brakes were disabled—although warnings were shown on the dash. About 162,000 vehicles received the original fix. The new software update seems to have done the trick.

Both Hyundai and Kia Motors, which Hyundai owns a 34% stake in, issued a number of recalls in 2020 for moisture problems in electronic circuits that could cause vehicle fires. Owners of many of the vehicles involved were warned to park their vehicles outside until the repairs were made. Hyundai also had to issue a recall to update its Remote Smart Parking Assistant software for the 2020 Sonata and Nexo models. A software error could allow a vehicle to continue moving after a system malfunction.

Other auto manufacturers had their share of recalls as well. Fiat Chrysler Automobiles recalled 318,537 2019 and 2020 cars and trucks because a software error could allow the backup camera to stay on when a vehicle is moving forward. Toyota recalled 700,000 Prius and Prius V models for a software problem that would prevent the cars from entering a failsafe driving mode as intended, while 735,000 Honda Motors 2018-2020 Accord and 2019-2020 Insight vehicles were recalled for software updates to the Body Control Module to prevent the malfunction of one or more electronic components, including the rear-view camera display, turn signals and windshield wipers. Volkswagen had to slip the rollout of its new all-electric ID.3 models by several months due to software issues.

Given the increasing amount and importance of vehicle software, Toyota launched two new software companies in July under an umbrella company called Woven Planet Holdings to increase the capability and reliability of its vehicles’ automation. Volkswagen created its own software business unit in 2019. Meanwhile, GM announced in November that it would hire another 3,000 workers before the end of the first quarter of 2021 to increase its engineering and software development capabilities. 

Cloud Computing: Intermittent Showers 

While cloud computing is generally reliable, when it is not, the impacts can be widespread and consequential, especially when so many people are working or attending school from home. This truism was highlighted by several cloud computing outages this year. In March, Microsoft Azure experienced a six-hour outage attributed to a cooling system failure and another caused by VM capacity constraints. The same month, Google Cloud went down for about 90 minutes, which was ascribed to issues with infrastructure components. In April, GitHub (owned by Microsoft) experienced several disruptions related to multiple different system misconfiguration issues. In June, the IBM Cloud went down for over three hours due to problems linked to an external network provider—and once more later in the month, this time with little explanation. Amazon’s U.S. East Region AWS center suffered disruptions for over six hours in November, affecting a large number of clients, from Adobe to Roku to The Wall Street Journal; the cause was an operating system configuration issue. Multiple Google Cloud services suffered back-to-back service disruptions in December, the first of which lasted for about an hour and affected Gmail, Google Classroom, Nest, and YouTube, among others. That outage was blamed on storage issues with Google’s authentication system. The second unscheduled downtime affected Gmail for nearly seven hours; an email configuration update issue was the culprit this time.

Communications: Hello? Hello? Anyone There?

Recurrent communication problems continued throughout 2020. Several emergency service systems went offline, including Arizona’s 911 system in June, which left 1 million people without service. Hampshire, England’s new £39 million 999 system collapsed in July. Meanwhile, September saw 911 outages across 14 states for about an hour.

T-Mobile, the second-largest wireless carrier in the U.S., saw its services become unavailable to many customers for nearly 12 hours after the introduction of a new network router in June, causing some 250 million calls nationwide, including 23,621 emergency calls to 911 in several states, to fail to connect. Vodafone in Germany experienced an equipment failure that kept 100,000 mobile phone users from making calls for three hours in November.

Disruptions also hit users of the Internet. In May, users of the videoconferencing platform Zoom across the globe experienced trouble logging into their meetings for about two hours, messaging platform Slack suffered an outage for nearly three hours, and Adobe Creative Cloud users were locked out for most of a day. A configuration error in Internet service company Cloudflare’s backbone network disrupted worldwide online services for about an hour in July. Then in August, Internet service provider CenturyLink went down, taking dozens of online services and a big chunk of worldwide Internet traffic down with it, while in Australia, a DNS issue affected Telstra’s Internet service for a few hours. In September, a problem with Microsoft’s Azure Active Directory kept users in North America from their Microsoft Office 365 accounts and other services for five hours, while in October, a network infrastructure update issue again caused difficulties for North American Microsoft Office 365 and other service users for over four hours. And in December, Google suffered outages on consecutive days. The first was caused by an internal administrative system storage issue and affected more than a dozen Google services, including Docs, Gmail, Nest, YouTube and its cloud services, for about an hour. The next day, Gmail services were down for up to four hours due to an email configuration issue.

Social media companies suffered their own outages, like Spotify and Tinder (caused by a Facebook issue) in July, Twitter in February and again in October, as well as Facebook across Europe in December.

Cybercrime: The Targets and Costs Increase

The number of records exposed by data breaches and especially unsecured databases continues to skyrocket, with at least 36 billion records exposed as of the end of September 2020. While the number of data breaches seems to have gone down, the number of large unsecured databases discovered seems to be climbing. StealthLabs has a comprehensive compilation of 25 major data breaches by month.

Ransomware attacks increased significantly in 2020, especially targeting governmental, educational, and hospital systems. Typical were the attacks against the City of Pensacola, Florida, the University of Utah, and the University of Vermont Medical Center. Businesses have not been immune either, with ransomware woes plaguing the likes of electronics company Foxconn, hospital and healthcare services company Universal Health Services, and cybersecurity company Cygilant.

The U.S. Treasury Department’s Office of Foreign Assets Control issued a five-page advisory [PDF] in October warning against paying ransomware demands, stating that it not only encourages more attacks, but it also may run afoul of OFAC regulations and result in civil penalties. Whether the advisory has any impact remains to be seen. Delaware County, Pennsylvania agreed to pay a $500,000 ransom in December, for example.

Nation-state sponsored intrusions have also been prevalent in 2020, such as those against Israel and the UAE. The Russia-attributed “SolarWinds” attack against the U.S. that was initially disclosed in December and then developed into a bigger story has especially caused alarm, with the amount of damage still being unraveled.

In light of how often ransomware attacks are initiated by phishing emails, government agencies and corporations have increased their employees’ phishing training, including the use of phishing tests using mock phishing emails and websites. These tests frequently use the same information contained in real phishing emails as a template in order to see how their employees respond. Unfortunately, some of these tests have backfired, causing undue panic or rage among employees. Both Tribune Publishing Co. and GoDaddy recently found out about the latter when their tests were less than well thought out.

Financial Institutions and Markets: Trading Will Resume Tomorrow

The year saw the continuation of bank outages in the UK beginning on New Year’s Day with millions of customers of Lloyds Banking Group unable to access online and mobile banking services. A few days later, computer problems at Clydesdale and Yorkshire banks kept wages and other payments from reaching customer accounts. Lloyds had another online problem in June, and other UK banks like Santander, NatWest, and Barclays experienced their own IT problems in late summer.

Other notable bank IT problems involved U.S. Chase Bank, where “technical issues” created incorrect customer balances in June, and Nigerian First City Monument Bank, where up to 5.1 million customers had trouble accessing their online accounts for four days in July. Also in July, Australian Commonwealth Bank customers suffered a nine-hour online and banking outage, while National Australia Bank customers experienced a similar situation in October. A power outage at a data center took out India’s HDFC Bank, which interrupted its services for two days in November. HDFC’s November outage, along with previous incidents, caused the Reserve Bank of India in December to require HDFC to slow down its modernization efforts to ensure that its banking infrastructure was sufficiently reliable and resilient.

IT problems at stock exchanges and trading platforms have been especially abundant this past year. In February, a hardware error halted trading at the Toronto Stock Exchange for two hours, while a software issue caused the Moscow Exchange to suspend trading for 42 minutes in May. Then in July, stock exchanges in Frankfurt, Vienna, Ljubljana, Prague, Budapest, Zagreb, Malta and Sofia were offline for three hours because of a “technical issue” with the German electronic trading platform Xetra T7 system that each exchange used. In October, a technical issue in third-party middleware software was blamed for the trading halt on Euronext exchanges in Amsterdam, Brussels, Dublin, Lisbon and Paris. The same month, a hardware failure and the subsequent failure of the back-up system took down the Tokyo Stock Exchange for a whole day, its worst-ever electronic outage. The problems led to the resignation of TSE Chief Executive Officer Koichiro Miyahara. In November, a software issue caused trading to be suspended on the Australian Stock Exchange for nearly the entire day, its worst outage in more than a decade.

Trading platforms also experienced numerous IT problems. In March, the trading platform Robinhood faced, according to the company’s founders, “stress on our infrastructure.” That stress resulted in three outages in the space of one week, alongside others in June, August, November and December. J.P. Morgan endured a trading platform problem in March, while Charles Schwab, E-Trade, Fidelity, Merrill Lynch, TD Ameritrade, and Vanguard all had trading system technical issues of their own in November. Charles Schwab, Fidelity, TD Ameritrade, and Interactive Brokers Group joined Robinhood with more outages in December as well.

Government IT: Anyone know COBOL?

The pandemic highlighted the dependence of governments everywhere on legacy IT systems, particularly in regard to state unemployment systems. The rapid increase in demand for unemployment benefits and the changes in the amount of benefits paid, coupled with the inability to quickly reprogram the benefit systems, hit unemployment systems in California, Oregon and Washington State especially hard. Indeed, nearly every state experienced technical problems, including rampant fraud. Computer issues also affected the Internal Revenue Service’s ability to send out congressionally approved stimulus checks in April as well.

Legacy IT system worries did not just affect the United States. In February, Canadian Prime Minister Justin Trudeau received a report warning that many mission-critical systems were “rusting out and at risk of failure.” Japan’s government pledged in June to modernize its administrative systems, which were criticized for being “behind the world by at least 20 years.” South Korea’s government also promised in June to accelerate its transition to a digital economy.

While unemployment system problems dominated government system woes, there were others in the news as well. Pittsburgh’s new state-of-the-art employee payroll system had an inauspicious start at the beginning of the year. Meanwhile, Ohio’s Cuyahoga County is still awaiting its new $35 million computer system, which is $10 million over budget and already two years late. It may be ready by 2022.

Pay issues involving the infamous Canadian Phoenix government payroll system that went live in 2016 continue to be resolved, with its replacement moving to early testing likely next year. Unfortunately, there still is no resolution for the tens of thousands of innocent unemployed Michigan workers falsely accused of employment fraud by Michigan’s Integrated Data Automated System (MiDAS) between October 2013 and September 2015. The state has been forcefully fighting, without success, to quash a class-action lawsuit for compensation; the case is now with the Michigan Supreme Court again, hopefully for a final resolution in 2021. Finally, a review of Ohio’s $1.2 billion benefits system, which went live in 2013, found it still riddled with 1,100 defects and partially responsible for up to $455 million in benefit overpayments and 24,000 backlogged cases in the past year.

Health IT

Medicine’s shift to electronic health records continues to be a bumpy one. In January, the UK government pledged to provide £40 million to streamline logging into National Health Service IT systems. Some staff reportedly must log into as many as 15 different systems each shift. Also in January, it was reported that half of the 23 million records in Australia’s controversial national My Health Record system contain no information, suggesting that its perceived benefits have yet to convince most Australian patients or practitioners. In March, a research paper [PDF] published in the Mayo Clinic Proceedings indicated that U.S. physicians rated the usability of their EHRs an “F”, and that poorly implemented EHRs were contributing to physician burnout.

In May, the U.K.’s National Audit Office reported that the now £8.1 billion IT modernization program being undertaken at the National Health Service is still a jumbled mess that hasn’t learned the lessons from its previous failure. Originally a £4.2 billion program in 2016 that promised a “paperless” NHS by 2020, the target date keeps getting pushed back, with a final cost likely to be much higher than currently projected. Additionally in May, a study published in JAMA Network Open indicated that hospital EHRs were failing to catch 33 percent of potentially harmful drug interactions and other medication errors, while in June, a study published in JAMA indicated more than 20 percent of patients were finding errors in their EHR notes.

In September, the U.S. Coast Guard began piloting its new EHR system, which is based on the $4.4 billion Department of Defense Military Health System EHR effort called GENESIS that is planned to be fully deployed across DoD by 2024. The Coast Guard terminated its $67 million mismanaged EHR effort in 2018. In October, after a six-month delay, the Department of Veterans Affairs finally rolled out its initial go-live EHR system at the Mann-Grandstaff Medical Center in Spokane, Washington. The troubled $16.4 billion EHR modernization project is scheduled to complete in 2028, although delays and increased costs are likely over the next seven-plus years.

Finally, in December, the Canadian Broadcasting Corporation made public a briefing note, written in October to Saskatchewan’s Minister of Health, warning that the province’s healthcare IT system was at growing risk of failure because of chronic underfunding. “A major equipment failure which may disrupt service and risk lives appears inevitable with the current funding model,” the note warned. When asked to comment about the note, Health Minister Paul Merriman said he “will be asking Ministry of Health officials to look into this matter and to find ways to improve the systems supported by eHealth.” Why the Minister did not ask in October, when he received the note, was not explained.

Policing: The Computer Wrongly ID’ed You

Issues with automated facial recognition (AFR) continue to dog law enforcement. In the wake of social unrest in the U.S. and ongoing worries over AFR bias, Microsoft and Amazon announced in June that they would suspend selling face recognition software to police departments. IBM went one step further and announced in June that it would no longer work on the technology at all. In August, the use of AFR by British police was ruled unlawful by the Court of Appeal until the government officially approves its use.

Along with the push against the use of AFR, there has been a backlash against the use of predictive policing software. For example, New Orleans, Louisiana, as well as Los Angeles, Oakland, and Santa Cruz, California, have all moved to prohibit the use of predictive policing systems.

In December, the Massachusetts State Police announced that it would suspend using its automatic license plate readers until a time and date glitch affecting five years’ worth of data was corrected.

There were other police IT issues in the U.K. as well. In January, it was revealed that an error in the City of London Police’s new crime-reporting service, launched in 2018, kept information on over 300,000 fraud crime reports from being shared by the National Fraud Database with the London Police for 15 months. The database is used by major banks, financial institutions, law enforcement, and government organizations to share information about fraud and to help the police with their criminal investigations. Then in October, the U.K.’s Police National Computer experienced a 10-hour outage blamed on “human error,” with one senior police official saying the outage had caused “absolute chaos” across the country’s police forces.

Troubles with iOPS—the late, costly, and controversial new computer system installed by the Greater Manchester Police in the U.K. in late 2019—also persisted unabated throughout the year. The latest crash occurred just last month. Operational difficulties with the system have been linked to a staggering inability by the GMP to record accurate crime data as well. 

Rail Transport: Positive Train Control At Last

Few train or subway IT-related problems were reported this past year. In July, computer and other issues continued to plague Ottawa’s light rail transit system, while in September, service on the San Francisco-area BART system was shut down for about four hours because of a failure of “one of a dozen field network devices.”

The biggest news was that after 12 years, 41 U.S. freight and passenger railroads met (with two days to spare) the federal mandate for deploying positive train control to prevent train accidents, such as train-to-train collisions, derailments caused by excessive train speed, train movements through misaligned track switches, and unauthorized train entry into work zones. Vehicle-train collisions and track or equipment failures can still cause train accidents, however. The original deadline was the end of 2015, but that date was shifted back five years as it became clear most railroads would not be able to meet the mandate.

It should be noted that the National Transportation Safety Board first recommended a form of automatic train control back in 1969. 

The Dog That Didn’t Bark

Finally, a note on what did not seem to happen. Typically, every year brings several memorable IT project failures, cancellations, or other major dumpster fires. For all its legendary failures and disappointments, however, 2020 was marked by a dearth of this particular breed of IT catastrophe. There was the Australian government’s Visa Processing Platform outsourcing plan failure, which cost AU$92 million; the decision by Nacogdoches (Texas) Memorial Hospital to terminate its $20 million EHR contract for Cerner’s Community Works platform; and the Cyberpunk 2077 launch fiasco. But few others made the news. Whether there indeed were fewer IT project failures (and maybe more successes?), or just fewer reported, should be clearer a year from now at our next review.

How the U.S. Can Apply Basic Engineering Principles To Avoid an Election Catastrophe

Post Syndicated from Jonathan Coopersmith original https://spectrum.ieee.org/tech-talk/telecom/security/engineering-principles-us-election

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

The 2020 primary elections and caucuses in the United States earlier this year provide a textbook case of how technology and institutions can fail in a crisis, when we need them most. To recap: Americans looked on, many of them with incredulity, as people hoping to vote were forced to wait hours at the height of the initial wave of the COVID-19 pandemic. Elsewhere, there were delayed elections, tens of thousands of undelivered or uncounted mail-in ballots, and long delays in counting and releasing results. This in what is arguably the world’s most technologically advanced industrialized nation.

Given the persistence of COVID in the United States and domestic and foreign efforts to delegitimize its November presidential election, a repeat of the primaries could easily produce a massive visible disaster that will haunt the United States for decades. Indeed, doomsday scenarios and war-gaming “what ifs” have become almost a cottage industry. Fortunately, those earlier failures in the primaries provide a road map to a fair, secure, and accessible election—if U.S. officials are willing to learn and act quickly.

What happens in the 2020 U.S. election will reverberate all over the world, and not just for the usual reasons. Every democracy will face the challenge of organizing and ensuring safe and secure elections. People in most countries still vote in person by paper ballot, but like the United States, many countries are aggressively expanding options for how their citizens can vote.

Compared with the rest of the world, though, the United States stands apart in one fundamental aspect:  No single federal law governs elections. The 50 states and the District of Columbia each conduct their own elections under their own laws and regulations. The elections themselves are actually administered by 3,143 counties and equivalents within those states, which differ in resources, training, ballots, and interpretation of regulations. In Montana, for example, counties can automatically mail ballots to registered voters but are not required to. 

A similar diversity applies to the actual voting technology. In 2016, half of registered voters in the United States lived in areas that only optically scan paper ballots; a quarter lived in areas with direct-recording electronic (DRE) equipment, which only creates an electronic vote; and the remainder lived in areas that use both types of systems or where residents vote entirely by mail, using paper ballots that are optically scanned. Over 1,800 small counties still collectively counted a million paper ballots by hand. 

The failures during the primaries grew from a familiar litany: poor organization, untried technology, inadequate public information, and an inability to scale quickly. Counties that had the worst problems were usually ones that had introduced new voting software and hardware without adequately testing it, or without training operators and properly educating users.

There were some early warnings of trouble ahead. In February, with COVID not yet an issue in the United States, the Iowa Democratic caucus was thrown into chaos when the inadequately vetted IowaReporter smartphone app by Shadow, which was used to tabulate votes, broke down. The failure was compounded by malicious jamming of party telephone lines by American trolls. It took weeks to report final results. Georgia introduced new DRE equipment with inadequate training and in some cases, the wrong proportion of voting equipment to processing equipment, which delayed tabulation of the results.  

As fear of COVID spread, many states scaled up their absentee-voting options and reduced the number of polling places. In practice, “absentee voting” refers to the use of a paper ballot that is mailed to a voter, who fills it out and then returns it. A few states like Kentucky, with good coordination between elected leaders and election administrators, executed smooth mail-in primaries earlier this year. More common, however, were failures of the U.S. Post Office and many state and local election offices to handle a surge of ballots. In New York, some races remained undecided three weeks after the primary because of the slow receipt and counting of ballots. Election officials in 23 states rejected over 534,000 primary mail-in ballots, compared with 319,000 mail-in ballots rejected in the 2016 general election.

The post office, along with procrastinating voters, has emerged as a critical failure node for absentee voting. Virginia rejected an astonishing 6 percent of ballots for lateness, compared with half a percent for Michigan. Incidentally, these cases of citizens losing their vote due to known difficulties with absentee ballots far, far outweigh the incidence of voter fraud connected to absentee voting, contrary to the claims of certain politicians.

At this point, a technologically savvy person could be forgiven for wondering, why can’t we vote over the Internet? The Internet has been in widespread public use in developed countries for more than a quarter century. It’s been more than 40 years since the introduction of the personal computer, and about 20 since the first smartphones came out. And yet we still have no easy way to use these nearly ubiquitous tools to vote.

A subset of technologists has long dreamed of Internet voting, via an app. But in most of the world, it remains just that: a dream. The main exception is Estonia, where Internet voting experiments began 20 years ago. The practice is now mainstream there—in the country’s 2019 parliamentary elections, nearly 44 percent of voters voted over the Internet without any problems. In the United States, over the past few years, some 55 elections have used app-based Internet voting as an option for absentee voting. However, that’s a very tiny percentage of the thousands of elections conducted by municipalities during that period. Despite Estonia’s favorable experiences with what it calls “i-voting,” in much of the rest of the world concerns about security, privacy, and transparency have kept voting over the Internet in the realm of science fiction. 

The rise of blockchain, a software-based system for guaranteeing the validity of a chain of transactions, sparked new hopes for Internet voting. West Virginia experimented with blockchain absentee voting in 2018, but election technology experts worried about possible vulnerabilities in recording, counting, and storing an auditable vote without violating the voter’s privacy.  The lack of transparency by the system provider, Voatz, did not dispel these worries. After a report from MIT’s Internet Policy Research Initiative reinforced those concerns, West Virginia canceled plans to use blockchain voting in this year’s primary.    

We can argue all we want about the promise and perils of Internet voting, but it won’t change the fact that this option won’t be available for this November’s general election in the United States. So officials will have to stick with tried-and-true absentee-voting techniques, improving them to avoid the fiascoes of the recent past. Fortunately, this shouldn’t be hard. Think of shoring up this election as an exercise involving flow management, human-factors engineering, and minimizing risk in a hostile (political) environment—one with a low signal-to-noise ratio. 

This coming November 3 will see a record voter turnout in the United States, an unprecedented proportion of which will be voting early and by mail, all during an ongoing pandemic in an intensely partisan political landscape with domestic and foreign actors trying to disrupt or discredit the election. To cope with such numbers, we’ll need to “flatten the curve.” A smoothly flowing election will require encouraging as many people as possible to vote in the days and weeks before Election Day and changing election procedures and rules to accommodate those early votes.  

That tactic will of course create a new challenge: handling the tens of millions of people voting by mail in a major acceleration of the U.S. trend of voting before Election Day. Historically, U.S. voters could cast an absentee ballot by mail only if they were out of state or had another state-approved excuse. But in 2000, Oregon pioneered the practice of voting by mail exclusively. There is no longer any in-person voting in Oregon—and, it is worth noting, Oregon never experienced any increases in fraud as it transitioned to voting by mail.

Overall, mail-in voting in U.S. presidential elections doubled from 12 percent (14 million) of all votes in 2004 to 24 percent (33 million) votes cast in 2016. Those numbers, however, hide great diversity: Ninety-seven percent of voters in the state of Washington but only 2 percent of West Virginians voted by mail in 2016. Early voting–in person at a polling station open before Election Day–also expanded from 8 percent to 17 percent of all votes during that same 12-year period, 2004 to 2016. 

Today, absentee voting and vote-by-mail are essentially equivalent as more states relax restrictions on mail-in voting. In 2020, five more states–Washington, Colorado, Utah, Hawaii, and California–will join Oregon to vote by mail exclusively. A COVID-induced relaxation of absentee-ballot rules means that over 190 million Americans, not quite two-thirds of the total population, will have a straightforward vote-by-mail option this fall. 

Whether they will be able to do so confidently and successfully is another question. The main concern with voting by mail is rejected ballots. The overall rejection rate for ballots at traditional, in-person voting places in the United States is 0.01 percent. Compare that with a 1 percent rejection rate for mail-in ballots in Florida in 2018 and a deeply dismaying 6 percent rate for Virginia in its 2020 primary.

Nearly all of the Virginia ballots were rejected because they arrived late, reflecting the lack of experience for many voting by mail for the first time. Forgetting to sign a ballot was another common reason for rejection. But votes were also refused because of regulations that some might deem overly strict—a tear in an envelope is enough to get a mail-in vote nixed in some districts. These verification procedures are less forgiving of errors, and first-time voters, especially ethnic and racial minorities, have their ballots rejected more frequently. Post office delivery failures also contributed.    

We already know how to deal with all this and thereby minimize ballot rejection. States could automatically send ballots to registered voters weeks before the actual election date. Voters could fill out their ballot, seal it in an envelope, and place that envelope inside a larger envelope, which they would sign. A bar code on that outside envelope would allow the voter and election administrators to track its location. It is vitally important for voters to have feedback that confirms their vote has been received and counted. 

This ballot could be mailed, deposited in a secure dropbox, or returned in person to the local election office for processing. In 2016, more than half the voters in vote-by-mail states returned their ballots, not by mail but by secure drop-off boxes or by visiting their local election offices. 

The signature on the outer envelope would be verified against a signature on file, either from a driver’s license or on a voting app, to guard against fraud. If the signature appeared odd or if some other problem threatened the ballot’s rejection, the election office would contact the voter by text, email, or phone to sort out the problem. The voter could text a new signature, for example.  Once verified, the ballots could be promptly counted, either before or on Election Day.
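
That verify-and-cure loop can be sketched in a few lines of Python. Everything here is hypothetical and illustrative: the statuses, field names, and string-equality "matching" stand in for the trained signature examiners and tracking systems real election offices use.

```python
# Hypothetical mail-in ballot verify-and-cure workflow. Statuses, fields,
# and the matching rule are illustrative, not any state's actual system.

def verify_ballot(envelope, signature_on_file, notify):
    """Return a status, contacting the voter instead of silently rejecting."""
    if envelope["signature"] is None:
        notify(envelope["voter"], "Your ballot envelope is unsigned; please cure it.")
        return "pending-cure"
    if envelope["signature"] != signature_on_file:
        notify(envelope["voter"], "Signature mismatch; please confirm or resubmit.")
        return "pending-cure"
    return "accepted"

tracker = {}   # envelope bar code -> latest status, queryable by the voter

messages = []  # stand-in for text/email/phone outreach
def notify(voter, msg):
    messages.append((voter, msg))

env = {"voter": "voter-123", "signature": "JQ Public"}
tracker["BC-0001"] = verify_ballot(env, "J Q Public", notify)  # mismatch -> cure
```

The key design point the sketch captures is that a problem ballot enters a "pending-cure" state with voter feedback, rather than being dropped without notice.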

The problem with these best practices is that they are not universal. A few states, including Arizona, already employ such procedures and enjoy very low rates of rejected mail-in ballots: Maricopa County, Ariz. had a mail-in ballot rejection rate of just 0.03 percent in 2018, roughly on a par with in-person voting. Most states, however, lack these procedures and infrastructure: Only 13 percent of mail-in ballots this primary season had bar codes.

The ballots themselves could stand some better human-factors engineering. Too often, it is too challenging to correctly fill out a ballot or even an application for a ballot. In 2000, a poorly designed ballot in Florida’s Palm Beach County may have deprived Al Gore of Florida’s 25 electoral votes, and therefore the presidency. And in Travis County, Texas, a complex, poorly designed application to vote by mail was incorrectly filled out by more than 4,500 voters earlier this year. Their applications rejected, they had to choose on Election Day between not voting or going to the polls and risking infection. And yet help is readily available: Groups like the Center for Civic Design can provide best practices.

Training on signature verification also widely varies within and among states. Only 20 states now require that election officials give voters an opportunity to correct a disqualified mail-in ballot. 

Timely processing is the final mail-in challenge. Eleven states do not start processing absentee ballots until Election Day, three start the day after, and three start the day before. In a prepandemic election, mail-in ballots made up a smaller share of all votes, so the extra time needed for processing and counting was relatively minor. Now with mail-in ballots potentially making up over half of all votes in the United States, the time needed to process and count ballots may delay results for days or weeks. In the current political climate of suspicion and hyper-partisanship, that could be disastrous—unless people are informed about it and expecting it.

The COVID-19 pandemic is strongly accelerating a trend toward absentee voting that began a couple of decades ago. Contrary to what many people were anticipating five or 10 years ago, though, most of the world is not moving toward Internet voting but rather to a more advanced version of what they’re using already. That Estonia has done so well so far with i-voting offers a tantalizing glimpse of a possible future. For 99.998 percent of the world’s population, though, the paper ballot will reign for the foreseeable future. Fortunately, major technological improvements envisioned over the next decade will increase the security, reliability, and speedy processing of paper ballots, whether they’re cast in person or by mail. 

Jonathan Coopersmith is a Professor at Texas A&M University, where he teaches the history of technology.  He is the author of FAXED: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015).  His current interests focus on the importance of froth, fraud, and fear in emerging technologies.  For the last decade, he has voted early and in person. 

For the IoT, User Anonymity Shouldn’t Be an Afterthought. It Should Be Baked In From the Start

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/for-the-iot-user-anonymity-shouldnt-be-an-afterthought-it-should-be-baked-in-from-the-start

The Internet of Things has the potential to usher in many possibilities—including a surveillance state. In the July issue, I wrote about how user consent is an important prerequisite for companies building connected devices. But there are other ways companies are trying to ensure that connected devices don’t invade people’s privacy.

Some IoT businesses are designing their products from the start to discard any personally identifiable information. Andrew Farah, the CEO of Density, which developed a people-counting sensor for commercial buildings, calls this “anonymity by design.” He says that rather than anonymizing a person’s data after the fact, the goal is to design products that make it impossible for the device maker to identify people in the first place.

“When you rely on anonymizing your data, then you’re only as good as your data governance,” Farah says. With anonymity by design, you can’t give up personally identifiable information, because you don’t have it. Density, located in Macon, Ga., settled on a design that uses four depth-perceiving sensors to count people by using height differentials.
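
A toy frame of depth-derived height readings illustrates the general idea of counting by height differentials; this is a sketch of the concept, not Density’s actual algorithm.

```python
# Illustrative "anonymity by design" people counting: from a grid of
# height readings (meters above the floor), count distinct blobs taller
# than a threshold. No image, no identity -- just a count.

def count_people(heights, threshold=1.0):
    """Count connected regions above `threshold` (4-connectivity flood fill)."""
    rows, cols = len(heights), len(heights[0])
    seen = set()
    count = 0
    for r in range(rows):
        for c in range(cols):
            if heights[r][c] >= threshold and (r, c) not in seen:
                count += 1                 # found a new blob; flood-fill it
                stack = [(r, c)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if heights[y][x] < threshold:
                        continue
                    seen.add((y, x))
                    stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# Two people: one blob at the upper left, one at the right edge.
frame = [
    [0.0, 1.6, 1.7, 0.0, 0.0],
    [0.0, 1.5, 0.0, 0.0, 1.8],
    [0.0, 0.0, 0.0, 0.0, 1.7],
]
```

Note that nothing personally identifying ever exists in this pipeline: the input is already just heights, so there is nothing to anonymize after the fact.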

Density could have chosen to use a camera to easily track the number of people in a building, but Farah balked at the idea of creating a surveillance network. Taj Manku, the CEO of Cognitive Systems, was similarly concerned about the possibilities of his company’s technology. Cognitive, in Waterloo, Ont., Canada, developed software that interprets Wi-Fi signal disruptions in a room to understand people’s movements.

With the right algorithm, the company’s software could tell when someone is sleeping or going to the bathroom or getting a midnight snack. I think it’s natural to worry about what happens if a company could pull granular data about people’s behavior patterns.

Manku is worried about information gathered after the fact, like if police issued a subpoena for Wi-Fi disruption data that could reveal a person’s actions in their home. Cognitive does data processing on the device and then dumps that data. Nothing identifiable is sent to the cloud. Likewise, customers who buy Cognitive’s software can’t access the data on their devices, just the insight. In other words, the software would register a fall, without including a person’s earlier actions.

“You have to start thinking about it from day one when you’re architecting the product, because it’s very hard to think about it after,” Manku says. It’s difficult to shut things down retroactively to protect privacy. It’s best if sensitive information stays local and gets purged.

Companies that promote anonymity will lose helpful troves of data. These could be used to train future machine-learning models in order to optimize their devices’ performance. Cognitive gets around this limitation by having a set of employees and friends volunteer their data for training. Other companies decide they don’t want to get into the analytics market or take a more arduous route to acquire training data for improving their devices.

If nothing else, companies should embrace anonymity by design in light of the growing amount of comprehensive privacy legislation around the world, like the General Data Protection Regulation in Europe and the California Consumer Privacy Act. Not only will it save them from lapses in their data-governance policies, it will guarantee that when governments come knocking for surveillance data, these businesses can turn them away easily. After all, you can’t give away something you never had.

This article appears in the September 2020 print issue as “Anonymous by Design.”

How to Improve Threat Detection and Hunting in the AWS Cloud Using the MITRE ATT&CK Matrix

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_improve_threat_detection_and_hunting_in_the_aws_cloud

SANS and AWS Marketplace will discuss the exercise of applying MITRE’s ATT&CK Matrix to the AWS Cloud. They will also explore how to enhance threat detection and hunting in an AWS environment to maintain a strong security posture.

APTs Use Coronavirus as a Lure

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/apts_use_coronavirus_as_a_lure

Malwarebytes

Threat actors closely monitor public events happening around the world and quickly employ those themes in attack vectors to take advantage of the opportunity. Accordingly, various Advanced Persistent Threat (APT) groups are using the coronavirus pandemic as a theme in several malicious campaigns.

By using social engineering tactics such as spam and spear phishing with COVID-19 as a lure, cybercriminals and threat actors increase the likelihood of a successful attack. In this paper, we:

  • Provide an overview of several different APT groups using coronavirus as a lure.
  • Categorize APT groups according to techniques used to spam or send phishing emails.
  • Describe various attack vectors, timeline of campaigns, and malicious payloads deployed.
  • Analyze use of COVID-19 lure and code execution.
  • Dig into the details of each APT group, their origins, what they’re known for, and their latest strike.

Twitter’s Direct Messages Is a Bigger Headache Than the Bitcoin Scam

Post Syndicated from Fahmida Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/twitters-direct-messages-is-a-bigger-headache-than-the-bitcoin-scam

Twitter has re-enabled the ability for verified accounts to post new messages and restored access to locked accounts after Wednesday’s unprecedented account takeover attack. The company is still investigating what happened in the attack, which resulted in accounts belonging to high-profile individuals posting similar messages asking people to send Bitcoins to an unknown cryptocurrency wallet. 

Twitter said about 130 accounts were affected in this attack, including those of high-profile individuals such as Tesla CEO Elon Musk, former president Barack Obama, presumptive Democratic presidential candidate Joe Biden, former New York City mayor Michael Bloomberg, and Amazon CEO Jeff Bezos. While there was “no evidence” the attackers had obtained account passwords, Twitter has not yet said what else the attackers may have accessed, such as users’ direct messages. If the attackers harvested the victims’ direct messages for sensitive information, the damage could be far worse than the thousands of dollars they made from the scam.

Messages can contain a lot of valuable information. Elon Musk’s public messages have impacted Tesla’s stock price, so it is possible that something he said in a direct message could also move markets. Even if confidential information was not shared over direct messages, just the knowledge of who these people have spoken to could be dangerous in the wrong hands. An attacker could know about the next big investment two CEOs were discussing, or learn what politicians discussed when they thought they were on a secure communications channel,  says Max Heinemeyer, director of threat hunting at security company Darktrace.

“It matters a lot if DMs were accessed: Imagine what kind of secrets, extortion material and explosive news could be gained from reading the private messages of high-profile, public figures,”  said Heinemeyer.

The attackers used social engineering to access internal company tools, but it’s not known whether the tools provided full access or limited what the attackers could do at that point. The fact that Twitter does not offer end-to-end encryption for direct messages increases the likelihood that the attackers were able to see the contents of the messages. End-to-end encryption protects data as it travels from one location to another: The message’s contents are encrypted on a user’s device, and only the intended recipient can decrypt the message to read it. If end-to-end encryption had been in place for direct messages, the attackers may have been able to see in the internal tool that messages existed, but not what those messages actually said.
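
The principle can be shown with a deliberately toy, standard-library-only sketch. Real systems such as the Signal protocol use authenticated asymmetric cryptography; here a shared one-time pad stands in for that machinery, purely to show what the server (and thus an attacker with internal access) does and does not see.

```python
# Toy end-to-end encryption: Alice and Bob share a random key; the "server"
# relays and stores only ciphertext. Not a real protocol -- a one-time pad
# illustrating the concept with the standard library alone.
import os

def xor_bytes(data, key):
    """One-time-pad encrypt/decrypt: XOR data with an equal-length key."""
    return bytes(a ^ b for a, b in zip(data, key))

plaintext = b"meet at noon"
key = os.urandom(len(plaintext))        # shared out of band by the endpoints

ciphertext = xor_bytes(plaintext, key)  # this is all the server ever handles
server_store = {"dm-1": ciphertext}     # an insider or attacker sees only this

recovered = xor_bytes(server_store["dm-1"], key)  # only a key holder can read it
```

An attacker who compromised the server-side store in this model could see that a message exists and how long it is, but not its contents, which is exactly the property the EFF asked Twitter to provide.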

“We don’t know the full extent of the attack, but Twitter wouldn’t have to worry about whether or not the attacker read, changed, or exfiltrated DMs if they had end-to-end encryption for DMs like we’ve asked them to,” the Electronic Frontier Foundation (EFF) said in an emailed statement. Eva Galperin, EFF’s director of cybersecurity said the EFF asked Twitter to begin encrypting DMs as part of the EFF’s Fix It Already campaign in 2018. 

“They did not fix it,” Galperin said.

Providing end-to-end encryption for direct messages is not an insurmountable challenge for Twitter, says Richard White, adjunct professor of cybersecurity at the University of Maryland Global Campus. Encrypting data in motion can be complex, as it takes significant resources and memory for devices to perform real-time decryption. But many messaging platforms have successfully implemented end-to-end encryption, and some services have even solved the challenge of making encrypted messages accessible from multiple devices. The real issue is the magnitude of Twitter’s reach, the complexity of its infrastructure, and the sheer number of its global users, White says. Scaling up what has worked elsewhere is not straightforward because the issues become more complex, making the changes “more time-consuming and costly,” White said.

Twitter was working on end-to-end encrypted direct messages back in 2018, Sen. Ron Wyden said in a statement. It is not clear whether the project was still underway at the time of the hack or had been shuttered.

“If hackers gained access to users’ DMs, this breach could have a breathtaking impact for years to come,” Wyden said.

It is possible the Bitcoin scam was a “head-turning attack” that acted as a smokescreen to hide the attackers’ true objectives, says White. There is precedent for this kind of subterfuge, such as the distributed denial-of-service attack against Sony in 2011, during which attackers compromised 101 million user accounts. Back in 2013,  Gartner analyst Avivah Litan warned that criminals were using DDoS attacks to distract bank security staff from detecting fraudulent money transfers. 

“Attackers making a lot of noise in one area while secretly coming in from another is a very effective tactic,” White said.

White says it’s unlikely that this attack was intended as a distraction because it was too noisy. Being that obvious undermines the effectiveness of the diversion as it doesn’t give attackers time to carry out their activities. A diversion should not attract attention to the very accounts being targeted.

However, that doesn’t mean the attackers didn’t access any of the direct messages belonging to the victims, and that doesn’t mean the attackers won’t do something with the direct messages now, even if that hadn’t been their primary goal. 

“It is unclear what other nefarious activities the attackers may have done behind the scenes,” Heinemeyer said.

More Worries over the Security of Web Assembly

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/telecom/security/more-worries-over-the-security-of-web-assembly

In 1887, Lord Acton famously wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” He was, of course, referring to people who wield power, but the same could be said for software.

As Luke Wagner of Mozilla described in these pages in 2017, the Web has recently adopted a system that affords software running in browsers much more power than was formerly available—thanks to something called Web Assembly, or Wasm for short. Developers take programs written, say, in C++, originally designed to run natively on the user’s computer, and compile them into Wasm that can then be sent over the Web and run on a standard Web browser. This allows the browser-based version of the program to run nearly as fast as the native one—giving the Web a powerful boost. But as researchers continue to discover, with that additional power comes additional security issues.

One of the earliest concerns with Web Assembly was its use for running software that would mine cryptocurrency using people’s browsers. Salon, to note a prominent example, began in February of 2018 to allow users to browse its content without having to view advertisements so long as they allowed Salon to make use of their spare CPU cycles to mine the cryptocurrency Monero. This represented a whole new approach to web publishing economics, one that many might prefer to being inundated with ads.

Salon was straightforward about what it was doing, allowing readers to opt in to cryptomining or not. Its explanation of the deal it was offering could be faulted perhaps for being a little vague, but it did address such questions as “Why are my fans turning on?”

To accomplish this in-browser cryptomining, Salon used software developed by a now-defunct operation called CoinHive, which made good use of Web Assembly for the required number crunching. Such mining could also have been carried out in the Web’s traditional in-browser programming language, Javascript, but much less effectively.

Although there was debate within the computer-security community for a while about whether such cryptocurrency mining really constituted malware or just a new model for monetizing websites, in practice it amounted to malware, with most sites involved not informing their visitors that such mining was going on. In many cases, you couldn’t fault the website owners, who were oblivious that mining code had been sneaked onto their websites.

A 2019 study conducted by researchers at the Technical University of Braunschweig in Germany investigated the top 1 million websites and found Web Assembly to be used in about 1,600 of them. More than half of those instances were for mining cryptocurrency. Another shady use of Web Assembly they found, though far less prevalent, was for code obfuscation: to hide malicious actions running in the browser that would be more apparent if done using JavaScript.

To make matters even worse, security researchers have increasingly been finding vulnerabilities in Web Assembly, some of which had been known and rectified in native programs years ago. The latest discoveries in this regard appear in a paper posted online by Daniel Lehmann and Michael Pradel of the University of Stuttgart, and Johannes Kinder of Bundeswehr University Munich, submitted to the 2020 Usenix Security Conference, which is to take place in August. These researchers show that Web Assembly, at least as it is now implemented, contains vulnerabilities that are much more subtle than just the possibility that it could be used for surreptitious cryptomining or for code obfuscation.

One class of vulnerabilities stems fundamentally from how Web Assembly manages memory compared with what goes on natively. Web Assembly code runs on a virtual machine, one the browser creates. That virtual machine includes a single contiguous block of memory without any holes. That’s different from what takes place when a program runs natively, where the virtual memory provided for a program has many gaps—referred to as unmapped pages. When code is run natively, a software exploit that tries to read or write to a portion of memory that it isn’t supposed to access could end up targeting an unmapped page, causing the malicious program to halt. Not so with Web Assembly.
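A toy model makes the difference concrete. The sketch below is hypothetical (not taken from the paper): it simulates Wasm’s gap-free linear memory with a Python bytearray, showing how an out-of-bounds write that might crash a native exploit instead lands silently on a neighboring allocation.

```python
# Hypothetical model (not from the paper): Wasm's linear memory as a single
# contiguous bytearray, with no unmapped pages between "allocations".
linear_memory = bytearray(64)

BUF, SECRET = 0, 16                 # a 16-byte buffer, then adjacent data
linear_memory[SECRET:SECRET + 9] = b"sensitive"

# A 20-byte write into the 16-byte buffer: natively the overrun could hit an
# unmapped page and halt the program; here it silently overwrites the
# neighboring data instead.
linear_memory[BUF:BUF + 20] = b"X" * 20

corrupted = bytes(linear_memory[SECRET:SECRET + 4])  # now b"XXXX"
```

The overflowing write succeeds without any fault, which is exactly what makes such bugs harder to notice in a Wasm environment.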

Another memory-related vulnerability of Web Assembly arises from the fact that an attacker can deduce how a program’s memory will be laid out simply by examining the Wasm code. For a native application, the computer’s operating system offers what’s called address space layout randomization, which makes it harder for an attacker to target a particular spot in program memory.

To help illustrate the security weaknesses of Web Assembly, these authors describe a hypothetical Wasm application that converts images from one format to another. They imagine that somebody created such a service by compiling a program that uses a version of the libpng library that contains a known buffer-overflow vulnerability. That wouldn’t likely be a problem for a program that runs natively because modern compilers include what are known as stack canaries—a protection mechanism that prevents exploitation of this kind of vulnerability. Web Assembly includes no such protections and thus would inherit a vulnerability that was truly problematic.
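The stack-canary idea those compilers rely on can itself be sketched in a few lines. This is a simplified illustration, not real compiler output: a sentinel value sits just past the buffer, and any unchecked copy that runs long clobbers it, so the check fails before the function returns.

```python
import struct

# Simplified illustration of a stack canary: a sentinel placed just past an
# 8-byte buffer, checked after an unchecked copy (like the flawed libpng path).
CANARY = struct.pack("<I", 0xDEADBEEF)

def copy_with_canary(data: bytes) -> bool:
    """Return True if the canary survived, i.e. no overflow was detected."""
    frame = bytearray(8) + bytearray(CANARY)  # buffer, then canary, as on a stack
    frame[0:len(data)] = data                 # unchecked copy into the buffer
    return bytes(frame[8:12]) == CANARY
```

Here `copy_with_canary(b"A" * 12)` returns False: the 12-byte write overran the 8-byte buffer and tripped the check—precisely the safety net that the compiled-to-Wasm version of the program would lack.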

Although the creators of Web Assembly took pains to make it safe, it shouldn’t come as a great surprise that unwelcome applications of its power and unexpected vulnerabilities of its design have come to light. That’s been the story of networked computers from the outset, after all.

Researchers Demo a New Polarization-Based Encryption Strategy

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/security/researchers-demo-a-new-polarizationbased-encryption-strategy

Telecommunications can never be too secure. That’s why researchers continue to explore new encryption methods. One emerging method is called ghost polarization communication, or GPC. Researchers at the Technical University of Darmstadt, in Germany, recently demonstrated this approach in a proof-of-principle experiment.

GPC uses unpolarized light to keep messages safe. Sunlight is unpolarized light, as are fluorescent and incandescent lights and LEDs. All light waves are made up of an electric field and a magnetic field oscillating in step as the wave propagates through space. In unpolarized light, the orientation of the electric-field component of a light wave fluctuates randomly. That’s the opposite of polarized light sources such as lasers, in which this orientation is fixed.

It’s often easiest to imagine unpolarized light as having no specific orientation at all, since it changes on the order of nanoseconds. However, according to Wolfgang Elsaesser, one of the Darmstadt researchers who developed GPC, there’s another way to look at it: “Unpolarized light can be viewed as a very fast distribution on the Poincaré sphere.” (The Poincaré sphere is a common mathematical tool for visualizing polarized light.)

In other words, unpolarized light could be a source of rapidly generated random numbers that can be used to encode a message—if the changing polarization can be measured quickly enough and decoded at the receiver.

Suppose Alice wants to send Bob a message using GPC. Using the proof of principle that the Darmstadt team developed, Alice would pass unpolarized light through a half-wave plate—a device that alters the polarization of a beam of light—to encode her message. In this specific case, the half-wave plate would alter the polarization according to the specific message being encoded.

Bob can decode the message only when he receives Alice’s altered beam as well as a reference beam, and then correlates the two. Anyone attempting to listen in on the conversation by intercepting the altered beam would be stymied, because they’d have no reference against which to decode the message.

GPC earned its name because a message may be decoded only by using both the altered beam and a reference beam. “Ghost” refers to the entangled nature of the beams; separately, each one is useless. Only together can they transmit a message. And messages are sent via the beams’ polarizations, hence “ghost polarization.”

Elsaesser says GPC is possible with both wired and wireless communications setups. For the proof-of-principle tests, they relied largely on wired setups, which were slightly easier to measure than over-the-air tests. The group used standard commercial equipment, including fiber-optic cable and 1,550-nanometer light sources (1,550 nanometers is the most common wavelength of light used for fiber communications).

The Darmstadt group confirmed GPC was possible by encoding a short message by mapping 0 bits and 1 bits using polarization angles agreed upon by the sender and receiver. The receiver could decode the message by comparing the polarization angles of the reference beam with those of the altered beam containing the message. They also confirmed the message could not be decoded by an outside observer who did not have access to the reference beam.
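That encode-and-correlate scheme can be mimicked in software. The sketch below is purely illustrative—the angles, offsets, and correlation step are assumptions, not the Darmstadt team’s actual parameters:

```python
import random

def gpc_roundtrip(message_bits, seed=42):
    """Toy model of ghost polarization communication (illustrative only)."""
    rng = random.Random(seed)
    # Reference beam: rapidly fluctuating random polarization angles (degrees)
    reference = [rng.uniform(0, 180) for _ in message_bits]
    # Sender's half-wave plate rotates the polarization by an agreed offset per bit
    offsets = {0: 0.0, 1: 45.0}
    altered = [(angle + offsets[bit]) % 180
               for angle, bit in zip(reference, message_bits)]
    # Receiver correlates the altered beam against the reference to recover bits
    return [1 if abs((a - r) % 180 - 45.0) < 1e-6 else 0
            for a, r in zip(altered, reference)]
```

Without the reference beam, an eavesdropper sees only uniformly random polarization angles—which is the whole point of the scheme.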

However, Elsaesser stresses that the tests were preliminary. “The weakness at the moment,” he says, “is that we have not concentrated on the modulation speed or the transmission speed.” What mattered was proving that the idea could work at all. Elsaesser says this approach is similar to that taken for other forms of encryption, like chaos communications, which started out with low transmission speeds but have seen rates tick up as the techniques have been refined.

Robert Boyd, a professor of optics at the University of Rochester, in N.Y., says that the most important question to answer about GPC is how it compares to the Rivest-Shamir-Adleman (RSA) system commonly used to encode messages today. Boyd suspects that the Darmstadt approach, like the RSA system, is not absolutely secure. But he says that if GPC turned out to be more secure or more efficient to implement—even by a factor of two—it would have a tremendous advantage over RSA.

Moving forward, Elsaesser already has ideas on how to simplify the Darmstadt system. And because the team has now demonstrated GPC with commercial components, Elsaesser expects that a refined GPC system could simply be plugged into existing networks for an easy security upgrade.

This article appears in the July 2020 print issue as “New Encryption Strategy Passes Early Test.”

5G Networks Will Inherit Their Predecessors’ Security Issues

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/5g-networks-will-juggle-legacy-security-issues-for-years

There is a lot of excitement over 5G’s promise of blazing speeds, lower latencies, and more robust security than 3G and 4G networks. However, the fact that each network operator has its own timetable for rolling out the next-generation cellular technology means early 5G will actually be a patchwork of 2G, 3G, 4G, and 5G networks. The upshot: For the next few years, 5G won’t be able to fully deliver on its promises.

The fact that 5G networks will have to interoperate with legacy networks means these networks will continue to be vulnerable to attacks such as spoofing, fraud, user impersonation, and denial-of-service. Network operators will continue to rely on GPRS Tunneling Protocol (GTP), which is designed to allow data packets to move back and forth between different operators’ wireless networks, as may happen when a user is roaming (GPRS itself stands for General Packet Radio Service, a standard for mobile data packets). Telecom security company Positive Technologies said in a recent report that as long as GTP is in use, the protocol’s security issues will impact 5G networks.
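Part of GTP’s fragility is visible in its header: the mandatory GTPv1 header carries a tunnel endpoint identifier (TEID) with no authentication, so an attacker who can guess or sniff a TEID can inject traffic into a tunnel. A minimal parser sketch follows (field layout per the GTPv1 specification; the sample packet is made up):

```python
import struct

def parse_gtpv1_header(packet: bytes) -> dict:
    """Parse the 8-byte mandatory GTPv1 header (sketch; optional extension
    fields signalled by the E/S/PN flag bits are ignored here)."""
    flags, msg_type, length, teid = struct.unpack(">BBHI", packet[:8])
    return {
        "version": flags >> 5,            # 1 for GTPv1
        "protocol_type": (flags >> 4) & 1,
        "message_type": msg_type,         # 0xFF = G-PDU, i.e. tunneled user data
        "length": length,                 # bytes following the mandatory header
        "teid": teid,                     # tunnel endpoint ID: carried unauthenticated
    }

# A made-up sample G-PDU carrying 4 bytes of user data:
sample = bytes([0x30, 0xFF, 0x00, 0x04]) + (0x12345678).to_bytes(4, "big") + b"data"
header = parse_gtpv1_header(sample)
```

Nothing in that header proves who sent the packet—which is why GTP-based spoofing and fraud attacks carry over to 5G deployments that still interconnect over it.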

The Internet of Things Has a Consent Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/the-internet-of-things-has-a-consent-problem

Consent has become a big topic in the wake of the Me Too movement. But consent isn’t just about sex. At its core, it’s about respect and meeting people where they are. As we add connected devices to homes, offices, and public places, technologists need to think about consent.

Right now, we are building the tools of public, work, and home surveillance, and we’re not talking about consent before we implement those tools. Sensors used in workplaces and homes can track sound, temperature, occupancy, and motion to understand what a person is doing and what the surrounding environment is like. Plenty of devices have cameras and microphones that feed back into a cloud service.

In the cloud, images, conversations, and environmental cues could be accessed by hackers. Beyond that, simply by having a connected device, users give the manufacturer’s employees a clear window into their private lives. While I personally may not mind if Google knows my home temperature or independent contractors at Amazon can accidentally listen in on my conversations, others may.

For some, the issue with electronic surveillance is simply that they don’t want these records created. For others, getting picked up by a doorbell camera might represent a threat to their well-being, given the U.S. government’s increased use of facial recognition and attempts to gather large swaths of electronic data using broad warrants.

How should companies think about IoT consent? Transparency is important—any company selling a connected device should be up-front about its capabilities and about what happens to the device data. Informing the user is the first step.

But the company should encourage the user to inform others as well. It could be as simple as a sticker alerting visitors that a house is under video surveillance. Or it might be a notification in the app that asks the user to explain the device’s capabilities to housemates or loved ones. Such a notification won’t help those whose partners use connected devices as an avenue for abuse and control, but it will remind anyone setting up a device in their home that it could grant almost surveillance-level access to their family members.

In professional settings, consent can build trust in a connected product or automated system. For example, AdventHealth Celebration, a hospital in the Orlando, Fla., area, has implemented a tracking solution for nurses that monitors their movements during a shift to determine the optimal workflows. Rather than just turning the system loose, however, Celebration informed nurses before bringing in the system and since then has worked with them to interpret results.

So far, the hospital has shifted how it allocates patients to rooms to make sure high-needs patients aren’t next to one another and assigned to the same nurse. But getting the nurses involved at the start was crucial to success. Cities deploying facial recognition in schools or in airports without asking citizens for input would do well to pay attention to the success of Celebration’s system. A failure to ask for input or to inform citizens shows a clear lack of concern around consent.

Which in turn implies that our governments aren’t keen on respect and meeting people where they are. Even if that’s true for governments, is that the message that tech companies want to send to customers?

This article appears in the July 2020 print issue as “The IoT’s Consent Problem.”

Q&A: The Pioneers of Web Cryptography on the Future of Authentication

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/pioneers-web-cryptography-future-authentication

Martin Hellman is one of the inventors of public-key cryptography. His work on public key distribution with Whitfield Diffie is now known as the Diffie–Hellman key exchange. The method, which allows two parties that have no prior knowledge of each other to establish a shared secret key, is the basis for many of the security protocols in use today.
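The exchange itself fits in a few lines of Python. With toy parameters (far too small to be secure; real deployments use primes of 2,048 bits or more), each party raises a public generator to a private exponent, swaps the results, and arrives at the same shared secret:

```python
# Toy Diffie-Hellman key exchange with classroom-sized, insecure parameters.
p, g = 23, 5                       # public prime modulus and generator

a, b = 6, 15                       # Alice's and Bob's private exponents
A, B = pow(g, a, p), pow(g, b, p)  # public values, exchanged in the clear

# Each party combines its own secret with the other's public value:
shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both sides now hold the same secret (here, 2)
```

An eavesdropper who sees only p, g, A, and B must solve a discrete-logarithm problem to recover the secret, which is computationally infeasible at real key sizes.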

Taher Elgamal, who was once Hellman’s student at Stanford, is known as the “father of SSL” for developing the Secure Sockets Layer protocol used to secure transactions over the Internet. His 1985 paper “A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms” outlined the ideas that made secure e-commerce possible. Elgamal shared the 2019 Marconi Prize with Paul Kocher for this work.
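The encryption half of that 1985 scheme is compact enough to sketch. The parameters below are toy values for illustration only (the companion signature scheme is omitted):

```python
import random

# Toy ElGamal encryption over a small prime field (insecure parameter sizes).
p, g = 1000003, 2
x = 5657              # receiver's private key (an arbitrary example value)
y = pow(g, x, p)      # public key, published alongside p and g

def encrypt(m: int) -> tuple:
    k = random.randrange(2, p - 1)          # fresh ephemeral key per message
    return pow(g, k, p), (m * pow(y, k, p)) % p

def decrypt(c1: int, c2: int) -> int:
    s = pow(c1, x, p)                       # shared mask g^(x*k)
    return (c2 * pow(s, -1, p)) % p         # modular inverse (Python 3.8+)
```

Calling `decrypt(*encrypt(271828))` recovers 271828; as with Diffie–Hellman, security rests on the difficulty of computing discrete logarithms.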

Tom “TJ” Jermoluk, a former Bell Labs engineer, is now the CEO of Beyond Identity, a new identity management platform. Beyond Identity “stands on the shoulders of giants,” Jermoluk says, referring in part to the work of Hellman and Elgamal, as its platform brings together public-key cryptography and X.509 certificates to solve the authentication problem—that is, how to determine whether someone is who they say they are on the Internet.

Elgamal, Hellman, and Jermoluk talked about how recent advances in technology made it possible to change how we handle authentication, and what the future would look like.

How to Protect Privacy While Mining Millions of Patient Records

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/covid19-study-privacy-mining-millions-patient-records

When people in the United Kingdom began dying from COVID-19, researchers saw an urgent need to understand all the possible factors contributing to such deaths. So in six weeks, a team of software developers, clinicians, and academics created an open-source platform designed to securely analyze millions of electronic health records while protecting patient privacy. 

Sneakier and More Sophisticated Malware Is On the Loose

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/security/analysis-shows-malware-evolving-encrypted-sophisticated

A new study analyzing more than a million samples of Android malware illustrates how malicious apps have evolved over time. The results, published 30 March in IEEE Transactions on Dependable and Secure Computing, show that malware coding is becoming more cleverly hidden, or obfuscated.

“Malware in Androids is still a huge issue, despite the abundance of research,” says Guillermo Suarez-Tangil, a researcher at King’s College London who co-led the study. “A central challenge is dealing with malware that is repackaged.”

Tracking COVID-19 With the IoT May Put Your Privacy at Risk

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/tracking-covid19-with-the-iot-may-put-your-privacy-at-risk

IEEE COVID-19 coverage logo, link to landing page

The Internet of Things makes the invisible visible. That’s the IoT’s greatest feature, but also its biggest potential drawback. More sensors on more people means the IoT becomes a visible web of human connections that we can use to, say, track down a virus.

Track-and-trace programs are already being used to monitor outbreaks of COVID-19 and its spread. But because they would do so through easily enabled mass surveillance, we need to put rules in place about how to undertake any attempts to track the movements of people.

In April, Google and Apple said they would work together to build an opt-in program for Android or iOS users. The program would use their phones’ Bluetooth connection to deliver exposure notifications—meaning that transmissions are tracked by who comes into contact with whom, rather than where people spend their time. Other proposals use location data provided by phone applications to determine where people are traveling.

All of these ideas have slightly different approaches, but at their core they’re still tracking programs. Any such program that we implement to track the spread of COVID-19 should follow some basic guidelines to ensure that the data is used only for public health research. This data should not be used for marketing, commercial gain, or law enforcement. It shouldn’t even be used for research outside of public health.

Let’s talk about the limits we should place around this data. A tracking program for COVID-19 should be implemented only for a prespecified duration that’s associated with a public health goal (like reducing the spread of the virus). So, if we’re going to collect device data and do so without requiring a user to opt in, governments need to enact legislation that explains what the tracking methodology is, requires an audit for accuracy and efficacy by a third party, and sets a predefined end.

Ethical data collection is also critical. Apple and Google’s Bluetooth method uses encrypted tokens to track people as they pass other people. The Bluetooth data is people-centric, not location-centric. Once a person uploads a confirmation that they’ve been infected, their device can issue notifications to other devices that were recently nearby, alerting users—anonymously—that they may have come in contact with someone who’s infected.

This is good. And while it might be possible to match a person to a device, it would be difficult. Ultimately, linking cases anonymously to devices is safer than simply collecting location data on infected individuals. The latter makes it easy to identify people based on where they sleep at night and work during the day, for example.
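A simplified sketch of the token mechanics looks like the following. The real Apple/Google protocol uses different key-derivation primitives and rotation schedules; the names and sizes here are assumptions:

```python
import hashlib
import os

def daily_tokens(seed: bytes, n: int) -> list:
    """Derive n rotating broadcast tokens from a device-local random seed
    (simplified; the real protocol uses HKDF/AES and different schedules)."""
    return [hashlib.sha256(seed + i.to_bytes(4, "big")).digest()[:16]
            for i in range(n)]

# Each phone broadcasts its tokens over Bluetooth and records tokens it hears.
alice_seed = os.urandom(32)
alice_tokens = daily_tokens(alice_seed, 144)      # e.g. one token per 10 minutes
bob_heard = {alice_tokens[37], os.urandom(16)}    # Bob passed Alice once today

# If Alice tests positive, she uploads only her key material. Bob re-derives her
# tokens locally and checks for overlap: no locations or identities change hands.
exposed = any(token in bob_heard for token in daily_tokens(alice_seed, 144))
```

Because matching happens on Bob’s own device against anonymous tokens, the server never learns where either person was or whom they met.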

Going further, this data must be encrypted on the device, during transit and when stored on a cloud or government server, so that random hackers can’t access it. Only the agency in charge of track-and-trace efforts should have access to the data from the device. This means that police departments, immigration agencies, or private companies can’t access that data. Ever.

However, researchers should have access to some form of the data after a few years have passed. I don’t know what that time limit should be, but when that time comes, institutional review boards, like those that academic institutions use to protect human research subjects, should be in place to evaluate each request for what could be highly personal data.

If we can get this right, we can use the lessons learned during COVID-19 not only to protect public health but also to promote a more privacy-centric approach to the Internet of Things.

This article appears in the June 2020 print issue as “Pandemic vs. Privacy.”

Coronavirus Pandemic Prompts Privacy-Conscious Europe to Collect Phone Data

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/how-coronavirus-pandemic-europe-collecting-phone-data

Amid the coronavirus pandemic, even privacy-conscious European governments have asked telecom companies for residents’ phone location data in hopes of understanding whether national social distancing measures such as stay-at-home orders and business closures are having any effect on the spread of COVID-19.

Some of the hardest-hit countries, including Italy and Spain, are now open to proposals for mobile apps that can make contact tracing more efficient and alert people who have come into contact with someone infected by the novel coronavirus.

Privacy in the Time of COVID-19

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/telecom/security/privacy-in-the-time-of-covid19

Even though I understand how it works, I consistently find Google’s ability to know how long it will take me to drive somewhere nothing less than magical. GPS signals stream location and speed data from the legion of smartphones in vehicles on the roads between my origin and my destination; it takes only a bit of math to come up with an estimate accurate to within a few minutes.

Lately, researchers have noted that this same data can be used to pinpoint serious traffic accidents minutes before any calls get placed to emergency responders—extra time within that “golden hour” vital to the saving of lives. That result points to a hidden upside to the beast Shoshana Zuboff termed surveillance capitalism. That is, all of this data about our activities being harvested by our devices could be put to work to serve the public good.

We need that now like never before, as the entire planet confronts a pandemic. Fortunately, we’ve been exceptionally clever at making smartphones—more than 4 billion of them in every nation on earth—and they offer an unprecedented opportunity to harness their distributed sensing and intelligence to provide a greater degree of safety than we might have had without them.

Taiwan got into this game early, combining the lessons of SARS with the latest in tracking and smartphone apps to deliver an integrated public health response. As of this writing, that approach has kept the country’s infection rate among the lowest in the world. The twin heads of surveillance capitalism, Google and Facebook, will spend the next year working with public health authorities to provide insights that can guide both the behavior of individuals and public policy. That’s going to give some angina to the advocates of strong privacy policies (I count myself among them), but in an emergency, public good inevitably trumps private rights.

This relaxation of privacy boundaries mustn’t mean the imposition of a surveillance state—that would only result in decreased compliance. Instead, our devices will be doing their utmost to remind us how to stay healthy, much like our smartwatches already do but more pointedly and with access to far greater data. Both data and access are what we must be most careful with, looking for the sweet spot between public health and private interest, with an eye to how we can wind back to a world with greater privacies after the crisis abates.

A decade ago I quit using Facebook, because even then I had grave suspicions that my social graph could be weaponized and used against me. Yet this same capacity to know so much about people—their connections, their contacts, even their moods—means we also have a powerful tool to track the spread of outbreaks both of disease and deadly misinformation. While firms like Cambridge Analytica have used this power to sway the body politic, we haven’t used such implements yet for the public good. Now, I’d argue, we have little choice.

We technologists are going to need to get our hands dirty, and only with transparency, honesty, and probity can we do so safely. Yes, use the data, use the tools, use the analytics and the machine learning, bring it all to bear. But let us all know how, when, and why it’s being used. Because it appears that making a turn toward a surveillance society will protect us now—and, in particular, help our most vulnerable to stay safe—we need to be honest about our needs, transparent about our uses, and completely clear about our motives.

New Approach Could Protect Control Systems From Hackers

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/security/new-approach-protects-power-plants-other-major-control-systems-from-hacking

Some of the most important industrial control systems (ICSs), such as those that support power generation and traffic control, must accurately transmit data in the milli- or even microsecond range. This means that hackers need to interfere with the transmission of real-time data for only the briefest of moments to succeed in disrupting these systems. The seriousness of this type of threat is illustrated by the Stuxnet incursion in 2010, when attackers succeeded in hacking the system supporting Iran’s uranium enrichment facility, damaging more than 1,000 centrifuges.

Now a trio of researchers has disclosed a novel technique that could more easily identify when these types of attacks occur, triggering an automatic shutdown that would prevent further damage.

The problem was first brought up in a conversation over coffee two years ago. “While describing the security measures in current industrial control systems, we realized we did not know any protection method on the real-time channels,” explains Zhen Song, a researcher at Siemens Corporation. The group began to dig deeper into the research, but couldn’t find any existing security measures.

Part of the reason is that traditional encryption techniques do not account for time. “As well, traditional encryption algorithms are not fast enough for industrial hard real-time communications, where the acceptable delay is much less than 1 millisecond, even close to the 10-microsecond level,” explains Song. “It will often take more than 100 milliseconds for traditional encryption algorithms to process a small chunk of data.”

However, some research has emerged in recent years about the concept of “watermarking” data during transmission, a technique that can indicate when data has been tampered with. Song and his colleagues sought to apply this concept to ICSs, in a way that would be broadly applicable and not require details of the specific ICS. They describe their approach in a study published February 5 in IEEE Transactions on Automation Science and Engineering. Some of the source code is available online.

The approach involves the transmission of real-time data over an unencrypted channel, as conventionally done. In the experiment, a specialized algorithm in the form of a recursive watermark (RWM) signal is transmitted at the same time. The algorithm encodes a signal that is similar to “background noise,” but with a distinct pattern. On the receiving end of the data transmission, the RWM signal is monitored for any disruptions, which, if present, indicate an attack is taking place. “If attackers change or delay the real-time channel signal a little bit, the algorithm can detect the suspicious event and raise alarms immediately,” Song says.

Critically, a special “key” for deciphering the RWM algorithm is transmitted through an encrypted channel from the sender to the receiver before the data transmission takes place.
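In spirit, the scheme resembles the following sketch (illustrative only; the actual RWM algorithm, thresholds, and signal model are in the paper): a key-seeded, noise-like watermark rides along the real-time channel, and the receiver regenerates it from the shared key and correlates. Tampering with or delaying the channel collapses the correlation.

```python
import random

def watermark(key: int, n: int) -> list:
    """Key-seeded, noise-like watermark samples (a stand-in for the RWM signal)."""
    rng = random.Random(key)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def correlate(a: list, b: list) -> float:
    return sum(x * y for x, y in zip(a, b)) / len(a)

def detect_tamper(received: list, key: int, threshold: float = 0.5) -> bool:
    """Regenerate the watermark from the shared key and correlate; a delayed
    or altered channel destroys the correlation and raises an alarm."""
    return correlate(received, watermark(key, len(received))) < threshold

key, n = 1234, 1000
clean = watermark(key, n)          # watermark riding on an undisturbed channel
delayed = [0.0, 0.0] + clean[:-2]  # attacker delays the signal by two samples
```

Here `detect_tamper(clean, key)` stays quiet while `detect_tamper(delayed, key)` raises the alarm: even a two-sample delay destroys the correlation. Checking a correlation is far cheaper than decrypting each sample, which is why watermarking can meet the sub-millisecond deadlines that encryption cannot.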

Tests show that this approach works fast to detect attacks. “We found the watermark-based approach, such as the RWM algorithm we proposed, can be 32 to 1375 times faster than traditional encryption algorithms in mainstream industrial controllers. Therefore, it is feasible to protect critical real-time control systems with new algorithms,” says Song.

Moving forward, he says this approach could have broader implications for the Internet of Things, which the researchers plan to explore more. 

New Cryptography Method Promising Perfect Secrecy Is Met With Skepticism

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/new-cryptography-method-promises-perfect-secrecy-amidst-skepticism

In the ongoing race to make and break digital codes, the idea of perfect secrecy has long hovered on the horizon like a mirage. A recent research paper has attracted both interest and skepticism for describing how to achieve perfect secrecy in communications by using specially patterned silicon chips to generate one-time keys that are impossible to recreate.

Personal Virtual Networks Could Give Everyone More Control Over Their Data

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/personal-virtual-networks-control-data

To keep us connected, our smartphones constantly switch between networks. They jump from cellular networks to public Wi-Fi networks in cafes, to corporate or university Wi-Fi networks at work, and to our own Wi-Fi networks at home. But we rarely have any input into the security and privacy settings of the networks to which we connect. In many cases, it would be tough to even figure out what those settings are.

A team at Northeastern University is developing personal virtual networks to give everyone more control over their online activities and how their information is shared. Those networks would allow a person’s devices to connect to cellular and Wi-Fi networks—but only on their terms.