Tag Archives: Telecom/Security

How the U.S. Can Apply Basic Engineering Principles To Avoid an Election Catastrophe

Post Syndicated from Jonathan Coopersmith original https://spectrum.ieee.org/tech-talk/telecom/security/engineering-principles-us-election

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

The 2020 primary elections and caucuses in the United States earlier this year provide a textbook case of how technology and institutions can fail in a crisis, when we need them most. To recap: Americans looked on, many of them with incredulity, as people hoping to vote were forced to wait hours at the height of the initial wave of the COVID-19 pandemic. Elsewhere, there were delayed elections, tens of thousands of undelivered or uncounted mail-in ballots, and long delays in counting and releasing results. This in what is arguably the world’s most technologically advanced industrialized nation.

Given the persistence of COVID in the United States and domestic and foreign efforts to delegitimize its November presidential election, a repeat of the primaries could easily produce a massive, visible disaster that will haunt the United States for decades. Indeed, doomsday scenarios and war-gaming “what ifs” have become almost a cottage industry. Fortunately, those earlier failures in the primaries provide a road map to a fair, secure, and accessible election—if U.S. officials are willing to learn and act quickly.

What happens in the 2020 U.S. election will reverberate all over the world, and not just for the usual reasons. Every democracy will face the challenge of organizing and ensuring safe and secure elections. People in most countries still vote in person by paper ballot, but like the United States, many countries are aggressively expanding options for how their citizens can vote.

Compared with the rest of the world, though, the United States stands apart in one fundamental aspect:  No single federal law governs elections. The 50 states and the District of Columbia each conduct their own elections under their own laws and regulations. The elections themselves are actually administered by 3,143 counties and equivalents within those states, which differ in resources, training, ballots, and interpretation of regulations. In Montana, for example, counties can automatically mail ballots to registered voters but are not required to. 

A similar diversity applies to the actual voting technology. In 2016, half of registered voters in the United States lived in areas that only optically scan paper ballots; a quarter lived in areas with direct-recording electronic (DRE) equipment, which only creates an electronic vote; and the remainder lived in areas that use both types of systems or where residents vote entirely by mail, using paper ballots that are optically scanned. Over 1,800 small counties still collectively counted a million paper ballots by hand. 

The failures during the primaries grew from a familiar litany: poor organization, untried technology, inadequate public information, and an inability to scale quickly. Counties that had the worst problems were usually ones that had introduced new voting software and hardware without adequately testing it, or without training operators and properly educating users.

There were some early warnings of trouble ahead. In February, with COVID not yet an issue in the United States, the Iowa Democratic caucus was thrown into chaos when the inadequately vetted IowaReporter smartphone app by Shadow, which was used to tabulate votes, broke down. The failure was compounded by malicious jamming of party telephone lines by American trolls. It took weeks to report final results. Georgia introduced new DRE equipment with inadequate training and, in some cases, the wrong proportion of voting equipment to processing equipment, which delayed tabulation of the results.

As fear of COVID spread, many states scaled up their absentee-voting options and reduced the number of polling places. In practice, “absentee voting” refers to the use of a paper ballot that is mailed to a voter, who fills it out and then returns it.  A few states like Kentucky, with good coordination between elected leaders and election administrators, executed smooth mail-in primaries earlier this year. More common, however, were failures of the U.S. Post Office and many state- and local-election offices to handle a surge of ballots.  In New York, some races remained undecided three weeks after its primary because of slow receipt and counting of ballots. Election officials in 23 states rejected over 534,000 primary mail-in ballots, compared with 319,000 mail-in ballots for the 2016 general election. 

The post office, along with procrastinating voters, has emerged as a critical failure node for absentee voting. Virginia rejected an astonishing 6 percent of ballots for lateness, compared with half a percent for Michigan. Incidentally, these cases of citizens losing their vote due to known difficulties with absentee ballots far, far outweigh the incidence of voter fraud connected to absentee voting, contrary to the claims of certain politicians.

At this point, a technologically savvy person could be forgiven for wondering, why can’t we vote over the Internet? The Internet has been in widespread public use in developed countries for more than a quarter century. It’s been more than 40 years since the introduction of the personal computer, and about 20 since the first smartphones came out. And yet we still have no easy way to use these nearly ubiquitous tools to vote.

A subset of technologists has long dreamed of Internet voting, via an app. But in most of the world, it remains just that: a dream. The main exception is Estonia, where Internet voting experiments began 20 years ago. The practice is now mainstream there—in the country’s 2019 parliamentary elections, nearly 44 percent of voters voted over the Internet without any problems. In the United States, over the past few years, some 55 elections have used app-based Internet voting as an option for absentee voting. However, that’s a very tiny percentage of the thousands of elections conducted by municipalities during that period. Despite Estonia’s favorable experiences with what it calls “i-voting,” in much of the rest of the world concerns about security, privacy, and transparency have kept voting over the Internet in the realm of science fiction. 

The rise of blockchain, a software-based system for guaranteeing the validity of a chain of transactions, sparked new hopes for Internet voting. West Virginia experimented with blockchain absentee voting in 2018, but election technology experts worried about possible vulnerabilities in recording, counting, and storing an auditable vote without violating the voter’s privacy.  The lack of transparency by the system provider, Voatz, did not dispel these worries. After a report from MIT’s Internet Policy Research Initiative reinforced those concerns, West Virginia canceled plans to use blockchain voting in this year’s primary.    

We can argue all we want about the promise and perils of Internet voting, but it won’t change the fact that this option won’t be available for this November’s general election in the United States. So officials will have to stick with tried-and-true absentee-voting techniques, improving them to avoid the fiascoes of the recent past. Fortunately, this shouldn’t be hard. Think of shoring up this election as an exercise involving flow management, human-factors engineering, and minimizing risk in a hostile (political) environment—one with a low signal-to-noise ratio. 

This coming November 3 will see a record voter turnout in the United States, an unprecedented proportion of which will be voting early and by mail, all during an ongoing pandemic in an intensely partisan political landscape with domestic and foreign actors trying to disrupt or discredit the election. To cope with such numbers, we’ll need to “flatten the curve.” A smoothly flowing election will require encouraging as many people as possible to vote in the days and weeks before Election Day and changing election procedures and rules to accommodate those early votes.  

That tactic will of course create a new challenge: handling the tens of millions of people voting by mail in a major acceleration of the U.S. trend of voting before Election Day. Historically, U.S. voters could cast an absentee ballot by mail only if they were out of state or had another state-approved excuse. But in 2000, Oregon pioneered the practice of voting by mail exclusively. There is no longer any in-person voting in Oregon—and, it is worth noting, Oregon never experienced any increases in fraud as it transitioned to voting by mail.

Overall, mail-in voting in U.S. presidential elections doubled from 12 percent (14 million) of all votes in 2004 to 24 percent (33 million) votes cast in 2016. Those numbers, however, hide great diversity: Ninety-seven percent of voters in the state of Washington but only 2 percent of West Virginians voted by mail in 2016. Early voting–in person at a polling station open before Election Day–also expanded from 8 percent to 17 percent of all votes during that same 12-year period, 2004 to 2016. 

Today, absentee voting and vote-by-mail are essentially equivalent as more states relax restrictions on mail-in voting. In 2020, five more states–Washington, Colorado, Utah, Hawaii, and California–will join Oregon to vote by mail exclusively. A COVID-induced relaxation of absentee-ballot rules means that over 190 million Americans, not quite two-thirds of the total population, will have a straightforward vote-by-mail option this fall. 

Whether they will be able to do so confidently and successfully is another question. The main concern with voting by mail is rejected ballots. The overall rejection rate for ballots at traditional, in-person voting places in the United States is 0.01 percent. Compare that with a 1 percent rejection rate for mail-in ballots in Florida in 2018 and a deeply dismaying 6 percent rate for Virginia in its 2020 primary.

Nearly all of the Virginia ballots were rejected because they arrived late, reflecting the lack of experience for many voting by mail for the first time. Forgetting to sign a ballot was another common reason for rejection. But votes were also refused because of regulations that some might deem overly strict—a tear in an envelope is enough to get a mail-in vote nixed in some districts. These verification procedures are less forgiving of errors, and first-time voters, especially ethnic and racial minorities, have their ballots rejected more frequently. Post office delivery failures also contributed.    

We already know how to deal with all this and thereby minimize ballot rejection. States could automatically send ballots to registered voters weeks before the actual election date. Voters could fill out their ballot, seal it in an envelope, and place that envelope inside a larger envelope, which they would sign. A bar code on that outside envelope would allow the voter and election administrators to track its location. It is vitally important for voters to have feedback that confirms their vote has been received and counted. 

This ballot could be mailed, deposited in a secure dropbox, or returned in person to the local election office for processing. In 2016, more than half the voters in vote-by-mail states returned their ballots, not by mail but by secure drop-off boxes or by visiting their local election offices. 

The signature on the outer envelope would be verified against a signature on file, either from a driver’s license or on a voting app, to guard against fraud. If the signature appeared odd or if some other problem threatened the ballot’s rejection, the election office would contact the voter by text, email, or phone to sort out the problem. The voter could text a new signature, for example.  Once verified, the ballots could be promptly counted, either before or on Election Day.

The problem with these best practices is that they are not universal. A few states, including Arizona, already employ such procedures and enjoy very low rates of rejected mail-in ballots: Maricopa County, Ariz., had a mail-in ballot rejection rate of just 0.03 percent in 2018, roughly on a par with in-person voting. Most states, however, lack these procedures and infrastructure: Only 13 percent of mail-in ballots this primary season had bar codes.

The ballots themselves could stand some better human-factors engineering. Too often, it is too challenging to correctly fill out a ballot or even an application for a ballot. In 2000, a poorly designed ballot in Florida’s Palm Beach County may have deprived Al Gore of Florida’s 25 electoral votes, and therefore the presidency. And in Travis County, Texas, a complex, poorly designed application to vote by mail was incorrectly filled out by more than 4,500 voters earlier this year. Their applications rejected, they had to choose on Election Day between not voting or going to the polls and risking infection. And yet help is readily available: Groups like the Center for Civic Design can provide best practices.

Training on signature verification also widely varies within and among states. Only 20 states now require that election officials give voters an opportunity to correct a disqualified mail-in ballot. 

Timely processing is the final mail-in challenge. Eleven states do not start processing absentee ballots until Election Day, three start the day after, and three start the day before. In a prepandemic election, mail-in ballots made up a smaller share of all votes, so the extra time needed for processing and counting was relatively minor. Now with mail-in ballots potentially making up over half of all votes in the United States, the time needed to process and count ballots may delay results for days or weeks. In the current political climate of suspicion and hyper-partisanship, that could be disastrous—unless people are informed about it and expecting it.

The COVID-19 pandemic is strongly accelerating a trend toward absentee voting that began a couple of decades ago. Contrary to what many people were anticipating five or 10 years ago, though, most of the world is not moving toward Internet voting but rather to a more advanced version of what they’re using already. That Estonia has done so well so far with i-voting offers a tantalizing glimpse of a possible future. For 99.998 percent of the world’s population, though, the paper ballot will reign for the foreseeable future. Fortunately, major technological improvements envisioned over the next decade will increase the security, reliability, and speedy processing of paper ballots, whether they’re cast in person or by mail. 

Jonathan Coopersmith is a Professor at Texas A&M University, where he teaches the history of technology.  He is the author of FAXED: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015).  His current interests focus on the importance of froth, fraud, and fear in emerging technologies.  For the last decade, he has voted early and in person. 

For the IoT, User Anonymity Shouldn’t Be an Afterthought. It Should Be Baked In From the Start

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/for-the-iot-user-anonymity-shouldnt-be-an-afterthought-it-should-be-baked-in-from-the-start

The Internet of Things has the potential to usher in many possibilities—including a surveillance state. In the July issue, I wrote about how user consent is an important prerequisite for companies building connected devices. But there are other ways companies are trying to ensure that connected devices don’t invade people’s privacy.

Some IoT businesses are designing their products from the start to discard any personally identifiable information. Andrew Farah, the CEO of Density, which developed a people-counting sensor for commercial buildings, calls this “anonymity by design.” He says that rather than anonymizing a person’s data after the fact, the goal is to design products that make it impossible for the device maker to identify people in the first place.

“When you rely on anonymizing your data, then you’re only as good as your data governance,” Farah says. With anonymity by design, you can’t give up personally identifiable information, because you don’t have it. Density, located in Macon, Ga., settled on a design that uses four depth-perceiving sensors to count people by using height differentials.
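The height-differential idea can be sketched in a few lines of Python. Everything here (the sensor height, the person threshold, the falling-edge counting logic) is illustrative guesswork for a ceiling-mounted depth sensor over a doorway, not Density's actual algorithm.

```python
# Hypothetical sketch of depth-based people counting: a ceiling-mounted
# sensor over a doorway reports the distance to the nearest surface below.
# A person passing under it shortens that distance; each dip below the
# threshold counts as one entry. No image is ever captured, so there is
# nothing personally identifiable to store.

CEILING_MM = 2700        # assumed sensor height above the floor
MIN_PERSON_MM = 1000     # anything "taller" than 1 m counts as a person

def count_entries(depth_readings_mm):
    """Count rising-edge events where estimated height crosses the threshold."""
    entries = 0
    person_present = False
    for d in depth_readings_mm:
        height = CEILING_MM - d          # height of whatever is under the sensor
        if height >= MIN_PERSON_MM and not person_present:
            entries += 1                 # a new person entered the beam
            person_present = True
        elif height < MIN_PERSON_MM:
            person_present = False       # doorway is clear again
    return entries

# Two people walk through, separated by an empty frame:
readings = [2700, 2650, 1000, 950, 2700, 2690, 1100, 2700]
print(count_entries(readings))  # 2
```

Note that the only thing the sensor ever learns is a height estimate, which is the "anonymity by design" property Farah describes.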

Density could have chosen to use a camera to easily track the number of people in a building, but Farah balked at the idea of creating a surveillance network. Taj Manku, the CEO of Cognitive Systems, was similarly concerned about the possibilities of his company’s technology. Cognitive, in Waterloo, Ont., Canada, developed software that interprets Wi-Fi signal disruptions in a room to understand people’s movements.

With the right algorithm, the company’s software could tell when someone is sleeping or going to the bathroom or getting a midnight snack. I think it’s natural to worry about what happens if a company could pull granular data about people’s behavior patterns.

Manku is worried about information gathered after the fact, like if police issued a subpoena for Wi-Fi disruption data that could reveal a person’s actions in their home. Cognitive does data processing on the device and then dumps that data. Nothing identifiable is sent to the cloud. Likewise, customers who buy Cognitive’s software can’t access the data on their devices, just the insight. In other words, the software would register a fall, without including a person’s earlier actions.
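The "insight, not data" pattern Manku describes might look something like the following Python sketch. The function names and the fall-detection heuristic are invented for illustration; they are not Cognitive's API.

```python
# Illustrative sketch of on-device processing: raw motion samples are
# reduced to a single event label on the device, then purged. Only the
# insight ever leaves for the cloud.

def detect_event(raw_samples):
    """Classify a window of (hypothetical) motion-magnitude samples."""
    if max(raw_samples) - min(raw_samples) > 5.0:
        return "fall-detected"
    return "normal"

def process_on_device(raw_samples):
    event = detect_event(raw_samples)
    raw_samples.clear()       # purge raw data; it never leaves the device
    return {"event": event}   # only the insight is transmitted

window = [0.1, 0.2, 7.9, 0.1]
report = process_on_device(window)
print(report, len(window))  # {'event': 'fall-detected'} 0
```

A subpoena served on the cloud side of such a system could recover only event labels, not the underlying behavioral data.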

“You have to start thinking about it from day one when you’re architecting the product, because it’s very hard to think about it after,” Manku says. It’s difficult to shut things down retroactively to protect privacy. It’s best if sensitive information stays local and gets purged.

Companies that promote anonymity will lose helpful troves of data. These could be used to train future machine-learning models in order to optimize their devices’ performance. Cognitive gets around this limitation by having a set of employees and friends volunteer their data for training. Other companies decide they don’t want to get into the analytics market or take a more arduous route to acquire training data for improving their devices.

If nothing else, companies should embrace anonymity by design in light of the growing amount of comprehensive privacy legislation around the world, like the General Data Protection Regulation in Europe and the California Consumer Privacy Act. Not only will it save them from lapses in their data-governance policies, it will guarantee that when governments come knocking for surveillance data, these businesses can turn them away easily. After all, you can’t give away something you never had.

This article appears in the September 2020 print issue as “Anonymous by Design.”

How to Improve Threat Detection and Hunting in the AWS Cloud Using the MITRE ATT&CK Matrix

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_improve_threat_detection_and_hunting_in_the_aws_cloud

SANS and AWS Marketplace will discuss the exercise of applying MITRE’s ATT&CK Matrix to the AWS Cloud. They will also explore how to enhance threat detection and hunting in an AWS environment to maintain a strong security posture.

APTs Use Coronavirus as a Lure

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/apts_use_coronavirus_as_a_lure


Threat actors are closely monitoring public events happening around the world and quickly employing those themes in attack vectors to take advantage of the opportunity. Unsurprisingly, various Advanced Persistent Threat (APT) groups are using the coronavirus pandemic as a theme in several malicious campaigns.

By using social engineering tactics such as spam and spear phishing with COVID-19 as a lure, cybercriminals and threat actors increase the likelihood of a successful attack. In this paper, we:

  • Provide an overview of several different APT groups using coronavirus as a lure.
  • Categorize APT groups according to techniques used to spam or send phishing emails.
  • Describe various attack vectors, timeline of campaigns, and malicious payloads deployed.
  • Analyze use of COVID-19 lure and code execution.
  • Dig into the details of each APT group, their origins, what they’re known for, and their latest strikes.

Twitter’s Direct Messages Are a Bigger Headache Than the Bitcoin Scam

Post Syndicated from Fahmida Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/twitters-direct-messages-is-a-bigger-headache-than-the-bitcoin-scam

Twitter has re-enabled the ability for verified accounts to post new messages and restored access to locked accounts after Wednesday’s unprecedented account takeover attack. The company is still investigating what happened in the attack, which resulted in accounts belonging to high-profile individuals posting similar messages asking people to send Bitcoins to an unknown cryptocurrency wallet. 

Twitter said about 130 accounts were affected in this attack, and they included high-profile individuals such as Tesla CEO Elon Musk, former president Barack Obama, presumptive Democratic candidate for president Joe Biden, former New York City mayor Michael Bloomberg, and Amazon CEO Jeff Bezos. While there was “no evidence” the attackers had obtained account passwords, Twitter has not yet provided any information about anything else the attackers may have accessed, such as users’ direct messages. If attackers harvested the victims’ direct messages for potentially sensitive information, the damage could be far worse than the thousands of dollars the attackers made from the scam.

Messages can contain a lot of valuable information. Elon Musk’s public messages have impacted Tesla’s stock price, so it is possible that something he said in a direct message could also move markets. Even if confidential information was not shared over direct messages, just the knowledge of who these people have spoken to could be dangerous in the wrong hands. An attacker could know about the next big investment two CEOs were discussing, or learn what politicians discussed when they thought they were on a secure communications channel,  says Max Heinemeyer, director of threat hunting at security company Darktrace.

“It matters a lot if DMs were accessed: Imagine what kind of secrets, extortion material and explosive news could be gained from reading the private messages of high-profile, public figures,”  said Heinemeyer.

The attackers used social engineering to access internal company tools, but it’s not known if the tools provided full access or if there were limitations in what the attackers could do at that point. The fact that Twitter does not offer end-to-end encryption for direct messages increases the likelihood that attackers were able to see the contents of the messages. End-to-end encryption is a way to protect the data as it travels from one location to another. The message’s contents are encrypted on a user’s device, and only the intended recipient can decrypt the message to read it. If end-to-end encryption had been in place for direct messages, the attackers may have been able to see in the internal tool that there were messages, but not know what the messages actually said.
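A toy Python sketch makes the principle concrete. It uses a one-time pad, not a real messaging protocol (production systems rely on vetted designs such as the Signal protocol), and the message and variable names are invented for illustration. The point is that a relay which stores only ciphertext, like an internal admin tool, learns nothing about the message.

```python
import secrets

# Toy illustration of the end-to-end principle (a one-time pad, NOT a real
# messaging protocol): the key exists only on the two endpoints, so any
# relay in the middle sees only ciphertext.

def encrypt(key, plaintext):
    # XOR each plaintext byte with the corresponding key byte.
    return bytes(k ^ b for k, b in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"quarterly numbers look bad"
key = secrets.token_bytes(len(message))  # shared only by sender and recipient

ciphertext = encrypt(key, message)       # this is all the server ever stores
print(decrypt(key, ciphertext))          # b'quarterly numbers look bad'
```

An attacker rummaging through the relay's stored data would see only `ciphertext`, which without the key is indistinguishable from random bytes.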

“We don’t know the full extent of the attack, but Twitter wouldn’t have to worry about whether or not the attacker read, changed, or exfiltrated DMs if they had end-to-end encryption for DMs like we’ve asked them to,” the Electronic Frontier Foundation (EFF) said in an emailed statement. Eva Galperin, EFF’s director of cybersecurity, said the EFF asked Twitter to begin encrypting DMs as part of the EFF’s Fix It Already campaign in 2018.

“They did not fix it,” Galperin said.

Providing end-to-end encryption for direct messages is not an insurmountable challenge for Twitter, says Richard White, adjunct professor of cybersecurity at University of Maryland Global Campus. Encrypting data in motion can be complex, as it takes a lot of resources and memory for the devices to perform real-time decryption. But many messaging platforms have successfully implemented end-to-end encryption. There are also services that have addressed the challenge of having encrypted messages accessible from multiple devices. The real issue is the magnitude of Twitter’s reach, the complexity of its infrastructure, and the sheer number of global users, White says. Scaling up what has worked in other cases is not straightforward because the issues become more complex, making the changes “more time-consuming and costly,” White said.

Twitter was working on end-to-end encrypted direct messages back in 2018, Sen. Ron Wyden said in a statement. It is not clear if the project was still underway at the time of the hack or if it had been shuttered.

“If hackers gained access to users’ DMs, this breach could have a breathtaking impact for years to come,” Wyden said.

It is possible the Bitcoin scam was a “head-turning attack” that acted as a smokescreen to hide the attackers’ true objectives, says White. There is precedent for this kind of subterfuge, such as the distributed denial-of-service attack against Sony in 2011, during which attackers compromised 101 million user accounts. Back in 2013,  Gartner analyst Avivah Litan warned that criminals were using DDoS attacks to distract bank security staff from detecting fraudulent money transfers. 

“Attackers making a lot of noise in one area while secretly coming in from another is a very effective tactic,” White said.

White says it’s unlikely that this attack was intended as a distraction because it was too noisy. Being that obvious undermines the effectiveness of the diversion as it doesn’t give attackers time to carry out their activities. A diversion should not attract attention to the very accounts being targeted.

However, that doesn’t mean the attackers didn’t access any of the victims’ direct messages, or that they won’t do something with those messages now, even if that hadn’t been their primary goal.

“It is unclear what other nefarious activities the attackers may have done behind the scenes,” Heinemeyer said.

More Worries over the Security of Web Assembly

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/telecom/security/more-worries-over-the-security-of-web-assembly

In 1887, Lord Acton famously wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” He was, of course, referring to people who wield power, but the same could be said for software.

As Luke Wagner of Mozilla described in these pages in 2017, the Web has recently adopted a system that affords software running in browsers much more power than was formerly available—thanks to something called Web Assembly, or Wasm for short. Developers take programs written, say, in C++, originally designed to run natively on the user’s computer, and compile them into Wasm that can then be sent over the Web and run on a standard Web browser. This allows the browser-based version of the program to run nearly as fast as the native one—giving the Web a powerful boost. But as researchers continue to discover, with that additional power comes additional security issues.

One of the earliest concerns with Web Assembly was its use for running software that would mine cryptocurrency using people’s browsers. Salon, to note a prominent example, began in February of 2018 to allow users to browse its content without having to view advertisements so long as they allowed Salon to make use of their spare CPU cycles to mine the cryptocurrency Monero. This represented a whole new approach to web publishing economics, one that many might prefer to being inundated with ads.

Salon was straightforward about what it was doing, allowing readers to opt in to cryptomining or not. Its explanation of the deal it was offering could be faulted perhaps for being a little vague, but it did address such questions as “Why are my fans turning on?”

To accomplish this in-browser crypto-mining, Salon used software developed by a now-defunct operation called CoinHive, which made good use of Web Assembly for the required number crunching. Such mining could also have been carried out in the Web’s traditional in-browser programming language, Javascript, but much less effectively.

Although there was debate within the computer-security community for a while about whether such cryptocurrency mining really constituted malware or just a new model for monetizing websites, in practice it amounted to malware, with most sites involved not informing their visitors that such mining was going on. In many cases, you couldn’t fault the website owners, who were oblivious that mining code had been sneaked onto their websites.

A 2019 study conducted by researchers at the Technical University of Braunschweig in Germany investigated the top 1 million websites and found Web Assembly to be used in about 1,600 of them. More than half of those instances were for mining cryptocurrency. Another shady use of Web Assembly they found, though far less prevalent, was for code obfuscation: to hide malicious actions running in the browser that would be more apparent if done using Javascript.

To make matters even worse, security researchers have increasingly been finding vulnerabilities in Web Assembly, some that had been known and rectified for native programs years ago. The latest discoveries in this regard appear in a paper posted online by Daniel Lehmann and Michael Pradel of the University of Stuttgart, and Johannes Kinder of Bundeswehr University Munich, submitted to the 2020 Usenix Security Conference, which is to take place in August. These researchers show that Web Assembly, at least as it is now implemented, contains vulnerabilities that are much more subtle than just the possibility that it could be used for surreptitious cryptomining or for code obfuscation.

One class of vulnerabilities stems fundamentally from how Web Assembly manages memory compared with what goes on natively. Web Assembly code runs on a virtual machine, one the browser creates. That virtual machine includes a single contiguous block of memory without any holes. That’s different from what takes place when a program runs natively, where the virtual memory provided for a program has many gaps—referred to as unmapped pages. When code is run natively, a software exploit that tries to read or write to a portion of memory that it isn’t supposed to access could end up targeting an unmapped page, causing the malicious program to halt. Not so with Web Assembly.
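The difference can be simulated in a few lines of Python (purely illustrative; real Wasm stores are machine instructions, not Python calls). The "memory" below is one contiguous block, so a write that runs past a buffer silently lands in the neighboring data instead of faulting on an unmapped page, as it might natively.

```python
# Toy model of Wasm linear memory: one contiguous block with no holes.
linear_memory = bytearray(64)

BUF, BUF_LEN = 0, 16                     # a 16-byte buffer at offset 0
SECRET = 16                              # adjacent data right after it
linear_memory[SECRET:SECRET + 4] = b"key!"

def store(offset, data):
    """A Wasm-style store, bounds-checked only against the whole memory."""
    if offset < 0 or offset + len(data) > len(linear_memory):
        raise RuntimeError("trap: out of bounds")  # the only check made
    linear_memory[offset:offset + len(data)] = data

# Writing 20 bytes into the 16-byte buffer might hit an unmapped page and
# crash natively; here it quietly overwrites the neighboring "secret".
store(BUF, b"A" * 20)
print(bytes(linear_memory[SECRET:SECRET + 4]))  # b'AAAA'
```

Only a write past the end of the entire memory block traps; anything inside it, however wrong, succeeds silently.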

Another memory-related vulnerability of Web Assembly arises from the fact that an attacker can deduce how a program’s memory will be laid out simply by examining the Wasm code. For a native application, the computer’s operating system offers what’s called address space layout randomization, which makes it harder for an attacker to target a particular spot in program memory.

To help illustrate the security weaknesses of Web Assembly, these authors describe a hypothetical Wasm application that converts images from one format to another. They imagine that somebody created such a service by compiling a program that uses a version of the libpng library that contains a known buffer-overflow vulnerability. That wouldn’t likely be a problem for a program that runs natively because modern compilers include what are known as stack canaries—a protection mechanism that prevents exploitation of this kind of vulnerability. Web Assembly includes no such protections and thus would inherit a vulnerability that was truly problematic.
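The canary mechanism itself is easy to sketch. The following Python toy model is my own illustration, not how a compiler actually implements it: a random value sits between a buffer and more sensitive frame data, and if an overflow clobbers it, execution aborts before the corruption can be exploited.

```python
import secrets

def call_with_canary(write):
    """Run an untrusted write into a 16-byte buffer guarded by a canary."""
    frame = bytearray(24)              # [16-byte buffer][8-byte canary]
    canary = secrets.token_bytes(8)    # fresh random value per call
    frame[16:24] = canary
    write(frame)                       # untrusted write into the buffer
    if frame[16:24] != canary:         # verified before the function "returns"
        raise RuntimeError("stack smashing detected")

def benign(frame):
    frame[0:16] = b"B" * 16            # stays inside the buffer

def overflow(frame):
    frame[0:20] = b"B" * 20            # 4 bytes too many

call_with_canary(benign)               # passes silently
try:
    call_with_canary(overflow)
except RuntimeError as err:
    print(err)                         # stack smashing detected
```

Because the canary is random and checked before control returns, an overflowing write is caught even though the write itself succeeded, which is precisely the safety net the authors note Wasm lacks.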

Although the creators of Web Assembly took pains to make it safe, it shouldn’t come as a great surprise that unwelcome applications of its power and unexpected vulnerabilities of its design have come to light. That’s been the story of networked computers from the outset, after all.

Researchers Demo a New Polarization-Based Encryption Strategy

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/security/researchers-demo-a-new-polarizationbased-encryption-strategy

Telecommunications can never be too secure. That’s why researchers continue to explore new encryption methods. One emerging method is called ghost polarization communication, or GPC. Researchers at the Technical University of Darmstadt, in Germany, recently demonstrated this approach in a proof-of-principle experiment.

GPC uses unpolarized light to keep messages safe. Sunlight is unpolarized light, as are fluorescent and incandescent lights and LEDs. All light waves are made up of an electric field and a magnetic field propagating in the same direction through space. In unpolarized light, the orientation of the electric-field component of a light wave fluctuates randomly. That’s the opposite of polarized light sources such as lasers, in which this orientation is fixed.

It’s often easiest to imagine unpolarized light as having no specific orientation at all, since it changes on the order of nanoseconds. However, according to Wolfgang Elsaesser, one of the Darmstadt researchers who developed GPC, there’s another way to look at it: “Unpolarized light can be viewed as a very fast distribution on the Poincaré sphere.” (The Poincaré sphere is a common mathematical tool for visualizing polarized light.)

In other words, unpolarized light could be a source of rapidly generated random numbers that can be used to encode a message—if the changing polarization can be measured quickly enough and decoded at the receiver.

Suppose Alice wants to send Bob a message using GPC. In the proof of principle that the Darmstadt team developed, Alice passes unpolarized light through a half-wave plate, a device that alters the polarization of a beam of light, to encode her message; the plate shifts the polarization according to the message bits being sent.

Bob can decode the message only when he receives Alice’s altered beam as well as a reference beam, and then correlates the two. Anyone attempting to listen in on the conversation by intercepting the altered beam would be stymied, because they’d have no reference against which to decode the message.

GPC earned its name because a message may be decoded only by using both the altered beam and a reference beam. “Ghost” refers to the entangled nature of the beams; separately, each one is useless. Only together can they transmit a message. And messages are sent via the beams’ polarizations, hence “ghost polarization.”

Elsaesser says GPC is possible with both wired and wireless communications setups. For the proof-of-principle tests, the team relied largely on wired setups, which were slightly easier to measure than over-the-air tests. The group used standard commercial equipment, including fiber-optic cable and 1,550-nanometer light sources (1,550 nanometers is the most common wavelength of light used in fiber communications).

The Darmstadt group confirmed GPC was possible by encoding a short message, mapping 0 and 1 bits to polarization angles agreed upon by the sender and receiver. The receiver decoded the message by comparing the polarization angles of the reference beam with those of the altered beam carrying the message. The group also confirmed that the message could not be decoded by an outside observer without access to the reference beam.
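The encode-correlate-decode loop can be conveyed in a toy simulation (my own illustration, with an invented bit-to-angle mapping and a noiseless channel, not the Darmstadt team's actual signal processing):

```python
import random

# Toy ghost-polarization scheme: the "reference beam" is a rapidly
# fluctuating random polarization angle; Alice encodes each bit as an
# agreed angular offset, and Bob recovers bits by correlating the two beams.

BIT_OFFSET = {0: 0.0, 1: 90.0}   # agreed mapping of bits to polarization shifts

def unpolarized_reference(n, seed=42):
    """Random polarization angles in degrees, standing in for unpolarized light."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, 180.0) for _ in range(n)]

def encode(bits, reference):
    # Alice's half-wave plate shifts each random angle by the agreed offset.
    return [(theta + BIT_OFFSET[b]) % 180.0 for theta, b in zip(reference, bits)]

def decode(altered, reference):
    # Bob correlates the altered beam against the reference beam.
    bits = []
    for a, theta in zip(altered, reference):
        diff = (a - theta) % 180.0
        bits.append(0 if diff < 45 or diff > 135 else 1)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
ref = unpolarized_reference(len(message))
beam = encode(message, ref)
assert decode(beam, ref) == message
# An eavesdropper holding only `beam` sees angles uniform on [0, 180):
# with nothing to correlate against, the message is unrecoverable.
```

The security intuition is visible in the last comment: the altered beam alone is statistically indistinguishable from the unpolarized source, so only the holder of the reference beam learns anything.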

However, Elsaesser stresses that the tests were preliminary. “The weakness at the moment,” he says, “is that we have not concentrated on the modulation speed or the transmission speed.” What mattered was proving that the idea could work at all. Elsaesser says this approach is similar to that taken for other forms of encryption, like chaos communications, which started out with low transmission speeds but have seen rates tick up as the techniques have been refined.

Robert Boyd, a professor of optics at the University of Rochester, in N.Y., says that the most important question to answer about GPC is how it compares to the Rivest-Shamir-Adleman (RSA) system commonly used to encode messages today. Boyd suspects that the Darmstadt approach, like the RSA system, is not absolutely secure. But he says that if GPC turned out to be more secure or more efficient to implement—even by a factor of two—it would have a tremendous advantage over RSA.

Moving forward, Elsaesser already has ideas on how to simplify the Darmstadt system. And because the team has now demonstrated GPC with commercial components, Elsaesser expects that a refined GPC system could simply be plugged into existing networks for an easy security upgrade.

This article appears in the July 2020 print issue as “New Encryption Strategy Passes Early Test.”

5G Networks Will Inherit Their Predecessors’ Security Issues

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/5g-networks-will-juggle-legacy-security-issues-for-years

There is a lot of excitement over 5G’s promise of blazing speeds, lower latencies, and more robust security than 3G and 4G networks. However, the fact that each network operator has its own timetable for rolling out the next-generation cellular technology means early 5G will actually be a patchwork of 2G, 3G, 4G, and 5G networks. The upshot: For the next few years, 5G won’t be able to fully deliver on its promises.

The fact that 5G networks will have to interoperate with legacy networks means they will remain vulnerable to attacks such as spoofing, fraud, user impersonation, and denial of service. Network operators will continue to rely on the GPRS Tunneling Protocol (GTP), which is designed to allow data packets to move back and forth between different operators’ wireless networks, as may happen when a user is roaming. (GPRS stands for General Packet Radio Service, a standard for mobile data packets.) Telecom security company Positive Technologies said in a recent report that as long as GTP is in use, the protocol’s security issues will affect 5G networks.

The Internet of Things Has a Consent Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/the-internet-of-things-has-a-consent-problem

Consent has become a big topic in the wake of the Me Too movement. But consent isn’t just about sex. At its core, it’s about respect and meeting people where they are at. As we add connected devices to homes, offices, and public places, technologists need to think about consent.

Right now, we are building the tools of public, work, and home surveillance, and we’re not talking about consent before we implement those tools. Sensors used in workplaces and homes can track sound, temperature, occupancy, and motion to understand what a person is doing and what the surrounding environment is like. Plenty of devices have cameras and microphones that feed back into a cloud service.

In the cloud, images, conversations, and environmental cues could be accessed by hackers. Beyond that, simply by having a connected device, users give the manufacturer’s employees a clear window into their private lives. While I personally may not mind if Google knows my home temperature or independent contractors at Amazon can accidentally listen in on my conversations, others may.

For some, the issue with electronic surveillance is simply that they don’t want these records created. For others, getting picked up by a doorbell camera might represent a threat to their well-being, given the U.S. government’s increased use of facial recognition and attempts to gather large swaths of electronic data using broad warrants.

How should companies think about IoT consent? Transparency is important—any company selling a connected device should be up-front about its capabilities and about what happens to the device data. Informing the user is the first step.

But the company should encourage the user to inform others as well. It could be as simple as a sticker alerting visitors that a house is under video surveillance. Or it might be a notification in the app that asks the user to explain the device’s capabilities to housemates or loved ones. Such a notification won’t help those whose partners use connected devices as an avenue for abuse and control, but it will remind anyone setting up a device in their home that it has the potential for surveillance-like access to their family members.

In professional settings, consent can build trust in a connected product or automated system. For example, AdventHealth Celebration, a hospital in the Orlando, Fla., area, has implemented a tracking solution for nurses that monitors their movements during a shift to determine optimal workflows. Rather than just turning the system loose, however, Celebration informed nurses before bringing in the system and has since worked with them to interpret the results.

So far, the hospital has shifted how it allocates patients to rooms to make sure high-needs patients aren’t next to one another and assigned to the same nurse. But getting the nurses involved at the start was crucial to success. Cities deploying facial recognition in schools or in airports without asking citizens for input would do well to pay attention to the success of Celebration’s system. A failure to ask for input or to inform citizens shows a clear lack of concern around consent.

Which in turn implies that our governments aren’t keen on respect and meeting people where they are at. Even if that’s true for governments, is that the message that tech companies want to send to customers?

This article appears in the July 2020 print issue as “The IoT’s Consent Problem.”

Q&A: The Pioneers of Web Cryptography on the Future of Authentication

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/pioneers-web-cryptography-future-authentication

Martin Hellman is one of the inventors of public-key cryptography. His work on public key distribution with Whitfield Diffie is now known as the Diffie–Hellman key exchange. The method, which allows two parties that have no prior knowledge of each other to establish a shared secret key, is the basis for many of the security protocols in use today.
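The exchange itself fits in a few lines of Python. This is a bare sketch with a toy 64-bit prime for readability; real deployments use vetted groups of 2,048 bits or more, plus authentication to stop man-in-the-middle attacks:

```python
import secrets

# Diffie-Hellman key exchange: two parties agree on a shared secret over a
# public channel without ever transmitting the secret itself.
p = 2**64 - 59          # public prime modulus (toy size; use >= 2048 bits in practice)
g = 2                   # public generator

a = secrets.randbelow(p - 3) + 2   # Alice's private exponent, never shared
b = secrets.randbelow(p - 3) + 2   # Bob's private exponent, never shared

A = pow(g, a, p)        # Alice sends A in the clear
B = pow(g, b, p)        # Bob sends B in the clear

# Each side raises the other's public value to its own private exponent:
k_alice = pow(B, a, p)  # (g^b)^a mod p
k_bob = pow(A, b, p)    # (g^a)^b mod p
assert k_alice == k_bob  # identical shared secret, never sent over the wire
```

An eavesdropper sees p, g, A, and B, but recovering the secret requires solving the discrete-logarithm problem, which is believed intractable for well-chosen large groups.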

Taher Elgamal, who was once Hellman’s student at Stanford, is known as the “father of SSL” for developing the Secure Sockets Layer protocol used to secure transactions over the Internet. His 1985 paper “A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms” outlined the ideas that made secure e-commerce possible. Elgamal shared the 2019 Marconi Prize with Paul Kocher for the development.

Tom “TJ” Jermoluk, a former Bell Labs engineer, is now the CEO of Beyond Identity, a new identity-management platform. Beyond Identity “stands on the shoulders of giants,” Jermoluk says, referring in part to the work of Hellman and Elgamal, as its platform brings together public-key cryptography and X.509 certificates to solve the authentication problem: how to determine whether people on the Internet are who they say they are.

Elgamal, Hellman, and Jermoluk talked about how recent advances in technology made it possible to change how we handle authentication, and what the future would look like.

How to Protect Privacy While Mining Millions of Patient Records

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/covid19-study-privacy-mining-millions-patient-records

When people in the United Kingdom began dying from COVID-19, researchers saw an urgent need to understand all the possible factors contributing to such deaths. So in six weeks, a team of software developers, clinicians, and academics created an open-source platform designed to securely analyze millions of electronic health records while protecting patient privacy. 

Sneakier and More Sophisticated Malware Is On the Loose

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/security/analysis-shows-malware-evolving-encrypted-sophisticated

A new study analyzing more than a million samples of Android malware illustrates how malicious apps have evolved over time. The results, published 30 March in IEEE Transactions on Dependable and Secure Computing, show that malware coding is becoming more cleverly hidden, or obfuscated.

“Malware in Androids is still a huge issue, despite the abundance of research,” says Guillermo Suarez-Tangil, a researcher at King’s College London who co-led the study. “A central challenge is dealing with malware that is repackaged.”

Tracking COVID-19 With the IoT May Put Your Privacy at Risk

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/tracking-covid19-with-the-iot-may-put-your-privacy-at-risk


The Internet of Things makes the invisible visible. That’s the IoT’s greatest feature, but also its biggest potential drawback. More sensors on more people means the IoT becomes a visible web of human connections that we can use to, say, track down a virus.

Track-and-trace programs are already being used to monitor outbreaks of COVID-19 and its spread. But because these programs work through easily enabled mass surveillance, we need to put rules in place governing any attempt to track the movements of people.

In April, Google and Apple said they would work together to build an opt-in program for Android or iOS users. The program would use their phones’ Bluetooth connection to deliver exposure notifications—meaning that transmissions are tracked by who comes into contact with whom, rather than where people spend their time. Other proposals use location data provided by phone applications to determine where people are traveling.

All of these ideas have slightly different approaches, but at their core they’re still tracking programs. Any such program that we implement to track the spread of COVID-19 should follow some basic guidelines to ensure that the data is used only for public health research. This data should not be used for marketing, commercial gain, or law enforcement. It shouldn’t even be used for research outside of public health.

Let’s talk about the limits we should place around this data. A tracking program for COVID-19 should be implemented only for a prespecified duration that’s associated with a public health goal (like reducing the spread of the virus). So, if we’re going to collect device data and do so without requiring a user to opt in, governments need to enact legislation that explains what the tracking methodology is, requires an audit for accuracy and efficacy by a third party, and sets a predefined end.

Ethical data collection is also critical. Apple and Google’s Bluetooth method uses encrypted tokens to track people as they pass other people. The Bluetooth data is people-centric, not location-centric. Once a person uploads a confirmation that they’ve been infected, their device can issue notifications to other devices that were recently nearby, alerting users—anonymously—that they may have come in contact with someone who’s infected.
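A heavily simplified sketch shows why this design is people-centric rather than location-centric (my own illustration; the actual Apple/Google protocol derives rolling identifiers from daily keys and rotates them every 10 to 20 minutes):

```python
import secrets

# Rotating-token exposure notification, reduced to its core idea: phones
# broadcast random tokens, remember tokens they hear, and later check those
# against tokens published by people who test positive.

def new_token():
    return secrets.token_hex(16)   # random identifier, tied to no name and no place

# Phone A broadcasts fresh tokens over Bluetooth; phone B logs what it hears.
a_tokens = [new_token() for _ in range(4)]         # everything A broadcast recently
b_heard = {a_tokens[1], a_tokens[2], new_token()}  # B passed A twice, plus a stranger

# A's owner tests positive and uploads A's recent tokens to the health server.
published = set(a_tokens)

# B checks locally for overlap: no identities, no GPS, just token matches.
exposed = bool(b_heard & published)
assert exposed   # B receives an anonymous exposure notification
```

Because every token is random and short-lived, the published list reveals contact events without revealing who the contacts were or where they happened.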

This is good. And while it might be possible to match a person to a device, it would be difficult. Ultimately, linking cases anonymously to devices is safer than simply collecting location data on infected individuals. The latter makes it easy to identify people based on where they sleep at night and work during the day, for example.

Going further, this data must be encrypted on the device, in transit, and when stored on a cloud or government server, so that random hackers can’t access it. Only the agency in charge of track-and-trace efforts should have access to the data from the device. This means that police departments, immigration agencies, or private companies can’t access that data. Ever.

However, researchers should have access to some form of the data after a few years have passed. I don’t know what that time limit should be, but when that time comes, institutional review boards, like those that academic institutions use to protect human research subjects, should be in place to evaluate each request for what could be highly personal data.

If we can get this right, we can use the lessons learned during COVID-19 not only to protect public health but also to promote a more privacy-centric approach to the Internet of Things.

This article appears in the June 2020 print issue as “Pandemic vs. Privacy.”

Coronavirus Pandemic Prompts Privacy-Conscious Europe to Collect Phone Data

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/how-coronavirus-pandemic-europe-collecting-phone-data

Amid the coronavirus pandemic, even privacy-conscious European governments have asked telecom companies for residents’ phone location data in hopes of understanding whether national social distancing measures such as stay-at-home orders and business closures are having any effect on the spread of COVID-19.

Some of the hardest-hit countries, including Italy and Spain, are now open to proposals for mobile apps that can make contact tracing more efficient and alert people who have come into contact with someone infected by the novel coronavirus.

Privacy in the Time of COVID-19

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/telecom/security/privacy-in-the-time-of-covid19

Even though I understand how it works, I consistently find Google’s ability to know how long it will take me to drive somewhere nothing less than magical. GPS signals stream location and speed data from the legion of smartphones in vehicles on the roads between my origin and my destination; it takes only a bit of math to come up with an estimate accurate to within a few minutes.

Lately, researchers have noted that this same data can be used to pinpoint serious traffic accidents minutes before any calls get placed to emergency responders—extra time within that “golden hour” vital to the saving of lives. That result points to a hidden upside to the beast Shoshana Zuboff termed surveillance capitalism. That is, all of this data about our activities being harvested by our devices could be put to work to serve the public good.

We need that now like never before, as the entire planet confronts a pandemic. Fortunately, we’ve been exceptionally clever at making smartphones—more than 4 billion of them in every nation on earth—and they offer an unprecedented opportunity to harness their distributed sensing and intelligence to provide a greater degree of safety than we might have had without them.

Taiwan got into this game early, combining the lessons of SARS with the latest in tracking and smartphone apps to deliver an integrated public health response. As of this writing, that approach has kept the country’s infection rate among the lowest in the world. The twin heads of surveillance capitalism, Google and Facebook, will spend the next year working with public health authorities to provide insights that can guide both the behavior of individuals and public policy. That’s going to give some angina to the advocates of strong privacy policies (I count myself among them), but in an emergency, public good inevitably trumps private rights.

This relaxation of privacy boundaries mustn’t mean the imposition of a surveillance state; that would only result in decreased compliance. Instead, our devices will be doing their utmost to remind us how to stay healthy, much as our smartwatches already do but more pointedly and with access to far more data. Both the data and access to it are what we must be most careful with, looking for the sweet spot between public health and private interest, with an eye to how we can wind back to a world with greater privacy after the crisis abates.

A decade ago I quit using Facebook, because even then I had grave suspicions that my social graph could be weaponized and used against me. Yet this same capacity to know so much about people—their connections, their contacts, even their moods—means we also have a powerful tool to track the spread of outbreaks both of disease and deadly misinformation. While firms like Cambridge Analytica have used this power to sway the body politic, we haven’t used such implements yet for the public good. Now, I’d argue, we have little choice.

We technologists are going to need to get our hands dirty, and only with transparency, honesty, and probity can we do so safely. Yes, use the data, use the tools, use the analytics and the machine learning, bring it all to bear. But let us all know how, when, and why it’s being used. Because it appears that making a turn toward a surveillance society will protect us now—and, in particular, help our most vulnerable to stay safe—we need to be honest about our needs, transparent about our uses, and completely clear about our motives.

New Approach Could Protect Control Systems From Hackers

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/security/new-approach-protects-power-plants-other-major-control-systems-from-hacking

Some of the most important industrial control systems (ICSs), such as those that support power generation and traffic control, must accurately transmit data in the millisecond or even microsecond range. This means that hackers need to interfere with the transmission of real-time data for only the briefest of moments to succeed in disrupting these systems. The seriousness of this type of threat is illustrated by the Stuxnet incursion of 2010, when attackers succeeded in hacking the system supporting Iran’s uranium-enrichment facility, damaging more than 1,000 centrifuges.

Now a trio of researchers has disclosed a novel technique that could more easily identify when these types of attacks occur, triggering an automatic shutdown that would prevent further damage.

The problem was first brought up in a conversation over coffee two years ago. “While describing the security measures in current industrial control systems, we realized we did not know any protection method on the real-time channels,” explains Zhen Song, a researcher at Siemens Corporation. The group began to dig deeper into the research, but couldn’t find any existing security measures.

Part of the reason is that traditional encryption techniques do not account for time. “As well, traditional encryption algorithms are not fast enough for industry hard real-time communications, where the acceptable delay is much less than 1 millisecond, even close to 10 microsecond level,” explains Song. “It will often take more than 100 milliseconds for traditional encryption algorithms to process a small chunk of data.”

However, some research has emerged in recent years on the concept of “watermarking” data during transmission, a technique that can indicate when data has been tampered with. Song and his colleagues sought to apply this concept to ICSs in a way that would be broadly applicable and not require details of the specific ICS. They describe their approach in a study published 5 February in IEEE Transactions on Automation Science and Engineering; some of the source code is available online.

The approach involves transmitting real-time data over an unencrypted channel, as is conventionally done. In the experiment, a specialized signal, called a recursive watermark (RWM), is transmitted at the same time. An algorithm encodes a signal that resembles background noise but carries a distinct pattern. On the receiving end, the RWM signal is monitored for any disruptions, which, if present, indicate an attack is taking place. “If attackers change or delay the real-time channel signal a little bit, the algorithm can detect the suspicious event and raise alarms immediately,” Song says.

Critically, a special “key” for deciphering the RWM algorithm is transmitted through an encrypted channel from the sender to the receiver before the data transmission takes place.
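The correlation idea behind watermark detection can be sketched as follows (my simplification, not the authors' actual RWM construction; the signal values and thresholds are invented). The sender adds a key-seeded pseudorandom "noise" sequence to the real-time signal; the receiver regenerates the same sequence from the shared key and correlates. Tampering or delay destroys the alignment:

```python
import random

def watermark(key, n, amp=0.05):
    # Both ends seed a generator with the shared key, so they produce the
    # same low-amplitude noise-like sequence independently.
    rng = random.Random(key)
    return [amp * (2 * rng.random() - 1) for _ in range(n)]

def correlate(x, y):
    return sum(a * b for a, b in zip(x, y))

key, n = "shared-secret", 1000
signal = [1.0] * n                                # toy sensor reading
wm = watermark(key, n)
sent = [s + w for s, w in zip(signal, wm)]        # signal with embedded watermark

# Receiver subtracts the expected signal and correlates with its own copy.
score = correlate([r - 1.0 for r in sent], watermark(key, n))

delayed = sent[3:] + sent[:3]                     # attacker delays the stream slightly
bad = correlate([r - 1.0 for r in delayed], watermark(key, n))

# Untampered traffic correlates strongly; even a small delay collapses the
# correlation toward zero, which is the cue to raise an alarm.
assert score > 0.5 and abs(bad) < 0.3
```

Because generating and checking such a sequence is far cheaper than encrypting every sample, this style of check can keep up with hard real-time deadlines that conventional ciphers miss.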

Tests show that this approach works fast to detect attacks. “We found the watermark-based approach, such as the RWM algorithm we proposed, can be 32 to 1375 times faster than traditional encryption algorithms in mainstream industrial controllers. Therefore, it is feasible to protect critical real-time control systems with new algorithms,” says Song.

Moving forward, he says this approach could have broader implications for the Internet of Things, which the researchers plan to explore more. 

New Cryptography Method Promising Perfect Secrecy Is Met With Skepticism

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/new-cryptography-method-promises-perfect-secrecy-amidst-skepticism

In the ongoing race to make and break digital codes, the idea of perfect secrecy has long hovered on the horizon like a mirage. A recent research paper has attracted both interest and skepticism for describing how to achieve perfect secrecy in communications by using specially patterned silicon chips to generate one-time keys that are impossible to re-create.

Personal Virtual Networks Could Give Everyone More Control Over Their Data

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/personal-virtual-networks-control-data

To keep us connected, our smartphones constantly switch between networks. They jump from cellular networks to public Wi-Fi networks in cafes, to corporate or university Wi-Fi networks at work, and to our own Wi-Fi networks at home. But we rarely have any input into the security and privacy settings of the networks to which we connect. In many cases, it would be tough to even figure out what those settings are.

A team at Northeastern University is developing personal virtual networks to give everyone more control over their online activities and how their information is shared. Those networks would allow a person’s devices to connect to cellular and Wi-Fi networks—but only on their terms.

How to Leverage Endpoint Detection and Response (EDR) in AWS Investigations

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_leverage_endpoint_detection_and_response_in_aws_investigations

Adding EDR capabilities to your AWS (Amazon Web Services) environment can inform investigations and provide actionable details for remediation. Attend this webinar to discover how to unpack and leverage the telemetry provided by endpoint security solutions, using MITRE ATT&CK cloud examples, such as Exploit Public-Facing Application (T1190) and Data Transfer to Cloud Account (T1537), by examining process trees. You will also find out how these solutions can help identify who has vulnerable software or configurations on their systems by leveraging indicators of compromise (IOCs), such as MD5 hashes, to pinpoint the depth and breadth of malware.

Researchers Exploit Low Entropy of IoT Devices to Break RSA Certificates

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/low-entropy-iot-internet-of-things-devices-security-news-rsa-encryption

Many Internet of Things (IoT) devices rely on RSA keys and certificates to encrypt data before sending it to other devices, but these security tools can be easily compromised, new research shows.

Researchers from digital identity management company Keyfactor were able to compromise 249,553 distinct keys corresponding to 435,694 RSA certificates using a single virtual machine from Microsoft Azure. They described their work in a paper presented at the IEEE Conference on Trust, Privacy, and Security in Intelligent Systems and Applications in December.

“With under $3,000 of compute time in Azure, we were able to break 435,000 certificates,” says JD Kilgallin, Keyfactor’s senior integration engineer and researcher. “We showed that this attack is very easy to execute now.”
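The underlying weakness is easy to demonstrate (a toy sketch with 5-digit primes; real RSA primes run to roughly 1,024 bits, and the researchers' batch-GCD approach runs this comparison across millions of moduli at once):

```python
import math

# Two IoT devices with weak randomness happen to generate the same prime p.
p = 10007    # the shared prime, produced twice by low-entropy RNGs
q1 = 10009   # device 1's other prime
q2 = 10037   # device 2's other prime

n1 = p * q1  # device 1's public RSA modulus
n2 = p * q2  # device 2's public RSA modulus

# A single gcd exposes the shared factor instantly: no factoring required,
# and gcd stays fast even for full-size 2048-bit moduli.
shared = math.gcd(n1, n2)
assert shared == p

# With p known, both moduli factor completely, so both private keys fall.
assert n1 // p == q1 and n2 // p == q2
```

Moduli built from independently random primes share no factors, so their gcd is 1; it is only the entropy-starved devices that betray each other this way.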