Tag Archives: Telecom

Apptricity Beams Bluetooth Signals Over 30 Kilometers

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/apptricity-beams-bluetooth-signals-over-20-miles

When you think about Bluetooth, you probably think about things like wireless headphones, computer mice, and other personal devices that utilize the short-range, low-power technology. That’s where Bluetooth has made its mark, after all—as an alternative to Wi-Fi, using unlicensed spectrum to make quick connections between devices.

But it turns out that Bluetooth can go much farther than the couple of meters for which most people rely on it. Apptricity, a company that provides asset and inventory tracking technologies, has developed a Bluetooth beacon that can transmit signals over 32 kilometers (20 miles). The company believes its beacon is a cheaper, secure alternative to established asset and inventory tracking technologies.

A quick primer, if you’re not entirely clear on asset tracking versus inventory tracking: There are some gray areas in the middle, but by and large, “asset tracking” refers to an IT department registering which employee has which laptop, or a construction company keeping tabs on where its backhoes are on a large construction site. “Inventory tracking” refers more to things like a retail store keeping correct product counts on the shelves, or a hospital noting how quickly it’s going through its store of gloves.

Asset and inventory tracking typically use labor-intensive techniques like barcode or passive RFID scanning, which are limited both by distance (a couple of meters at most, in both cases) and the fact that a person has to be directly involved in scanning. Alternatively, companies can use satellite or LTE tags to keep track of stuff. While such tags don’t require a person to actively track items, they are far more expensive, requiring a costly subscription to either a satellite or LTE network.

So, the burning question: How does one send a Bluetooth signal over 30-plus kilometers? Bluetooth’s range is typically limited for two reasons: Covering large distances would require a prohibitive amount of power, and because Bluetooth uses unlicensed spectrum, the greater the distance, the more likely a signal is to interfere with other wireless signals.

The key new wrinkle, according to Apptricity’s CEO Tim Garcia, is precise tuning within the Bluetooth spectrum. Garcia says it’s the same principle as a tightly focused laser beam: A laser will travel farther without its signal weakening beyond recovery if the photons making up the beam are all as close to a specific frequency as possible. Apptricity’s beacons use firmware developed by the company to achieve the same kind of precise tuning with Bluetooth signals. Thus, data can be sent and received by the beacons without interference and without requiring unwieldy amounts of power.
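To get a feel for why such a link is physically plausible at all, consider a back-of-envelope free-space link budget. The sketch below is illustrative only; the transmit power, antenna gains, and receiver sensitivity are assumptions chosen for the calculation, not Apptricity’s published specifications.

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss (Friis approximation), in dB,
    for a distance in kilometers and a frequency in megahertz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative link budget for a 2.4-GHz signal over 30 km.
fspl = free_space_path_loss_db(30, 2400)   # ~129.6 dB
tx_power_dbm = 20          # assumed transmit power
tx_antenna_gain_dbi = 24   # assumed high-gain directional antenna
rx_antenna_gain_dbi = 24   # assumed
rx_sensitivity_dbm = -110  # assumed long-range receiver sensitivity

received_dbm = tx_power_dbm + tx_antenna_gain_dbi + rx_antenna_gain_dbi - fspl
print(f"FSPL: {fspl:.1f} dB, received: {received_dbm:.1f} dBm")
print("link closes" if received_dbm > rx_sensitivity_dbm else "link fails")
```

With enough antenna gain and a sensitive receiver, the raw numbers can close even at 30 km; the hard part, as Garcia suggests, is keeping the signal clean enough to recover at the far end.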

Garcia says RFID tags and barcode scanning don’t actively provide information about assets or inventory. Bluetooth, however, can not only pinpoint where something is, it can send updates about a piece of equipment that needs maintenance or just requires a routine check-up.

By its own estimation, Apptricity’s Bluetooth beacons are 90 percent cheaper than LTE or satellite tags, specifically because Bluetooth devices don’t require paying for a subscription to an established network.

The company’s current transmission distance record for its Bluetooth beacons is 38 kilometers (23.6 miles). The company has also demonstrated non-commercial versions of the beacons for the U.S. Department of Defense with broadcast ranges between 80 and 120 kilometers.

How the U.S. Can Apply Basic Engineering Principles To Avoid an Election Catastrophe

Post Syndicated from Jonathan Coopersmith original https://spectrum.ieee.org/tech-talk/telecom/security/engineering-principles-us-election

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

The 2020 primary elections and caucuses in the United States earlier this year provide a textbook case of how technology and institutions can fail in a crisis, when we need them most. To recap: Americans looked on, many of them with incredulity, as people hoping to vote were forced to wait hours at the height of the initial wave of the COVID-19 pandemic. Elsewhere, there were delayed elections, tens of thousands of undelivered or uncounted mail-in ballots, and long delays in counting and releasing results. This in what is arguably the world’s most technologically advanced industrialized nation.

Given the persistence of COVID in the United States and domestic and foreign efforts to delegitimize its November presidential election, a repeat of the primaries could easily produce a massive visible disaster that will haunt the United States for decades. Indeed, doomsday scenarios and war-gaming “what ifs” have become almost a cottage industry. Fortunately, those earlier failures in the primaries provide a road map to a fair, secure, and accessible election—if U.S. officials are willing to learn and act quickly.

What happens in the 2020 U.S. election will reverberate all over the world, and not just for the usual reasons. Every democracy will face the challenge of organizing and ensuring safe and secure elections. People in most countries still vote in person by paper ballot, but like the United States, many countries are aggressively expanding options for how their citizens can vote.

Compared with the rest of the world, though, the United States stands apart in one fundamental aspect:  No single federal law governs elections. The 50 states and the District of Columbia each conduct their own elections under their own laws and regulations. The elections themselves are actually administered by 3,143 counties and equivalents within those states, which differ in resources, training, ballots, and interpretation of regulations. In Montana, for example, counties can automatically mail ballots to registered voters but are not required to. 

A similar diversity applies to the actual voting technology. In 2016, half of registered voters in the United States lived in areas that use only optically scanned paper ballots; a quarter lived in areas with direct-recording electronic (DRE) equipment, which creates only an electronic record of each vote; and the remainder lived in areas that use both types of systems or where residents vote entirely by mail, using paper ballots that are optically scanned. Over 1,800 small counties still collectively counted a million paper ballots by hand.

The failures during the primaries grew from a familiar litany: poor organization, untried technology, inadequate public information, and an inability to scale quickly. Counties that had the worst problems were usually ones that had introduced new voting software and hardware without adequately testing it, or without training operators and properly educating users.

There were some early warnings of trouble ahead. In February, with COVID not yet an issue in the United States, the Iowa Democratic caucus was thrown into chaos when the inadequately vetted IowaReporter smartphone app by Shadow, which was used to tabulate votes, broke down. The failure was compounded by malicious jamming of party telephone lines by American trolls. It took weeks to report final results. Georgia introduced new DRE equipment with inadequate training and in some cases, the wrong proportion of voting equipment to processing equipment, which delayed tabulation of the results.  

As fear of COVID spread, many states scaled up their absentee-voting options and reduced the number of polling places. In practice, “absentee voting” refers to the use of a paper ballot that is mailed to a voter, who fills it out and then returns it. A few states like Kentucky, with good coordination between elected leaders and election administrators, executed smooth mail-in primaries earlier this year. More common, however, were failures of the U.S. Postal Service and many state and local election offices to handle a surge of ballots. In New York, some races remained undecided three weeks after the primary because of slow receipt and counting of ballots. Election officials in 23 states rejected over 534,000 primary mail-in ballots, compared with 319,000 mail-in ballots for the 2016 general election.

The post office, along with procrastinating voters, has emerged as a critical failure node for absentee voting. Virginia rejected an astonishing 6 percent of ballots for lateness, compared with half a percent for Michigan. Incidentally, these cases of citizens losing their vote due to known difficulties with absentee ballots far, far outweigh the instances of voter fraud connected to absentee voting, contrary to the claims of certain politicians.

At this point, a technologically savvy person could be forgiven for wondering, why can’t we vote over the Internet? The Internet has been in widespread public use in developed countries for more than a quarter century. It’s been more than 40 years since the introduction of the personal computer, and about 20 since the first smartphones came out. And yet we still have no easy way to use these nearly ubiquitous tools to vote.

A subset of technologists has long dreamed of Internet voting, via an app. But in most of the world, it remains just that: a dream. The main exception is Estonia, where Internet voting experiments began 20 years ago. The practice is now mainstream there—in the country’s 2019 parliamentary elections, nearly 44 percent of voters voted over the Internet without any problems. In the United States, over the past few years, some 55 elections have used app-based Internet voting as an option for absentee voting. However, that’s a tiny fraction of the thousands of elections conducted by municipalities during that period. Despite Estonia’s favorable experiences with what it calls “i-voting,” in much of the rest of the world concerns about security, privacy, and transparency have kept voting over the Internet in the realm of science fiction.

The rise of blockchain, a software-based system for guaranteeing the validity of a chain of transactions, sparked new hopes for Internet voting. West Virginia experimented with blockchain absentee voting in 2018, but election technology experts worried about possible vulnerabilities in recording, counting, and storing an auditable vote without violating the voter’s privacy.  The lack of transparency by the system provider, Voatz, did not dispel these worries. After a report from MIT’s Internet Policy Research Initiative reinforced those concerns, West Virginia canceled plans to use blockchain voting in this year’s primary.    

We can argue all we want about the promise and perils of Internet voting, but it won’t change the fact that this option won’t be available for this November’s general election in the United States. So officials will have to stick with tried-and-true absentee-voting techniques, improving them to avoid the fiascoes of the recent past. Fortunately, this shouldn’t be hard. Think of shoring up this election as an exercise involving flow management, human-factors engineering, and minimizing risk in a hostile (political) environment—one with a low signal-to-noise ratio. 

This coming November 3 will see a record voter turnout in the United States, an unprecedented proportion of which will be voting early and by mail, all during an ongoing pandemic in an intensely partisan political landscape with domestic and foreign actors trying to disrupt or discredit the election. To cope with such numbers, we’ll need to “flatten the curve.” A smoothly flowing election will require encouraging as many people as possible to vote in the days and weeks before Election Day and changing election procedures and rules to accommodate those early votes.  

That tactic will of course create a new challenge: handling the tens of millions of people voting by mail in a major acceleration of the U.S. trend of voting before Election Day. Historically, U.S. voters could cast an absentee ballot by mail only if they were out of state or had another state-approved excuse. But in 2000, Oregon pioneered the practice of voting by mail exclusively. There is no longer any in-person voting in Oregon—and, it is worth noting, Oregon never experienced any increases in fraud as it transitioned to voting by mail.

Overall, mail-in voting in U.S. presidential elections doubled from 12 percent (14 million) of all votes in 2004 to 24 percent (33 million) of votes cast in 2016. Those numbers, however, hide great diversity: Ninety-seven percent of voters in the state of Washington but only 2 percent of West Virginians voted by mail in 2016. Early voting (in person at a polling station open before Election Day) also expanded from 8 percent to 17 percent of all votes over that same 12-year period.

Today, absentee voting and vote-by-mail are essentially equivalent as more states relax restrictions on mail-in voting. In 2020, five more states (Washington, Colorado, Utah, Hawaii, and California) will join Oregon in voting by mail exclusively. A COVID-induced relaxation of absentee-ballot rules means that over 190 million Americans, not quite two-thirds of the total population, will have a straightforward vote-by-mail option this fall.

Whether they will be able to do so confidently and successfully is another question. The main concern with voting by mail is rejected ballots. The overall rejection rate for ballots at traditional, in-person voting places in the United States is 0.01 percent. Compare that with a 1 percent rejection rate for mail-in ballots in Florida in 2018 and a deeply dismaying 6 percent rate for Virginia in its 2020 primary.

Nearly all of the Virginia ballots were rejected because they arrived late, reflecting the inexperience of the many people voting by mail for the first time. Forgetting to sign a ballot was another common reason for rejection. But votes were also refused because of regulations that some might deem overly strict—a tear in an envelope is enough to get a mail-in vote nixed in some districts. These verification procedures are less forgiving of errors, and first-time voters, especially ethnic and racial minorities, have their ballots rejected more frequently. Post office delivery failures also contributed.

We already know how to deal with all this and thereby minimize ballot rejection. States could automatically send ballots to registered voters weeks before the actual election date. Voters could fill out their ballot, seal it in an envelope, and place that envelope inside a larger envelope, which they would sign. A bar code on that outside envelope would allow the voter and election administrators to track its location. It is vitally important for voters to have feedback that confirms their vote has been received and counted. 

This ballot could be mailed, deposited in a secure drop box, or returned in person to the local election office for processing. In 2016, more than half the voters in vote-by-mail states returned their ballots not by mail but by using secure drop boxes or visiting their local election offices.

The signature on the outer envelope would be verified against a signature on file, either from a driver’s license or on a voting app, to guard against fraud. If the signature appeared odd or if some other problem threatened the ballot’s rejection, the election office would contact the voter by text, email, or phone to sort out the problem. The voter could text a new signature, for example.  Once verified, the ballots could be promptly counted, either before or on Election Day.

The problem with these best practices is that they are not universal. A few states, including Arizona, already employ such procedures and enjoy very low rates of rejected mail-in ballots: Maricopa County, Ariz., had a mail-in ballot rejection rate of just 0.03 percent in 2018, roughly on a par with in-person voting. Most states, however, lack these procedures and infrastructure: Only 13 percent of mail-in ballots this primary season had bar codes.

The ballots themselves could stand some better human-factors engineering. Too often, it is too challenging to correctly fill out a ballot or even an application for a ballot. In 2000, a poorly designed ballot in Florida’s Palm Beach County may have deprived Al Gore of Florida’s 25 electoral votes, and therefore the presidency. And in Travis County, Texas, a complex, poorly designed application to vote by mail was incorrectly filled out by more than 4,500 voters earlier this year. Their applications rejected, they had to choose on Election Day between not voting or going to the polls and risking infection. And yet help is readily available: Groups like the Center for Civic Design can provide best practices.

Training on signature verification also varies widely within and among states. Only 20 states now require that election officials give voters an opportunity to correct a disqualified mail-in ballot.

Timely processing is the final mail-in challenge. Eleven states do not start processing absentee ballots until Election Day, three start the day after, and three start the day before. In a prepandemic election, mail-in ballots made up a smaller share of all votes, so the extra time needed for processing and counting was relatively minor. Now with mail-in ballots potentially making up over half of all votes in the United States, the time needed to process and count ballots may delay results for days or weeks. In the current political climate of suspicion and hyper-partisanship, that could be disastrous—unless people are informed about it and expecting it.

The COVID-19 pandemic is strongly accelerating a trend toward absentee voting that began a couple of decades ago. Contrary to what many people were anticipating five or 10 years ago, though, most of the world is not moving toward Internet voting but rather to a more advanced version of what they’re using already. That Estonia has done so well so far with i-voting offers a tantalizing glimpse of a possible future. For 99.998 percent of the world’s population, though, the paper ballot will reign for the foreseeable future. Fortunately, major technological improvements envisioned over the next decade will increase the security, reliability, and speedy processing of paper ballots, whether they’re cast in person or by mail. 

Jonathan Coopersmith is a Professor at Texas A&M University, where he teaches the history of technology.  He is the author of FAXED: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015).  His current interests focus on the importance of froth, fraud, and fear in emerging technologies.  For the last decade, he has voted early and in person. 

100 Million Zoom Sessions Over a Single Optical Fiber

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/single-optical-fibers-100-million-zoom


A team at the Optical Networks Group at University College London has sent 178 terabits per second through a commercial singlemode optical fiber that has been on the market since 2007. It’s a record for the standard singlemode fiber widely used in today’s networks, and twice the data rate of any system now in use. The key to their success was transmitting across a spectral range of 16.8 terahertz, more than double the broadest range in commercial use.

The goal is to expand the capacity of today’s installed fiber network to serve the relentless demand for more bandwidth for Zoom meetings, streaming video, and cloud computing. Digging holes in the ground to lay new fiber-optic cables can run over $500,000 a kilometer in metropolitan areas, so upgrading transmission of fibers already in the ground by installing new optical transmitters, amplifiers, and receivers could save serious money. But it will require a new generation of optoelectronic technology.

A new generation of fibers has been in development for the past few years, promising higher capacity by carrying signals along multiple paths through a single fiber. Called spatial-division multiplexing, the idea has been demonstrated in fibers with multiple cores, in multiple modes through individual cores, and in combinations of multiple modes and multiple cores. It has set capacity records for single fibers, but the technology is immature and would require the expensive laying of new fibers. Boosting the capacity of fibers already in the ground would be faster and cheaper. Moreover, many installed fibers remain dark, carrying no traffic or transmitting on only a few of the roughly 100 available wavelengths, making them a hot commodity for data networks.

“The fundamental issue is how much bandwidth we can get” through installed fibers, says Lidia Galdino, a University College London lecturer who leads a team including engineers from equipment maker Xtera and Japanese telecom firm KDDI. For a baseline they tested Corning Inc.’s SMF-28 ULL (ultra-low-loss) fiber, which has been on the market since 2007. With a pure silica core, its attenuation is specified at no more than 0.17 dB/km at the 1550-nanometer minimum-loss wavelength, close to the theoretical limit. It can carry 100-gigabit/second signals more than a thousand kilometers through a series of amplifiers spaced every 125 km.

Generally, such long-haul fiber systems operate in the C band of wavelengths from 1530 to 1565 nm. A few also operate in the L band from 1568 to 1605 nm, most notably the world’s highest-capacity submarine cable, the 13,000-km Pacific Light Cable, with nominal capacity at 24,000 gigabits per second on each of six fiber pairs. Both bands use well-developed erbium-doped fiber amplifiers, but that’s about the limit of their spectral range.

To cover a broader spectral range, UCL added the largely unused wavelengths of 1484 to 1520 nm in the shorter-wavelength S band. That required new amplifiers that used thulium to amplify those wavelengths. Because only two thulium amplifiers were available, they also added Raman-effect fiber amplifiers to balance gain across that band. They also used inexpensive semiconductor optical amplifiers to boost signals reaching the receiver after passing through 40 km of fiber. 

Another key to success was the modulation format. “We encoded the light in the best possible way,” using a geometrically shaped quadrature amplitude modulation (QAM) format to take advantage of differences in signal quality between bands. “Usually commercial systems use 64 points, but we went to 1024 [QAM levels]…an amazing achievement” for the best-quality signals, Galdino said.

This experiment, reported in IEEE Photonics Technology Letters, is only the first in a planned series. Their results are close to the Shannon limit on communication rates imposed by noise in the channel. The next step, she says, will be buying more optical amplifiers so they can extend transmission beyond 40 km.
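Some quick arithmetic, using only the figures reported here, shows how aggressive the result is. The sketch below is illustrative; note that real coherent systems carry data on two polarizations, so the per-polarization spectral efficiency is roughly half the headline figure.

```python
import math

# Figures from the article: 178 Tb/s across 16.8 THz of fiber spectrum.
throughput_bps = 178e12
bandwidth_hz = 16.8e12

spectral_efficiency = throughput_bps / bandwidth_hz  # ~10.6 (b/s)/Hz
print(f"Spectral efficiency: {spectral_efficiency:.1f} (b/s)/Hz")

# Bits per symbol for the QAM constellations mentioned:
print(f"64-QAM:   {math.log2(64):.0f} bits/symbol")
print(f"1024-QAM: {math.log2(1024):.0f} bits/symbol")

# Shannon: C = B * log2(1 + SNR). The SNR needed to hit ~10.6 (b/s)/Hz
# in a single polarization (illustrative; dual-polarization systems
# need about half this efficiency per polarization):
snr_linear = 2 ** spectral_efficiency - 1
print(f"SNR needed: {10 * math.log10(snr_linear):.1f} dB")
```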

“This is fundamental research on the maximum capacity per channel,” Galdino says. The goal is to find limits, rather than to design new equipment. Their complex system used more than US $2.6 million of equipment, including multiple types of amplifiers and modulation schemes. It’s a testbed optimized not for cost, performance, or reliability but for experimental flexibility. Industry will face the challenge of developing detectors, receivers, amplifiers, and high-quality lasers on new wavelengths, work that has already started. If it succeeds, a single fiber pair will be able to carry enough video for all 50 million school-age children in the US to be on two Zoom video channels at once.

Unlock Wireless Test Capabilities On Your RF Gear

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/unlock_wireless_test_capabilities_on_your_rf_gear

Discover how easy it is to update your instrument to test the latest wireless standards.

We’re offering 30-day software trials that evolve test capabilities on your signal analyzers and signal generators. Automatically generate or analyze signals for many wireless applications.

Choose from our more popular applications:

  • Bluetooth®
  • WLAN 802.11
  • Vector Modulation Analysis
  • And more

Predicting the Lifespan of an App

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/internet/predicting-the-lifespan-of-an-app

The number of apps smartphone users have to choose from is daunting, with roughly 2 million available through Apple’s App Store alone. But survival of the fittest applies to the digital world too, and not all of these apps will go on to become the next TikTok. In a study published 29 July in IEEE Transactions on Mobile Computing, researchers describe a new model for predicting the long-term survival of apps, which outperforms seven existing designs.

“For app developers, understanding and tracking the popularity of an app is helpful for them to act in advance to prevent or alleviate the potential risks caused by the dying apps,” says Bin Guo, a professor at Northwestern Polytechnical University who helped develop the new model.

“Furthermore, the prediction of app life cycle is crucial for the decision-making of investors. It helps evaluate and assess whether the app is promising for the investors with remarkable rewards, and provides in advance warning to avoid investment failures.”

In developing their new model, AppLife, Guo’s team took a multi-task learning (MTL) approach. This involves dividing data on apps into segments based on time and analyzing factors, such as download history, ratings, and reviews, at each time interval. AppLife then predicts the likelihood of an app being removed within the next one or two years.

The researchers evaluated AppLife using a real-world dataset of more than 35,000 apps from Apple’s App Store that were available in 2016 but had been released the previous year. “Experiments show that our approach outperforms seven state-of-the-art methods in app survival prediction. Moreover, the precision and the recall reach up to 84.7% and 95.1%, respectively,” says Guo.
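As a toy illustration of the general recipe (time-segmented features feeding a survival classifier, then scoring by precision and recall), consider the sketch below. It is not the authors’ AppLife model; the features, labels, and classifier are hypothetical stand-ins.

```python
# Toy illustration (not Guo et al.'s AppLife): segment each app's history
# into time windows, build per-window features, and fit a survival classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_apps, n_windows = 1000, 4

# Hypothetical per-window features: downloads, mean rating, review count.
X = rng.random((n_apps, n_windows * 3))
# Hypothetical label: 1 if the app was removed within the following year.
y = (X[:, :n_windows].mean(axis=1) < 0.4).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"precision={precision_score(y_te, pred):.3f}, "
      f"recall={recall_score(y_te, pred):.3f}")
```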

Intriguingly, AppLife was particularly good at predicting the survival of apps for tools—even more so than apps for news and video. Guo says this could be because more apps for tools exist in the dataset, feeding the model with more data to improve its performance in this respect. Or, he says, it could be caused by greater competition among tool apps, which in turn leads to more detailed and consistent user feedback.

Moving forward, Guo says he plans on building upon this work. While AppLife currently looks at factors related to individual apps, Guo is interested in exploring interactions among apps, for example which ones complement each other. Analyzing the usage logs of apps is another area of interest, he says.

For the IoT, User Anonymity Shouldn’t Be an Afterthought. It Should Be Baked In From the Start

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/for-the-iot-user-anonymity-shouldnt-be-an-afterthought-it-should-be-baked-in-from-the-start

The Internet of Things has the potential to usher in many possibilities—including a surveillance state. In the July issue, I wrote about how user consent is an important prerequisite for companies building connected devices. But there are other ways companies are trying to ensure that connected devices don’t invade people’s privacy.

Some IoT businesses are designing their products from the start to discard any personally identifiable information. Andrew Farah, the CEO of Density, which developed a people-counting sensor for commercial buildings, calls this “anonymity by design.” He says that rather than anonymizing a person’s data after the fact, the goal is to design products that make it impossible for the device maker to identify people in the first place.

“When you rely on anonymizing your data, then you’re only as good as your data governance,” Farah says. With anonymity by design, you can’t give up personally identifiable information, because you don’t have it. Density, located in Macon, Ga., settled on a design that uses four depth-perceiving sensors to count people by using height differentials.

Density could have chosen to use a camera to easily track the number of people in a building, but Farah balked at the idea of creating a surveillance network. Taj Manku, the CEO of Cognitive Systems, was similarly concerned about the possibilities of his company’s technology. Cognitive, in Waterloo, Ont., Canada, developed software that interprets Wi-Fi signal disruptions in a room to understand people’s movements.

With the right algorithm, the company’s software could tell when someone is sleeping or going to the bathroom or getting a midnight snack. I think it’s natural to worry about what happens if a company could pull granular data about people’s behavior patterns.

Manku is worried about information gathered after the fact, like if police issued a subpoena for Wi-Fi disruption data that could reveal a person’s actions in their home. Cognitive does data processing on the device and then dumps that data. Nothing identifiable is sent to the cloud. Likewise, customers who buy Cognitive’s software can’t access the data on their devices, just the insight. In other words, the software would register a fall, without including a person’s earlier actions.

“You have to start thinking about it from day one when you’re architecting the product, because it’s very hard to think about it after,” Manku says. It’s difficult to shut things down retroactively to protect privacy. It’s best if sensitive information stays local and gets purged.

Companies that promote anonymity will lose helpful troves of data that could otherwise be used to train future machine-learning models and optimize their devices’ performance. Cognitive gets around this limitation by having a set of employees and friends volunteer their data for training. Other companies decide they don’t want to get into the analytics market, or they take a more arduous route to acquire training data for improving their devices.

If nothing else, companies should embrace anonymity by design in light of the growing amount of comprehensive privacy legislation around the world, like the General Data Protection Regulation in Europe and the California Consumer Privacy Act. Not only will it save them from lapses in their data-governance policies, it will guarantee that when governments come knocking for surveillance data, these businesses can turn them away easily. After all, you can’t give away something you never had.

This article appears in the September 2020 print issue as “Anonymous by Design.”

Building Your Own Cellphone Network Can Be Empowering, and Also Problematic

Post Syndicated from Roberto J. González original https://spectrum.ieee.org/telecom/wireless/building-your-own-cellphone-network-can-be-empowering-and-problematic

During the summer of 2013, news reports recounted how a Mexican pueblo launched its own do-it-yourself cellphone network. The people of Talea de Castro, population 2,400, created a mini-telecom company, the first of its kind in the world, without help from the government or private companies. They built it after Mexico’s telecommunications giants failed to provide mobile service to the people in the region. It was a fascinating David and Goliath story that pitted the country’s largest corporation against indigenous villagers, many of whom were Zapotec-speaking subsistence farmers with little formal education.

The reports made a deep impression on me. In the 1990s, I’d spent more than two years in Talea, located in the mountains of Oaxaca in southern Mexico, working as a cultural anthropologist.

Reading about Talea’s cellular network inspired me to learn about the changes that had swept the pueblo since my last visit. How was it that, for a brief time, the villagers became international celebrities, profiled by USA Today, BBC News, Wired, and many other media outlets? I wanted to find out how the people of this community managed to wire themselves into the 21st century in such an audacious and dramatic way—how, despite their geographic remoteness, they became connected to the rest of the world through the wondrous but unpredictably powerful magic of mobile technology.

I naively thought it would be the Mexican version of a familiar plot line that has driven many Hollywood films: Small-town men and women fearlessly take on big, bad company. Townspeople undergo trials, tribulations, and then…triumph! And they all lived happily ever after.

I discovered, though, that Talea’s cellphone adventure was much more complicated—neither fairy tale nor cautionary tale. And this latest phase in the pueblo’s centuries-long effort to expand its connectedness to the outside world is still very much a work in progress.

Sometimes it’s easy to forget that 2.5 billion people—one-third of the planet’s population—don’t have cellphones. Many of those living in remote regions want mobile service, but they can’t have it for reasons that have to do with geography, politics, or economics.

In the early 2000s, a number of Taleans began purchasing cellphones because they traveled frequently to Mexico City or other urban areas to visit relatives or to conduct business. Villagers began petitioning Telcel and Movistar, Mexico’s largest and second-largest wireless operators, for cellular service, and they became increasingly frustrated by the companies’ refusal to consider their requests. Representatives from the firms claimed that it was too expensive to build a mobile network in remote locations—this, in spite of the fact that Telcel’s parent company, América Móvil, is Mexico’s wealthiest corporation. The board of directors is dominated by the Slim family, whose patriarch, Carlos Slim, is among the world’s richest men.

But in November 2011, an opportunity arose when Talea hosted a three-day conference on indigenous media. Kendra Rodríguez and Abrám Fernández, a married couple who worked at Talea’s community radio station, were among those who attended. Rodríguez is a vivacious and articulate young woman who exudes self-confidence. Fernández, a powerfully built man in his mid-thirties, chooses his words carefully. Rodríguez and Fernández are tech savvy and active on social media. (Note: These names are pseudonyms.)

Also present were Peter Bloom, a U.S.-born rural-development expert, and Erick Huerta, a Mexican lawyer specializing in telecommunications policy. Bloom had done human rights work in Nigeria in rural areas where “it wasn’t possible to use existing networks because of security concerns and lack of finances,” he explains. “So we began to experiment with software that would enable communication between phones without relying upon any commercial companies.” Huerta had worked with the United Nations’ International Telecommunication Union.

The two men founded Rhizomatica, a nonprofit focused on expanding cellphone service in indigenous areas around the world. At the November 2011 conference, they approached Rodríguez and Fernández about creating a community cell network in Talea, and the couple enthusiastically agreed to help.

In 2012, Rhizomatica assembled a ragtag team of engineers and hackers to work on the project. They decided to experiment with open-source software called OpenBTS—Open Base Transceiver Station—which allows cell phones to communicate with each other if they are within range of a base station. Developed by David Burgess and Harvind Samra, cofounders of the San Francisco Bay Area startup Range Networks, the software also allows networked phones to connect over the Internet using VoIP, or Voice over Internet Protocol. OpenBTS had been successfully deployed at Burning Man and on Niue, a tiny Polynesian island nation.

Bloom’s team began adapting OpenBTS for Talea. The system requires electrical power and an Internet connection to work. Fortunately, Talea had enjoyed reliable electricity for nearly 40 years and had gotten Internet service in the early 2000s.

In the meantime, Huerta pulled off a policy coup: He convinced federal regulators that indigenous communities had a legal right to build their own networks. Although the government had granted telecom companies access to nearly the entire radio-frequency spectrum, small portions remained unoccupied. Bloom and his team decided to insert Talea’s network into these unused bands. They would become digital squatters.

As these developments were under way, Rodríguez and Fernández were hard at work informing Taleans about the opportunity that lay before them. Rodríguez used community radio to pique interest among her fellow villagers. Fernández spoke at town hall meetings and coordinated public information sessions at the municipal palace. Soon they had support from hundreds of villagers eager to get connected. Eventually, they voted to invest 400,000 pesos (approximately US $30,000) of municipal funding for equipment to build the cellphone network. By doing so, the village assumed a majority stake in the venture.

In March 2013, the cellphone network was finally ready for a trial run. Volunteers placed an antenna near the town center, in a location they hoped would provide adequate coverage of the mountainside pueblo. As they booted up the system, they looked for the familiar signal bars on their cellphone screens, hoping for a strong reception.

The bars light up brightly—success!

The team then walked through the town’s cobblestone streets in order to detect areas with poor reception. As they strolled up and down the paths, they heard laughter and shouting from several houses. People emerged from their homes, stunned.

“We’ve got service!” exclaimed a woman in disbelief.

“It’s true, I called the state capital!” cried a neighbor.

“I just connected to my son in Guadalajara!” shouted another.

Although Rodríguez, Fernández, and Bloom knew that many Taleans had acquired cellphones over the years, they didn’t realize the extent to which the devices were being used every day—not for communications but for listening to music, taking photos and videos, and playing games. Back at the base station, the network computer registered more than 400 cellphones within the antenna’s reach. People were already making calls, and the number was increasing by the minute as word spread throughout town. The team shut down the system to avoid an overload.

As with any experimental technology, there were glitches. Outlying houses had weak reception. Inclement weather affected service, as did problems with Internet connectivity. Over the next six months, Bloom and his team returned to Talea every week, making the 4-hour trip from Oaxaca City in a red Volkswagen Beetle.

Their efforts paid off. The community network proved to be immensely popular, and more than 700 people quickly subscribed. The service was remarkably inexpensive: Local calls and text messages were free, and calls to the United States were only 1.5 cents per minute, approximately one-tenth of what it cost using landlines. A 20-minute call to the United States from the town’s phone kiosk, for example, might cost a campesino a full day’s wages.

Initially, the network could handle only 11 phone calls at a time, so villagers voted to impose a 5-minute limit to avoid overloads. Even with these controls, many viewed the network as a resounding success. And patterns of everyday life began to change, practically overnight: Campesinos who walked 2 hours to get to their corn fields could now phone family members if they needed provisions. Elderly women collecting firewood in the forest had a way to call for help in case of injury. Youngsters could send messages to one another without being monitored by their parents, teachers, or peers. And the pueblo’s newest company—a privately owned fleet of three-wheeled mototaxis—increased its business dramatically as prospective passengers used their phones to summon drivers.

Soon other villages in the region followed Talea’s lead and built their own cellphone networks—first, in nearby Zapotec-speaking pueblos and later, in adjacent communities where Mixe and Mixtec languages are spoken. By late 2015, the villages had formed a cooperative, Telecomunicaciones Indígenas Comunitarias, to help organize the villages’ autonomous networks. Today TIC serves more than 4,000 people in 70 pueblos.

But some in Talea de Castro began asking questions about management and maintenance of the community network. Who could be trusted with operating this critical innovation upon which villagers had quickly come to depend? What forms of oversight would be needed to safeguard such a vital part of the pueblo’s digital infrastructure? And how would villagers respond if outsiders tried to interfere with the fledgling network?

On a chilly morning in May 2014, Talea’s Monday tianguis, or open-air market, began early and energetically. Some merchants descended upon the mountain village by foot alongside cargo-laden mules. But most arrived on the beds of clattering pickup trucks. By daybreak, dozens of merchants were displaying an astonishing variety of merchandise: produce, fresh meat, live farm animals, eyeglasses, handmade huaraches, electrical appliances, machetes, knockoff Nike sneakers, DVDs, electronic gadgets, and countless other items.

Hundreds of people from neighboring villages streamed into town on this bright spring morning, and the plaza came alive. By midmorning, the pueblo’s restaurants and cantinas were overflowing. The tianguis provided a welcome opportunity to socialize over coffee or a shot of fiery mezcal.

As marketgoers bustled through the streets, a voice blared over the speakers of the village’s public address system, inviting people to attend a special event—the introduction of a new, commercial cellphone network. Curious onlookers congregated around the central plaza, drawn by the sounds of a brass band.

As music filled the air, two young men in polo shirts set up inflatable “sky dancers,” then assembled tables in front of the stately municipal palace. The youths draped the tables with bright blue tablecloths, each imprinted with the logo of Movistar, Mexico’s second-largest wireless provider. Six men then filed out of the municipal palace and sat at the tables, facing the large audience.

What followed was a ceremony that was part political rally, part corporate event, part advertisement—and part public humiliation. Four of the men were politicians affiliated with the authoritarian Institutional Revolutionary Party, which has dominated Oaxaca state politics for a century. They were accompanied by two Movistar representatives. A pair of young, attractive local women stood nearby, wearing tight jeans and Movistar T-shirts.

The politicians and company men had journeyed to the pueblo to inaugurate a new commercial cellphone service, called Franquicias Rurales (literally, “Rural Franchises”). As the music faded, one of the politicians launched a verbal attack on Talea’s community network, declaring it to be illegal and fraudulent. Then the Movistar representatives presented their alternative plan.

A cellphone antenna would soon be installed to provide villagers with Movistar cellphone service—years after a group of Taleans had unsuccessfully petitioned the company to do precisely that. Movistar would rent the antenna to investors, and the franchisees would sell service plans to local customers in turn. Profits would be divided between Movistar and the franchise owners.

The grand finale was a fiesta, where the visitors gave away cheap cellphones, T-shirts, and other swag as they enrolled villagers in the company’s mobile-phone plans.

The Movistar event came at a tough time for the pueblo’s network. From the beginning, it had experienced technical problems. The system’s VoIP technology required a stable Internet connection, which was not always available in Talea. As demand grew, the problems continued, even after a more powerful system manufactured by the Canadian company Nutaq/NuRAN Wireless was installed. Sometimes, too many users saturated the network. Whenever the electricity went out, which is not uncommon during the stormy summer months, technicians would have to painstakingly reboot the system.

In early 2014, the network encountered a different threat: opposition from the community. Popular support for the network was waning amidst a growing perception that those in charge of maintaining the network were mismanaging the enterprise. Soon, village officials demanded that the network repay the money granted by the municipal treasury. When the municipal government opted to become Talea’s Movistar franchise owner, they used these repaid funds to pay for the company’s antenna. The homegrown network lost more than half of its subscribers in the months following the company’s arrival. In March 2019, it shut down for good.

No one forced Taleans to turn their backs on the autonomous cell network that had brought them international fame. There were many reasons behind the pueblo’s turn to Movistar. Although the politicians who condemned the community network were partly responsible, the cachet of a globally recognized brand also helped Movistar succeed. The most widely viewed television events in the pueblo (as in much of the rest of the world) are World Cup soccer matches. As Movistar employees were signing Taleans up for service during the summer of 2014, the Mexican national soccer team could be seen on TV wearing jerseys that prominently featured the telecom company’s logo.

But perhaps most importantly, young villagers were drawn to Movistar because of a burning desire for mobile data services. Many teenagers from Talea attended high school in the state capital, Oaxaca City, where they became accustomed to the Internet and all that it offers.

Was Movistar’s goal to smash an incipient system of community-controlled communication, destroy a bold vision for an alternative future, and prove that villagers are incapable of managing affairs themselves? It’s hard to say. The company’s goals were probably motivated more by long-term financial objectives: to increase market share by extending its reach into remote regions.

And yet, despite the demise of Talea’s homegrown network, the idea of community-based telecoms had taken hold. The pueblo had paved the way for significant legal and regulatory changes that made it possible to create similar networks in nearby communities, which continue to thrive.

The people of Talea have long been open to outside ideas and technologies, and have maintained a pragmatic attitude and a willingness to innovate. The same outlook that led villagers to enthusiastically embrace the foreign hackers from Rhizomatica also led them to give Movistar a chance to redeem itself after having ignored the pueblo’s earlier appeals. In this town, people greatly value convenience and communication. As I tell students at my university, located in the heart of Silicon Valley, villagers for many years have longed to be reliably and conveniently connected to each other—and to the rest of the world. In other words, Taleans want cellphones because they want to be like you.

This article is based on excerpts from Connected: How a Mexican Village Built Its Own Cell Phone Network (University of California Press, 2020).

About the Author

Roberto J. González is chair of the anthropology department at San José State University.

How to Improve Threat Detection and Hunting in the AWS Cloud Using the MITRE ATT&CK Matrix

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_improve_threat_detection_and_hunting_in_the_aws_cloud

SANS and AWS Marketplace will discuss the exercise of applying MITRE’s ATT&CK Matrix to the AWS Cloud. They will also explore how to enhance threat detection and hunting in an AWS environment to maintain a strong security posture.

5G Just Got Weird

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/standards/5g-release-16

The only reason you’re able to read this right now is because of the Internet standards created by the Internet Engineering Task Force. So while standards may not always be the most exciting thing in the world, they make exciting things possible. And occasionally, even the standards themselves get weird.

That’s the case with the recent 5G standards codified by the 3rd Generation Partnership Project (3GPP), the industry group that establishes the standards for cellular networks. 3GPP finalized Release 16 on July 3.

Release 16 is where things are getting weird for 5G. While earlier releases focused on the core of 5G as a generation of cellular service, Release 16 lays the groundwork for new services that have never been addressed by cellular before. At least, not in such a rigorous, comprehensive way.

“Release 15 really focused on the situation we’re familiar with,” says Danny Tseng, a director of technical marketing at Qualcomm, referring to cellular service, adding, “release 16 really broke the barrier” for connected robots, cars, factories, and dozens of other applications and scenarios.

4G and other earlier generations of cellular focused on just that: cellular. But when 3GPP members started gathering to hammer out what 5G could be, there was interest in developing a wireless system that could do more than connect phones. “The 5G vision has always been this unifying platform,” says Tseng. When developing 5G standards, researchers and engineers saw no reason that wireless cellular couldn’t also be used to connect anything wireless.

With that in mind, here’s an overview of what’s new and weird in Release 16. If you’d rather pore over the standards yourself, here you go. But if you’d rather not drown in page after page of technical details, don’t worry—keep reading for the cheat sheet.

Vehicle-to-Everything

One of the flashiest things in Release 16 is V2X, short for “Vehicle to Everything.” In other words, using 5G for cars to communicate with each other and everything else around them. Hanbyul Seo, an engineer at LG Electronics, says V2X technologies have previously been standardized in IEEE 802.11p and 3GPP LTE V2X, but that the intention in these cases was to enable basic safety services. Seo is one of the rapporteurs for 3GPP’s item on V2X, meaning he was responsible for reporting on the item’s progress to 3GPP.

In defining V2X for 5G, Seo says the most challenging thing was to provide high data throughput, reliability, and low latency, all of which are essential for anything beyond the most basic communications. Seo explains that earlier standards typically deal with messages of hundreds of bytes that are expected to reach 90 percent of receivers in a 300-meter radius within a few hundred milliseconds. The 3GPP standards bring those benchmarks into the realm of gigabits per second, 99.999 percent reliability, and just a few milliseconds.

Sidelinking

Matthew Webb, a 3GPP delegate for Huawei and the other rapporteur for the 3GPP item on V2X, adds that Release 16 also introduces a new technique called sidelinking. Sidelinks will allow 5G-connected vehicles to communicate directly with one another, rather than going through a cell-tower intermediary. As you might imagine, that can make a big difference for cars speeding past each other on a highway as they alert each other about their positions.

Tseng says that sidelinking started as a component of the V2X work, but it can theoretically apply to any two devices that might need to communicate directly rather than go through a base station first. Factory robots are one example, or large-scale Internet of Things installations.

Location Services

Release 16 also includes information on location services. In past generations of cellular, three cell towers were required to triangulate where a phone was, each tower estimating its distance to the phone from a signal’s round trip. But 5G networks will be able to use the round-trip time from a single tower to locate a device. That’s because massive MIMO and beamforming allow 5G towers to send precise signals directly to devices, and so the network can measure the direction and angle of a beam, along with the device’s distance from the tower, to pinpoint the device’s location.
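A minimal geometry sketch of the single-tower idea, assuming the network can estimate a beam’s azimuth and elevation along with the signal’s round-trip time (the numbers and function are illustrative, not from the 3GPP specification):

```python
import math

C = 299_792_458  # speed of light, m/s

def locate_device(rtt_s: float, azimuth_deg: float, elevation_deg: float):
    """Toy single-tower positioning: range from round-trip time,
    direction from the beam's azimuth and elevation (tower at origin)."""
    dist = C * rtt_s / 2  # one-way distance from the round-trip time
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = dist * math.cos(el) * math.cos(az)
    y = dist * math.cos(el) * math.sin(az)
    z = dist * math.sin(el)
    return x, y, z

# A 2-microsecond round trip puts the device ~300 m away along the beam.
print(locate_device(2e-6, azimuth_deg=45, elevation_deg=-1))
```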

Private Networks

Then there’s private networks. When we think of cellular networks, we tend to think of wide networks that cover lots of ground so that you can always be sure you have a signal. But 5G incorporates millimeter waves, which are higher frequency radio waves (30 to 300 GHz) that don’t travel nearly as far as traditional cell signals. Millimeter waves means it will be possible to build a network just for an office building, factory, or stadium. At those scales, 5G could function essentially like Wi-Fi networks.

Unlicensed Spectrum

The last area to touch on is Release 16’s details on unlicensed spectrum. Jing Sun, an engineer at Qualcomm and the 3GPP’s rapporteur on the subject, says Release 16 is the first time unlicensed spectrum has been included in 5G cellular service. According to Sun, it made sense to expand 5G into unlicensed spectrum in the 5- and 6-GHz bands because those bands are widely available around the world and ideal for use now that 5G is pushing cellular service into higher frequency bands. Unlicensed spectrum could be key for private networks because, just like Wi-Fi, those networks could use the spectrum without having to go through the rigorous process of licensing a frequency band that may or may not be available.

Release 17 Will “Extend Reality”

Release 16 has introduced a lot of new areas for 5G service, but very few of these areas are finished. “The Release 17 scope was decided last December,” says Tseng. “We’ve got a pretty good idea of what’s in there.” In general, that means building on a lot of the blocks established in Release 16. For example, Release 17 will include more mechanisms by which devices—not just cars—can sidelink.

And it will include entirely new things as well. Release 17 includes a work item on extended reality—the catch-all term for augmented-reality and virtual-reality technologies. Tseng says there is also an item about NR-Lite, which will attempt to address current gaps in IoT coverage using just 20 megahertz of bandwidth. Sun says Release 17 also includes a study item to explore the possibility of using frequencies in the 52 to 71 GHz range, far higher than anything used in cellular today.

Finalizing Release 17 will almost certainly be pushed back by the ongoing Covid-19 pandemic, which makes it difficult if not impossible for groups working on work items and study items to meet face-to-face. But it’s not stopping the work entirely. And when Release 17 is published, it’s certain to make 5G even weirder still.

How To Use Wi-Fi Networks To Ensure a Safe Return to Campus

Post Syndicated from Jan Dethlefs original https://spectrum.ieee.org/view-from-the-valley/telecom/wireless/want-to-return-to-campus-safely-tap-wifi-network


This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Universities, together with other operators of large spaces like shopping malls or airports, are currently facing a dilemma: how to return to business as fast as possible, while at the same time providing a safe environment?

Given that the behavior of individuals is the driving factor for viral spread, key to answering this question will be understanding and monitoring the dynamics of how people move and interact while on campus, whether indoors or outdoors.

Fortunately, universities already have the perfect tool in place:  the campus-wide Wi-Fi network. These networks typically cover every indoor and most outdoor spaces on campus, and users are already registered. All that needs to be added is data analytics to monitor on-campus safety.

We, a team of researchers from the University of Melbourne and the startup Nexulogy, have developed the necessary algorithms that, when fed data already gathered by campus Wi-Fi networks, can help keep universities safe. We have already tested these algorithms successfully at several campuses and other large yet contained environments.

To date, little attention has been paid to using Wi-Fi networks to track social distancing. Countries like Australia have rolled out smartphone apps to support contact tracing, typically using Bluetooth to determine proximity. A recent Google/Apple collaboration, also using Bluetooth, led to a decentralized protocol for contact monitoring.

Yet the success of these apps relies mainly on people voluntarily downloading them. A study by the University of Oxford estimated that more than 70 percent of smartphone users in the United Kingdom would have to install the app for it to be effective. But adoption is not happening at anything near that scale; the Australian COVIDSafe app, for example, released in April 2020, had been downloaded by only 6 million people as of mid-June 2020, or about 24 percent of the population.

Furthermore, this kind of Bluetooth-based tracking does not relate the contacts to a physical location, such as a classroom. This makes it hard to satisfy the requirements of running a safe campus. And data collected by the Bluetooth tracking apps is generally not readily available to the campus owners, so it doesn’t help make their own spaces safer.

Our Wi-Fi-based algorithms provide the least privacy-intrusive monitoring mechanism thus far, because they use only anonymous device addresses; no individual user names are necessary to understand crowd densities and proximity. In the case of a student or campus staff member reporting a positive coronavirus test, the device addresses determined to belong to someone at risk can be passed on to authorities with the appropriate privacy clearance. Only then would names be matched to devices, with people at risk informed individually and privately.

Wi-Fi presents the best solution for universities for a few reasons: wireless coverage is already campus wide; it is a safe assumption that everyone on campus is carrying at least one Wi-Fi-capable device; and virtually everyone living and working on campus registers their devices to have internet access. Such tracking is possible without onerous user-facing app downloads.

Often, universities already have the right to use the information collected by the wireless system, granted as part of its terms and conditions. In the midst of this pandemic, they now also have a legal, or at least a moral, obligation to use such data to the best of their ability to improve the safety and well-being of everyone on campus.

The process starts by collecting the time and symbolic location (that is, the network access point) of Wi-Fi-capable devices when they are first detected by the Wi-Fi infrastructure (for example, when a student enters the campus’s Wi-Fi environment), and then at regular intervals or whenever they change locations. Then, after consolidating any multiple devices belonging to a single user, our algorithms calculate the number of occupants in a given area. That provides a quick insight into crowd density in any building or outdoor plaza.

Our algorithms can also reconstruct the journey of any user within the Wi-Fi infrastructure, and from that can derive exposure time to other users, spaces visited and transmission risk.
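To make the mechanics concrete, here is a minimal Python sketch of the occupancy and exposure calculations described above. The observation format, the 15-minute time slots, and all identifiers are illustrative assumptions, not details of the production algorithms.

```python
from collections import defaultdict
from itertools import combinations

# (user_id, access_point, timestamp_in_minutes) -- one row per Wi-Fi
# observation, already consolidated so one user's devices share a user_id.
observations = [
    ("u1", "library-ap-3", 600),
    ("u2", "library-ap-3", 600),
    ("u1", "cafe-ap-1", 660),
]

def occupancy(observations, interval=15):
    """Count distinct users seen per access point per time slot."""
    counts = defaultdict(set)
    for user, ap, t in observations:
        counts[(ap, t // interval)].add(user)
    return {key: len(users) for key, users in counts.items()}

def exposure_minutes(observations, interval=15):
    """Accumulate co-location time for every pair of users."""
    slots = defaultdict(set)
    for user, ap, t in observations:
        slots[(ap, t // interval)].add(user)
    exposure = defaultdict(int)
    for users in slots.values():
        for a, b in combinations(sorted(users), 2):
            exposure[(a, b)] += interval
    return exposure

print(occupancy(observations))         # crowd density per zone and slot
print(exposure_minutes(observations))  # e.g. {('u1', 'u2'): 15}
```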

This information lets campus managers do several things. For one, our algorithms can easily identify high-risk areas by time and location and flag spots where social distancing limits are regularly exceeded. This helps campus managers focus resources on areas that may need more hand sanitizer or more frequent deep cleaning.

For another, imagine an infected staff member has been identified by public health authorities. At the health authorities’ request, the university can identify possible infection pathways by tracking the individual’s journeys through the campus. It is possible to backtrack the movement history to the start of the data collection, going back days or even weeks if necessary.

To illustrate the methodology, we assumed one of us had a COVID infection and reconstructed his journey and exposure to other university visitors during a recent visit to a busy university campus. In the heat map below, you can see that only the conference building showed a significant number of contacts (red bar), with an exposure time of, in this example, more than 30 minutes. While we detected other individuals only by their Wi-Fi devices, university executives could now notify people who have potentially been exposed to an infectious user.

This system isn’t without technical challenges. The biggest problem is noise in the system that needs to be removed before the data is useful.

For example, depending on the building layout and wireless network configurations, wireless counts inside specific zones can include passing outside traffic or static devices (like desktop computers). We have developed algorithms to eliminate both without requiring access to any user information.

Even though the identification and management of infection risks is limited to the area covered by the wireless infrastructure, the timely identification of a COVID event naturally benefits areas beyond the campus. Campuses, and even residential campuses in lockdown, have a daily influx of external visitors—staff, contractors, and family members. Those individuals could be identified via telecommunication providers if requested by health authorities.

In the case of someone on campus testing positive for the virus, the people they came in contact with, their contact times, and the places they went can be identified within minutes. This allows the initiation of necessary measures (e.g., COVID testing or decontamination) in a targeted, timely, and cost-effective way.

Since the analytics can happen in the cloud, the software can easily be updated to reflect new or refined medical knowledge or health regulations, say a new exposure-time threshold or new physical-distancing guidelines.

Privacy is paramount in this process. Understanding population densities and crowd management is done anonymously. Only in the case of someone on campus reporting a confirmed case of the coronavirus do authorities with the necessary privacy clearance need to connect the devices and the individuals. Our algorithm operates separately from the identification and notification process.

As universities around the world grow eager to welcome students back to campus, proactive plans need to be in place to ensure the safety and well-being of everyone. Wi-Fi is available and ready to help.

Jan Dethlefs is a data scientist and Simon Wei is a data engineer, both at Nexulogy in Melbourne, Australia. Stephan Winter is a professor and Martin Tomko is a senior lecturer, both in the Department of Infrastructure Engineering at the University of Melbourne, where they work with geospatial data analytics.

Twitter Bots Are Spreading Massive Amounts of COVID-19 Misinformation

Post Syndicated from Thor Benson original https://spectrum.ieee.org/tech-talk/telecom/internet/twitter-bots-are-spreading-massive-amounts-of-covid-19-misinformation

Back in February, the World Health Organization called the flood of misinformation about the coronavirus flowing through the Internet a “massive infodemic.” Since then, the situation has not improved. While social media platforms have promised to detect and label posts that contain misleading information related to COVID-19, they haven’t stopped the surge.

But who is responsible for all those misleading posts? To help answer the question, researchers at Indiana University’s Observatory on Social Media used a tool of their own creation called BotometerLite that detects bots on Twitter. They first compiled a list of what they call “low-credibility domains” that have been spreading misinformation about COVID-19, then used their tool to determine how many bots were sharing links to this misinformation. 

Their findings, which they presented at this year’s meeting of the Association for the Advancement of Artificial Intelligence, revealed that bots overwhelmingly spread misinformation about COVID-19 as opposed to accurate content. They also found that some of the bots were acting in “a coordinated fashion” to amplify misleading messages.  

The scale of the misinformation problem on Twitter is alarming. The researchers found that overall, the number of tweets sharing misleading COVID-19 information was roughly equivalent to the number of tweets that linked to New York Times articles. 

We talked with Kai-Cheng Yang, a PhD student who worked on this research, about the bot-detection game.

This conversation has been condensed and edited for clarity.

IEEE Spectrum: How much of the overall misinformation is being spread by bots?

Kai-Cheng Yang: For the links to the low-credibility domains, we find about 20 to 30 percent are shared by bots. The rest are likely shared by humans.

Spectrum: How much of this activity is bots sharing links themselves, and how much is them amplifying tweets that contain misinformation?

Yang: It’s a combination. We see some of the bots sharing the links directly and other bots are retweeting tweets containing those links, so they’re trying to interact with each other.

Spectrum: How do your Botometer and BotometerLite tools identify bots? What are they looking for? 

Yang: Both Botometer and BotometerLite are implemented as supervised machine learning models. We first collect a group of Twitter accounts that are manually annotated as bots or humans. We extract their characteristics from their profiles (number of friends, number of followers, whether a background image is used, etc.), and we collect data on content, sentiment, social network, and temporal behaviors. We then train our machine learning models to learn how bots differ from humans in terms of these characteristics. The difference between Botometer and BotometerLite is that Botometer considers all these characteristics, whereas BotometerLite focuses only on the profiles for efficiency.
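As a rough illustration of the supervised approach Yang describes, consider the following Python sketch. The feature set, labels, and choice of a random forest are assumptions made for illustration; they are not the actual Botometer or BotometerLite implementation.

```python
from sklearn.ensemble import RandomForestClassifier

# Profile-only features in the spirit of BotometerLite: followers count,
# friends count, account age in days, whether a background image is set.
X = [
    [120, 80, 2000, 1],   # manually annotated human
    [5, 4900, 12, 0],     # manually annotated bot
    [300, 250, 3500, 1],  # human
    [2, 7100, 5, 0],      # bot
]
y = [0, 1, 0, 1]  # 0 = human, 1 = bot

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score an unseen account: estimated probability that it is a bot.
print(model.predict_proba([[10, 3200, 30, 0]])[0][1])
```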

Spectrum: The links these bots are sharing: Where do they lead?

Yang: We have compiled a list of 500 or so low-credibility domains. They’re mostly news sites, but we would characterize many of them as ‘fake news.’ We also consider extremely hyper-partisan websites as low-credibility.

Spectrum: Can you give a few examples of the kinds of COVID-related misinformation that appear on these sites? 

Yang: Common themes include U.S. politics, status of the outbreak, and economic issues. A lot of the articles are not necessarily fake, but they can be hyper-partisan and misleading in some sense. We also see false information like: the virus is weaponized, or political leaders have already been vaccinated.

Spectrum: Did you look at whether the bots spreading misinformation have followers, and whether those followers are humans or other bots? 

Yang: Examining the followers of Twitter accounts is much harder due to the API rate limit, and we didn’t conduct such an analysis this time.

Spectrum: In your paper, you write that some of the bots seem to be acting in a coordinated fashion. What does that mean? 

Yang: We find that some of the accounts (not necessarily all bots) were sharing information from the same set of low-credibility websites. For two arbitrary accounts, such an overlap is very unlikely, yet we found some accounts doing so together. The most plausible explanation is that these accounts were coordinated to push the same information.

Spectrum: How do you detect bot networks? 

Yang: I’m assuming you are referring to the network shown in the paper. For that, we simply extract the list of websites each account shares and then find the accounts that have very similar lists and consider them to be connected.
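A hedged sketch of that similarity check in Python: each account maps to the set of domains it shared, and pairs whose sets overlap almost completely become edges in a candidate coordination network. The domains and the 0.8 threshold are invented for illustration.

```python
from itertools import combinations

# Hypothetical accounts and the low-credibility domains each one shared.
domains_shared = {
    "acct_a": {"fakesite1.com", "fakesite2.com", "fakesite3.com"},
    "acct_b": {"fakesite1.com", "fakesite2.com", "fakesite3.com"},
    "acct_c": {"example-news.com"},
}

def jaccard(s1, s2):
    """Overlap between two sets, from 0 (disjoint) to 1 (identical)."""
    return len(s1 & s2) / len(s1 | s2)

# Link any two accounts whose shared-domain lists are nearly identical.
edges = [
    (a, b)
    for a, b in combinations(domains_shared, 2)
    if jaccard(domains_shared[a], domains_shared[b]) >= 0.8
]
print(edges)  # [('acct_a', 'acct_b')] -- a candidate coordinated pair
```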

Spectrum: What do you think can be done to reduce the amount of misinformation we’re seeing on social media?

Yang: I think it has to be done by the platforms. They can do flagging, or if they know a source is low-credibility, maybe they can do something to reduce the exposure. Another thing we can do is improve the average person’s journalism literacy: Try to teach people that there might be those kinds of low-credibility sources or fake news online and to be careful. We have seen some recent studies indicating that if you tell the user what they’re seeing might be from low-credibility sources, they become much more sensitive to such things. They’re actually less likely to share those articles or links. 

Spectrum: Why can’t Twitter prevent the creation and proliferation of bots? 

Yang: My understanding is that when you try to make your tool or platform easy to use for real users, it opens doors for the bot creators at the same time. So there is a trade-off.

In fact, in my own experience, Twitter recently started to ask users to put in their phone numbers and to perform more frequent two-step authentications and reCAPTCHA checks. It’s quite annoying for me as a normal Twitter user, but I’m sure it makes it harder, though still possible, to create or control bots. I’m happy to see that Twitter has stepped up.

With 5G Rollout Lagging, Research Looks Ahead to 6G

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/telecom/wireless/with-5g-rollout-lagging-research-looks-ahead-to-6g

Amid a 5G rollout that has faced its fair share of challenges, it might seem somewhat premature to start looking ahead at 6G, the next generation of mobile communications. But 6G development is happening now, and it’s being pursued in earnest by both industry and academia.

Much of the future landscape for 6G was mapped out in an article published in March of this year by IEEE Communications titled “Toward 6G Networks: Use Cases and Technologies.” The article presents the requirements, the enabling technologies, and the use cases for adopting a systematic approach to overcoming the research challenges of 6G.

“6G research activities are envisioning radically new communication technologies, network architectures, and deployment models,” said Michele Zorzi, a professor at the University of Padua in Italy, and one of the authors of the IEEE Communications article. “Although some of these solutions have already been examined in the context of 5G, they were intentionally left out of initial 5G standards developments and will not be part of early 5G commercial rollout, mainly because markets are not mature enough to support them.”

The foundational difference between 5G and 6G networks, according to Zorzi, will be the increased role that intelligence will play in 6G networks. It will go beyond mere classification and prediction tasks, as is the case in legacy and 5G systems.

While machine-learning-driven networks are still in their infancy, they will likely represent a fundamental component of the 6G ecosystem, which will shift toward a fully user-centric architecture where end terminals will be able to make autonomous network decisions without supervision from centralized controllers.

This decentralization of control will enable the sub-millisecond latency required by several 6G services (below the already challenging 1-millisecond requirement of emerging 5G systems) and is expected to yield more responsive network management.

To achieve this new kind of performance, the underlying technologies of 6G will be fundamentally different from 5G. For example, says Marco Giordani, a researcher at the University of Padua and co-author of the IEEE Communications article, even though 5G networks have been designed to operate at extremely high frequencies in the millimeter-wave bands, 6G will exploit even higher-spectrum technologies—terahertz and optical communications being two examples.

At the same time, Giordani explains that 6G will have a new cell-less network architecture that is a clear departure from current mobile network designs. The cell-less paradigm can promote seamless mobility support, targeting interruption-free communication during handovers, and can provide quality of service (QoS) guarantees that are in line with the most challenging mobility requirements envisioned for 6G, according to Giordani.

Giordani adds: “While 5G networks (and previous generations) have been designed to provide connectivity for an essentially bi-dimensional space, future 6G heterogeneous architectures will provide three-dimensional coverage by deploying non-terrestrial platforms (e.g., drones, HAPs, and satellites) to complement terrestrial infrastructures.”


Indian Mobile Service Providers Suspected of Providing Discriminatory Services

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/internet/indian-mobile-service-providers-suspected-of-providing-discriminatory-services

India’s Telecom Disputes Settlement and Appellate Tribunal (TDSAT) has granted interim relief to telecom companies Bharti Airtel and Vodafone Idea, allowing them to continue with their premium-service plans. The TDSAT order came on 18 July, exactly a week after the country’s telecom regulatory authority had blocked the two companies from offering better speeds to higher-paying customers, citing net neutrality violations.

“This is not a final determination by the TDSAT,” says Apar Gupta of the Internet Freedom Foundation, a digital liberties organization that has been at the forefront of the fight for online freedom, privacy, and innovation in India. While the Telecom Regulatory Authority of India (TRAI) continues with its inquiry, the two providers will not be prevented from rolling out their plans.

The matter was brought to TRAI’s notice on 8 July by a rival mobile service provider, Reliance Jio, which wrote to the regulatory body asking about Airtel’s and Vodafone Idea’s Platinum and RedX plans, respectively. “Before offering any such plans ourselves…we would like to seek the Authority’s views on whether [these] tariff offerings…are in compliance with the extant regulatory framework,” the letter said.

Three days later, TRAI asked for the respective Airtel and Vodafone Idea plans to be blocked while these claims were investigated. It also sent both telcos a 10-point questionnaire related to various elements of their services, seeking clarification on how they defined “priority 4G network” and “faster speeds,” among other things. Following the blocking of the plans, Vodafone Idea approached TDSAT, arguing that TRAI’s order was illegal and arbitrary, considering that their RedX plan had been rolled out over eight months earlier. When contacted for comment on the matter, Vodafone declined, “as the matter is in TDSAT court.” Airtel, meanwhile, has agreed to comply with TRAI’s directive and not take new customers for its Platinum plan until the matter has been fully investigated.

Although it is being framed as such by media coverage and in the court of public opinion, strictly speaking, the offering of new tariffs by Airtel and Vodafone Idea is not a net neutrality concern, says Nikhil Pahwa, co-founder of Save the Internet, the campaign that played a key role in framing India’s net neutrality rules. “In India, net neutrality regulation covers…whether specific internet services or apps are either being priced differentially or being offered at speeds different from the rest of the Internet.” However, from a consumer perspective, he adds, “I think it is important for the TRAI to investigate these plans because…it is impossible for telecom operators to guarantee speeds for customers. What needs to be investigated is whether speeds are effectively deprecated for a particular set of consumers, because the throughput from a mobile base station is limited.”

Since July 2018, India has had stringent net neutrality regulations in place—possibly among the strongest in the world—at least on paper. Any form of data discrimination is banned; blocking, degrading, slowing down or granting preferential speeds or treatment by providers is prohibited; and Internet service providers stand to lose their licenses if found in violation. This was the result of a massive, public, volunteer-driven campaign since 2015. Save the Internet estimates that over 1 million citizens were part of the campaign at one point or another.

The concept of net neutrality captured public imagination when, in 2014, Airtel decided it would charge extra for VoIP services. The company pulled its plan after public outcry, but the wheels of differential pricing were set in motion. This resulted in TRAI prohibiting discriminatory tariffs for data services in 2016—a precursor to the net neutrality principles adopted two years later. These developments also forced Facebook to withdraw its zero-rated Free Basics service in India.

“We have not seen net neutrality enforcement in India till now in a very clear manner,” says Gupta, adding that TRAI is in the process of coming up with an enforcement mechanism. “They opened a consultation on it, and invited views from people… Right now they’re in the process of making…recommendations to the Department of Telecom, which can then frame them under the Telegraph Act.” The telecom department exercises wider powers under this Act, even though TRAI also has specific powers in administering certain licensing conditions, including quality of service and interconnection.

“[The] internet is built around the idea that all users have equal right to create websites, applications, and services for the rest of the world, and enables innovation because it is a space with infinite competition,” Pahwa says. And net neutrality is at the core of that freedom.

Cognitive Radios Will Go Where No Deep-Space Mission Has Gone Before

Post Syndicated from Sven Bilén original https://spectrum.ieee.org/telecom/wireless/cognitive-radios-will-go-where-no-deepspace-mission-has-gone-before

Space seems empty and therefore the perfect environment for radio communications. Don’t let that fool you: There’s still plenty that can disrupt radio communications. Earth’s fluctuating ionosphere can impair a link between a satellite and a ground station. The materials of the antenna can be distorted as it heats and cools. And the near-vacuum of space is filled with low-level ambient radio emanations, known as cosmic noise, which come from distant quasars, the sun, and the center of our Milky Way galaxy. This noise also includes the cosmic microwave background radiation, a ghost of the big bang. Although faint, these cosmic sources can overwhelm a wireless signal over interplanetary distances.

Depending on a spacecraft’s mission, or even the particular phase of the mission, different link qualities may be desirable, such as maximizing data throughput, minimizing power usage, or ensuring that certain critical data gets through. To maintain connectivity, the communications system constantly needs to tailor its operations to the surrounding environment.

Imagine a group of astronauts on Mars. To connect to a ground station on Earth, they’ll rely on a relay satellite orbiting Mars. As the space environment changes and the planets move relative to one another, the radio settings on the ground station, the satellite orbiting Mars, and the Martian lander will need continual adjustments. The astronauts could wait 8 to 40 minutes—the duration of a round trip—for instructions from mission control on how to adjust the settings. A better alternative is to have the radios use neural networks to adjust their settings in real time. Neural networks maintain and optimize a radio’s ability to keep in contact, even under extreme conditions such as Martian orbit. Rather than waiting for a human on Earth to tell the radio how to adapt its systems—during which the commands may have already become outdated—a radio with a neural network can do it on the fly.

Such a device is called a cognitive radio. Its neural network autonomously senses the changes in its environment, adjusts its settings accordingly—and then, most important of all, learns from the experience. That means a cognitive radio can try out new configurations in new situations, which makes it more robust in unknown environments than a traditional radio would be. Cognitive radios are thus ideal for space communications, especially far beyond Earth orbit, where the environments are relatively unknown, human intervention is impossible, and maintaining connectivity is vital.

Worcester Polytechnic Institute and Penn State University, in cooperation with NASA, recently tested the first cognitive radios designed to operate in space and keep missions in contact with Earth. In our tests, even the most basic cognitive radios maintained a clear signal between the International Space Station (ISS) and the ground. We believe that with further research, more advanced, more capable cognitive radios can play an integral part in successful deep-space missions in the future, where there will be no margin for error.

Future crews to the moon and Mars will have more than enough to do collecting field samples, performing scientific experiments, conducting land surveys, and keeping their equipment in working order. Cognitive radios will free those crews from the onus of maintaining the communications link. Even more important is that cognitive radios will help ensure that an unexpected occurrence in deep space doesn’t sever the link, cutting the crew’s last tether to Earth, millions of kilometers away.

Cognitive radio as an idea was first proposed by Joseph Mitola III at the KTH Royal Institute of Technology, in Stockholm, in 1998. Since then, many cognitive radio projects have been undertaken, but most were limited in scope or tested just a part of a system. The most robust cognitive radios tested to date have been built by the U.S. Department of Defense.

When designing a traditional wireless communications system, engineers generally use mathematical models to represent the radio and the environment in which it will operate. The models try to describe how signals might reflect off buildings or propagate in humid air. But not even the best models can capture the complexity of a real environment.

A cognitive radio—and the neural network that makes it work—learns from the environment itself, rather than from a mathematical model. A neural network takes in data about the environment, such as what signal modulations are working best or what frequencies are propagating farthest, and processes that data to determine what the radio’s settings should be for an optimal link. The key feature of a neural network is that it can, over time, optimize the relationships between the inputs and the result. This process is known as training.

For cognitive radios, here’s what training looks like. In a noisy environment where a signal isn’t getting through, the radio might first try boosting its transmission power. It will then determine whether the received signal is clearer; if it is, the radio will raise the transmission power more, to see if that further improves reception. But if the signal doesn’t improve, the radio may try another approach, such as switching frequencies. In either case, the radio has learned a bit about how it can get a signal through its current environment. Training a cognitive radio means constantly adjusting its transmission power, data rate, signal modulation, or any other settings it has in order to learn how to do its job better.
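The trial-and-error loop can be pictured as a simple bandit-style learner. The sketch below is only a toy: the real radios use neural networks, and the settings, reward function, and exploration rate here are invented for illustration.

```python
import random

# Candidate radio configurations (hypothetical values).
settings = [
    {"power_dbm": 20, "freq_ghz": 2.2},
    {"power_dbm": 23, "freq_ghz": 2.2},
    {"power_dbm": 20, "freq_ghz": 2.4},
]
value = [0.0] * len(settings)  # running estimate of link quality per setting
tries = [0] * len(settings)

def measure_link_quality(s):
    # Stand-in for a real measurement of how clearly the signal got through.
    return random.random() + (0.3 if s["freq_ghz"] == 2.4 else 0.0)

for step in range(200):
    if random.random() < 0.1:              # occasionally try something new
        i = random.randrange(len(settings))
    else:                                  # otherwise use the best known setting
        i = max(range(len(settings)), key=lambda j: value[j])
    q = measure_link_quality(settings[i])
    tries[i] += 1
    value[i] += (q - value[i]) / tries[i]  # incremental average of quality

best = max(range(len(settings)), key=lambda j: value[j])
print("learned preference:", settings[best])
```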

Any cognitive radio will require initial training before being launched. This training serves as a guide for the radio to improve upon later. Once the neural network has undergone some training and it’s up in space, it can autonomously adjust the radio’s settings as necessary to maintain a strong link regardless of its location in the solar system.

To control its basic settings, a cognitive radio uses a wireless system called a software-defined radio. Major functions that are implemented with hardware in a conventional radio are accomplished with software in a software-defined radio, including filtering, amplifying, and detecting signals. That kind of flexibility is essential for a cognitive radio.

There are several basic reasons why cognitive radio experiments are mostly still limited in scope. At their core, the neural networks are complex algorithms that need enormous quantities of data to work properly. They also require a lot of computational horsepower to arrive at conclusions quickly. The radio hardware must be designed with enough flexibility to adapt to those conclusions. And any successful cognitive radio needs to make these components work together. Our own effort to create a proof-of-concept cognitive radio for space communications was possible only because of the state-of-the-art Space Communications and Navigation (SCaN) test bed on the ISS.

NASA’s Glenn Research Center created the SCaN test bed specifically to study the use of software-defined radios in space. The test bed was launched by the Japan Aerospace Exploration Agency and installed on the main lattice frame of the space station in July 2012. Until its decommissioning in June 2019, the SCaN test bed allowed researchers to test how well software-defined radios could meet the demands expected of radios in space—such as real-time reconfiguration for orbital operations, the development and verification of new software for custom space networks, and, most relevant for our group, cognitive communications.

The test bed consisted of three software-defined radios broadcasting in the S-band (2 to 4 gigahertz) and Ka-band (26.5 to 40 GHz) and receiving in the L-band (1 to 2 GHz). The SCaN test bed could communicate with NASA’s Tracking and Data Relay Satellite System in low Earth orbit and a ground station at NASA’s Glenn Research Center, in Cleveland.

Nobody has ever used a cognitive radio system on a deep-space mission before—nor will they, until the technology has been thoroughly vetted. The SCaN test bed offered the ideal platform for testing the tech in a less hostile environment close to Earth. In 2017, we built a cognitive radio system to communicate between ground-based modems and the test bed. Ours would be the first-ever cognitive radio experiments conducted in space.

In our experiments, the SCaN test bed was a stand-in for the radio on a deep-space probe. It’s essential for a deep-space probe to maintain contact with Earth. Otherwise, the entire mission could be doomed. That’s why our primary goal was to prove that the radio could maintain a communications link by adjusting its radio settings autonomously. Maintaining a high data rate or a robust signal were lower priorities.

A cognitive radio at the ground station would decide on an “action,” or set of operating parameters for the radio, which it would send to the test-bed transmitter and to two modems at the ground station. The action dictated a specific data rate, modulation scheme, and power level for the test-bed transmitter and the ground station modems that would most likely be effective in maintaining the wireless link.

We completed our first tests during a two-week window in May 2017. That wasn’t much time, and we typically ended up with only two usable passes per day, each lasting just 8 or 9 minutes. The ISS doesn’t pass over the same points on Earth during each orbit, so there’s a limited number of opportunities to get a line-of-sight connection to a particular location. Despite the small number of passes, though, our radio system experienced plenty of dynamic and challenging link conditions, including fluctuations in the atmosphere and weather. Often, the solar panels and other protrusions on the ISS created large numbers of echoes and reflections that our system had to take into account.

During each pass, the neural network would compare the quality of the communications link with data from previous passes. The network would then select the previous pass with the conditions that were most similar to those of the current pass as a jumping-off point for setting the radio. Then, as if fiddling with the knob on an FM radio, the neural network would adjust the radio’s settings to best fit the conditions of the current pass. These settings included all of the elements of the wireless signal, including the data rate and modulation.

The neural network wasn’t limited to drawing on just one previous pass. If the best option seemed to be taking bits and pieces of multiple passes to create a bespoke solution, the network would do just that.
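In code, that jumping-off-point lookup resembles a nearest-neighbor search. The following Python sketch is an assumption-laden simplification: the condition features, distance metric, and stored settings are all invented, and the real system blends multiple passes rather than copying one.

```python
import math

# Hypothetical history: link conditions observed on past passes, and the
# radio settings that worked for them.
previous_passes = [
    {"conditions": [0.8, 0.1, 0.3], "settings": {"rate_mbps": 50, "mod": "QPSK"}},
    {"conditions": [0.2, 0.7, 0.9], "settings": {"rate_mbps": 10, "mod": "BPSK"}},
]

def closest_pass(current, passes):
    """Pick the stored pass whose conditions best match the current ones."""
    return min(passes, key=lambda p: math.dist(current, p["conditions"]))

current_conditions = [0.75, 0.15, 0.35]
start = closest_pass(current_conditions, previous_passes)
print(start["settings"])  # starting point, to be fine-tuned during the pass
```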

During the tests, the cognitive radio clearly showed that it could learn how to maintain a communications link. The radio autonomously selected settings to avoid losing contact, and the link remained stable even as the radio adjusted itself. It also managed a signal power strong enough to send data, even though that wasn’t a primary goal for us.

Overall, the success of our tests on the SCaN test bed demonstrated that cognitive radios could be used for deep-space missions. However, our experiments also uncovered several problems that will have to be solved before such radios blast off for another planet.

The biggest problem we encountered was something called “catastrophic forgetting.” This happens when a neural network receives too much new information too quickly and so forgets a lot of the information it already possessed. Imagine you’re learning algebra from a textbook and by the time you reach the end, you’ve already forgotten the first half of the material, so you start over. When this happened in our experiments, the cognitive radio’s abilities degraded significantly because it was basically retraining itself over and over in response to environmental conditions that kept getting overwritten.

Our solution was to implement a tactic called ensemble learning in our cognitive radio. Ensemble learning is a technique, still largely experimental, that uses a collection of “learner” neural networks, each of which is responsible for training under a limited set of conditions—in our case, on a specific type of communications link. One learner may be best suited for ISS passes with heavy interference from solar particles, while another may be best suited for passes with atmospheric distortions caused by thunderstorms. An overarching meta–neural network decides which learner networks to use for the current situation. In this arrangement, even if one learner suffers from catastrophic forgetting, the cognitive radio can still function.

To understand why, let’s say you’re learning how to drive a car. One way to practice is to spend 100 hours behind the wheel, assuming you’ll learn everything you need to know in the process. That is currently how cognitive radios are trained; the hope is that what they learn during their training will be applicable to any situation they encounter. But what happens when the environments the cognitive radio encounters in the real world differ significantly from the training environments? It’s like practicing to drive on highways for 100 hours but then getting a job as a delivery truck driver in a city. You may excel at driving on highways, but you might forget the basics of driving in a stressful urban environment.

Now let’s say you’re practicing for a driving test. You might identify what scenarios you’ll likely be tested on and then make sure you excel at them. If you know you’re bad at parallel parking, you may prioritize that over practicing rights-of-way at a 4-way stop sign. The key is to identify what you need to learn rather than assume you will practice enough to eventually learn everything. In ensemble learning, this is the job of the meta–neural network.

The meta–neural network may recognize that the radio is in an environment with a high amount of ionizing radiation, for example, and so it will select the learner networks for that environment. The neural network thus starts from a baseline that’s much closer to reality. It doesn’t have to replace information as quickly, making catastrophic forgetting much less likely.
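A skeletal version of that dispatch step might look like the following Python. The learners here are stubs returning fixed settings, and the gating thresholds are invented; in the real system both the learners and the meta-network are trained neural networks.

```python
def solar_interference_learner(env):
    return {"power_dbm": 26, "mod": "BPSK"}   # robust, low-rate settings

def thunderstorm_learner(env):
    return {"power_dbm": 23, "mod": "QPSK"}

def clear_sky_learner(env):
    return {"power_dbm": 20, "mod": "16QAM"}  # favor throughput

def meta_select(env):
    # Stand-in for the meta-neural network's classification of the regime.
    if env["radiation"] > 0.7:
        return solar_interference_learner
    if env["storm_activity"] > 0.5:
        return thunderstorm_learner
    return clear_sky_learner

env = {"radiation": 0.9, "storm_activity": 0.1}
learner = meta_select(env)
print(learner(env))  # settings from the learner trained for this regime
```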

We implemented a very basic version of ensemble learning in our cognitive radio for a second round of experiments in August 2018. We found that the technique resulted in fewer instances of catastrophic forgetting. Nevertheless, there are still plenty of questions about ensemble learning. For one, how do you train the meta–neural network to select the best learners for a scenario? And how do you ensure that a learner, if chosen, actually masters the scenario it’s being selected for? There aren’t solid answers yet for these questions.

In May 2019, the SCaN test bed was decommissioned to make way for an X-ray communications experiment, and we lost the opportunity for future cognitive-radio tests on the ISS. Fortunately, NASA is planning a three-CubeSat constellation to further demonstrate cognitive space communications. If the mission is approved, the constellation could launch in the next several years. The goal is to use the constellation as a relay system to find out how multiple cognitive radios can work together.

Those planning future missions to the moon and Mars know they’ll need a more intelligent approach to communications and navigation than we have now. Astronauts won’t always have direct-to-Earth communications links. For example, signals sent from a radio telescope on the far side of the moon will require relay satellites to reach Earth. NASA’s planned orbiting Lunar Gateway, aside from serving as a staging area for surface missions, will be a major communications relay.

The Lunar Gateway is exactly the kind of space communications system that will benefit from cognitive radio. The round-trip delay for a signal between Earth and the moon is about 2.5 seconds. Handing off radio operations to a cognitive radio aboard the Lunar Gateway will save precious seconds in situations when those seconds really matter, such as maintaining contact with a robotic lander during its descent to the surface.

Our experiments with the SCaN test bed showed that cognitive radios have a place in future deep-space communications. As humanity looks again at exploring the moon, Mars, and beyond, guaranteeing reliable connectivity between planets will be crucial. There may be plenty of space in the solar system, but there’s no room for dropped calls.

This article appears in the August 2020 print issue as “Where No Radio Has Gone Before.”

About the Authors

Alexander Wyglinski is a professor of electrical engineering and robotics engineering at Worcester Polytechnic Institute. Sven Bilén is a professor of engineering design, electrical engineering, and aerospace engineering at the Pennsylvania State University. Dale Mortensen is an electronics engineer and Richard Reinhart is a senior communications engineer at NASA’s Glenn Research Center, in Cleveland.

APTs Use Coronavirus as a Lure

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/apts_use_coronavirus_as_a_lure

Malwarebytes

Threat actors closely monitor public events happening around the world and quickly employ those themes in attack vectors to take advantage of the opportunity. Accordingly, various Advanced Persistent Threat (APT) groups are using the coronavirus pandemic as a theme in several malicious campaigns.

By using social engineering tactics such as spam and spear phishing with COVID-19 as a lure, cybercriminals and threat actors increase the likelihood of a successful attack. In this paper, we:

  • Provide an overview of several different APT groups using coronavirus as a lure.
  • Categorize APT groups according to the techniques they use to spam or send phishing emails.
  • Describe the attack vectors, campaign timelines, and malicious payloads deployed.
  • Analyze the use of the COVID-19 lure and code execution.
  • Dig into the details of each APT group: its origins, what it’s known for, and its latest strike.

Twitter’s Direct Messages Is a Bigger Headache Than the Bitcoin Scam

Post Syndicated from Fahmida Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/twitters-direct-messages-is-a-bigger-headache-than-the-bitcoin-scam

Twitter has re-enabled the ability for verified accounts to post new messages and restored access to locked accounts after Wednesday’s unprecedented account takeover attack. The company is still investigating what happened in the attack, which resulted in accounts belonging to high-profile individuals posting similar messages asking people to send Bitcoins to an unknown cryptocurrency wallet. 

Twitter said about 130 accounts were affected in this attack, and they included high-profile individuals such as Tesla CEO Elon Musk, former president Barack Obama, presumptive Democratic candidate for president Joe Biden, former New York City mayor Michael Bloomberg, and Amazon CEO Jeff Bezos. While there was “no evidence” the attackers had obtained account passwords, Twitter has not yet provided any information about anything else the attackers may have accessed, such as users’ direct messages. If attackers harvested the victims’ direct messages for potentially sensitive information, the damage could be far worse than the thousands of dollars the attackers made from the scam.

Messages can contain a lot of valuable information. Elon Musk’s public messages have impacted Tesla’s stock price, so it is possible that something he said in a direct message could also move markets. Even if confidential information was not shared over direct messages, just the knowledge of whom these people have spoken to could be dangerous in the wrong hands. An attacker could learn about the next big investment two CEOs were discussing, or learn what politicians discussed when they thought they were on a secure communications channel, says Max Heinemeyer, director of threat hunting at security company Darktrace.

“It matters a lot if DMs were accessed: Imagine what kind of secrets, extortion material and explosive news could be gained from reading the private messages of high-profile, public figures,” said Heinemeyer.

The attackers used social engineering to access internal company tools, but it’s not known if the tools provided full access or if there were limitations in what the attackers could do at that point. The fact that Twitter does not offer end-to-end encryption for direct messages increases the likelihood that attackers were able to see the contents of the messages. End-to-end encryption is a way to protect the data as it travels from one location to another. The message’s contents are encrypted on a user’s device, and only the intended recipient can decrypt the message to read it. If end-to-end encryption had been in place for direct messages, the attackers may have been able to see in the internal tool that messages existed, but not know what they actually said.
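The property at stake can be shown in a few lines of Python using the PyNaCl library: encryption happens with the recipient’s public key, so anything sitting between the two endpoints, including an internal administration tool, sees only ciphertext. This is a generic sketch of public-key end-to-end messaging, not Twitter’s architecture.

```python
from nacl.public import Box, PrivateKey

alice_key = PrivateKey.generate()  # generated and kept on Alice's device
bob_key = PrivateKey.generate()    # generated and kept on Bob's device

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The platform relays only opaque bytes; an internal tool sees this:
print(ciphertext.hex()[:32], "...")

# Only Bob, holding bob_key, can recover the plaintext.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at noon'
```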

“We don’t know the full extent of the attack, but Twitter wouldn’t have to worry about whether or not the attacker read, changed, or exfiltrated DMs if they had end-to-end encryption for DMs like we’ve asked them to,” the Electronic Frontier Foundation (EFF) said in an emailed statement. Eva Galperin, EFF’s director of cybersecurity, said the EFF asked Twitter to begin encrypting DMs as part of the EFF’s Fix It Already campaign in 2018.

“They did not fix it,” Galperin said.

Providing end-to-end encryption for direct messages is not an insurmountable challenge for Twitter, says Richard White, adjunct professor of cybersecurity at the University of Maryland Global Campus. Encrypting data in motion can be complex, as it takes a lot of resources and memory for the devices to perform real-time decryption. But many messaging platforms have successfully implemented end-to-end encryption. There are also services that have addressed the challenge of having encrypted messages accessible from multiple devices. The real issue is the magnitude of Twitter’s reach, complexity of infrastructure, and the sheer number of global users, White says. Scaling up what has worked in other cases is not straightforward because the issues become more complex, making the changes “more time-consuming and costly,” White said.

Twitter was working on end-to-end encrypted direct messages back in 2018, Sen. Ron Wyden said in a statement. It is not clear if the project was still underway at the time of the hack or if it had been shuttered.

“If hackers gained access to users’ DMs, this breach could have a breathtaking impact for years to come,” Wyden said.

It is possible the Bitcoin scam was a “head-turning attack” that acted as a smokescreen to hide the attackers’ true objectives, says White. There is precedent for this kind of subterfuge, such as the distributed denial-of-service attack against Sony in 2011, during which attackers compromised 101 million user accounts. Back in 2013, Gartner analyst Avivah Litan warned that criminals were using DDoS attacks to distract bank security staff from detecting fraudulent money transfers.

“Attackers making a lot of noise in one area while secretly coming in from another is a very effective tactic,” White said.

White says it’s unlikely that this attack was intended as a distraction because it was too noisy. Being that obvious undermines the effectiveness of the diversion as it doesn’t give attackers time to carry out their activities. A diversion should not attract attention to the very accounts being targeted.

However, that doesn’t mean the attackers didn’t access any of the victims’ direct messages, and it doesn’t mean they won’t do something with those messages now, even if that wasn’t their primary goal.

“It is unclear what other nefarious activities the attackers may have done behind the scenes,” Heinemeyer said.

More Worries over the Security of Web Assembly

Post Syndicated from David Schneider original https://spectrum.ieee.org/tech-talk/telecom/security/more-worries-over-the-security-of-web-assembly

In 1887, Lord Acton famously wrote, “Power tends to corrupt, and absolute power corrupts absolutely.” He was, of course, referring to people who wield power, but the same could be said for software.

As Luke Wagner of Mozilla described in these pages in 2017, the Web has recently adopted a system that affords software running in browsers much more power than was formerly available—thanks to something called Web Assembly, or Wasm for short. Developers take programs written, say, in C++, originally designed to run natively on the user’s computer, and compile them into Wasm that can then be sent over the Web and run on a standard Web browser. This allows the browser-based version of the program to run nearly as fast as the native one—giving the Web a powerful boost. But as researchers continue to discover, with that additional power comes additional security issues.

One of the earliest concerns with Web Assembly was its use for running software that would mine cryptocurrency using people’s browsers. Salon, to note a prominent example, began in February of 2018 to allow users to browse its content without having to view advertisements so long as they allowed Salon to make use of their spare CPU cycles to mine the cryptocurrency Monero. This represented a whole new approach to web publishing economics, one that many might prefer to being inundated with ads.

Salon was straightforward about what it was doing, allowing readers to opt in to cryptomining or not. Its explanation of the deal it was offering could be faulted perhaps for being a little vague, but it did address such questions as “Why are my fans turning on?”

To accomplish this in-browser crypto-mining, Salon used software developed by a now-defunct operation called CoinHive, which made good use of Web Assembly for the required number crunching. Such mining could also have been carried out in the Web’s traditional in-browser programming language, JavaScript, but much less effectively.

Although there was debate within the computer-security community for a while about whether such cryptocurrency mining really constituted malware or just a new model for monetizing websites, in practice it amounted to malware, with most sites involved not informing their visitors that such mining was going on. In many cases, you couldn’t fault the website owners, who were oblivious that mining code had been sneaked onto their websites.

A 2019 study conducted by researchers at the Technical University of Braunschweig in Germany investigated the top 1 million websites and found Web Assembly to be used in about 1,600 of them. More than half of those instances were for mining cryptocurrency. Another shady use of Web Assembly they found, though far less prevalent, was code obfuscation: hiding malicious actions running in the browser that would be more apparent if done using JavaScript.

To make matters even worse, security researchers have increasingly been finding vulnerabilities in Web Assembly, some that had been known and rectified for native programs years ago. The latest discoveries in this regard appear in a paper posted online by Daniel Lehmann and Michael Pradel of the University of Stuttgart, and Johannes Kinder of Bundeswehr University Munich, submitted to the 2020 Usenix Security Conference, which is to take place in August. These researchers show that Web Assembly, at least as it is now implemented, contains vulnerabilities that are much more subtle than just the possibility that it could be used for surreptitious cryptomining or for code obfuscation.

One class of vulnerabilities stems fundamentally from how Web Assembly manages memory compared with what goes on natively. Web Assembly code runs on a virtual machine, one the browser creates. That virtual machine includes a single contiguous block of memory without any holes. That’s different from what takes place when a program runs natively, where the virtual memory provided for a program has many gaps—referred to as unmapped pages. When code is run natively, a software exploit that tries to read or write to a portion of memory that it isn’t supposed to access could end up targeting an unmapped page, causing the malicious program to halt. Not so with Web Assembly.

Another memory-related vulnerability of Web Assembly arises from the fact that an attacker can deduce how a program’s memory will be laid out simply by examining the Wasm code. For a native application, the computer’s operating system offers what’s called address space layout randomization, which makes it harder for an attacker to target a particular spot in program memory.

To help illustrate the security weaknesses of Web Assembly, these authors describe a hypothetical Wasm application that converts images from one format to another. They imagine that somebody created such a service by compiling a program that uses a version of the libpng library containing a known buffer-overflow vulnerability. That wouldn’t likely be a problem for a program that runs natively because modern compilers include what are known as stack canaries—a protection mechanism that prevents exploitation of this kind of vulnerability. Web Assembly includes no such protections and thus would inherit a vulnerability that was truly problematic.

Although the creators of Web Assembly took pains to make it safe, it shouldn’t come as a great surprise that unwelcome applications of its power and unexpected vulnerabilities of its design have come to light. That’s been the story of networked computers from the outset, after all.

Infinera and Windstream Beam 800 Gigabits Per Second Through a Single Optical Fiber

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/infinera-and-windstream-beam-800-gigabits-per-second-through-a-single-optical-fiber

For the first time, an 800 gigabit per second connection has been made over a live fiber optic link. The connection, a joint test conducted in June by Infinera and Windstream, beamed through a fiber optic line stretching from San Diego to Phoenix. If widely implemented, 800G connections could reduce the costs of operating long-haul fiber networks.

800G should not be confused with the more commonly-known 5G cellular service. In the latter, the “G” refers to the current generation of wireless technology. In fiber optics, the “G” indicates how many gigabits per second an individual cable can carry. For most long-haul routes today, 100G is standard.

The test conducted by Infinera, an optical transmission equipment manufacturer, and Windstream, a service provider, is not the first 800G demonstration, nor is it even the first 800G over long distances. It is, however, the first demonstration over a live network, where conditions are rarely, if ever, as ideal as a laboratory.

“We purposely selected this travel route because of how typical it looks,” says Art Nichols, Windstream’s vice president of architecture and technology.

In a real-world route, amplifiers and repeaters, which boost and regenerate optical signals respectively, are not placed regularly along the route for optimal performance. Instead, they’re placed near where people actually live, work, and transmit data. This means that a setup that might deliver 800 Gbps in a lab may not necessarily work over an irregular live network.

For 800G fiber, 800 Gbps is the maximum data rate possible and usually is not sustainable over very long distances, often falling off after about 100 kilometers. The 800G test conducted by Infinera and Windstream successfully delivered the maximum data rate through a single fiber across more than 730 km. “There’s really a fundamental shift in the underlying technology that made this happen,” says Rob Shore, the senior vice president of marketing at Infinera.

Shore credits Infinera’s Nyquist subcarriers for sustaining maximum data rates over long distances. Named for electrical engineer Harry Nyquist, the subcarriers digitally divide a single laser beam into 8 components.

“It’s the same optical signal, and we’re essentially dividing it or compartmentalizing it into separate individual data streams,” Shore says.

Infinera’s use of Nyquist subcarriers amplifies the effect of another, widely-adopted optical technique: probabilistic constellation shaping. According to Shore, the technique, originally pioneered by Nokia, is a way to “groom” individual optical signals for better performance—including traveling longer distances before suffering from attenuation. Shore says that treating each optical signal as 8 separate signals thanks to the Nyquist subcarriers essentially compounds the effects of probabilistic constellation shaping, allowing Infinera’s 800G headline data rates to travel much further than is typically possible.

What’s next for 800G after this test? “Obviously, the very first thing we need to do is to actually release the product” used for the demonstration, Shore says, which he expects Infinera to do before the end of the year. 800G fiber could come to play an important part in network backhaul, especially as 5G networks come online around the world. All that wireless data will have to travel through the wired infrastructure somehow, and 800G fiber could ensure there will be bandwidth to spare.

The Uncertain Future of Ham Radio

Post Syndicated from Julianne Pepitone original https://spectrum.ieee.org/telecom/wireless/the-uncertain-future-of-ham-radio

Will the amateur airwaves fall silent? Since the dawn of radio, amateur operators—hams—have transmitted on tenaciously guarded slices of spectrum. Electronic engineering has benefited tremendously from their activity, from the level of the individual engineer to the entire field. But the rise of the Internet in the 1990s, with its ability to easily connect billions of people, captured the attention of many potential hams. Now, with time taking its toll on the ranks of operators, new technologies offer opportunities to revitalize amateur radio, even if in a form that previous generations might not recognize.

The number of U.S. amateur licenses has held at an anemic 1 percent annual growth for the past few years, with about 7,000 new licensees added every year for a total of 755,430 in 2018. The U.S. Federal Communications Commission doesn’t track demographic data of operators, but anecdotally, white men in their 60s and 70s make up much of the population. As these baby boomers age out, the fear is that there are too few young people to sustain the hobby.

“It’s the $60,000 question: How do we get the kids involved?” says Howard Michel, former CEO of the American Radio Relay League (ARRL). (Since speaking with IEEE Spectrum, Michel has left the ARRL. A permanent replacement has not yet been appointed.)

This question of how to attract younger operators also reveals deep divides in the ham community about the future of amateur radio. Like any large population, ham enthusiasts are no monolith; their opinions and outlooks on the decades to come vary widely. And emerging digital technologies are exacerbating these divides: Some hams see them as the future of amateur radio, while others grouse that they are eviscerating some of the best things about it.

No matter where they land on these battle lines, however, everyone understands one fact. The world is changing; the amount of spectrum is not. And it will be hard to argue that spectrum reserved for amateur use and experimentation should not be sold off to commercial users if hardly any amateurs are taking advantage of it.

Before we look to the future, let’s examine the current state of play. In the United States, the ARRL, as the national association for hams, is at the forefront, and with more than 160,000 members it is the largest group of radio amateurs in the world. The 106-year-old organization offers educational courses for hams; holds contests where operators compete on the basis of, say, making the most long-distance contacts in 48 hours; trains emergency communicators for disasters; lobbies to protect amateur radio’s spectrum allocation; and more.

Michel led the ARRL between October 2018 and January 2020, and he easily fits the profile of the “average” American ham: The 66-year-old from Dartmouth, Mass., credits his career in electrical and computer engineering to an early interest in amateur radio. He received his call sign, WB2ITX, 50 years ago and has loved the hobby ever since.

“When our president goes around to speak to groups, he’ll ask, ‘How many people here are under 20 [years old]?’ In a group of 100 people, he might get one raising their hand,” Michel says.

ARRL does sponsor some child-centric activities. The group runs twice-annual Kids Day events, fosters contacts with school clubs across the country, and publishes resources for teachers to lead radio-centric classroom activities. But Michel readily admits “we don’t have the resources to go out to middle schools”—which are key for piquing children’s interest.

Sustained interest is essential because potential hams must clear a particular barrier before they can take to the airwaves: a licensing exam. Licensing requirements vary—in the United States no license is required to listen to ham radio signals—but every country requires operators to demonstrate some technical knowledge and an understanding of the relevant regulations before they can get a registered call sign and begin transmitting.

For those younger people who are drawn to ham radio, up to those in their 30s and 40s, the primary motivating factor is different from that of their predecessors. With the Internet and social media services like WhatsApp and Facebook, they don’t need a transceiver to talk with someone halfway around the world (a big attraction in the days before email and cheap long-distance phone calls). Instead, many are interested in the capacity for public service, such as providing communications in the wake of a disaster or at events like city marathons.

“There’s something about this post-9/11 group, having grown up with technology and having seen the impact of climate change,” Michel says. “They see how fragile cellphone infrastructure can be. What we need to do is convince them there’s more than getting licensed and putting a radio in your drawer and waiting for the end of the world.”

New Frontiers

The future lies in operators like Dhruv Rebba (KC9ZJX), who won Amateur Radio Newsline’s 2019 Young Ham of the Year award. He’s the 15-year-old son of immigrants from India and a sophomore at Normal Community High School in Illinois, where he also runs varsity cross-country and is active in the Future Business Leaders of America and robotics clubs. And he’s most interested in using amateur radio bands to communicate with astronauts in space.

Rebba earned his technician class license when he was 9, after having visited the annual Dayton Hamvention with his father. (In the United States, there are currently three levels of amateur radio license, issued after completing a written exam for each—technician, general, and extra. Higher levels give operators access to more radio spectrum.)

“My dad had kind of just brought me along, but then I saw all the booths and the stalls and the Morse code, and I thought it was really cool,” Rebba says. “It was something my friends weren’t doing.”

He joined the Central Illinois Radio Club of Bloomington, experimented with making radio contacts, participated in ARRL’s annual Field Days, and volunteered at the communications booths at local races.

But then Rebba found a way to combine ham radio with his passion for space: He learned about the Amateur Radio on the International Space Station (ARISS) program, managed by an international consortium of amateur radio organizations, which allows students to apply to speak directly with crew members onboard the ISS. (There is also an automated digital transponder on the ISS that allows hams to ping the station as it orbits.)

Rebba rallied his principal, science teacher, and classmates at Chiddix Junior High, and on 23 October 2017, they made contact with astronaut Joe Acaba (KE5DAR). For Rebba, who served as lead control operator, it was a crystallizing moment.

“The younger generation would be more interested in emergency communications and the space aspect, I think. We want to be making an impact,” Rebba says. “The hobby aspect is great, but a lot of my friends would argue it’s quite easy to talk to people overseas with texting and everything, so it’s kind of lost its magic.”

That statement might break the hearts of some of the more experienced hams recalling their tinkering time in their childhood basements. But some older operators welcome the change.

Take Bob Heil (K9EID), the famed sound engineer who created touring systems and audio equipment for acts including the Who, the Grateful Dead, and Peter Frampton. His company Heil Sound, in Fairview Heights, Ill., also manufactures amateur radio technology.

“I’d say wake up and smell the roses and see what ham radio is doing for emergencies!” Heil says cheerfully. “Dhruv and all of these kids are doing incredible things. They love that they can plug a kit the size of a cigar box into a computer and the screen becomes a ham radio…. It’s all getting mixed together and it’s wonderful.”

But there are other hams who think that the amateur radio community needs to be much more actively courting change if it is to survive. Sterling Mann (N0SSC), himself a millennial at age 27, wrote on his blog that “Millennials Are Killing Ham Radio.”

It’s a clickbait title, Mann admits: His blog post focuses on the challenge of balancing support for the dominant, graying ham population while pulling in younger people too. “The target demographic of every single amateur radio show, podcast, club, media outlet, society, magazine, livestream, or otherwise, is not young people,” he wrote. To capture the interest of young people, he urges that ham radio give up its century-long focus on person-to-person contacts in favor of activities centered on human-to-machine or machine-to-machine communication.

These differing interests are manifesting in something of an analog-to-digital technological divide. As Spectrum reported in July 2019, one of the key debates in ham radio is its main function in the future: Is it a social hobby? A utility to deliver data traffic? And who gets to decide?

Those questions have no definitive or immediate answers, but they cut to the core of the future of ham radio. Loring Kutchins, president of the Amateur Radio Safety Foundation, Inc. (ARSFi)—which funds and guides the “global radio email” system Winlink—says the divide between hobbyists and utilitarians seems to come down to age.

“Younger people who have come along tend to see amateur radio as a service, as it’s defined by FCC rules, which outline the purpose of amateur radio—especially as it relates to emergency operations,” Kutchins (W3QA) told Spectrum last year.

Kutchins, 68, expanded on the theme in a recent interview: “The people of my era will be gone—the people who got into it when it was magic to tune into Radio Moscow. But Grandpa’s ham radio set isn’t that big a deal compared to today’s technology. That doesn’t have to be sad. That’s normal.”

Gramps’ radios are certainly still around, however. “Ham radio is really a social hobby, or it has been a very social hobby—the rag-chewing has historically been the big part of it,” says Martin F. Jue (K5FLU), founder of radio accessories maker MFJ Enterprises, in Starkville, Miss. “Here in Mississippi, you get to 5 or 6 o’clock and you have a big network going on and on—some of them are half-drunk chattin’ with you. It’s a social group, and they won’t even talk to you unless you’re in the group.”

But Jue, 76, notes that the ham radio space has fragmented significantly beyond rag-chewing and DXing (making very long-distance contacts), a change he credits to the shift toward digital. That’s where MFJ has moved with its antenna-heavy catalog of products.

“Ham radio is connected to the Internet now, where with a simple inexpensive handheld walkie-talkie and through the repeater systems connected to the Internet, you’re set to go,” he says. “You don’t need an HF [high-frequency] radio with a huge antenna to talk to people anywhere in the world.”

To that end, last year MFJ unveiled the RigPi Station Server: a control system made up of a Raspberry Pi paired with open-source software that allows operators to control radios remotely from their iPhones or Web browsers.

“Some folks can’t put up an antenna, but that doesn’t matter anymore because they can use somebody else’s radio through these RigPis,” Jue says.

He’s careful to note the RigPi concept isn’t plug and play—“you still need to know something about networking, how to open up a port”—but he sees the space evolving along similar lines.
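
To give a flavor of that kind of networked rig control, here is a minimal sketch in Python, assuming a radio exposed over TCP by Hamlib’s open-source rigctld daemon on its default port 4532; the hostname is hypothetical, and RigPi’s own software stack may differ in its details.

```python
import socket

HOST = "rigpi.local"  # hypothetical hostname of the Raspberry Pi
PORT = 4532           # rigctld's default TCP port

def rigctl(command: str) -> str:
    """Send one rigctld command over the network and return the reply."""
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        return sock.recv(1024).decode("ascii").strip()

rigctl("F 14074000")  # tune the remote rig to 14.074 MHz (value in hertz)
rigctl("M USB 2400")  # upper sideband, 2.4-kHz passband
print(rigctl("f"))    # read the frequency back to confirm
```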

“It’s all going more and more toward digital modes,” Jue says. “In terms of equipment I think it’ll all be digital at some point, right at the antenna all the way until it becomes audio.”

The Signal From Overseas

Outside the United States, there are some notable bright spots, according to Dave Sumner (K1ZZ), secretary of the International Amateur Radio Union (IARU). This collective of national amateur radio associations around the globe represents hams’ interests to the International Telecommunication Union (ITU), a specialized United Nations agency that allocates and manages spectrum. In fact, in China, Indonesia, and Thailand, amateur radio is positively booming, Sumner says.

China’s advancing technology and growing middle class, with disposable income, have led to a “dramatic” increase in operators, Sumner says. Indonesia, as an island nation, is subject to natural disasters, spurring interest in emergency communication, and its president is a licensed operator. Trends in Thailand are less clear, Sumner says, but he believes that there, too, a desire to build community response teams is driving curiosity about ham radio.

“So,” Sumner says, “you have to be careful not to subscribe to the notion that it’s all collapsing everywhere.”

China is also changing the game in other ways, putting cheap radios on the market. A few years ago, an entry-level handheld UHF/VHF radio cost around US $100. Now, thanks to Chinese manufacturers like Baofeng, you can get one for under $25. HF radios are changing, too, with the rise of software-defined radio.

“It’s the low-cost radios that have changed ham radio and the future thereof, and will continue to do so,” says Jeff Crispino, CEO of Nooelec, a company in Wheatfield, N.Y., that makes test equipment and software-defined radios, in which demodulating a signal is done in code rather than in hardwired electronics. “SDR was originally primarily for military operations because they were the only ones who could afford it, but over the past 10 years, this stuff has trickled down to become $20 if you want.” Activities like plane and boat tracking, and weather satellite communication, were “unheard of with analog” but are made much easier with SDR equipment, Crispino says.
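
What “demodulating in code” means can be shown in just a few lines. Below is a minimal sketch of a narrowband FM discriminator in Python; the capture file is a hypothetical stand-in for IQ samples streamed from an SDR dongle.

```python
import numpy as np

# Hypothetical recording of complex IQ samples from an SDR front end.
iq = np.load("capture.npy")

# FM puts the message in the rate of phase change, so the angle between
# consecutive samples recovers the audio. This one line is the software
# replacement for the discriminator circuit in an analog FM receiver.
audio = np.angle(iq[1:] * np.conj(iq[:-1]))
```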

Nooelec often hears from customers about how they’re leveraging the company’s products. For example, about 120 members of the group Space Australia teamed up to collect data from the Milky Way as a community project. They are using an SDR and a low-noise amplifier from Nooelec with a homemade horn antenna to detect the radio signal from interstellar clouds of hydrogen gas.
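
A hydrogen-line observation of that sort boils down to averaging many power spectra centered on the line’s rest frequency until the faint emission rises above the noise. The sketch below is modeled on the pyrtlsdr library’s API; the tuning and averaging parameters are illustrative assumptions, not Space Australia’s actual setup.

```python
import numpy as np
from rtlsdr import RtlSdr  # pip install pyrtlsdr

FC = 1420.405751e6  # rest frequency of the 21-cm hydrogen line, in hertz
FS = 2.4e6          # sample rate, in samples per second

sdr = RtlSdr()
sdr.center_freq = FC
sdr.sample_rate = FS
sdr.gain = "auto"

NFFT, SWEEPS = 2048, 1000
spectrum = np.zeros(NFFT)
for _ in range(SWEEPS):
    # Each sweep is noisy; summing many power spectra averages the noise
    # down so the hydrogen emission can emerge as a bump near 1420.4 MHz.
    samples = sdr.read_samples(NFFT)
    spectrum += np.abs(np.fft.fftshift(np.fft.fft(samples))) ** 2
sdr.close()

# Plotting `spectrum` against these absolute frequencies reveals the line.
freqs = FC + np.fft.fftshift(np.fft.fftfreq(NFFT, 1 / FS))
```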

“We will develop products from that feedback loop—like for hydrogen line detection, we’ve developed accessories for that so you can tap into astronomical events with a $20 device and a $30 accessory,” Crispino says.

Looking ahead, the Nooelec team has been talking about how to “flatten the learning curve” and lower the barrier to entry, so that the average user—not only the technically adept—can explore and develop their own novel projects within the world of ham radio.

“It is an increasingly fragmented space,” Crispino says. “But I don’t think that has negative connotations. When you can pull in totally unique perspectives, you get unique applications. We certainly haven’t thought of it all yet.”

The ham universe is affected by the world around it—by culture, by technology, by climate change, by the emergence of a new generation. And amateur radio enthusiasts are a varied and vibrant community of millions of operators, new and experienced and old and young, into robotics or chatting or contesting or emergency communications, excited or nervous or pessimistic or upbeat about what ham radio will look like decades from now.

As Michel, the former ARRL CEO, puts it: “Every ham has [their] own perspective. What we’ve learned over the hundred-plus years is that there will always be these battles—AM modulation versus single-sideband modulation, whatever it may be. The technology evolves. And the marketplace will follow where the interests lie.”

About the Author

Julianne Pepitone is a freelance technology, science, and business journalist and a frequent contributor to IEEE Spectrum. Her work has appeared in print, online, and on television outlets such as Popular Mechanics, CNN, and NBC News.

Intel, NSF Invest $9 Million in Machine Learning for Wireless Networks Projects

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/wireless/intel-nsf-invest-9-million-machine-learning-wireless-networks

Intel and the National Science Foundation (NSF) have awarded a three-year grant to a joint research team from the University of Southern California (USC) and the University of California, Berkeley, to study how to deliver distributed machine learning computations over wireless edge networks, with the aim of enabling a broad range of new wireless applications. The award was part of Intel’s and the NSF’s Machine Learning for Wireless Networking Systems effort, a multi-university research program to accelerate “fundamental, broad-based research” on wireless-specific machine learning techniques that can be applied to new wireless systems and architecture designs.

The hope is that machine learning can manage the size and complexity of next-generation wireless networks. Intel and the NSF focused the program on harnessing discoveries in machine learning to design new algorithms, schemes, and communication protocols that can handle the density, latency, and throughput demands of complex networks. In total, US $9 million has been awarded to 15 research teams.
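
One concrete example of the genre is federated averaging, a widely studied way to run distributed machine learning at the edge: devices train on their own data, and a server only ever sees averaged model updates. The minimal sketch below illustrates the kind of computation such grants target, not the funded teams’ specific methods.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a device's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
devices = []
for _ in range(10):  # ten edge devices, each holding its own data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

weights = np.zeros(2)
for _ in range(100):  # each round: broadcast, local training, averaging
    local = [local_update(weights, X, y) for X, y in devices]
    weights = np.mean(local, axis=0)  # the server averages the updates

print(weights)  # converges near true_w; raw data never leaves a device
```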