
How To Build a Radio That Ignores Its Own Transmissions

Post Syndicated from Joel Brand original https://spectrum.ieee.org/telecom/wireless/how-to-build-a-radio-that-ignores-its-own-transmissions

Wireless isn’t actually ubiquitous. We’ve all seen the effects: Calls are dropped and Web pages sometimes take forever to load. One of the most fundamental reasons why such holes in our coverage occur is that wireless networks today are overwhelmingly configured as star networks. This means there’s a centrally located piece of infrastructure, such as a cell tower or a router, that communicates with all of the mobile devices around it in a starburst pattern.

Ubiquitous wireless coverage will happen only when a different type of network, the mesh network, enhances these star networks. Unlike a star network, a mesh network consists of nodes that communicate with one another as well as end-user devices. With such a system, coverage holes in a wireless network can be filled by simply adding a node to carry the signal around the obstruction. A Wi-Fi signal, for example, can be strengthened in a portion of a building with poor reception by installing a node that communicates with the main router.

However, current wireless mesh-network designs have limitations. By far the biggest is that a node in a mesh network will interfere with itself as it relays data if it uses the same frequency to transmit and receive signals. So current designs send and receive on different frequency bands. But spectrum is a scarce resource, especially for the heavily trafficked frequencies used by cellular networks and Wi-Fi. It can be hard to justify devoting so much spectrum to filling in coverage holes when cell towers and Wi-Fi routers do a pretty good job of keeping people connected most of the time.

And yet, a breakthrough here could bring mesh networks into even the most demanding and spectrum-intensive networks, for example ones connecting assembly floor robots, self-driving cars, or drone swarms. And indeed, such a breakthrough technology is now emerging: self-interference cancellation (SIC). As the name implies, SIC makes it possible for a mesh-network node to cancel out the interference it creates by transmitting and receiving on the same frequency. The technology literally doubles a node’s spectral efficiency, by eliminating the need for separate transmit and receive frequencies.

There are now tens of billions of wireless devices in the world. At least 5 billion of them are mobile phones, according to the GSM Association. The Wi-Fi Alliance reports more than 13 billion Wi-Fi equipped devices in use, and the Bluetooth Special Interest Group predicts that more than 7.5 billion Bluetooth devices will be shipped between 2020 and 2024. Now is the time to bring wireless mesh networks to the mainstream, as wireless capability is built in to more products—bathroom scales, tennis shoes, pressure cookers, and too many others to count. Consumers will expect them to work everywhere, and SIC will make that possible, by enabling robust mesh networks without coverage holes. Best of all, perhaps, they’ll do so by using only a modest amount of spectrum.

Cellphones, Wi-Fi routers, and other two-way radios are considered full-duplex radios, meaning they can both send and receive signals, often with separate transmitters and receivers. Typically, radios transmit and receive using either frequency-division duplexing, in which transmit and receive signals occupy two different frequencies, or time-division duplexing, in which they share the same frequency but take turns. The downside of both techniques is that each frequency band is used to only half of its potential at any given moment—to either send or receive, not both.
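
The bookkeeping behind that half-utilization point can be sketched in a few lines of Python. This is a toy accounting, not drawn from any real radio standard: assume each band can carry one unit of traffic per direction, and tally what each duplexing scheme actually delivers.

```python
# Toy model of duplexing schemes. Each band can carry 1 unit of traffic
# per direction; the values below express what each scheme delivers.

def two_way_capacity(scheme):
    if scheme == "fdd":          # two bands, each used in one direction only
        return {"bands": 2, "send": 1.0, "receive": 1.0}
    if scheme == "tdd":          # one band, time split between directions
        return {"bands": 1, "send": 0.5, "receive": 0.5}
    if scheme == "full_duplex":  # one band, both directions at once
        return {"bands": 1, "send": 1.0, "receive": 1.0}
    raise ValueError(scheme)
```

Per band of spectrum, FDD and TDD each deliver one unit of two-way traffic, while full duplex on the same frequency delivers two—the doubling of spectral efficiency described earlier.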

It’s been a long-standing goal among radio engineers to develop full duplex on the same frequency, which would make maximal use of spectrum by transmitting and receiving on the same band at the same time. Think of the other duplexing schemes as a two-lane highway, with traffic traveling in opposite directions in separate lanes. Full duplex on the same frequency would be like building a single lane with cars driving in both directions at once. That’s nonsensical for traffic, perhaps, but entirely possible in radio engineering.

To be clear, full duplex on the same frequency remains a goal that radio engineers are still working toward. Self-interference cancellation is bringing radios closer to that goal, by enabling a radio to cancel out its own transmissions and hear other signals on the same frequency at the same time, but it is not a perfected technology.

SIC is just now starting to emerge into mainstream use. In the United States, there are at least three startups bringing SIC to real-world applications: GenXComm, Lextrum, and Kumu Networks (where I am vice president of product management). There are also a handful of substantial programs developing self-cancellation techniques at universities, namely Columbia, Stanford (where Kumu Networks got its start), and the University of Texas at Austin.

Upon first consideration, SIC might seem simple. After all, the transmitting radio knows exactly what its transmit signal will be, before the signal is sent. Then, all the transmitting radio has to do is cancel out its own transmit signal from the mixture of signals its antenna is picking up in order to hear signals from other radios, right?

In reality, SIC is more complicated, because a radio signal must go through several steps before transmission that can affect the transmitted signal. A modern radio, such as the one in your smartphone, starts with a digital version of the signal to be transmitted in its software. However, in the process of turning the digital representation into a radio-frequency signal for transmission, the radio’s analog circuitry generates noise that distorts the RF signal, making it impossible to use the signal as-is for self-cancellation. This noise cannot be easily predicted because it partly results from ambient temperatures and subtle manufacturing imperfections.

The sheer difference in power between an interfering transmitted signal and a desired received signal also confounds cancellation. The power emitted by the radio’s amplifier is many orders of magnitude greater than the power of the signals it receives. It’s like trying to hear someone whispering to you from several feet away while you’re simultaneously shouting at them.
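
To put rough numbers on that disparity—the figures below are illustrative, not from the article—a 1-watt transmitter sitting beside a received signal of 10⁻¹⁰ watts is ten orders of magnitude louder, as a quick decibel calculation shows.

```python
import math

def power_ratio_db(p_strong_watts, p_weak_watts):
    """Ratio between two signal powers, expressed in decibels."""
    return 10.0 * math.log10(p_strong_watts / p_weak_watts)

# Hypothetical figures: a 1 W (30 dBm) transmit signal next to a
# 1e-10 W (-70 dBm) received signal.
gap_db = power_ratio_db(1.0, 1e-10)  # 100 dB: a factor of ten billion
```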

Furthermore, the signal that arrives at the receiving antenna is not quite the same as it was when the radio sent it. By the time it returns, the signal also includes reflections from nearby trees, walls, buildings, or anything else in the radio’s vicinity. The reflections become even more complicated when the signal bounces off of moving objects, such as people, vehicles, or even heavy rain. This means that if the radio simply canceled out the transmit signal as it was when the radio sent it, the radio would fail to cancel out these reflections.

So to be done well, self-interference cancellation depends on a mixture of algorithms and analog tricks to account for signal variations created by both the radio’s components and its local environment. The goal is to create a signal that is the inverse of the transmit signal. This inverse signal, when combined with the incoming signal at the receiver, should ideally cancel out the transmit signal entirely—even with the added noise, distortions, and reflections—leaving only the desired received signal. In practice, though, the success of any cancellation technique is measured by how much cancellation it provides.

Kumu’s SIC technique attempts to cancel out the transmit signal at three different times while the radio receives a signal. With this three-tier approach, Kumu’s technique reaches roughly 110 decibels of cancellation, compared with the 20 to 25 dB of cancellation achievable by a typical mesh Wi-Fi access point.
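
Cancellation figures in decibels translate into startlingly different linear factors. A back-of-the-envelope conversion, using only the numbers quoted above, shows what 110 dB versus 25 dB means for residual interference power:

```python
def db_to_power_factor(db):
    """Linear factor by which interference power is reduced,
    given a cancellation figure in decibels."""
    return 10 ** (db / 10.0)

kumu_factor = db_to_power_factor(110)  # interference cut by ~1e11
wifi_factor = db_to_power_factor(25)   # interference cut by ~316
```

In other words, 110 dB of cancellation leaves several hundred million times less residual interference than 25 dB does.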

The first step, which is carried out in the analog realm, is at the radio frequency level. Here, a specialized SIC component in the radio samples the transmit signal, just before it reaches the antenna. By this point, the radio has already modulated and amplified the signal. What this means is that any irregularities caused by the radio’s own signal mixer, power amplifier, and other components are already present in the sample and can therefore be canceled out by simply inverting the sample taken and feeding it into the radio’s receiver.
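
A toy simulation can illustrate why sampling after the amplifier matters. The model below is my own sketch—the distortion model is invented, not Kumu’s. It gives the transmit chain a mild nonlinearity and noise, then compares subtracting the clean digital template against subtracting a sample tapped after the amplifier:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

tx_digital = np.cos(2 * np.pi * 0.05 * t)  # clean digital transmit waveform

# Model the analog chain (mixer, power amplifier) as gain plus a mild
# nonlinearity and noise -- distortions the digital copy knows nothing about.
pa_out = 10.0 * tx_digital + 0.3 * tx_digital**3 + 0.01 * rng.standard_normal(n)

desired = 1e-3 * np.cos(2 * np.pi * 0.08 * t)  # weak signal from another radio
rx = pa_out + desired                          # what the antenna hears

# Naive cancellation: subtract the (scaled) clean digital template.
naive_residual = rx - 10.0 * tx_digital

# RF-stage cancellation: subtract a sample tapped after the amplifier,
# which already carries the distortion and can therefore remove it.
sampled_residual = rx - pa_out

def residual_power(x):
    return float(np.mean(x**2))
```

Subtracting the post-amplifier sample leaves essentially only the weak desired signal, while the naive digital subtraction leaves the nonlinear distortion behind.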

The next step, also carried out in the analog realm, cancels out more of the transmitting signal at the intermediate-frequency (IF) level. Intermediate frequencies, as the name suggests, are a middle step between a radio’s creation of a digital signal and the actual transmitted signal. Intermediate frequencies are commonly used to reduce the cost and complexity of a radio. By using an intermediate frequency, a radio can reuse components like filters, rather than including separate filters for every frequency band and channel on which the radio may operate. Both Wi-Fi routers and cellphones, for example, first convert digital signals to intermediate frequencies in order to reuse components, and convert the signals to their final transmit frequency only later in the process.

Kumu’s SIC technique tackles IF cancellation in the same way it carries out the RF cancellation. The SIC component samples the IF signal in the transmitter before it’s converted to the transmit frequency, modulated, and amplified. The IF signal is then inverted and applied to the receive signal after the receive signal has been converted to the intermediate frequency. An interesting aspect of Kumu’s SIC technique to note here is that the sampling steps and cancellation steps happen as an inverse of one another. In other words, while the SIC component samples the IF signal before the RF signal in the transmitter, the component applies cancellation to the RF signal before the IF signal.

The third and final step in Kumu’s cancellation process applies an algorithm to the received signal after it has been converted to a digital format. The algorithm compares the remaining received signal with the original transmit signal from before the IF and RF steps. The algorithm essentially combs through the received signal for any lingering effects that might possibly have been caused by the transmitter’s components or the transmitted signal reflecting through the nearby environment, and cancels them.
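
The article doesn’t disclose Kumu’s algorithm, but a standard way to sketch this digital step is a least-mean-squares (LMS) adaptive filter, which learns the lingering channel effects from the known transmit samples and subtracts its best estimate from the received signal. The channel taps and signal levels below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
tx = rng.standard_normal(n)              # known transmit samples (digital)

# Unknown residual channel: direct leakage plus two delayed reflections.
h_true = np.array([0.5, 0.0, 0.2, -0.1])
self_interference = np.convolve(tx, h_true)[:n]
desired = 0.01 * rng.standard_normal(n)  # weak signal we want to hear
rx = self_interference + desired

# LMS adaptive filter: learn the channel taps from (tx, rx), then cancel.
taps = 4
w = np.zeros(taps)
mu = 0.01                                # adaptation step size
out = np.zeros(n)
for i in range(taps, n):
    x = tx[i - taps + 1:i + 1][::-1]     # most recent 'taps' tx samples
    e = rx[i] - w @ x                    # residual after cancellation
    out[i] = e
    w += mu * e * x                      # gradient step toward true channel

residual_power = float(np.mean(out[n // 2:] ** 2))
interference_power = float(np.mean(self_interference ** 2))
```

Once the taps converge, the residual power drops to roughly that of the weak desired signal—the digital counterpart of the analog stages described above.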

None of these steps is 100 percent effective. But taken together, they can reach a level of cancellation sufficient to remove enough of the transmit signal to enable the reception of other reasonably strong signals on the same frequency. This cancellation is good enough for many key applications of interest, such as the Wi-Fi repeater described earlier.

As I mentioned before, engineers still haven’t fully realized full-duplex-on-the-same-frequency radios. For now, SIC is being deployed in applications where the transmitter and receiver are close to one another, or even in the same physical chassis, but not sharing the same antenna. Let’s take a look at a few important examples.

Kumu’s technology is already commercially deployed in 4G Long Term Evolution (LTE) networks, where a device called a relay node can plug coverage holes, thanks to SIC. A relay node is essentially a pair of two-way radios connected back-to-back. The first radio of the pair, oriented toward a 4G cell tower, receives signals from the network. The second radio, oriented toward the coverage hole, passes on the signals, on the same frequency, to users in the coverage hole. The node also receives signals from users in the coverage hole and relays them—again, on the same frequency—to the cell tower. Relay nodes perform similarly to traditional repeaters and extenders, which expand a coverage area by repeating a broadcast signal farther from its source. The difference is that relay nodes do so without amplifying noise, because they decode and regenerate the original signal rather than just boosting it.
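
The difference between boosting and regenerating can be seen in a small baseband simulation—entirely my own illustration, using BPSK symbols and arbitrary noise levels. Amplify-and-forward passes the first hop’s noise along, while decode-and-forward makes a hard decision at the relay and retransmits clean symbols:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bits = 50000
bits = rng.integers(0, 2, n_bits)
symbols = 2.0 * bits - 1.0               # BPSK: 0 -> -1, 1 -> +1

noise_std = 0.4
hop1 = symbols + noise_std * rng.standard_normal(n_bits)  # tower -> node

# Amplify-and-forward: the node boosts what it heard, noise included,
# and the second hop adds its own noise on top.
af_out = hop1 + noise_std * rng.standard_normal(n_bits)

# Decode-and-forward: the node decides each bit, then retransmits a
# clean symbol; only second-hop noise reaches the user.
regenerated = np.sign(hop1)
df_out = regenerated + noise_std * rng.standard_normal(n_bits)

af_errors = int(np.sum((af_out > 0) != (bits == 1)))
df_errors = int(np.sum((df_out > 0) != (bits == 1)))
```

At these (made-up) noise levels, the regenerating relay makes roughly a third as many bit errors as the amplifying one, because it never forwards accumulated noise.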

Because a relay node fully retransmits a signal, its transmitter facing the 4G base station must not interfere with its receiver facing the coverage hole. Remember that a big problem in reusing spectrum is that transmit signals are orders of magnitude louder than receive signals. You don’t want the node’s retransmissions to drown out the user signals it’s trying to relay. Likewise, you don’t want the transmitter facing the coverage hole to overwhelm signals coming in from the cell tower. SIC prevents each radio from drowning out the signals the other is listening for, by canceling out its own transmissions.

Ongoing 5G network deployments offer an even bigger opportunity for SIC. 5G differs from previous cellular generations in its use of small cells, essentially miniature cell towers placed 100 to 200 meters apart. 5G networks require small cells because they use higher-frequency, millimeter-wave signals, which do not travel as far as other cellular frequencies. The Small Cell Forum predicts that by 2025, more than 13 million 5G small cells will be installed worldwide. Every single one of those small cells will require a dedicated link, called a backhaul link, that connects it to the rest of the network. The vast majority of those backhaul links will be wireless, because the alternative—fiber optic cable—is more expensive. Indeed, the 5G industry is developing a set of standards called Integrated Access and Backhaul (IAB) to create more robust and efficient wireless backhaul links.

IAB, as the name suggests, has two components. The first is access, meaning the ability of local devices such as smartphones to communicate with the nearest small cell. The second is backhaul, meaning the ability of the small cell to communicate with the rest of the network. The first proposed schemes for 5G IAB were to either let access and backhaul communications take turns on the same high-speed channel, or to use separate channels for the two sets of communications. Both come with major drawbacks. The problem with sharing one channel is the time delays it introduces for latency-sensitive applications like virtual reality and multiplayer gaming. On the other hand, using separate channels incurs a substantial cost: You’ve doubled the amount of often-expensive wireless spectrum you need to license for the network. In both cases, you’re not making the most efficient use of the wireless capacity.

As in the LTE relay-node example, SIC can cancel the transmit signal from an access radio on a small cell at the backhaul radio’s receiver, and similarly, cancel the transmit signal from a backhaul radio on the same small cell at the access radio’s receiver. The end result is that the cell’s backhaul radio can receive signals from the wider network even as the cell’s access radio is talking to nearby devices.

Kumu’s technology is not yet commercially deployed in 5G networks using IAB, because IAB is still relatively new. The 3rd Generation Partnership Project, which develops protocols for mobile telecommunications, froze the first round of IAB standards in June 2020, and since then, Kumu has been refining its technology through industry trials.

One last technology worth mentioning is Wi-Fi, which is starting to make greater use of mesh networks. A home Wi-Fi network, for example, now needs to reach PCs, TVs, webcams, smartphones, and any smart-home devices, regardless of their location. A single router may be enough to cover a small house, but bigger houses, or a small office building, may require a mesh network with two or three nodes to provide complete coverage.

Current popular Wi-Fi mesh techniques allocate some of the available wireless bands for dedicated internal communication between the mesh nodes. By doing so, they give up on some of the capacity that otherwise could have been offered to users. Once again, SIC can improve performance by making it possible for internal communications and signals from devices to use the same frequencies simultaneously. Unfortunately, this application is still a way off compared with 4G and 5G applications. As it stands, it’s not currently cost effective to develop SIC technology for Wi-Fi mesh networks, because these networks typically handle much lower volumes of traffic than 4G and 5G base stations.

Mesh networks are being increasingly deployed in both cellular and Wi-Fi networks. The two technologies are becoming more and more similar in how they function and how they’re used, and mesh networks can address the coverage and backhaul issues experienced by both. Mesh networks are also easy to deploy and “self-healing,” meaning that data can be automatically routed around a failed node. Robust 4G LTE mesh networks are already being greatly improved by SIC-enabled operation on a single frequency. I expect the same to happen in the near future with 5G and Wi-Fi networks.

And it will arrive just in time. The tech trend in wireless is to squeeze more and more performance out of the same amount of spectrum. SIC is quite literally doubling the amount of available spectrum, and by doing so it is helping to usher in entirely new categories of wireless applications.

This article appears in the March 2021 print issue as “The Radio That Can Hear Over Itself.”

About the Author

Joel Brand is the vice president of product management at Kumu Networks, in Sunnyvale, Calif.

Chaos Engineering Saved Your Netflix

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/telecom/internet/chaos-engineering-saved-your-netflix

To hear Greg Orzell tell it, the original Chaos Monkey tool was simple: It randomly picked a virtual machine hosted somewhere on Netflix’s cloud and sent it a “Terminate” command. Unplugged it. Then the Netflix team would have to figure out what to do.

That was a decade ago now, when Netflix moved its systems to the cloud and subsequently navigated itself around a major U.S. East Coast service outage caused by its new partner, Amazon Web Services (AWS).

Orzell is currently a principal software engineer at GitHub and lives in Mainz, Germany. As he recently recalled the early days of Chaos Monkey, Germany got ready for another long round of COVID-related pandemic lockdowns and deathly fear. Chaos itself raced outside.

But while the coronavirus wrenched daily life upside-down and inside out, a practice called chaos engineering, applied in computer networks, might have helped many parts of the networked world limp through their coronavirus-compromised circumstances.

Chaos engineering is a kind of high-octane active analysis, stress testing taken to extremes. It is an emerging approach to evaluating distributed networks, running experiments against a system while it’s in active use. Companies do this to build confidence in their operation’s ability to withstand turbulent conditions.

Orzell and his Netflix colleagues built Chaos Monkey as a Java-based tool using the AWS software development kit. The tool acted almost like a random number generator. But when Chaos Monkey told a virtual machine to terminate, it was no simulation. The team wanted systems that could tolerate host servers and pieces of application services going down. “It was a lot easier to make that real by saying, ‘No, no, no, it’s going to happen,’ ” Orzell says. “We promise you it will happen twice in the next month, because we are going to make it happen.”
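
The essence of the original tool fits in a few lines. The sketch below is not Netflix’s code—the real Chaos Monkey issued genuine Terminate calls through the AWS SDK, and the instance names here are invented—but it captures the loop: pick a running instance at random and kill it, then let the team see what breaks.

```python
import random

class ChaosMonkey:
    """Toy sketch of the Chaos Monkey idea: pick a random running
    instance in a service group and terminate it, forcing the rest
    of the system to cope."""

    def __init__(self, instances, seed=None):
        self.instances = dict(instances)  # id -> "running" / "terminated"
        self.rng = random.Random(seed)

    def unleash(self):
        running = [i for i, s in self.instances.items() if s == "running"]
        if not running:
            return None
        victim = self.rng.choice(running)
        # In the real tool this was an actual cloud 'Terminate' API call.
        self.instances[victim] = "terminated"
        return victim

monkey = ChaosMonkey(
    {"api-1": "running", "api-2": "running", "api-3": "running"}, seed=42
)
victim = monkey.unleash()
```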

In controlled and small but still significant ways, chaos engineers see if systems work by breaking them—on purpose, on a regular basis. Then they try to learn from it. The results show if a system works as expected, but they also build awareness that even in an engineering organization, things fail. All the time.

As practiced today, chaos engineering is more refined and ambitious still. Subsequent tools could intentionally slow things down to a crawl, send network traffic into black holes, and turn off network ports. (One related app called Chaos Kong could scale back company servers inside an entire geographic region. The system would then need to be resilient enough to compensate.) Concurrently, engineers also developed guardrails and safety practices to contain the blast radius. And the discipline took root.

At Netflix, chaos engineering has evolved into a platform called the Chaos Automated Platform, or ChAP, which is used to run specialized experiments. (See “Spawning Chaos,” above.) Nora Jones, a software engineer and the founder and chief executive of a startup called Jeli, helped implement ChAP while still at Netflix. Teams, she says, need to understand when and where to experiment. “Creating chaos in a random part of the system is not going to be that useful for you,” she says. “There needs to be some sort of reasoning behind it.”

Of course, the novel coronavirus has added entirely new kinds of chaos to network traffic. Traffic fluctuations during the pandemic did not all go in one direction either, says AWS principal solutions architect Constantin Gonzalez. Travel services like the German charter giant Touristik Union International (TUI), for instance, drastically pulled in their sails as traffic ground to a halt. But the point of building resilient networks is to make them elastic, he says.

Chaos engineering is geared for this. As an engineering mind-set, it alludes to Murphy’s Law, developed during moonshot-era rocket science: If something can go wrong, it will go wrong.

It’s tough to say whether the practice kept the groaning networks up and running during the pandemic; there are a million variables. But for those using chaos-engineering techniques—even a business as far-flung and traditional as DBS Bank, a consumer and investment institution in Singapore with US $437 billion in assets—the techniques helped. DBS is three years into a network resiliency program, site-reliability engineer Harpreet Singh says, and even as the program got off the ground in early 2018 the team was experimenting with chaos-engineering tools.

And chaos seems to be catching. Jones’s Jeli startup delivers a strategic view on what she calls the catalyst events (events that might be simulated or sparked by chaos engineering), which show the difference between how an organization thinks it works and how it actually works. Gremlin, a four-year-old San Jose venture, offers chaos-engineering tools as service products. In January, the company also issued its first “State of Chaos Engineering” report for 2021. In a blog post announcing the publication, Gremlin vice president of marketing Aileen Horgan described chaos-engineering conferences these days as topping 3,500-plus registrants. Gremlin’s user base alone, she noted, has conducted nearly 500,000 chaos-engineering system attacks to date.

Gonzalez says AWS has been using chaos-engineering practices for a long time. This year—as the networked world, hopefully, recovers from stress that tested it like never before—AWS is launching a fault-injection service for its cloud customers to use to run their own experiments.

Who knows how they’ll be needed.

Treat Smart City Tech like Sewers, or Better

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/treat-smart-city-tech-like-sewers-or-better

Smart cities, like many things, took a beating in 2020. Alphabet, Google’s parent company, pulled its Sidewalk Labs subsidiary out of a smart-city project in Toronto. Cisco killed its plans to sell smart-city technology. And in many places, city budgets will be affected for years to come by the pandemic’s economic shock, making it more difficult to justify smart-city expenses.

That said, the pandemic also gave municipalities around the world reason to invest in new technologies for public transportation, contact tracing, and enforcing social distancing. In a way, the present moment is an ideal time for a new understanding of smart-city infrastructure and a new way of paying for it.

Cities need to think of their smart-city efforts as infrastructure, like roads and sewers, and as such, they need to think about investing in it, owning it, maintaining it, and controlling how it’s used in the same ways as they do for other infrastructure. Smart-city deployments affect the citizenry, and citizens will have a lot to say about any implementation. The process of including that feedback and respecting citizens’ rights means that cities should own the procurement process and control the implementation.

In some cases, citizen backlash can kill a project outright, as it did with Sidewalk’s Toronto project, where the dispute centered on who exactly had access to the data collected by the smart-city infrastructure. Even when cities do get permission from citizens for deployments, the end results are often neighborhood-size “innovation zones” that are little more than glorified test beds. A truly smart city needs a master plan, citizen accountability, and a means of funding that grants the city ownership.

One way to do this would be for cities to create public authorities, just like they do when investing in public transportation or even health care. These authorities should have publicly appointed commissioners who manage and operate the sensors and services included in smart-city projects. They would also have the ability to raise funds using bond issues backed by the revenue created by smart-city implementation.

For example, consider a public safety project that requires sensors at intersections to reduce collisions. The project might use the gathered data to meet its own safety goals, but the insights derived from analyzing traffic patterns could also be sold to taxi companies or logistics providers.

These sales would underpin repayment of the bonds issued to pay for the technology’s deployment and management. While some municipal bonds mature on 10- to 30-year time frames, there are also bonds with 3- to 5-year terms that would be better suited to the shorter life spans of technologies like traffic-light sensors.

Even if bonds and public authorities aren’t the right way to proceed, owning and controlling the infrastructure has other advantages. Smart-city contracts could employ local contractors and act not just as a source of revenue for cities but also as an economic development tool that can create jobs and a halo effect to draw in new companies and residents.

For decades, cities have invested in their infrastructure using public debt. If cities invest in smart-city technologies the same way, they could give their citizens a bigger stake in the process, create new streams of revenue for the city, and improve quality of life. After all, people deserve to live in smarter cities, rather than innovation zones.

This article appears in the March 2021 print issue as “Smarter Smart Cities.”

Facebook’s Australian Struggles Cast Shadow on Net Policy Around the World

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/telecom/internet/facebook-drops-news-ban-after-australia-compromises-on-media-bargaining-code

Facebook users in Australia last week found they couldn’t access news and government web pages. That was by design. The social network removed these links in response to a proposed Australian government bill that would require social media networks to pay news organizations for use of their content. The ban on news access is actually part of a larger struggle between Facebook and media players like Rupert Murdoch’s News Corporation over the future of news content on social media.

That battle ended (for now) when an agreement was brokered after hurried bargaining resulted in the Australian government making concessions. Now, if digital platforms like Facebook and Google negotiate commercial deals with news organizations that can be shown to contribute “to the sustainability of the Australian news industry,” they could avoid being subject to the new law. In addition, the companies will receive at least one month’s notice before “a last resort” arbitration is proposed if such negotiations fail.

Plenty of questions, however, remain: Will Facebook consent to other countries enacting similar policy on them? What might other countries’ deals with other players (including Google and Microsoft) look like? And will any of this matter to Facebook and Internet users beyond Australia’s borders?

The answer to that final question, at least, is very likely Yes.    

The battle began brewing back in April 2020 when the Australian government asked its Competition & Consumer Commission to draw up a news media bargaining code that would “address bargaining power imbalance between Australian news media businesses and digital platforms, specifically Google and Facebook.”

As the Media Code neared enactment, Google threatened in January to pull its search engine from Australia. In the same month, Mel Silva, managing director of Google Australia and New Zealand, stated before the Australian Senate Economics Legislation Committee that, “The principle of unrestricted linking between websites is fundamental to Search. Coupled with the unmanageable financial and operational risk if this version of the Code were to become law, it would give us no real choice but to stop making Google Search available in Australia.” 

Australian prime minister Scott Morrison’s reply was immediate. “We don’t respond to threats,” he told the press the same day. 

Microsoft, a long-time competitor of Google and Facebook, was quick to exploit its rivals’ difficulties. After endorsing the government’s Media Code, Microsoft president Brad Smith wrote on his blog in February that Microsoft “committed that its Bing search service would remain in Australia and that it is prepared to share revenues with news organizations under the rules that Google and Facebook are rejecting.” He added that the company would support “a similar proposal in the United States, Canada, The European Union, and other countries.”

Such developments apparently gave Google second thoughts, and it began negotiating directly with several Australian media companies, including News Corp., which announced on February 17 that “it has agreed to an historic multi-year partnership with Google to provide trusted journalism from its news sites around the world in return for significant payments by Google.”

The Facebook Fight

In a formal announcement, William Easton, managing director of Facebook Australia & New Zealand said on Tuesday that after discussions, “We are satisfied that the Australian government has agreed to a number of changes and guarantees that address our core concerns about allowing commercial deals that recognize the value our platform provided to publishers relative to the value we receive from them.”

Campbell Brown, Facebook’s vice president of global news partnerships, added in a separate announcement that, “We’re restoring news on Facebook in Australia in the coming days. From now on, the government has clarified we will retain the ability to decide if news appears on Facebook so that we won’t automatically be subject to a forced negotiation.”

The amendments may help diminish the criticism that deals like Google’s News Corp. agreement will only stuff the pockets of media owners such as Rupert Murdoch, who, critics like Kara Swisher claim, has unduly influenced the Australian government’s legislation.

Certainly, News Corp., which owns The Wall Street Journal, The New York Post, and The Australian, as well as the U.K.’s The Times and The Sun, has suffered from the online dominance of Google and Facebook in Australia and around the world. Before the ascendancy of both these online giants, newspapers were a thriving business, making money by selling copies and advertisements. In Australia today, according to the country’s competition watchdog, for every A$100 in media advertising revenues, Google accounts for $53, Facebook for $28, and the remaining $19 is shared by content providers like News Corp.

Meanwhile, Facebook will now have to deal with the fallout from its news blackout. U.K. member of parliament Julian Knight described the ban as a “crass move” and “the worst type of corporate culture,” while the U.K. News Media Association said the move “demonstrates why robust regulation is urgently needed.”

Canada also condemned the Facebook move. Heritage Minister Steven Guilbeault, who is drawing up the country’s own media code, said he has talked with French, German and Finnish counterparts about collaborating to ensure published content would be properly compensated. He added that he expected “ten, fifteen countries [would soon be] adopting similar rules.”

When Our Connected Devices Lack Their Connections

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/telecom/wireless/when-our-connected-devices-lack-their-connections

QR (Quick Response) codes have undergone something of a renaissance during the pandemic: Where I live in Australia, we use them to check in at a restaurant or cinema or classroom. They anchor the digital record of our comings and goings to better assist contact tracers. It’s annoying to have to check into such venues this way—and even more annoying when it doesn’t work.

Not long ago, I went to a local café, only to have the check-in process fail. I thought the problem might be my smartphone, so my companion gave it a go, using a different phone, OS, and carrier. No luck. We later learned the entire check-in infrastructure for our state had gone down. Millions of people were similarly vexed—I would argue completely needlessly.

Nearly all the apps on our smartphones—and on our desktop computers—rest on networked foundations. Over the last generation, network access has become ubiquitous, useful, cheap, and stable. While such connectivity has obvious benefits, it seems to have simultaneously encouraged a form of shortsightedness among app developers, who find it hard to imagine that our phones might ever be disconnected.

Before the arrival of the smartphone, we encountered connected devices only rarely. Now they number in the tens of billions—each designed around a persistently available network. Cut off from the network, these devices usually fail completely, as the contact-tracing app did for a time for so many Australians.

People who develop firmware or apps for connected devices tend to write code for two eventualities: One assumes perfect connectivity; the other assumes no connectivity at all. Yet our devices nearly always reside in a gray zone. Sometimes things work perfectly, sometimes they don’t work at all, and sometimes they only work intermittently.

It’s that third situation we need to keep front of mind. I could have checked into my café, only to have the data documenting my visit uploaded later (after some panicked IT administrator had located and rebooted the failed server). But the developers of this system weren’t thinking in those terms. They should have taken better care to design an app that could fail gracefully, retaining as much utility as possible, while patiently waiting for a network connection to be reestablished.
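The graceful failure described here is, in essence, a store-and-forward queue: record the check-in locally first, then upload whenever connectivity returns. A minimal sketch of the idea in Python follows; the class, file name, and upload callback are hypothetical illustrations, not the actual app’s design.

```python
import json
import os
import time


class CheckInQueue:
    """Store-and-forward queue: records check-ins locally and uploads
    them whenever the network becomes available again."""

    def __init__(self, path, upload_fn):
        self.path = path            # local file for persisting pending records
        self.upload_fn = upload_fn  # callable that raises ConnectionError when offline
        self.pending = []
        if os.path.exists(path):
            with open(path) as f:
                self.pending = json.load(f)

    def check_in(self, venue_id):
        # Record locally first, so the visit is never lost.
        self.pending.append({"venue": venue_id, "ts": time.time()})
        self._persist()
        self.flush()  # then try to upload opportunistically

    def flush(self):
        # Upload as many pending records as the network allows.
        remaining = []
        for record in self.pending:
            try:
                self.upload_fn(record)
            except ConnectionError:
                remaining.append(record)  # keep for the next attempt
        self.pending = remaining
        self._persist()

    def _persist(self):
        with open(self.path, "w") as f:
            json.dump(self.pending, f)
```

The caller never sees a network error: a failed upload simply leaves the record queued, and a later `flush()` (on a timer, or when connectivity is detected) drains it.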

Later that same day, I tried to summon my smartphone-app-based loyalty card when checking out at the supermarket. The app wouldn’t launch. “Yeah,” the teenage cashier said, “The mobile signal is horrible in here.”

“But the app should work, even with a crappy signal,” I insisted.

“Everything’s connected,” he counseled, leaving it there.

Like fish in water, we now swim in a sea of our own connectivity. We can’t see it, except in those moments when the water suddenly recedes, leaving us flapping around on the bottom, gasping for breath.

Typically, such episodes are not much more than a temporary annoyance, but there are times when they become much more serious—such as when a smartwatch detects atrial fibrillation or a sudden fall and needs to signal that the wearer is in distress. In a life-or-death situation, an app needs to provide defense in depth to get its message out by every conceivable path: Bluetooth, mesh network—even an audio alarm.
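That “every conceivable path” strategy amounts to a prioritized fallback chain. A minimal sketch, with hypothetical channel names, might look like this:

```python
def send_alert(message, channels):
    """Defense in depth: try each delivery path in priority order,
    falling through to the next on failure. Returns the name of the
    channel that succeeded, or None if every path failed."""
    for name, send in channels:
        try:
            send(message)
            return name
        except Exception:
            continue  # this path is down; try the next one
    return None
```

In practice each channel would wrap real Bluetooth, mesh, or audio APIs; the point is that the device keeps trying until some path, however degraded, gets the message out.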

As engineers, we need to remember that wireless networking is seldom perfect and design for the shadowy world of the sometimes connected. Our aim should be to build devices that do as well as they possibly can, connected or not.

This article appears in the March 2021 print issue as “Spotty Connections.”

Alphabet’s Loon Failed to Bring Internet to the World. What Went Wrong?

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/why-did-alphabets-loon-fail-to-bring-internet-to-the-world

Loon’s soaring promise to bring Internet access to the world via high-altitude balloons deflated last week, when the company announced that it will be shutting down. With the announcement, Loon became the latest in a list of tech companies that have been unable to realize the lofty goal of universal Internet access.

The company, a subsidiary of Alphabet (which is also Google’s parent), sought to bring the Internet to remote communities that were otherwise too difficult to connect.

It’s not entirely surprising that Loon wasn’t able to close the global connectivity gap, even though the shutdown announcement itself seemingly came out of the blue. While the company had experienced some success in early trials and initial deployments, the reality is that the inspiring mission to “connect the next (or last) billion users” touted by tech companies is more difficult than they often realize.

But first, why did Loon fail? According to the January 21 blog post by Loon CEO Alastair Westgarth that announced the company’s fate, Loon wasn’t able to find a way “to get the costs low enough to build a long-term, sustainable business.”

Loon, originally spun out of Alphabet’s research subsidiary X, seemed to have some good things going for it. For one, the balloon-based business model promised to skirt the need for heavy infrastructure investments. Instead, each balloon could provide 4G LTE coverage over 11,000 square kilometers—an area that would otherwise require 200 cell towers to cover. The balloons could communicate with each other using wireless millimeter-wave backhaul links. All that was needed was one or two robust ground connections to the Internet backbone, and the balloons would theoretically be able to cover an entire country wirelessly.
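A back-of-envelope check of those figures, assuming idealized circular footprints, shows why the model looked attractive: one balloon covering 11,000 square kilometers replaces 200 towers of roughly 55 km² each.

```python
import math

balloon_area_km2 = 11_000   # coverage quoted for one Loon balloon
towers_replaced = 200       # ground towers needed for the same area

area_per_tower = balloon_area_km2 / towers_replaced        # 55 km² per tower
balloon_radius_km = math.sqrt(balloon_area_km2 / math.pi)  # ~59 km
tower_radius_km = math.sqrt(area_per_tower / math.pi)      # ~4.2 km
```

In other words, a single platform with a roughly 59-km coverage radius stands in for 200 towers of about a 4-km radius apiece, which is where the promised infrastructure savings came from.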

The Loon team even figured out how to get the balloons to more or less stay put once they were in the air, which Sal Candido, Loon’s chief technology officer, once told IEEE Spectrum was the biggest technical challenge facing the concept.

Loon first rose to prominence when its balloons provided wireless coverage to Puerto Rico after Hurricane Maria devastated the island’s telecommunications infrastructure in September 2017. The company also created mobile hotspots in Peru in the wake of natural disasters on several occasions. Most auspiciously, the company deployed a commercial network in Kenya in partnership with service provider Telkom Kenya in July 2020.

Any network deployment is going to cost money, of course. And part of the problem with companies like Loon, according to Sonia Jorge, is that they expect unreasonable returns. Jorge is the executive director of the Alliance for Affordable Internet (A4AI), a global initiative to reduce the costs of Internet access, particularly in developing areas. In the tech industry, “very fast growth has yielded expectations of very high and very fast returns,” Jorge says, but in reality, such returns are uncommon. That goes double for companies connecting poorer or more remote communities.

It’s also possible that Loon, while lowering the cost of delivering wireless coverage in a country like Kenya, failed to lower it enough. For example, if the typical monthly cost of Internet access is US $50 more than the average person can afford, and Loon is able to shrink that gap to $25, the service is still too expensive.

Which is not to say that Loon’s efforts were foolish or meaningless. “We absolutely need to experiment and innovate,” says Jorge. “People at Loon will take their knowledge and carry it elsewhere, in ways we can’t even imagine yet.”  Loon’s technological developments are already being carried over to another X project, Taara, which hopes to replace expensive fiber optic cable deployments with cheaper, over-the-air optical beams. Loon’s tech also has potential applications in satellite communications.

Technological innovation can only be part of the solution. According to Jorge, bringing affordable access to everyone also requires innovative policy, and being realistic about assumptions like how quickly you can reallocate spectrum for a new wireless use. It also means working with and taking inspiration from small-scale efforts by rural providers that are already making affordable Internet a reality.

And although tech companies like Loon may plummet because they underestimate the challenges facing them, the reality is that providing global Internet access is not impossible. According to a 2020 report from A4AI, the cost of providing affordable 4G-equivalent access to everyone over the age of 10 on the planet by 2030 is about $428 billion—about the same amount of money the world spends on soda per year.

The biggest piece of advice Jorge would give to tech companies like Loon that are looking to make a serious impact in expanding affordable access is to know what they’re getting into. In other words, don’t guess at how easily you’ll be able to access spectrum, or deploy in an area, or anything else. Work with local partners. “Just because you’re a wonderful inventor,” she says, “don’t think you know it all. Because you don’t.”

Smart Surface Sorts Signals—The “Metaprism” Evaluated

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/wireless/could-you-receive-data-using-a-metaprism

The search for better ways to optimize and direct wireless communication is never-ending. These days the uses of metamaterials—marvels of material science engineered to have properties not found in naturally-occurring stuff—also seem to be similarly boundless. Why not use the latter to attempt the former? 

This was the basic idea behind a recent theoretical study, published earlier this month in IEEE Transactions on Wireless Communications. In it, Italian scientists describe a means of directing wireless communication signals using a reflective surface made from a metamaterial. This meta “prism” differently redirects microwave and radio signals depending on their frequencies, much the way conventional prisms bend light differently depending on a light beam’s color.

Earlier generations of wireless networks relied on lower-frequency electromagnetic waves, which easily penetrate objects and are omnidirectional, meaning a signal can be broadcast over a general area and does not require much directional precision to be received.

But as these lower frequencies became overburdened with increasing demand, network providers have started using shorter wavelengths, which yield substantially higher rates of data transmission—but require more directional precision. These higher frequencies also must be relayed around objects that the waves cannot penetrate. 5G networks, now being rolled out, require such precision. Future networks that rely on even higher frequencies will too. 

Davide Dardari, an associate professor at the University of Bologna, in Italy, co-authored the new study. “The idea is rather simple,” he says. “The metaprism we proposed… [is] made of metamaterials capable of modifying the way an impinging electromagnetic field is reflected.”

Just like a regular prism refracts white light to create a rainbow, this proposed metaprism could be used to create and direct an array of different frequencies, or sub-carrier channels. Sub-carrier frequencies would then be assigned to different users and devices based on their positioning.
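The actual angle-to-frequency mapping would depend on the metasurface design, but the principle can be illustrated with a toy model. The sketch below assumes, purely for illustration, a linear dispersion between reflection angle and subcarrier index over a hypothetical 28-GHz band; none of these numbers come from the paper.

```python
def assign_subcarrier(user_angle_deg, f_min_ghz, f_max_ghz, n_subcarriers,
                      angle_min_deg=-60.0, angle_max_deg=60.0):
    """Toy model of frequency-selective reflection: the metaprism sends
    each subcarrier toward a different angle, so a user's angular
    position determines which subcarrier index (and center frequency)
    serves them. Linear dispersion is assumed for illustration only."""
    if not angle_min_deg <= user_angle_deg <= angle_max_deg:
        raise ValueError("user outside the metaprism's field of view")
    # Map the angle to a fraction of the angular range, then to an index.
    frac = (user_angle_deg - angle_min_deg) / (angle_max_deg - angle_min_deg)
    index = min(int(frac * n_subcarriers), n_subcarriers - 1)
    # Center frequency of that subcarrier channel.
    step = (f_max_ghz - f_min_ghz) / n_subcarriers
    return index, f_min_ghz + (index + 0.5) * step
```

With eight subcarriers spread over 28–29 GHz, a user at one edge of the field of view would be assigned the lowest channel and a user at the other edge the highest, which is the "rainbow" behavior the prism analogy describes.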

A major advantage of this approach is that metaprisms, if distributed properly around an area, could passively direct wireless signals, without requiring any power supply. “One or more surfaces designed to act as metaprisms can be deployed in the environment, for example on walls, to generate artificial reflections that can be exploited to enhance the coverage in obstructed areas,” explains Dardari.

He also notes that, because the metaprism would direct the multiple, parallel communication channels in different directions, this will help reduce latency when multiple users are present. “As a [result of these advantages], we expect the cost of the prisms will be much lower than that of the other solutions,” he says.

While this work is currently theoretical, Dardari plans to collaborate with other experts to develop the tailored metamaterials required for the metaprism. He also plans to explore the metaprism’s potential as a means of controlling signal interference.

To Close the Digital Divide, the FCC Must Redefine Broadband Speeds

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/to-close-the-digital-divide-the-fcc-must-redefine-broadband-speeds

The coronavirus pandemic has brought the broadband gap in the United States into stark relief—5.6 percent of the population has no access to broadband infrastructure. But for an even larger percentage of the population, the issue is that they can’t afford access, or they get by on mobile phone plans. Recent estimates, for example, suggest that 15 million to 16 million students—roughly 30 percent of the grade-school population in the United States—lack broadband access for one of these reasons.

The Federal Communications Commission (FCC) has punted on broadband access for at least a decade. With the recent change in the regulatory regime, it’s time for the country that created the ARPANET to fix its broadband access problem. While the lack of access is driven largely by broadband’s high cost, that cost remains such a barrier because the FCC’s current definition of broadband is stuck in the early 2000s.

The FCC defines broadband as a download speed of 25 megabits per second and an upload speed of 3 Mb/s. The agency set this definition in 2015, when it was already outdated. At the time, I was straining a 50 Mb/s connection with just a couple of Netflix streams and working from home. Before 2015, the defined broadband speeds in the United States were an anemic 4 Mb/s down and 1 Mb/s up, set in 2010.

If the FCC wants to address the broadband gap rather than placate the telephone companies it’s supposed to regulate, it should again redefine broadband. The FCC could easily establish broadband as 100 Mb/s down and at least 10 Mb/s up. This isn’t a radical proposal: As of 2018, 90.5 percent of the U.S. population already had access to 100 Mb/s speeds, but only 45.7 percent were tapping into it, according to the FCC’s 2020 Broadband Deployment Report.
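Each of these definitions is just a (download, upload) threshold pair, which makes the proposed change easy to state precisely. A small illustrative sketch follows; the function and constant names are mine, not the FCC’s.

```python
def meets_definition(down_mbps, up_mbps, definition):
    """Return True if a connection meets a (min download, min upload) pair."""
    min_down, min_up = definition
    return down_mbps >= min_down and up_mbps >= min_up


# Threshold pairs cited in the article, in Mb/s (download, upload):
FCC_2010 = (4, 1)
FCC_2015 = (25, 3)
PROPOSED = (100, 10)
```

A common 50/5 cable plan counts as broadband under the 2015 definition but not under the proposed 100/10 one, which is exactly how redefinition would expose underserved areas.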

Redefining broadband will force upgrades where necessary and also reveal locations where competition is low and prices are high. As things stand, most people in need of speeds above 100 Mb/s have only one option: cable providers. Fiber is an alternative, but most U.S. fiber deployments are in wealthy suburban and dense urban areas, leaving rural students and those living on reservations behind. A lack of competition leaves cable providers able to impose data caps and raise fees.

What seems like a lack of demand is more likely a rejection of a high-cost service, even as more people require 100 Mb/s for their broadband needs. In the United States, 100 Mb/s plans cost $81.19 per month on average, according to data from consumer interest group New America. The group gathered broadband prices across 760 plans in 28 cities around the world, including 14 cities in the United States. When compared with other countries, prices in the United States are much higher. In Europe, the average cost of a 100/10 Mb/s plan is $48.48, and in Asia, a similar plan would cost $69.76.

Closing the broadband gap will still require more infrastructure and fewer monopolies, but redefining broadband is a start. With a new understanding of what constitutes reasonable broadband, the United States can proactively create new policies that promote the rollout of plans that will meet the needs of today and the future.

This article appears in the February 2021 print issue as “Redefining Broadband.”

2021 Cybersecurity and IT Failures Roundup

Post Syndicated from Robert N. Charette original https://spectrum.ieee.org/riskfactor/telecom/security/2021-cybersecurity-roundup

The pandemic year just passed once again demonstrates that IT-related failures are universally unprejudiced. Companies large and small, sectors private and public, reputations stellar and scorned: none are exempt. Herewith, the failures, interruptions, crimes and other IT-related setbacks that made the news in 2020.

Aviation: The Year Without Airline Grinches (Almost)

Over the past several years, airline flight delays and cancellations [PDF] related to IT issues have averaged about one per month. The year 2020 kicked off with “technical issues” affecting British Airways’ computerized check-in at London’s Heathrow Airport, which caused more than 100 flight cancellations and delayed numerous other flights. The outage impacted at least 10,000 passengers’ travel plans over two days in February. Then in March, as Covid-19-related government travel bans started to take hold, Delta Air Lines reported “intermittent technical difficulties” for bookings and ticket changes.

Once the travel bans firmly took hold and flying was trimmed back to a minimum, however, no major IT outage was reported after Delta’s. I suspect this hiatus will not last long as airline flight schedules return closer to some semblance of “normal,” perhaps (here’s hoping!) later this year.

Probably the biggest airline IT-related news of the year is the U.S. Federal Aviation Administration’s announcement that the Boeing 737 Max 8 aircraft can resume passenger service once a number of changes [PDF] are made. This may take up to a year to complete for all 450 aircraft that were grounded. The FAA Airworthiness Directive requires “installing new flight control computer (FCC) software, revising the existing [Airplane Flight Manual] to incorporate new and revised flight crew procedures, installing new MAX display system (MDS) software, changing the horizontal stabilizer trim wire routing installations, completing an angle of attack (AOA) sensor system test, and performing an operational readiness flight.” While both Brazil’s and the European Union’s civil aviation authorities have given their approval for the 737 Max to return to flight, others, like Canada’s, may mandate that additional requirements be met.

Boeing’s myriad problems with the Max 8’s software (implicated in the crashes of both Lion Air Flight 610 and Ethiopian Airlines Flight 302) can be reviewed in both the June FAA Inspector General’s report and the final report of the U.S. House Committee on Transportation and Infrastructure investigation. (See also software executive and airplane enthusiast Gregory Travis’s comprehensive 2019 analysis of the 737 Max fiasco for Spectrum.) Whether Boeing or the airlines can convince the public to board the Max remains to be seen, even with American Airlines beginning flights with the aircraft in late December.

Automakers: Software Potholes

Software- and electronics-related recalls show no signs of slowing from their 2019 record levels. The year started off with GM issuing a second software recall to remedy problems caused by its first software recall, issued in December 2019. The original recall and its software fix were aimed at correcting an error that could disable 463,995 2019 Chevrolet Silverado, GMC Sierra and Cadillac CT6 vehicles’ electronic stability control or antilock brake systems without warnings appearing on the dashboard. Unfortunately, the update was flawed. If an owner remotely started their vehicle using GM’s OnStar app, the brakes were disabled—although warnings were shown on the dash. About 162,000 vehicles received the original fix. The new software update seems to have done the trick.

Both Hyundai and Kia Motors, in which Hyundai owns a 34 percent stake, issued a number of recalls in 2020 for moisture problems in electronic circuits that could cause vehicle fires. Owners of many of the affected vehicles were warned to park outside until repairs were made. Hyundai also had to issue a recall to update the Remote Smart Parking Assistant software on its 2020 Sonata and Nexo models; a software error could allow a vehicle to continue moving after a system malfunction.

Other auto manufacturers had their share of recalls as well. Fiat Chrysler Automobiles recalled 318,537 2019 and 2020 cars and trucks because a software error could allow the backup camera to stay on while a vehicle is moving forward. Toyota recalled 700,000 Prius and Prius V models for a software problem that would prevent the cars from entering a failsafe driving mode as intended, while 735,000 Honda Motors 2018-2020 Accord and 2019-2020 Insight vehicles were recalled for software updates to the Body Control Module to prevent the malfunction of one or more electronic components, including the rear-view camera display, turn signals, and windshield wipers. Volkswagen had to slip the rollout of its new all-electric ID.3 models by several months due to software issues.

Given the increasing amount and importance of vehicle software, Toyota launched two new software companies in July under an umbrella company called Woven Planet Holdings to increase the capability and reliability of its vehicles’ automation. Volkswagen created its own software business unit in 2019. Meanwhile, GM announced in November that it would hire another 3,000 workers before the end of the first quarter of 2021 to increase its engineering and software development capabilities. 

Cloud Computing: Intermittent Showers 

While cloud computing is generally reliable, when it is not, the impacts can be widespread and consequential, especially with so many people working or schooling from home. This truism was highlighted by several cloud computing outages this year. In March, Microsoft Azure experienced a six-hour outage attributed to a cooling system failure and another caused by VM capacity constraints. The same month, Google Cloud went down for about 90 minutes, which was ascribed to issues with infrastructure components. In April, GitHub (owned by Microsoft) experienced several disruptions related to multiple different system misconfiguration issues. In June, the IBM Cloud went down for over three hours due to problems linked to an external network provider—and once more later in the month, this time with little explanation. Amazon’s U.S. East Region AWS center suffered disruptions for over six hours in November, affecting a large number of clients, from Adobe to Roku to The Wall Street Journal; the cause was an operating system configuration issue. Multiple Google Cloud services suffered back-to-back service disruptions in December, the first of which lasted for about an hour and affected Gmail, Google Classroom, Nest, and YouTube, among others. That outage was blamed on storage issues with Google’s authentication system. The second unscheduled downtime affected Gmail for nearly seven hours; an email configuration update issue was the culprit this time.

Communications: Hello? Hello? Anyone There?

Recurrent communication problems continued throughout 2020. Several emergency service systems went offline, including Arizona’s 911 system in June, which left 1 million people without service. Hampshire, England’s new £39 million 999 system collapsed in July. Meanwhile, September saw 911 outages across 14 states for about an hour.

T-Mobile, the second-largest wireless carrier in the U.S., was unavailable to many of its customers for nearly 12 hours in June after the introduction of a new network router, preventing 250 million calls nationwide and 23,621 emergency calls to 911 in several states from connecting. Vodafone in Germany experienced an equipment failure that kept 100,000 mobile phone users from making calls for three hours in November.

Disruptions also hit users of the Internet. In May, users of the videoconferencing platform Zoom across the globe had trouble logging into their meetings for about two hours, messaging platform Slack suffered an outage for nearly three hours, and Adobe Creative Cloud users were locked out for most of a day. A configuration error in Internet service company Cloudflare’s backbone network disrupted worldwide online services for about an hour in July. Then in August, Internet service provider CenturyLink went down, taking dozens of online services and a big chunk of worldwide Internet traffic down with it, while in Australia, a DNS issue affected Telstra’s Internet service for a few hours. In September, a problem with Microsoft’s Azure Active Directory kept users in North America from their Microsoft Office 365 accounts and other services for five hours, while in October, a network infrastructure update issue again caused difficulties for North American Microsoft Office 365 and other service users for over four hours. And in December, Google suffered outages on consecutive days. The first was caused by an internal administrative system storage issue and affected more than a dozen Google services, including Docs, Gmail, Nest, YouTube, and its cloud services, for about an hour. The next day, Gmail services were down for up to four hours due to an email configuration issue.

Social media companies suffered their own outages, like Spotify and Tinder (caused by a Facebook issue) in July, Twitter in February and again in October, as well as Facebook across Europe in December.

Cybercrime: The Targets and Costs Increase

The number of records exposed by data breaches and especially unsecured databases continues to skyrocket, with at least 36 billion records exposed as of the end of September 2020. While the number of data breaches seems to have gone down, the number of large unsecured databases discovered seems to be climbing. StealthLabs has a comprehensive compilation of 25 major data breaches by month.

Ransomware attacks increased significantly in 2020, especially those targeting governmental, educational, and hospital systems. Typical were the attacks against the City of Pensacola, Florida; the University of Utah; and the University of Vermont Medical Center. Businesses have not been immune either, with ransomware woes plaguing the likes of electronics company Foxconn, hospital and healthcare services company Universal Health Services, and cybersecurity company Cygilant.

The U.S. Treasury Department’s Office of Foreign Assets Control issued a five-page advisory [PDF] in October warning against paying ransomware demands, stating that it not only encourages more attacks, but it also may run afoul of OFAC regulations and result in civil penalties. Whether the advisory has any impact remains to be seen. Delaware County, Pennsylvania agreed to pay a $500,000 ransom in December, for example.

Nation-state sponsored intrusions have also been prevalent in 2020, such as those against Israel and the UAE. The Russia-attributed “SolarWinds” attack against the U.S. that was initially disclosed in December and then developed into a bigger story has especially caused alarm, with the amount of damage still being unraveled.

In light of how often ransomware attacks are initiated by phishing emails, government agencies and corporations have increased their employee phishing-training, including the use of phishing tests using mock phishing emails and websites. These tests frequently use the same information contained in real phishing emails as a template in order to see how their employees respond. Unfortunately, some of these tests have backfired, causing undue panic or rage among employees as a result. Both Tribune Publishing Co. and GoDaddy recently found out about the latter when their tests were less than well thought out.

Financial Institutions and Markets: Trading Will Resume Tomorrow

The year saw the continuation of bank outages in the UK, beginning on New Year’s Day with millions of customers of Lloyds Banking Group unable to access online and mobile banking services. A few days later, computer problems at Clydesdale and Yorkshire banks kept wages and other payments from reaching customer accounts. Lloyds had another online problem in June, and other UK banks, including Santander, NatWest, and Barclays, experienced their own IT problems in late summer.

Other notable bank IT problems involved U.S. Chase Bank, where “technical issues” created incorrect customer balances in June, and Nigerian First City Monument Bank, where up to 5.1 million customers had trouble accessing their online accounts for four days in July. Also in July, Australian Commonwealth Bank customers suffered a nine-hour online banking outage, while National Australia Bank customers experienced a similar situation in October. A power outage at a data center took out India’s HDFC Bank, interrupting its services for two days in November. HDFC’s November outage, along with previous incidents, led the Reserve Bank of India in December to require HDFC to slow down its modernization efforts to ensure that its banking infrastructure was sufficiently reliable and resilient.

IT problems at stock exchanges and trading platforms were especially abundant this past year. In February, a hardware error halted trading on the Toronto Stock Exchange for two hours, while a software issue caused the Moscow Exchange to suspend trading for 42 minutes in May. Then in July, stock exchanges in Frankfurt, Vienna, Ljubljana, Prague, Budapest, Zagreb, Malta, and Sofia were offline for three hours because of a “technical issue” with the German electronic trading platform Xetra T7, which each exchange used. In October, a technical issue in third-party middleware software was blamed for a trading halt on Euronext exchanges in Amsterdam, Brussels, Dublin, Lisbon, and Paris. The same month, a hardware failure and the subsequent failure of the backup system took down the Tokyo Stock Exchange for a whole day, its worst-ever electronic outage. The problems led to the resignation of TSE Chief Executive Officer Koichiro Miyahara. In November, a software issue caused trading to be suspended on the Australian Stock Exchange for nearly the entire day, its worst outage in more than a decade.

Trading platforms also experienced numerous IT problems. In March, the trading platform Robinhood faced, according to the company’s founders, “stress on our infrastructure.” That stress resulted in three outages in the space of one week, alongside others in June, August, November, and December. J.P. Morgan endured a trading platform problem in March, while Charles Schwab, E-Trade, Fidelity, Merrill Lynch, TD Ameritrade, and Vanguard all had trading system technical issues of their own in November. Charles Schwab, Fidelity, TD Ameritrade, and Interactive Brokers Group joined Robinhood with more outages in December as well.

Government IT: Anyone know COBOL?

The pandemic highlighted the dependence of governments everywhere on legacy IT systems, particularly state unemployment systems. The rapid increase in demand for unemployment benefits and changes in the amount of benefits paid, coupled with the inability to quickly reprogram the benefit systems, hit the unemployment systems of California, Oregon, and Washington State especially hard. Indeed, nearly every state experienced technical problems, including rampant fraud. Computer issues also affected the Internal Revenue Service’s ability to send out congressionally approved stimulus checks in April.

Legacy IT system worries did not just affect the United States. In February, Canadian Prime Minister Justin Trudeau received a report warning that many mission-critical systems were “rusting out and at risk of failure.” Japan’s government pledged in June to modernize its administrative systems, which were criticized for being “behind the world by at least 20 years.” South Korea’s government also promised in June to accelerate its transition to a digital economy.

While unemployment system problems dominated government system woes, there were others in the news as well. Pittsburgh’s new state-of-the-art employee payroll system had an inauspicious start at the beginning of the year. Meanwhile, Ohio’s Cuyahoga County is still awaiting its new $35 million computer system, which is $10 million over budget and already two years late. It may be ready by 2022.

Pay issues involving the infamous Canadian Phoenix government payroll system that went live in 2016 continue to be resolved, with its replacement likely moving to early testing next year. Unfortunately, there is still no resolution for the tens of thousands of innocent unemployed Michigan workers falsely accused of employment fraud by Michigan’s Integrated Data Automated System (MiDAS) between October 2013 and September 2015. The state has fought forcefully, so far without success, to quash a class-action lawsuit for compensation; the case is now before the Michigan Supreme Court again, with what is hopefully a final resolution coming in 2021. Finally, a review found that Ohio’s $1.2 billion benefits system, which went live in 2013, was still riddled with 1,100 defects and was partially responsible for up to $455 million in benefit overpayments and 24,000 backlogged cases in the past year.

Health IT

Medicine’s shift to electronic health records (EHRs) continues to be a bumpy one. In January, the UK government pledged to provide £40 million to streamline logging into National Health Service IT systems. Some staff reportedly must log into as many as 15 different systems each shift. Also in January, it was reported that half of the 23 million records in Australia’s controversial national My Health Record system contain no information, suggesting that its perceived benefits have yet to convince most Australian patients and practitioners. In March, a research paper [PDF] published in the Mayo Clinic Proceedings indicated that U.S. physicians rated the usability of their EHRs an “F,” and that poorly implemented EHRs were contributing to physician burnout.

In May, the U.K.’s National Audit Office reported that the now £8.1 billion IT modernization program being undertaken at the National Health Service is still a jumbled mess that hasn’t learned the lessons of its previous failure. Originally a £4.2 billion program in 2016 that promised a “paperless” NHS by 2020, the target date keeps getting pushed back, and the final cost is likely to be much higher than currently projected. Also in May, a study published in JAMA Network Open indicated hospital EHRs were failing to catch 33 percent of potentially harmful drug interactions and other medication errors, while in June, a study published in JAMA indicated more than 20 percent of patients were finding errors in their EHR notes.

In September, the U.S. Coast Guard began piloting its new EHR system, based on the $4.4 billion Department of Defense Military Health System EHR effort called GENESIS, which is planned to be fully deployed across the DoD by 2024. The Coast Guard terminated its own mismanaged $67 million EHR effort in 2018. In October, after a six-month delay, the Department of Veterans Affairs finally rolled out its initial go-live EHR system at the Mann-Grandstaff Medical Center in Spokane, Washington. The troubled $16.4 billion EHR modernization project is scheduled for completion in 2028, although delays and increased costs are likely over the next seven-plus years.

Finally, in December, the Canadian Broadcasting Corporation made public a briefing note, written in October to Saskatchewan’s Minister of Health, warning that the province’s healthcare IT system was at growing risk of failure because of chronic underfunding. “A major equipment failure which may disrupt service and risk lives appears inevitable with the current funding model,” the note warned. When asked to comment on the note, Health Minister Paul Merriman said he “will be asking Ministry of Health officials to look into this matter and to find ways to improve the systems supported by eHealth.” Why the Minister did not ask in October, when he received the note, was not explained.

Policing: The Computer Wrongly ID’ed You

Issues with automated facial recognition (AFR) continue to dog law enforcement. In the wake of social unrest in the U.S. and ongoing worries over AFR bias, Microsoft and Amazon announced in June that they would suspend selling face recognition software to police departments. IBM went one step further and announced in June that it would no longer work on the technology at all. In August, the use of AFR by British police was ruled unlawful by a Court of Appeals until the government officially approves its use.

Along with the push against the use of AFR, there has been a backlash against the use of predictive policing software. For example, New Orleans, Louisiana, as well as Los Angeles, Oakland, and Santa Cruz, California, have all moved to prohibit the use of predictive policing systems.

In December, the Massachusetts State Police announced that it would suspend using its automatic license plate readers until a time-and-date glitch found to be affecting five years’ worth of data was corrected.

There were other police IT issues in the U.K. as well. In January, it was revealed that an error in the City of London Police’s new crime reporting service, launched in 2018, kept information on over 300,000 fraud crime reports from being shared by the National Fraud Database with the London Police for 15 months. The database is used by major banks, financial institutions, law enforcement, and government organizations to share information about fraud and to help the police with their criminal investigations. Then in October, the U.K.’s Police National Computer experienced a 10-hour outage blamed on “human error,” with one senior police official saying the outage had caused “absolute chaos” across the country’s police forces.

Troubles with iOPS—the late, costly, and controversial new computer system installed by the Greater Manchester Police in the U.K. in late 2019—also persisted unabated throughout the year. The latest crash occurred just last month. Operational difficulties with the system have been linked to a staggering inability by the GMP to record accurate crime data as well. 

Rail Transport: Positive Train Control At Last

Few train or subway IT-related problems were reported this past year. In July, computer and other issues reportedly continued to plague Ottawa’s light rail transit system, while in September, service on the San Francisco area BART system was shut down for about four hours because of a failure of “one of a dozen field network devices.”

The biggest news was that after 12 years, 41 U.S. freight and passenger railroads met (with two days to spare) the federal mandate for deploying positive train control to prevent train accidents such as train-to-train collisions, derailments caused by excessive train speed, train movements through misaligned track switches, and unauthorized train entry into work zones. Vehicle-train collisions and track or equipment failures can still cause train accidents, however. The original deadline was the end of 2015, but that date was shifted back five years as it became clear most railroads would not be able to meet the mandate.

It should be noted that the National Transportation Safety Board first recommended a form of automatic train control back in 1969. 

The Dog That Didn’t Bark

Finally, a note on what did not seem to happen. Typically, every year brings several memorable IT project failures, cancellations, or other major dumpster fires. However, for all its legendary failures and disappointments, 2020 was marked by a dearth of this particular breed of IT catastrophe. There was the Australian government’s Visa Processing Platform outsourcing plan failure that cost AU$92 million, the decision by Nacogdoches (Texas) Memorial Hospital to terminate its $20 million EHR contract for Cerner’s Community Works platform, and the Cyberpunk 2077 launch fiasco, but few others made the news. Whether there indeed were fewer IT project failures (and maybe more successes?), or just fewer reported, should be clearer a year from now at our next review.

St. Helena’s New Undersea Cable Will Deliver 18 Gbps Per Person

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/internet/st-helenas-new-undersea-cable-will-deliver-18-gbps-per-person

Since 1989, the island of St. Helena in the South Atlantic has relied on a single 7.6-meter satellite dish to connect the island’s residents to the rest of the world. While the download and upload speeds have increased over the years, the roughly 4,500 Saints, as the island’s residents call themselves, still share a measly 40-megabit-per-second downlink and a 14.4-Mbps uplink to stay connected. 

But come April, they’ll be getting quite an upgrade: An undersea cable with a maximum capacity of 80 terabits per second will make landfall on the island. That’s a higher data rate than residents can use or afford, so the island’s government is looking to satellite operators to defray the costs of tapping into the cable’s transmissions. However, an incumbent telecom monopoly, and the outdated infrastructure it maintains, could turn the entire project into a cable to nowhere.
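The headline figure follows directly from the numbers in the article. A quick sanity check (assuming, purely for arithmetic's sake, that the full 80 Tbps were split evenly among the roughly 4,500 residents, which no real allocation would do):

```python
# Back-of-the-envelope check of the "18 Gbps per person" headline figure,
# using the capacity and population figures given in the article.
cable_capacity_bps = 80e12   # maximum cable capacity: 80 terabits per second
residents = 4500             # approximate number of Saints

per_person_bps = cable_capacity_bps / residents
print(f"{per_person_bps / 1e9:.1f} Gbps per person")  # ~17.8 Gbps
```

Hence the island's need for large anchor customers: even the cable's minimum single-wavelength delivery dwarfs what the population could plausibly consume.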

Laying an undersea cable to an island with fewer than 5,000 inhabitants is obviously a terrible business idea. Someone has to pay for the seafloor route surveys, the manufacturing and laying of the fiber-optic cable and other hardware, and the ongoing operational costs. St. Helena, part of the British Overseas Territories, was able to pay for its cable only because the European Union’s European Development Fund granted the island €21.5 million in 2018 for that exact purpose.

St. Helena signed a contract with Google in December 2019 under which it would pay for a branch off the Equiano cable, which Google is currently laying from Portugal to South Africa. All that remained was finding someone to use and pay for the bulk of the data once it was flowing to the island. The minimum bandwidth that can be delivered by a single wavelength over the cable is 100 gigabits per second, still too much for St. Helena.

In recent years, several companies have begun launching low Earth orbit (LEO) satellite constellations to provide broadband service, with SpaceX’s Starlink being perhaps the most prominent example. Like any communications satellite, these connect users who can’t tap into terrestrial networks, whether because of distance or geography, by acting as a relay between the user and a ground-station antenna elsewhere that connects to an Internet backbone. The roughly 150-kilogram satellites in LEO constellations offer lower-latency connections compared with those of larger satellites up higher in geostationary orbits, but the trade-off is that they cannot see nearly as much of Earth’s surface. This limitation means the satellites need a scattering of regularly spaced ground stations to complete their connections. 
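The latency advantage of LEO over geostationary orbit is mostly simple physics. A rough sketch (the ~550 km altitude is an assumed, typical Starlink-class value not given in the article; the straight-up, two-hop path ignores slant range and all processing and queuing delays):

```python
# Minimum round-trip propagation delay for user -> satellite -> ground station
# and back, i.e., four traversals of the orbital altitude at the speed of light.
C = 299_792_458  # speed of light in vacuum, m/s

def min_rtt_ms(altitude_km: float) -> float:
    """Idealized round-trip time in milliseconds for two up-down hops."""
    return 4 * altitude_km * 1000 / C * 1000

print(f"LEO (~550 km):   {min_rtt_ms(550):.1f} ms")    # ~7.3 ms
print(f"GEO (35,786 km): {min_rtt_ms(35_786):.1f} ms")  # ~477 ms
```

The two-orders-of-magnitude gap in the physical floor is why LEO constellations accept the cost of many satellites and many ground stations.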

The scheme also creates ocean-size problems in maintaining coverage for airplanes, ships, and islands. These can connect directly to a LEO satellite, but then there needs to be a connection from the satellite to Earth’s terrestrial networks. Hence the need for a ground station with access to a long-haul cable. “It’s critical to find places where there’s a little bit of land,” says Michele Franci, who is in charge of operations at the LEO satellite company OneWeb, one of the companies interested in building on the island. “Without anything, there will be a hole in the coverage.”

St. Helena is one of the few little bits of land in the South Atlantic. So when OneWeb learned of the proposed Equiano cable, the benefits of having a ground station on the island became apparent. “Without that cable, it would not have been a feasible option,” Franci says. OneWeb filed for bankruptcy in March 2020 but has been bought jointly by the British government and the Indian telecom company Bharti Global.

Christian von der Ropp, who launched a campaign in 2011 to connect St. Helena to an undersea cable, sees ground stations as the ideal way to offset the cable’s operating costs. With OneWeb and other companies that have expressed some level of interest such as SpaceX and Maxar paying for the bulk of the throughput, Saints can siphon off the bit of data they need to conduct business and stay in touch with family and friends working off-island.

But von der Ropp, a satellite-telecom consultant, does foresee a last-mile problem in bringing high-speed connections to Saints, and he blames it on the island’s telecom monopoly, Sure. “They are terribly exploiting the islands,” von der Ropp declares. Pay-TV packages can cost £40 per month, which is a lot on an island where the average monthly income comes to about £700. Sure also charges up to 600 percent premiums for any data usage that exceeds the amount in a subscriber’s plan. Von der Ropp says Sure’s infrastructure is insufficient and that Saints would experience throttled connections in the last mile. However, Christine Thomas, the chief executive for Sure’s operations on St. Helena, says that Sure’s prices have been approved by the island government. Thomas also says that the company has invested £3 million in building out the island’s infrastructure since 2013, and recognizes the need for further upgrades to match the cable’s throughput.

Sure’s current contract with the St. Helena government runs through 31 December 2022. While the cable spur off the Equiano cable to St. Helena will land on the island in April, it will not carry data until early 2022, when the entire cable is completed. The island’s government is currently drafting new telecom regulations and exploring the possibility of issuing a license to a different provider after Sure’s term expires.

Meanwhile, von der Ropp, along with island resident Karl Thrower, who worked on communications infrastructure in Europe before moving to the island, plans to create a nonprofit service provider called Saintel as an alternative to Sure. The two propose that Saintel build new millimeter-wave and laser-link infrastructure to provide high-speed connections from the cable’s endpoint and fund it in part by offering its network as a test bed for those technologies.

Despite an entrenched monopoly and poor infrastructure—not to mention the coronavirus pandemic, which severely restricted travel to the island for preliminary cable and ground-station work—St. Helena can act as a model for how to connect other remote islands. OneWeb’s Franci notes that building ground stations on St. Helena will also improve satellite connections for the island of Tristan da Cunha, about 2,400 kilometers to St. Helena’s south. And other parts of the world’s oceans need coverage, too: For example, there are large gaps in LEO coverage below the 45th parallel south that could be plugged by islands with ground stations.

“The places that used to be isolated and not really part of the mainstream now become relevant,” Franci says. While St. Helena will be as remote as ever, at least it will no longer be isolated.

A version of this article appears in the January 2021 print issue as “A Small Island Waits On Big Data Rates.”

Ensure 5G Systems Integrity by using Multiphysics Analysis of Chips, Packages, and Systems

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/ensure-5g-systems-integrity-by-using-multiphysics-analysis-of-chips-packages-and-systems

The evolution of 5G systems is rooted in the consistently increasing need for more data.  Whether the application is future medical systems, autonomous vehicles, smart cities, AR/VR, IoT, or standard mobile communications systems, all require an ever-increasing amount of data.  For 5G systems to reliably work, the physical data pathways must be well understood and reliably designed.  Whether the pathway is in the chip or the wireless channel, large amounts of data must seamlessly flow unimpeded.  This presentation will discuss the various data paths in 5G systems and how Multiphysics analysis can help design these paths to allow for maximum data integrity.


Wade Smith, Manager, 
Application Engineering

Wade Smith is an Applications Engineer Manager at ANSYS, Inc. in the High Frequency Electromagnetics Division where he specializes in Signal Integrity applications via tools such as HFSS, SIwave and Q3D.  Prior to joining ANSYS, Wade worked with Sciperio, Inc. where he conducted multiple projects including antenna, FSS, and wireless sensor applications using 3D Printed fabrication techniques and electronic textiles.  Prior to Sciperio, Wade worked at Harris Corporation where he was involved in projects such as wireless sensor applications, high-speed data-rate systems, 3D embedded RF Filter designs, antennas, and SIP/MCM applications used in the miniaturization of Software Defined Radio systems.  Wade has an M.S.E.E. from the University of Central Florida.

5G Cellular Spectrum Auction—Can’t Tell the Players Without a Scorecard

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/wireless/5g-cellular-spectrum-auctioncant-tell-the-players-without-a-scorecard

Steven Cherry Today’s episode may confuse people and search engines alike—we’ve titled this podcast series Radio Spectrum, but our topic for today is the radio spectrum. Could you hear the capital letters the first time I said “radio spectrum” and all lowercase letters the second time? I thought not.

Anyway, let’s dive in. Here’s a quote and a 10-point quiz question—who said this?

“The most pressing communications problem at this particular time, however, is the scarcity of radio frequencies in relation to the steadily growing demand.”

Everyone who guessed that it was a U.S. President give yourselves a point. If you guessed it was President Harry Truman, in 1950, give yourself the other 9 points.

At the time, he was talking mainly about commercial radio, the nascent technology of television, and ham radios. We didn’t even yet have Telstar, the satellite link in the land-based telephone monopoly. That would come a decade later. A decade after that, Motorola researcher Martin Cooper would make the first mobile telephone call.

By the late ’70s, the Federal Communications Commission was set to allocate spectrum specifically for these new mobile devices. But its first cellular spectrum allocation was a messy affair. The U.S. was divided up into 120 cellular markets, with two licenses each, and in some cases, hundreds of bidders. By 1984, the FCC had switched over to a lottery system. Unsurprisingly, people gamed the system. The barriers to entering the lottery were low, and many of the 37,000 applicants—yes, 37,000 applications—simply wanted to flip the spectrum for a profit if they won.

The FCC would soon move to an auction system. Overnight, the barrier to entry went from very low to very high. One observer noted that these auctions were not “for the weak of heart or those with shallow pockets.”

Cellular adoption grew at a pace no one could anticipate. In 1990 there were 12 million mobile subscriptions worldwide and no data services. Twenty-five years later, there were more than 7 billion subscriber accounts sending and receiving about 50 exabytes per day and accounting for something like four percent of global GDP.

Historically, cellular has occupied a chunk of the radio spectrum that had television transmissions on the one side and satellite use on the other. It should come as no surprise that to meet all that demand, our cellular systems have been muscling out their neighbors for some time.

The FCC is on the verge of yet another auction, to start on December 8. Some observers think this will be the last great auction, at least for a while. It’s for the lower portion of what’s called the C-band, which stretches from 3.7 to 4.2 gigahertz.

Here to sort out the who, what, when, why, and a bit of the how of this auction is Mark Gibson, Senior Director for Business Development and Spectrum Policy at CommScope, a venerable North Carolina-based manufacturer of cellular and other telecommunications equipment—the parent company of Arris, which might be a more recognizable name to our listeners. And most importantly, he’s an EE and an IEEE member.

Mark, welcome to the podcast.

Mark Gibson Thank you, Steven. I love that intro. I learned something. I got one point, but I didn’t get the other nine. So …  but thank you, that was very good.

Steven Cherry Thank you. Mark, maybe we could start with why this band and this particular 280 MHz portion of it is so important that one observer called it the Louisiana Purchase of cellular?

Mark Gibson Well, that’s a great question. Probably the best reason is how much spectrum is being offered. I believe this is the largest contiguous spectrum in the mid-bands, not considering the millimeter wave, which is the spectrum, about 28 gigahertz. Those were offered in large GHz chunks, but they have propagation issues. So this is the mid-band and of course, meaning you know, mid-band spectrum has superior propagation characteristics over millimeter wave. So you have 280 MHz of spectrum that’s in basically the sweet spot of the range—spectrum that Harry Truman didn’t think about when he talked about the dearth. And so that’s the primary reason. A large amount of spectrum in the middle—the upper-middle of the mid-band range—is one of the reasons this is of so much interest.
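The propagation gap Gibson describes between mid-band and millimeter wave can be roughed out with the textbook free-space path-loss formula (an idealized model; real links also differ in antenna gain, blockage, and rain fade):

```python
import math

def fspl_db(freq_hz: float, dist_m: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) - 147.55."""
    return 20 * math.log10(dist_m) + 20 * math.log10(freq_hz) - 147.55

d = 1000                       # a 1 km link
midband = fspl_db(3.7e9, d)    # C-band, ~103.8 dB
mmwave = fspl_db(28e9, d)      # millimeter wave, ~121.4 dB
print(f"Extra loss at 28 GHz vs 3.7 GHz: {mmwave - midband:.1f} dB")  # ~17.6 dB
```

The difference depends only on the frequency ratio (20·log10(28/3.7) ≈ 17.6 dB), so at any given distance a 28 GHz signal arrives roughly 50 times weaker than a 3.7 GHz one under free-space conditions.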

Steven Cherry I’m going to ask you a question about 5G that’s maybe a bit cynical. 5G makes our phones more complex, it uses a ton of battery life, it will often increase latency, something we had a recent podcast episode about, it’s fabulously expensive for providers to roll out, and therefore it will probably increase costs for customers, or at least prevent our already-too-high costs from decreasing. When, if ever, will it be a greater plus than a minus?

Mark Gibson That’s a great question, because when you consider the C-band—the spectrum at 3700 to 3980 megahertz—that … Let me back up a minute. One of the reasons the issues exist for 5G has to do with the fact that, for all intents and purposes, 5G has been deployed in the millimeter-wave bands. If you, for example, read a lot of the consumer press, the Wall Street Journal, and others, when they talk about 5G, they talk about it in the context of millimeter-wave.

They talk about small cells, indoor capability, and the like. When you think about 5G in the context of this spectrum, the C-band, a lot of the concerns you’re talking about, at least in terms of some of the complexity with respect to propagation, begin to diminish. Now, the inherent complexities with 5G still exist in some of those are overcome with a lot of spectrum, especially the fact that the spectrum will be TTD, time division duplex. But I’d say that for the most part, the fact that this band will be the 5G—will be deployed in this mid-band range—will give it superior characteristics over millimeter wave, at least in the fact you’ve got a lot of spectrum and also have it in the lower mid-band so you can travel further.

The other inherent concerns, I guess you can say, about 5G are the same things we heard about 4G versus 3G. So I think it’s just a question of getting used to the Gs as they get bigger. The one main thing that will help this out is this band of spectrum, as well as the band that’s right below it, which is the CBRS band that’s supposed to be a 5G band as well.

Steven Cherry There was already an auction this summer of the band just above this at 3.5 GHz. The process of December’s auction started back in 2018. The auction itself will be held at the end of 2020. What took so long?

Mark Gibson Well, part of the problem is that the band is occupied by … right now I think the number is around 16,500 Earth stations on the ground and then twenty-some satellites in the air. So it took a long time to figure out how to accommodate them. They’re incumbents, and so there’s the age-old question when new spectrum is made available: What to do with the incumbents? Do you share, do you relocate, or do you do both and call it transitional sharing? There was a lot of discussion in that regard around the owners of the spectrum.

The interesting thing here, you have an Earth station on the ground that transmits to the satellite. That could be anybody; a lot of these are broadcasters. NPR has a network, Disney, ESPN, the regular cable broadcasters, and whatnot. Then you have satellite owners who are different, and then you have Earth station owners who are different. So in this ecosystem, there’s three different classes of owners, and trying to sort out their rights to the spectrum was complicated. And then there’s how you deal with them in regard to sharing or relocation. In the end, the commission came down to relocation, and so now the complexity is around, “how do you relocate?” And “relocate” is sort of a generic term. It means to move in spectrum, basically repack them. But how do you repack 16,000 Earth stations into the band above? That would be into the 4.0 to 4.2 gigahertz band. So that’s taken a long time to sort out: who has the rights to the spectrum; how do we make it equitable; how do we repack; and then how do we craft an auction? Those are some of the main reasons it’s taken so long.

Steven Cherry Auction rules have evolved over time, but there’s still a little bit of the problem of bidders bidding with the intention of flipping the spectrum instead of using it themselves. Can you say a word about the 2016 auction?

Mark Gibson Well, that’s a good question. With all the spectrum auctions the commission—the FCC—manages, what they try to do is establish what’s called substantial service requirements to ensure that those that are going to buy the spectrum put it to some use. But that doesn’t totally eliminate speculation. And so with the TV band auction, there was a fair amount of speculation, mostly because that 600 MHz spectrum is really very valuable. That auction went for $39 billion, if I’m not mistaken. But that was another auction that was complicated, because of the situation of TV stations having to be moved. But we saw this in the AWS-3, which is the 1.7 to 2.1 gig bands. A lot of speculation went on, and we saw this to a great extent in the millimeter-wave. In fact, there had been a lot of speculation in the millimeter-wave bands even before the auctions occurred, with companies having acquired the spectrum and hanging onto it, hoping that they could sell it. In fact, several of them did and made a lot of money.

But the way the commission tries to address that primarily is through substantial service requirements over the period of their license term, which is typically 10 years. And what it says loosely is that in order for a licensee to claim their spectrum, they have to put it to some substantial service, which is usually a percent of the rural and percent of the urban population over a period of time. Those changed somewhat, but mostly this is what they try to do.

Steven Cherry Yeah, the 2016 auction was gamed by hedge funds and others who figured out that if they could hold particular slices of the spectrum—particular TV stations—they could keep another buyer from holding a contiguous span of spectrum. I should point out that the designers of the 2016 auction won a Nobel Prize in economics for their innovations. But —.

Mark Gibson Yes.

Steven Cherry — I’m just curious, the hedge funds sold these slices at a premium so that there wouldn’t be a gap in a range of spectrum. What would have happened if that were to be those little gaps?

Mark Gibson Well, if you have gaps in spectrum, it’s not terrible. It’s just you don’t then have a robust use of the spectrum. What our regulator and most regulators endeavor to do in any spectrum auction is to make the allocations or the assignments, if you will, contiguous. In other words, there’s no gaps. You know, if there are gaps, you have a couple of things. One is if you have a gap in the spectrum for whatever reason, and there are people using that gap, then you have adjacent channel concerns. And these are concerns that could give rise to adjacent channel interference from dissimilar services.

With the way that spectrum auction is run, they were able to separate, if you will, bifurcate the auction so that there was a low and a high, and they sold them both at once. Base station frequencies were in the upper portion and the handset frequencies were in a lower portion of that band. And so if you have gaps, then you have problems with who’s in those gaps, and that gives rise to sharing issues. The other thing is, if there’s gaps, the value of the spectrum tends to get diminished because it’s not contiguous. And we’ve seen that in some instances in some of the other bands, and somewhat arcane bands like AWS-4 [Advanced Wireless Service], some of the bands that are owned by Dish and whatnot.

Steven Cherry There’s been a lot of consolidation in the cellular market. In fact, we have only three major players now that T-Mobile acquired Sprint, a merger that finally finished merging earlier this year. So who are the buyers in this auction?

Mark Gibson Well, they run the gamut. I mean, there’s a lot of them there. What’s interesting, a couple of interesting things have come out of it. The Tier 1s are well represented by AT&T, T-Mobile, Sprint, and Verizon. So we expect them all to participate. And interestingly enough, if you want to compare this to the spectrum right below it—the CBRS [Citizens Broadband Radio Service]—for at least a spectrum comparison, T-Mobile only bought eight markets in that auction.

So anyhow, the Tier 1s are well represented, but so are the MVNOs [Mobile Virtual Network Operators]—the cable operators. And what’s interesting is that Charter and Comcast are bidding together. Now, ostensibly, that’s because they want to make sure they de-conflict some of the spectrum situations in market area boundaries. In other words, they’re generally separate in terms of their coverage areas. But there were some concerns that they may have some problems with interference de-confliction at the edges of the boundaries. So they decided to participate together, which is interesting.

You know, at the end of the day, there’s 58 companies that qualified to bid and there are a lot of wireless Internet service providers or we call WISPs, for whom this would be a fairly expensive endeavor, the WISPs did participate in the CBRS auction, but that, by comparison, will probably be easily an order of magnitude below what the C-band will be.

But you look at it, it’s the usual suspects. It’s most everybody that’s participated in a major spectrum auction going way back to probably the first ones, although their names have changed. You have a lot of rural telcos, rural telephone companies, a lot of smaller cable operators. Just generally people that could be speculating. Don’t see a ton of that in this band and this spectrum. But there are some that look like they could be some speculators. But I don’t see a lot of that.

Steven Cherry I think people think of Verizon as the biggest cellular provider in the U.S. With the biggest, best coverage overall, but they’ve been playing catch up when it comes to spectrum.

Mark Gibson Well, their position is interesting. They act like they’re playing catch-up, but they do have a lot of spectrum. They have a lot of spectrum in the millimeter-wave band; they have a lot of 28-gigahertz spectrum from the acquisition of LMDS [Local Multipoint Distribution Service], which is a defunct service. They got a lot of spectrum in the AWS [i.e., Advanced Wireless Service-3] auction. In fact, they were the largest spectrum acquirer in the CBRS auction; I’m sorry, the second-largest. They got a lot of spectrum in the 600-MHz auction. So I don’t know that they’re spectrum-poor compared perhaps to some of the others; it looks like they just want to have a broader spectrum footprint, or spectrum position.

Steven Cherry You mentioned Dish Network before … The T-Mobile–Sprint merger requires that it give up some spectrum to help Dish become a bigger competitor.

Mark Gibson Right.

Steven Cherry I think the government’s intent was that it be a fourth major to replace Sprint. Did that happen? Is that happening? Is Dish heavily involved in this auction?

Mark Gibson Ah, Dish is involved … I’m not sure to what extent heavily. We haven’t seen anything come out yet with respect to down payments, and down payments give you some indication of their interest in the spectrum. I will say that they spent the most money, over a billion dollars, in the CBRS auction, so they were very interested in that. And their spectrum position, like I said, is really a mélange of spectrum across the bands. They have a bunch of spectrum in AWS-4, a bunch of spectrum in the old 1.4 gig. Their spectrum holdings are really sort of all over the place, and they have some spectrum that they risk losing because of this whole substantial-service situation. They haven’t built some of their networks out, and they’re in a unique situation because they’re a company that has not built a wireless network yet.

And so most of us expect that Dish, again, will build their spectrum portfolio as needed, and then they can decide what they want to do. But you’re correct, they were considered by the commission as possibly being able to help with the 2.5-gig spectrum and some of the other spectrum that was made available through the Sprint–T-Mobile merger.

Steven Cherry Auctions are about as exciting as the British game of cricket, which is incredibly boring if you don’t know the rules of the game —

Mark Gibson And going about as long!

Steven Cherry — Right! But it is very exciting if you do understand the rules and the teams and the best players and all that. What else should we be looking out for to get maximum excitement out of this auction?

Mark Gibson Well, I don’t think anybody who’s not interested in spectrum auctions is going to find this exciting, other than the potential amount of money coming into the Treasury. And at the end of the day, $28 to $35 billion is not chump change, so there’s that interest there. And when you compare that to CBRS and other auctions, you know, the auctions generate a fair amount of money. I think the things to keep in mind, just to watch in the auction, are the posturing by the various key players, certainly the Tier 1s and the cable companies.

And just to watch what they do. It’ll also be interesting to watch what the WISPs do, and how the WISPs manage their bidding in some of their market regions. It was interesting in the CBRS auction that the one location that generated the highest per-POP [population] revenue was a place called Loving, Texas, which has a total population, depending upon whom you ask, of between 82 and 140 people. Yet that license went for many millions of dollars. So the dollar-per-MHz-POP, which is a measure of how much the spectrum was worth, was in the hundreds of dollars. That’s the sort of stuff that happens; we all looked at that and kind of scratched our heads. For this one, like I said, it’ll be the typical watching of what the Tier 1s choose to do, and the telcos for that matter, certainly what Dish is going to do and how they do that.
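For readers unfamiliar with the metric Gibson cites, dollar-per-MHz-POP divides a license’s price by its bandwidth times the population it covers. A minimal sketch, using hypothetical figures loosely modeled on the Loving, Texas example (the actual license price and bandwidth are not given in the interview):

```python
def dollars_per_mhz_pop(price_usd, bandwidth_mhz, population):
    """Price paid per megahertz of licensed spectrum per person covered."""
    return price_usd / (bandwidth_mhz * population)

# Hypothetical: a 10-MHz license covering 100 people selling for $500,000
# lands squarely "in the hundreds of dollars" per MHz-POP.
print(dollars_per_mhz_pop(500_000, 10, 100))  # → 500.0
```

By contrast, major-market licenses typically clear at well under a dollar per MHz-POP, which is why the Loving result made auction watchers scratch their heads.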

But you won’t know that during the auction because you don’t know who’s bidding on what per se. A lot of the bidding is closed, and so you won’t know, for example, that AT&T or T-Mobile or Dish or whoever are bidding on a given market. You only know that at the end. What you will know, though, is at the end of the rounds how much the markets are going for. Then you can start making educated guesses as to what’s happening there based on what you might know about a given entity.

It’s also interesting to look at the entity names because there’s the applicant name and then there’s the name of the company and there’s sort of a lot of—it’s not chicanery, but hidden stuff going on. For example, Dish is bidding as Little Bear Wireless. So that’s interesting because they had a different name for the CBRS auction. So anyhow, I will be—and my cohorts will be—watching this auction to sort of see, first of all, how high it goes, how high some of the markets go. And then at the end, when the auction is all done, who bid for what? And then try to piece together what all that means.

Steven Cherry So what will Comcast’s role be in the auction and what would be a good outcome?

Mark Gibson Well, you know, it’s interesting. Comcast does not own spectrum per se; they do a lot of leasing. I think they may have won some spectrum, a little bit of spectrum, in the 600-megahertz band, but they participated in AWS—AWS-1—as part of a consortium called SpectrumCo, and they were there with Cox and two other smaller cable companies. And they ended up getting a bunch of spectrum. I think they ended up bidding $4 billion in that auction and won a bunch but couldn’t figure out what to do with it. The consortium dissolved and the spectrum then was left to various other companies.

So I think for Comcast it’ll be good if they get spectrum in the markets that they want to participate in, obviously. As you probably are aware, the spectrum is going to be awarded in two phases, and I think the auction is going to be done in two phases as well. Phase one is for the A Block, the lower 100 megahertz from 3700 to 3800 [MHz], in the top 46 PEAs, partial economic areas. So that auction will happen, and then the rest of the 180 megahertz will be sold. So the question is, where will they be bidding? Will they be bidding in their DMAs, their market areas, overlaid with the PEAs? I don’t know, but that will be interesting to watch. But to the extent that Comcast gets any spectrum anywhere, I think it’ll be good for them; I’m pretty sure they can turn around and put it to good use. They have interesting plans, you know, with strand-mounted approaches, which are really very interesting. So we’ll see. That will be interesting to watch.

Steven Cherry And what will your company’s role in the auction be—and what would be a good outcome for CommScope?

Mark Gibson Our role is … we don’t participate in the auction. I, and my spectrum colleagues, will be watching it closely, just to kind of play the parlor game of discerning who’s doing what. Of course, when the auction ends, we’ll find out we were all wrong, but that’s okay. But we’ll be looking at who’s doing what, trying to anticipate who’s doing what. Obviously, CommScope will be selling all of the auction winners what we do, which is infrastructure equipment, base station antennas, and everything in our potpourri —

Steven Cherry What would be nice for you guys is if Dish and the cable companies do well, since they have to start from scratch when it comes to equipment.

Mark Gibson True. That’s true. Yeah, that would be good. And we work with all of these folks, so it’ll be interesting to see how that comes together.

Steven Cherry And other than the money in the Treasury, what would be some good outcomes for consumers and citizens?

Mark Gibson Well, that’s a good question. Having this spectrum now for 5G (and actually now we’re hearing 6G) in this band will be very useful for consumers. It’ll mean that your handsets will now be able to operate without putting your head in a weird position to accommodate the millimeter-wave antennas that are inherent in the handsets. It’s right above CBRS, so there’s some contiguity. And there’s another band that’s being looked at, with a rulemaking open on it right now: 3450 to 3550 [MHz], right below CBRS. So that’s another 100 megahertz.

So when you consider that plus the CBRS band plus the C-band, you’re looking now at possibly 530 megahertz of mid-band spectrum. And so for consumers that will open up a lot of the use cases that are unique to 5G, things like IoT and vehicular-technology use cases that can only be partially addressed with the 5.9-GHz band, if you’ve been following that. C-V2X [Cellular Vehicle-to-Everything] capabilities to push more data to the vehicle will be much more doable across these swaths of spectrum, certainly in this band. We’ve seen interesting applications that were borne out of CBRS that should really see a good life in this band, things like precision agriculture and that sort of thing. So I think consumers will benefit from having the 280 contiguous megahertz of mid-band spectrum to enable all of the use cases that 5G claims to enable.
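Gibson’s 530-megahertz tally can be checked directly from the bands he names: the proposed 3.45-GHz band (100 MHz), CBRS (150 MHz), and the C-band being auctioned (280 MHz). A quick sketch:

```python
# Tallying the mid-band spectrum Gibson mentions (all figures in MHz).
bands = {
    "3.45 GHz band (3450-3550, rulemaking open)": 100,
    "CBRS (3550-3700)": 150,
    "C-band (3700-3980)": 280,
}
total_mhz = sum(bands.values())
print(total_mhz)  # → 530
```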

Steven Cherry Well, Mark, as the saying goes, in the American form of cricket—that is, baseball—you can’t tell the players without a scorecard. Thanks for giving us the scorecard and for joining us today.

Mark Gibson Well, my pleasure, Steven. Very good questions. I really enjoyed it. Thanks again.

Steven Cherry That’s very nice of you to say. We’ve been speaking with Mark Gibson of telecommunications equipment provider CommScope about the upcoming auction, December 8th, of more cellular spectrum in the C-band.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity. This interview was recorded November 30th, 2020. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website—where you can sign up for alerts or for our newsletter—or on Spotify, Apple Podcasts, and wherever else you get your podcasts. We welcome your feedback on the Web or in social media.

For Radio Spectrum about the radio spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

Quantum Memory Milestone Boosts Quantum Internet Future

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/milestone-for-quantum-memory-efficiency-makes-quantum-internet-possible

An array of cesium atoms just 2.5 centimeters long has demonstrated a record level of storage-and-retrieval efficiency for quantum memory. It’s a pivotal step on the road to eventually building large-scale quantum communication networks spanning entire continents.

NYU Wireless Picks Up Its Own Baton to Lead the Development of 6G

Post Syndicated from NYU Wireless original https://spectrum.ieee.org/telecom/wireless/nyu_wireless_picks_up_its_own_baton_to_lead_the_development_of_6g

The fundamental technologies that have made 5G possible are unequivocally massive MIMO (multiple-input multiple-output) and millimeter wave (mmWave) technologies. Without these two technologies there would be no 5G network as we now know it.

The two men who were the key architects behind these fundamental technologies for 5G have been leading one of the premier research institutes in mobile telephony since 2012: NYU Wireless, a part of NYU’s Tandon School of Engineering.

Ted Rappaport is the founding director of NYU Wireless and one of the key researchers in the development of mmWave technology. Rappaport also served as the key thought leader for 5G by planting a flag in the ground nearly a decade ago and arguing that mmWave would be a key enabling technology for the next generation of wireless. His earlier work at two other wireless centers that he founded, at Virginia Tech and The University of Texas at Austin, laid the early groundwork that helped NYU Wireless catapult into one of the premier wireless institutions in the world.

Thomas Marzetta, who now serves as the director of NYU Wireless, was the scientist who led the development of massive MIMO while he was at Bell Labs and has championed its use in 5G to where it has become a key enabling technology for it.

These two researchers, who were so instrumental in developing the technologies that have enabled 5G, are now turning their attention to the next generation of mobile communications, and, according to them both, we are facing some pretty steep technical challenges to realizing a next generation of mobile communications.

“Ten years ago, Ted was already pushing mobile mmWave, and I at Bell Labs was pushing massive MIMO,” said Marzetta.  “So we had two very promising concepts ready for 5G. The research concepts that the wireless community is working on for 6G are not as mature at this time, making our focus on 6G even more important.”

This sense of urgency is reflected by both men, who are pushing against any complacency about starting the development of 6G technologies as soon as possible. With this aim in mind, Rappaport, just as he did 10 years ago, has planted a new flag in the world of mobile communications with the publication last year of an IEEE article entitled “Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond.”

“In this paper, we said for the first time that 6G is going to be in the sub-terahertz frequencies,” said Rappaport. “We also suggested the idea of wireless cognition where human thought and brain computation could be sent over wireless in real time. It’s a very visionary look at something. Our phones, which right now are flashlights, emails, TV browsers, calendars, are going to be become much more.”

While Rappaport feels confident that they have the right vision for 6G, he is worried about a lack of awareness of how critical it is for U.S. government funding agencies and companies to develop the enabling technologies for its realization. In particular, both Rappaport and Marzetta are concerned about the economic competitiveness of the United States and the funding challenges that will persist if 6G is not properly recognized as a priority.

“These issues of funding and awareness are critical for research centers, like NYU Wireless,” said Rappaport. “The US needs to get behind NYU Wireless to foster these ideas and create these cutting-edge technologies.”

With this funding support, Rappaport argues, teaching research institutes like NYU Wireless can create the engineers that end up going to companies and making technologies like 6G become a reality. “There are very few schools in the world that are even thinking this far ahead in wireless; we have the foundations to make it happen,” he added.

Both Rappaport and Marzetta also believe that making national centers of excellence in wireless could help to create an environment in which students could be exposed constantly to a culture and knowledge base for realizing the visionary ideas for the next generation of wireless.

“The Federal government in the US needs to pick a few winners for university centers of excellence to be melting pots, to be places where things are brought together,” said Rappaport. “The Federal government has to get together and put money into these centers to allow them to hire talent, attract more faculty, and become comparable to what we see in other countries, where huge amounts of funding are going in to pick winners.”

While research centers, like NYU Wireless, get support from industry to conduct their research, Rappaport and Marzetta see that a bump in Federal funding could serve as both amplification and a leverage effect for the contribution of industrial affiliates. NYU Wireless currently has 15 industrial affiliates with a large number coming from outside the US, according to Rappaport.

“Government funding could get more companies involved by incentivizing them through a financial multiplier,” added Rappaport.

Of course, 6G is not simply about setting out a vision and attracting funding, but also tackling some pretty big technical challenges.

Both men believe that we will need to see the development of new forms of MIMO, such as holographic MIMO, to enable more efficient use of the sub-6 GHz spectrum. Solutions will also need to be developed to overcome the blockage problems that occur with mmWave and higher frequencies.

Fundamental to these technology challenges is accessing new swaths of frequency spectrum so that a 6G network operating in the sub-terahertz frequencies can be achieved. Both Rappaport and Marzetta are confident that technology will enable us to access even more challenging frequencies.

“There’s nothing technologically stopping us right now from 30, and 40, and 50 gigahertz millimeter wave, even up to 700 gigahertz,” said Rappaport. “I see the fundamentals of physics and devices allowing us to take us easily over the next 20 years up to 700 or 800 gigahertz.”

Marzetta added that there is much more that can be done in the scarce and valuable sub-6GHz spectrum. While massive MIMO is the most spectrally efficient wireless scheme ever devised, it is based on extremely simplified models of how antennas create electromagnetic signals that propagate to another location, according to Marzetta, adding, “No existing wireless system or scheme is operating close at all to limits imposed by nature.”

While expanding the spectrum of frequencies and making even better use of the sub-6 GHz spectrum are the foundation for the realization of future networks, Rappaport and Marzetta also expect that we will see increased leveraging of AI and machine learning. This will enable the creation of intelligent networks that can manage themselves with much greater efficiency than today’s mobile networks.

“Future wireless networks are going to evolve with greater intelligence,” said Rappaport. An example of this intelligence, according to Rappaport, is the new way in which the Citizens Broadband Radio Service (CBRS) spectrum is going to be used in a spectrum access server (SAS) for the first time ever.

“It’s going to be a nationwide mobile system that uses these spectrum access servers that mobile devices talk to in the 3.6 gigahertz band,” said Rappaport. “This is going to allow enterprise networks to be a cross of old licensed cellular and old unlicensed Wi-Fi. It’s going to be kind of somewhere in the middle. This serves as an early indication of how mobile communications will evolve over the next decade.”

These intelligent networks will become increasingly important when 6G moves towards so-called cell-less (“cell-free”) networks.

Currently, mobile network coverage is provided through hundreds of roughly circular cells spread out across an area. Now with 5G networks, each of these cells will be equipped with a massive MIMO array to serve the users within the cell. But with a cell-less 6G network the aim would be to have hundreds of thousands, or even millions, of access points, spread out more or less randomly, but with all the networks operating cooperatively together.

“With this system, there are no cell boundaries, so as a user moves across the city, there’s no handover or handoff from cell to cell because the whole city essentially constitutes one cell,” explained Marzetta. “All of the people receiving mobile services in a city get it through these access points, which in principle, every user is served by every access point all at once.”
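Marzetta’s point can be illustrated with a toy numerical sketch (made-up access-point positions and a generic path-loss exponent, not any real cell-free algorithm): when every access point serves a user cooperatively, the aggregate received power necessarily exceeds what the single strongest “cell” alone would deliver.

```python
import math
import random

random.seed(1)

def received_power(d, p_tx=1.0, exponent=3.5, d0=1.0):
    """Received power under a simple power-law path-loss model."""
    return p_tx / max(d, d0) ** exponent

# 200 access points scattered over a 1 km x 1 km area, user at the center.
aps = [(random.uniform(0, 1000), random.uniform(0, 1000)) for _ in range(200)]
user = (500.0, 500.0)
powers = [received_power(math.dist(user, ap)) for ap in aps]

cellular_like = max(powers)  # served only by the single strongest "cell"
cell_free = sum(powers)      # every access point serves the user at once
print(cell_free > cellular_like)  # → True
```

In a real cell-free system, the gain comes from coherent joint processing rather than simple power addition, but the sketch captures why eliminating cell boundaries helps every user.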

One of the obvious challenges of this cell-less architecture is simply the economics of installing so many access points all over a city. You also have to carry all of the signals between each access point and some sort of central point that does all the computing and number crunching.

While this all sounds daunting when thought of in terms of traditional mobile networks, it is conceptually far more approachable when you consider that the Internet of Things (IoT) will create this cell-less network.

“We’re going to go from 10 or 20 devices today to hundreds of devices around us that we’re communicating with, and that local connectivity is what will drive this cell-less world to evolve,” said Rappaport. “This is how I think a lot of 5G and 6G use cases in this wide swath of spectrum are going to allow these low-power local devices to live and breathe.”

To realize all of these technologies, including intelligent networks, cell-less networks, expanded radio frequencies, and wireless cognition, the key factor will be training future engineers. 

To this issue, Marzetta noted: “Wireless communications is a growing and dynamic field that is a real opportunity for the next generation of young engineers.”

4G on the Moon: One Small Leap, One Giant Step

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/wireless/4g-moon

Standing up a 4G/LTE network might seem well below the pay grade of a legendary innovation hub like Nokia Bell Labs. However, standing up that same network on the moon is another matter. 

Nokia and 13 other companies — including SpaceX and Lockheed Martin — have won five-year contracts totaling over US $370 million from NASA to demonstrate key infrastructure technologies on the lunar surface. All of this is part of NASA’s Artemis program to return humans to the moon.

NASA also wants the moon to be a stepping stone for further explorations in the solar system, beginning with Mars.

For that, says L.K. Kubendran, NASA’s lead for commercial space technology partnerships, a lunar outpost would need, at the very least, power, shelter, and a way to communicate.

And the moon’s 4G network will not only be carrying voice and data signals like those found in terrestrial applications; it’ll also be handling remote operations like controlling lunar rovers from a distance. The antennas and base stations will need to be ruggedized for the harsh, radiation-filled lunar environment. Plus, over the course of a typical lunar day-night cycle (28 Earth days), the network infrastructure will face temperature swings of over 250 °C from sunlight to shadow. Of course, every piece of hardware in the network must first be transported to the moon, too.

“The equipment needs to be hardened for environmental stresses such as vibration, shock and acceleration, especially during launch and landing procedures, and the harsh conditions experienced in space and on the lunar surface,” says Thierry Klein, head of the Enterprise & Industrial Automation Lab, Nokia Bell Labs. 

Oddly enough, it’ll probably all work better on the moon, too. 

“The propagation model will be different,” explains Dola Saha, assistant professor in Electrical and Computer Engineering at the University at Albany, SUNY. She adds that the lack of atmosphere as well as the absence of typical terrestrial obstructions like trees and buildings will likely mean better signal propagation. 
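Saha’s point can be illustrated with the free-space path-loss model, the baseline any lunar link budget would start from. The frequency and distance below are illustrative assumptions, not figures from Nokia’s design; on Earth, obstructions and multipath add loss on top of this ideal figure, while the airless, open lunar surface sits much closer to it.

```python
import math

def fspl_db(distance_m, freq_hz, c=3e8):
    """Free-space path loss in dB: 20 * log10(4 * pi * d * f / c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# An 800-MHz cellular-band signal over 5 km with nothing in the way:
print(round(fspl_db(5_000, 800e6), 1))  # → 104.5
```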

To ferry the equipment for their lunar 4G network, Nokia will be collaborating with Intuitive Machines, which is also developing a lunar ice drill for the moon’s south pole. “Intuitive Machines is building a rover that they want to land on the moon in late 2022,” says Kubendran. That rover mission now seems likely to be the one that’ll begin hauling 4G infrastructure to the lunar surface.

For all the technological innovation a lunar 4G network might bring about, its signals could unfortunately also mean bad news for radio astronomy. Radio telescopes are notoriously vulnerable to interference (a.k.a. radio frequency interference, or RFI). For instance, a stray phone signal from Mars would carry enough power to interfere with the Jodrell Bank radio observatory in Manchester, U.K. So, asks Emma Alexander, an astrophysicist writing for The Conversation, “how would it fare with an entire 4G network on the moon?”

That depends, says Saha. 

“It depends on which frequencies you’re using, and…the side lobes and…filters [at] the front end of these networks,” Saha says. On Earth, she continues, there are so many applications that the FCC in the U.S., and corresponding bodies in other countries, allocate frequencies to each of them.

“RFI can be mitigated at the source with appropriate shielding and precision in the emission of signals,” Alexander writes. “Astronomers are constantly developing strategies to cut RFI from their data. But this increasingly relies on the goodwill of private companies.” It also relies on regulations by governments, she adds, to shield earthly radio telescopes from human-generated interference from above.

IoT Network Companies Have Cracked Their Chicken-and-Egg Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/iot-network-companies-have-cracked-their-chicken-and-egg-problem

Along with everything else going on, we may look back at 2020 as the year that companies finally hit upon a better business model for Internet of Things (IoT) networks. Established network companies such as the Things Network and Helium, and new players such as Amazon, have seemingly given up on the idea of making money from selling network connectivity. Instead, they’re focused on getting the network out there for developers to use, assuming in the process that they’ll benefit from the effort in the long run.

IoT networks have a chicken-and-egg problem. Until device makers see widely available networks, they don’t want to build products that run on the network. And customers don’t want to pay for a network, and thus, fund its development, if there aren’t devices available to use. So it’s hard to raise capital to build a new wide-area network (WAN) that provides significant coverage and supports a plethora of devices that are enticing to use.

It certainly didn’t help such network companies as Ingenu, MachineQ, Senet, and SigFox that they’re all marketing half a dozen similar proprietary networks. Even the cellular carriers, which are promoting both LTE-M for machine-to-machine networks and NB-IoT for low-data-rate networks, have historically struggled to justify their investments in IoT network infrastructure. After COVID-19 started spreading in Japan, NTT DoCoMo called it quits on its NB-IoT network, citing a lack of demand.

“Personally, I don’t believe in business models for [low-power WANs],” says Wienke Giezeman, the CEO and cofounder of the Things Network. His company does deploy long-range low-power WAN gateways for customers that use the LoRa Alliance’s LoRaWAN specifications. (“LoRa” is short for “long-range.”) But Giezeman sees that as the necessary element for later providing the sensors and software that deliver the real IoT applications customers want to buy. Trying to sell both the network and the devices is like running a restaurant that makes diners buy and set up the stove before it cooks their meals.

The Things Network sets up the “stove” and includes the cost of operating it in the “meal.” But, because Giezeman is a big believer in the value of open source software and creating a sense of abundance around last-mile connectivity, he’s also asking customers to opt in to turning their networks into public networks.

Senet does something similar, letting customers share their networks. Helium is using cryptocurrencies to entice people to set up LoRaWAN hotspots on their networks and rewarding them with tokens for keeping the networks operational. When someone uses data from an individual’s Helium node, that individual also gets tokens that might be worth something one day. I actually run a Helium hotspot in my home, although I’m more interested in the LoRa coverage than the potential for wealth.

And there’s Amazon, which plans to embed its own version of a low-power WAN into its Echo and its Ring security devices. Whenever someone buys one of these devices they’ll have the option of adding it as a node on the Amazon Sidewalk network. Amazon’s plan is to build out a decentralized network for IoT devices, starting with a deal to let manufacturer Tile use the network for its ­Bluetooth tracking devices.

After almost a decade of following various low-power IoT networks, I’m excited to see them abandon the idea that the network is the big value, and instead recognize that it’s the things on the network that entice people. Let’s hope this year marks a turning point for low-power WANs.

This article appears in the November 2020 print issue as “Network Included.”

Breaking the Latency Barrier

Post Syndicated from Shivendra Panwar original https://spectrum.ieee.org/telecom/wireless/breaking-the-latency-barrier

For communications networks, bandwidth has long been king. With every generation of fiber optic, cellular, or Wi-Fi technology has come a jump in throughput that has enriched our online lives. Twenty years ago we were merely exchanging texts on our phones, but we now think nothing of streaming videos from YouTube and Netflix. No wonder, then, that video now consumes up to 60 percent of Internet bandwidth. If this trend continues, we might yet see full-motion holography delivered to our mobiles—a techie dream since Princess Leia’s plea for help in Star Wars.

Recently, though, high bandwidth has begun to share the spotlight with a different metric of merit: low latency. The amount of latency varies drastically depending on how far in a network a signal travels, how many routers it passes through, whether it uses a wired or wireless connection, and so on. The typical latency in a 4G network, for example, is 50 milliseconds. Reducing latency to 10 milliseconds, as 5G and Wi-Fi are currently doing, opens the door to a whole slew of applications that high bandwidth alone cannot. With virtual-reality headsets, for example, a delay of more than about 10 milliseconds in rendering and displaying images in response to head movement is very perceptible, and it leads to a disorienting experience that is for some akin to seasickness.

Multiplayer games, autonomous vehicles, and factory robots also need extremely low latencies. Even as 5G and Wi-Fi make 10 milliseconds the new standard for latency, researchers, like my group at New York University’s NYU Wireless research center, are already working hard on another order-of-magnitude reduction, to about 1 millisecond or less.

Pushing latencies down to 1 millisecond will require reengineering every step of the communications process. In the past, engineers have ignored sources of minuscule delay because they were inconsequential to the overall latency. Now, researchers will have to develop new methods for encoding, transmitting, and routing data to shave off even the smallest sources of delay. And immutable laws of physics—specifically the speed of light—will dictate firm restrictions on what networks with 1-millisecond latencies will look like. There’s no one-size-fits-all technique that will enable these extremely low-latency networks. Only by combining solutions to all these sources of latency will it be possible to build networks where time is never wasted.
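The speed-of-light restriction mentioned above is easy to quantify: within a given round-trip latency budget, a signal can only reach so far, no matter how clever the network. A minimal sketch (ignoring all processing and queuing delays, which consume part of any real budget):

```python
# Distance a signal can cover within a round-trip latency budget.
# Signals travel ~300 km/ms in air or vacuum and ~200 km/ms in optical fiber.
def max_one_way_km(budget_ms, speed_km_per_ms):
    return budget_ms * speed_km_per_ms / 2  # halve for the return trip

print(max_one_way_km(1.0, 300.0))  # → 150.0 (wireless, 1-ms budget)
print(max_one_way_km(1.0, 200.0))  # → 100.0 (fiber, 1-ms budget)
```

So a 1-millisecond application simply cannot be served from a data center more than roughly 100 to 150 kilometers away, which is one reason edge computing figures so heavily in low-latency network designs.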

Until the 1980s, latency-sensitive technologies used dedicated end-to-end circuits. Phone calls, for example, were carried on circuit-switched networks that created a dedicated link between callers to guarantee minimal delays. Even today, phone calls need to have an end-to-end delay of less than 150 milliseconds, or it’s difficult to converse comfortably.

At the same time, the Internet was carrying delay-tolerant traffic, such as emails, using technologies like packet switching. Packet switching is the Internet equivalent of a postal service, where mail is routed through post offices to reach the correct mailbox. Packets, or bundles of data between 40 and 1,500 bytes, are sent from point A to point B, where they are reassembled in the correct order. Using the technology available in the 1980s, delays routinely exceeded 100 milliseconds, with the worst delays well over 1 second.

Eventually, Voice over Internet Protocol (VoIP) technology supplanted circuit-switched networks, and now the last circuit switches are being phased out by providers. Since VoIP’s triumph, there have been further reductions in latency to get us into the range of tens of milliseconds.

Latencies below 1 millisecond would open up new categories of applications that have long been sought. One of them is haptic communications, or communicating a sense of touch. Imagine balancing a pencil on your fingertip. The reaction time between when you see the pencil beginning to tip over and then moving your finger to keep it balanced is measured in milliseconds. A human-controlled teleoperated robot with haptic feedback would need a similar latency level.

Robots that aren’t human controlled would also benefit from 1-millisecond latencies. Just like a person, a robot can avoid falling over or dropping something only if it reacts within a millisecond. But the powerful computers that process real-time reactions and the batteries that run them are heavy. Robots could be lighter and operate longer if their “brains” were kept elsewhere on a low-latency wireless network.

Before I get into all the ways engineers might build ultralow-latency networks, it would help to understand how data travels from one device to another. Of course, the signals have to physically travel along a transmission link between the devices. That journey isn’t limited by just the speed of light, because delays are caused by switches and routers along the way. And even before that, data must be converted and prepped on the device itself for the journey. All of these parts of the process will need to be redesigned to consistently achieve submillisecond latencies.

I should mention here that my research is focused on what’s called the transport layer in the multiple layers of protocols that govern the exchange of data on the Internet. That means I’m concerned with the portion of a communications network that controls transmission rates and checks to make sure packets reach their destination in the correct order and without errors. While there are certainly small sources of delay on devices themselves, I’m going to focus on network delays.

The first issue is frame duration. A wireless access link—the link that connects a device to the larger, wired network—schedules transmissions within periodic intervals called frames. For a 4G link, the typical frame duration is 1 millisecond, so you could potentially lose that much time just waiting for your turn to transmit. But 5G has shrunk frame durations, lessening their contribution to delay.
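To make the frame-duration cost concrete, here is a minimal sketch. The 1-millisecond 4G frame comes from the article; the 0.125-millisecond 5G slot is my own illustrative assumption (5G permits several slot lengths). A device that becomes ready at a random instant waits, on average, half a frame for its transmission opportunity.

```python
def scheduling_delay_ms(frame_duration_ms):
    """Return (average, worst-case) wait in ms before the next
    transmission opportunity, assuming the device becomes ready at a
    uniformly random moment within a frame."""
    return frame_duration_ms / 2, frame_duration_ms

for label, frame_ms in [("4G, 1 ms frame", 1.0), ("5G, 0.125 ms slot", 0.125)]:
    avg, worst = scheduling_delay_ms(frame_ms)
    print(f"{label}: average wait {avg} ms, worst case {worst} ms")
```

Shrinking the frame from 1 millisecond to a fraction of it cuts this one contribution from potentially a full millisecond, the entire budget, down to a small slice of it.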

Wi-Fi functions differently. Rather than using frames, Wi-Fi networks use random access, where a device transmits immediately and reschedules the transmission if it collides with another device’s transmission in the link. The upside is that this method has shorter delays if there is no congestion, but the delays build up quickly as more device transmissions compete for the same channel. The latest version of Wi-Fi, Wi-Fi Certified 6 (based on the draft standard IEEE P802.11ax) addresses the congestion problem by introducing scheduled transmissions, just as 4G and 5G networks do.

When a data packet is traveling through the series of network links connecting its origin to its destination, it is also subject to congestion delays. Packets are often forced to queue up at links as both the amount of data traveling through a link and the amount of available bandwidth fluctuate naturally over time. In the evening, for example, more data-heavy video streaming could cause congestion delays through a link. Sending data packets through a series of wireless links is like drinking through a straw that is constantly changing in length and diameter. As delays and bandwidth change, one second you may be sucking up dribbles of data, while the next you have more packets than you can handle.

Congestion delays are unpredictable, so they cannot be avoided entirely. The responsibility for mitigating congestion delays falls to the Transmission Control Protocol (TCP), one part of the collection of Internet protocols that governs how computers communicate with one another. The most common implementations of TCP, such as TCP Cubic, measure congestion by sensing when the buffers in network routers are at capacity. At that point, packets are being lost because there is no room to store them. Think of it like a bucket with a hole in it, placed underneath a faucet. If the faucet is putting more water into the bucket than is draining through the hole, it fills up until eventually the bucket overflows. The bucket in this example is the router buffer, and if it “overflows,” packets are lost and need to be sent again, adding to the delay. The sender then adjusts its transmission rate to try to avoid flooding the buffer again.
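The bucket analogy can be run as a toy simulation (all rates and sizes here are illustrative, not measurements from any real network). When packets arrive faster than the link drains them, the buffer fills and then steadily overflows, and that overflow is precisely the loss signal a Cubic-style sender reacts to.

```python
def simulate_buffer(arrival_rate, service_rate, capacity, steps):
    """Toy router buffer: returns (final queue length, total drops)."""
    queued, dropped = 0, 0
    for _ in range(steps):
        queued += arrival_rate               # faucet pours into the bucket
        queued -= min(queued, service_rate)  # water drains through the hole
        if queued > capacity:                # bucket overflows: packets lost
            dropped += queued - capacity
            queued = capacity
    return queued, dropped

# Sending 20 percent faster than the link drains: the buffer fills,
# then drops packets steadily.
queued, dropped = simulate_buffer(arrival_rate=12, service_rate=10,
                                  capacity=50, steps=100)
print(queued, dropped)  # 50 150
```

Note that the queue ends up pinned at capacity: even the packets that survive must wait behind a full buffer, which is exactly the queuing delay described next.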

The problem is that even if the buffer doesn’t overflow, data can still be stuck there queuing for its turn through the bucket “hole.” What we want to do is allow packets to flow through the network without queuing up in buffers. YouTube uses a variation of TCP developed at Google called TCP BBR, short for Bottleneck Bandwidth and Round-Trip propagation time, with this exact goal. It works by adjusting the transmission rate until it matches the rate at which data is passing through routers. Going back to our bucket analogy, it constantly adjusts the flow of water out of the faucet to keep it the same as the flow out of the hole in the bucket.
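In code, BBR's central idea, heavily simplified (real BBR also probes for extra bandwidth and tracks the minimum round-trip time; the smoothing constant below is my own choice, not Google's), is to pace sending at the rate packets are observed to be delivered, rather than pushing until loss occurs:

```python
def update_pacing_rate(delivered_bytes, interval_s, current_rate):
    """Estimate the bottleneck's delivery rate and pace toward it."""
    measured = delivered_bytes / interval_s       # observed delivery rate
    return 0.9 * current_rate + 0.1 * measured    # smoothed estimate

rate = 1_000_000.0  # bytes/s, arbitrary starting guess
for _ in range(50):
    # The bottleneck keeps delivering 120 kB every 100 ms (1.2 MB/s).
    rate = update_pacing_rate(delivered_bytes=120_000, interval_s=0.1,
                              current_rate=rate)
print(round(rate))  # settles near 1_200_000: the faucet matches the hole
```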

For further reductions in congestion, engineers have to deal with previously ignorable tiny delays. One example of such a delay is the minuscule variation in the amount of time it takes each specific packet to transmit during its turn to access the wireless link. Another is the slight differences in computation times of different software protocols. These can both interfere with TCP BBR’s ability to determine the exact rate to inject packets into the connection without leaving capacity unused or causing a traffic jam.

A focus of my research is how to redesign TCP to deliver low delays combined with sharing bandwidth fairly with other connections on the network. To do so, my team is looking at existing TCP versions, including BBR and others like the Internet Engineering Task Force’s L4S (short for Low Latency Low Loss Scalable throughput). We’ve found that these existing versions tend to focus on particular applications or situations. BBR, for example, is optimized for YouTube videos, where video data typically arrives faster than it is being viewed. That means that BBR ignores bandwidth variations because a buffer of excess data has already made it to the end user. L4S, on the other hand, allows routers to prioritize time-sensitive data like real-time video packets. But not all routers are capable of doing this, and L4S is less useful in those situations.

My group is figuring out how to take the most effective parts of these TCP versions and combine them into a more versatile whole. Our most promising approach so far has devices continually monitoring for data-packet delays and reducing transmission rates when delays are building up. With that information, each device on a network can independently figure out just how much data it can inject into the network. It’s somewhat like shoppers at a busy grocery store observing the checkout lines to see which ones are moving faster, or have shorter queues, and then choosing which lines to join so that no one line becomes too long.
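As a sketch of what such delay-based adaptation could look like (the function, threshold, and step size are my own illustration, not the NYU design): each sender compares its current round-trip time against the best it has seen, treats the difference as queuing delay, and backs off before buffers fill, rather than waiting for loss.

```python
def adjust_rate(rate, rtt_ms, base_rtt_ms, target_queue_ms=2.0, step=0.05):
    """Nudge the sending rate based on observed queuing delay."""
    queuing_delay = rtt_ms - base_rtt_ms   # delay attributable to queues
    if queuing_delay > target_queue_ms:
        return rate * (1 - step)           # delays building up: back off
    return rate * (1 + step)               # headroom left: probe upward

rate = 100.0  # arbitrary units
print(round(adjust_rate(rate, rtt_ms=15.0, base_rtt_ms=10.0), 2))  # 95.0
print(round(adjust_rate(rate, rtt_ms=10.5, base_rtt_ms=10.0), 2))  # 105.0
```

Because each device reacts only to delays it can measure itself, no central coordinator is needed, which matches the grocery-store picture of shoppers independently choosing checkout lines.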

For wireless networks specifically, reliably delivering latencies below 1 millisecond is also hampered by connection handoffs. When a cellphone or other device moves from the coverage area of one base station (commonly referred to as a cell tower) to the coverage of a neighboring base station, it has to switch its connection. The network initiates the switch when it detects a drop in signal strength from the first station and a corresponding rise in the second station’s signal strength. The handoff occurs during a window of several seconds or more as the signal strengths change, so it rarely interrupts a connection.

But 5G is now tapping into millimeter waves—the frequencies above 20 gigahertz—which bring unprecedented hurdles. These frequency bands have not been used before because they typically don’t propagate as far as lower frequencies, a shortcoming that has only recently been addressed with technologies like beamforming. Beamforming works by using an array of antennas to point a narrow, focused transmission directly at the intended receiver. Unfortunately, obstacles like pedestrians and vehicles can entirely block these beams, causing frequent connection interruptions.

One option to avoid interruptions like these is to have multiple millimeter-wave base stations with overlapping coverage, so that if one base station is blocked, another base station can take over. However, unlike regular cell-tower handoffs, these handoffs are much more frequent and unpredictable because they are caused by the movement of people and traffic.

My team at NYU is developing a solution in which neighboring base stations are also connected by a fiber-optic ring network. All of the traffic to this group of base stations would be injected into, and then circulate around, the ring. Any base station connected to a cellphone or other wireless device can copy the data as it passes by on the ring. If one base station becomes blocked, no problem: The next one can try to transmit to the device. If it, too, is blocked, the one after that can have a try. Think of a baggage carousel at an airport, with a traveler standing near it waiting for her luggage. She may be “blocked” by others crowding around the carousel, so she might have to move to a less crowded spot. Similarly, the fiber-ring system would allow reception as long as the mobile device is anywhere in the range of at least one unblocked base station.
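The ring's failover logic reduces to something very simple (the station names and blockage model below are hypothetical, for illustration only): because every station on the ring sees the circulating data, the device can be served by the first station that isn't blocked.

```python
def pick_transmitter(ring_stations, blocked):
    """Return the first station on the ring that can reach the device,
    or None if every station in range is blocked."""
    for station in ring_stations:
        if station not in blocked:
            return station
    return None

ring = ["BS-1", "BS-2", "BS-3", "BS-4"]  # stations the device can hear
print(pick_transmitter(ring, blocked={"BS-1", "BS-2"}))  # BS-3
print(pick_transmitter(ring, blocked=set(ring)))         # None
```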

There’s one final source of latency that can no longer be ignored, and it’s immutable: the speed of light. Because light travels so quickly, it used to be possible to ignore it in the presence of other, larger sources of delay. It cannot be disregarded any longer.

Take for example the robot we discussed earlier, with its computational “brain” in a server in the cloud. Servers and large data centers are often located hundreds of kilometers away from end devices. But because light travels at a finite speed, the robot’s brain, in this case, cannot be too far away. Moving at the speed of light, wireless communications travel about 300 kilometers per millisecond. Signals in optical fiber are even slower, at about 200 km/ms. Finally, considering that the robot would need round-trip communications below 1 millisecond, the maximum possible distance between the robot and its brain is about 100 km. And that’s ignoring every other source of possible delay.
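The back-of-envelope arithmetic above is easy to reproduce: with signals covering roughly 300 km per millisecond in air and 200 km per millisecond in fiber, a 1-millisecond round-trip budget halves into the maximum one-way distance.

```python
def max_one_way_km(speed_km_per_ms, round_trip_budget_ms):
    """Greatest sender-receiver distance that fits the latency budget,
    ignoring every source of delay except propagation."""
    return speed_km_per_ms * round_trip_budget_ms / 2

print(max_one_way_km(200, 1.0))  # fiber: 100.0 km, the article's figure
print(max_one_way_km(300, 1.0))  # free space: 150.0 km
```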

The only solution we see to this problem is to simply move away from traditional cloud computing toward what is known as edge computing. Indeed, the push for submillisecond latencies is driving the development of edge computing. This method places the actual computation as close to the devices as possible, rather than in server farms hundreds of kilometers away. However, finding the real estate for these servers near users will be a costly challenge to service providers.

There’s no doubt that as researchers and engineers work toward 1-millisecond delays, they will find more sources of latency I haven’t mentioned. When every microsecond matters, they’ll have to get creative to whittle away the last few slivers on the way to 1 millisecond. Ultimately, the new techniques and technologies that bring us to these ultralow latencies will be driven by what people want to use them for.

When the Internet was young, no one knew what the killer app might be. Universities were excited about remotely accessing computing power or facilitating large file transfers. The U.S. Department of Defense saw value in the Internet’s decentralized nature in the event of an attack on communications infrastructure. But what really drove usage turned out to be email, which was more relevant to the average person. Similarly, while factory robots and remote surgery will certainly benefit from submillisecond latencies, it is entirely possible that neither emerges as the killer app for these technologies. In fact, we probably can’t confidently predict what will turn out to be the driving force behind vanishingly small delays. But my money is on multiplayer gaming.

This article appears in the November 2020 print issue as “Breaking the Millisecond Barrier.”

About the Author

Shivendra Panwar is director of the New York State Center for Advanced Technology in Telecommunications and a professor in the electrical and computer engineering department at New York University.

Fake News Is a Huge Problem, Unless It’s Not

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/fake-news-is-a-huge-problem-unless-its-not

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Jonathan Swift in 1710 definitely said, “Falsehood flies, and the truth comes limping after it.” Mark Twain, on the other hand, may or may not have said, “A lie can travel halfway around the world while the truth is putting on its shoes.”

Especially in the context of politics, we lately use the term “fake news” instead of “political lies” and the problem of fake news—especially when it originates abroad—seems to be much with us these days. It’s believed by some to have had a decisive effect upon the 2016 U.S. Presidential election and fears are widespread that the same foreign adversaries are at work attempting to influence the vote in the current contest.

A report in 2018 commissioned by the U.S. Senate Intelligence Committee centered its attention on the Internet Research Agency, a shadowy arm of Russia’s intelligence services. The report offers, to quote an account of it in Wired magazine, “the most extensive look at the IRA’s attempts to divide Americans, suppress the vote, and boost then-candidate Donald Trump before and after the 2016 presidential election.”

Countless hours of research have gone into identifying and combating fake news. A recent study found more than 2000 articles about fake news published between 2017 and 2020.

Nonetheless, there’s a dearth of actual data when it comes to the magnitude, extent, and impact, of fake news.

For one thing, we get news that might be fake in various ways—from the Web, from our phones, from television—yet it’s hard to aggregate these disparate sources. Nor do we know what portion of all our news is fake news. Finally, the impact of fake news may or may not exceed its prevalence—we just don’t know.

A new study looks into these very questions. Its authors include two researchers at Microsoft who listeners of the earlier incarnation of this podcast will recognize: David Rothschild and Duncan Watts were both interviewed here back in 2012. The lead author, Jennifer Allen, was a software engineer at Facebook before becoming a researcher at Microsoft in its Computational Social Science Group and she is also a Ph.D. student at the MIT Sloan School of Management and the MIT Initiative on the Digital Economy. She’s my guest today via Skype.

Jenny, welcome to the podcast.

Jennifer Allen Thank you, Steven. Happy to be here.

Steven Cherry [[COPY]] Jenny, Wikipedia defines “fake news” as “a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media.” The term made its way into the online version of the Random House dictionary in 2017 as “false news stories, often of a sensational nature, created to be widely shared or distributed for the purpose of generating revenue, or promoting or discrediting a public figure, political movement, company, etc.” Jenny, are you okay with either of these definitions? More simply, what is fake news?

Jennifer Allen Yeah. Starting off with a tough question. I think the way that we define fake news really changes whether or not we consider it to be a problem or the magnitude of the problem. So the way that we define fake news in our research—and how the academic community has defined fake news—is that it is false or misleading information masquerading as legitimate news. I think the way that, you know, we’re using fake news is really sort of this hoax news that’s masquerading as true news. And that is the definition that I’m going to be working with today.

Steven Cherry The first question you tackled in this study and here I’m quoting it: “Americans consume news online via desktop computers and increasingly mobile devices as well as on television. Yet no single source of data covers all three modes.” Was it hard to aggregate these disparate sources of data?

Jennifer Allen Yes. So that was one thing that was really cool about this paper, is that usually when people study fake news or misinformation, they do so in the context of a single platform. So there’s a lot of work that happens on Twitter, for example. And Twitter is interesting and it’s important for a lot of reasons, but it certainly does not give a representative picture of the way that people consume information today or consume news today. It might be popular among academics and journalists. But the average person is not necessarily on Twitter. And so one thing that was really cool and important, although, as you mentioned, it was difficult as well, was to combine different forms of data.

And so we looked at a panel of Nielsen TV data, as well as a desktop panel of individual Web traffic, also provided by Nielsen. And then finally, we also looked at mobile traffic with an aggregate data set provided to us by ComScore. And so we have these three different datasets that really allow us to triangulate the way that people are consuming information and give sort of a high-level view.

Steven Cherry You found a couple of interesting things. One was that most media consumption is not news-related—maybe it isn’t surprising—and there’s a big difference across age lines.

Jennifer Allen Yes, we did find that. And so—as perhaps it might not be surprising—older people consume a lot more television than younger people do. And younger people spend more time on mobile and online than older people do. However, what might be surprising to you is that no matter whether it’s old people or younger people, the vast majority of people are consuming more news on TV. And so that is a stat that surprises a lot of people, even as we look across age groups—that television is the dominant news source, even among people age 18 to 24.

Steven Cherry When somebody looked at a television-originating news piece on the web instead of actually on television, you characterized it as online. That is to say, you characterized by the consumption of the news, not its source. How did you distinguish news from non-news, especially on social media?

Jennifer Allen Yes. So there are a lot of different definitions of news here that you could use. We tried to take the widest definition possible across all of our platforms. So on television, we categorized as news anything that Nielsen categorizes as news as part of their dataset. And they are the gold standard dataset for TV consumption. And so, think Fox News, think the Today show. But then we also added things that maybe they wouldn’t consider news. So Saturday Night Live often contains news clips and touches on the topical events of the day. And so we also included that show as news. And so, again, we tried to take a really wide definition. And the same online.

And so online, we also aggregated a list of, I think, several thousand Web sites that were both mainstream news and hyper-partisan news, as well as fake news. And we find hyper-partisan news and fake news using these news lists that have emerged in the large body of research that has come out of the 2016 elections / fake news phenomenon. And so there again, we tried to take the widest definition of fake news. And so we categorize not only things like your crappy single-article sites but also things like Breitbart and the Daily Wire as hyper-partisan sites.

Steven Cherry Even though we associate online consumption with young people and television with older people, you found that fake news stories were more likely to be encountered on social media and that older viewers were heavier consumers than younger ones.

Jennifer Allen Yes, we did find that. This is a typical finding within the fake news literature, which is that older people tend to be more drawn to fake news for whatever reason. And there’s been work looking at why that might be. Maybe it’s digital literacy. Maybe it’s just more interest in news generally. And it’s true that on social media, there’s more fake and hyper-partisan news than, you know, on the open web.

That being said, I would just emphasize that the dominant … that the majority of news that is consumed even on social media—and even among older Americans—is still mainstream. And so, think your New York Times or your Washington Post instead of your Daily Wire or Breitbart.

Steven Cherry You didn’t find much in the way of fake news on television at all.

Jennifer Allen Yes. And so this is sort of a function, as I was saying before, of the way that we defined fake news. We, by definition, did not find any fake news on television, because the way that fake news has really been studied in the literature and also talked about sort of in the mainstream media is as this phenomenon of Web sites masquerading as legitimate news outlets. That being said, I definitely believe that there is misinformation that occurs on television. You know, a recent study came out looking at who the biggest spreader of misinformation around the coronavirus was and found it to be Donald Trump. And just because we aren’t defining that content as fake news—because it’s not deceptive in the way that it is presenting itself—doesn’t mean that it is necessarily legitimate information. It could still be misinformation, even though we do not define it as fake news.

Steven Cherry I think the same thing would end up being true about radio. I mean, there certainly seems to be a large group of voters—including, it’s believed, the core supporters of one of the presidential candidates—who are thought to get a lot of their information, including fake information from talk radio.

Jennifer Allen Yeah, talk radio is unfortunately a hole in our research; we were not able to get a good dataset looking at talk radio. And indeed, talk radio, Rush Limbaugh’s talk show, for example, can really be seen as the source of a lot of the polarization in the news environment.

And there’s been work done by Yochai Benkler at the Harvard Berkman Klein Center that looks at the origins of talk radio in creating a polarized and swampy news environment.

Steven Cherry Your third finding, and maybe the most interesting or important one, is and I’m going to quote again, “fake news consumption is a negligible fraction of Americans’ daily information diet.”

Jennifer Allen Yes. So it might be a stat that surprises people. We find that fake news comprises only 0.15 percent of Americans’ daily media diet. Despite the outsized attention that fake news gets in the mainstream media and especially within the academic community: more than half of the journal articles that contain the word news are about fake news in recent years. It is actually just a small fraction of the news that people consume. And also a small fraction of the information that people consume. The vast majority of the content that people are engaging with online is not news at all. It’s YouTube music videos. It’s entertainment. It’s Netflix.

And so I think that it’s an important reminder that when we consider conversations around fake news and its potential impact, for example, on the 2016 election, that we look at this information in the context of the information ecosystem and we look at it not just in terms of the numerator and the raw amount of fake news that people are consuming, but with the denominator as well. So how much of the news that people consume is actually fake news?

Steven Cherry So fake news ends up being only one percent or even less of our overall media diet. What percentage is it of news consumption?

Jennifer Allen It occupied less than one percent of overall news consumption. So that is, including TV. Of course, when you zoom in, for example, to fake news on social media, the scale of the problem gets larger. And so maybe seven to 10 percent—and perhaps more, depending on your definition of fake news—of news that is consumed on social media could be considered what we say is hyper-partisan or fake news. But still, again, to emphasize, the majority of people on Facebook are not seeing any news at all. So, you know, over 50 percent of people on Facebook and in our data don’t click on any news articles that we can see.

Steven Cherry You found that our diet is pretty deficient in news in general. The one question that you weren’t able to answer in your study is whether fake news, albeit just a fraction of our news consumption—and certainly a tiny fraction of our media consumption—still might have an outsized impact compared with regular news.

Jennifer Allen Yeah, that’s very true. And I think here it’s important to distinguish between the primary and secondary impact of fake news. And so in terms of, you know, the primary exposure of people consuming fake news online and seeing a news article about Hillary Clinton running a pedophile ring out of a pizzeria and then changing their vote, I think we see very little data to show that that could be the case. 

That being said, I think there’s a lot we don’t know about the secondary sort of impact of fake news. So what does it mean for our information diets that we now have this concept of fake news that is known to the public and can be used and weaponized?

And so, the extent to which fake news is covered and pointed to by the mainstream media as a problem also gives ammunition to people who oppose journalists, you know, mainstream media and want to erode trust in journalism and give them ammunition to attack information that they don’t agree with. And I think that is a far more dangerous and potentially negatively impactful effect of fake news and perhaps its long-lasting legacy.

The impetus behind this paper was that there’s all this conversation around fake news out of the 2016 election. There is a strong sense that was perpetuated by the mainstream media that fake news on Facebook was responsible for the election of Trump. And that people were somehow tricked into voting for him because of a fake story that they saw online. And I think the reason that we wanted to write this paper is to contradict that narrative because you might read those stories and think people are just living in an alternate fake news reality. I think that this paper really shows that that just isn’t the case.

To the extent that people are misinformed or they make voting decisions that we think are bad for democracy, it is more likely due to the mainstream media or the fact that people don’t read news at all than it is to a proliferation of fake news on social media. And one piece of research that David [Rothschild] and Duncan [Watts] did prior to this study, which I thought was really resonant, looked at the New York Times: in the lead-up to the 2016 election, there were more stories about Hillary Clinton’s email scandal in the seven days before the election than there were about policy over the whole scope of the election process. And so instead of zeroing in on fake news, we should really push our attention to take a hard look at the way the mainstream media operates, and also at what happens in this news vacuum where people aren’t consuming any news at all.

Steven Cherry So people complain about people living inside information bubbles. What your study shows is fake news, if it’s a problem at all, is really the smallest part of the problem. A bigger part of the problem would be false news—false information that doesn’t rise to the level of fake news. And then finally, the question that you raise here of balance when it comes to the mainstream media. “Balance”—I should even say “emphasis.”

Jennifer Allen Yes. So I think, again, to the extent that people are misinformed, I think that we can look to the mainstream news. And, you know, for example, its overwhelming coverage of Trump and the lies that he often spreads. And I think some of the new work that we’re doing is trying to look at the mainstream media and its potential role, not in reporting false news that is masquerading as true, but in reporting on people who say false things without appropriately taking the steps to discredit those things and really strongly punch back against them. And so I think that is an area that is really understudied. And I would hope that researchers look at this research and sort of look at the conversation that is happening around Covid and, you know, mail-in voting and the 2020 election and really take a hard look at mainstream media, you know, so-called experts or politicians making wild claims in a way that we would not consider to be fake news, but that is still very dangerous.

Steven Cherry Well, Jenny, it’s all too often true that the things we all know to be true aren’t so true. And as usual, the devil is in the details. Thanks for taking a detailed look at fake news, maybe with a better sense of it quantitatively, people can go on and get a better sense of its qualitative impact. So thank you for your work here and thanks for joining us today.

Jennifer Allen Thank you so much. Happy to be here.

We’ve been speaking with Jennifer Allen, lead author of an important new study, “Evaluating the Fake News Problem at the Scale of the Information Ecosystem.” This interview was recorded October 7, 2020. Our thanks to Raul at Gotham Podcast Studio for our engineering today and to Chad Crouch for our music.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Problem of Filter Bubbles Hasn’t Gone Away

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/the-problem-of-filter-bubbles-hasnt-gone-away

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

In 2011, the former executive director of MoveOn gave a widely viewed TED talk, “Beware Online Filter Bubbles,” that became a 2012 book and a startup. In all the talk of fake news these days, many of us have forgotten the unseen power of filter bubbles in determining the ways in which we think about politics, culture, and society. That startup tried to get people to read news stories they might otherwise not see by repackaging them with new headlines.

A recent app, called Ground News, has a different approach. It lets you look up a topic and see how it’s covered by media outlets with identifiably left-leaning or right-leaning slants. You can read the coverage itself, right–left–moderate or internationally; look at its distribution; or track a story’s coverage over time. Most fundamentally, it’s a way of seeing stories that you wouldn’t ordinarily come across.

My guest today is Sukh Singh, the chief technology officer of Ground News and one of its co-founders.

Sukh, welcome to the podcast.

Sukh Singh Thanks for having me on, Steven.

Steven Cherry Back in the pre-Internet era, newspapers flourished, but overall news sources were limited. Besides a couple of newspapers in one’s area, there would be two or three television stations you could get, and a bunch of radio stations that were mainly devoted to music. Magazines were on newsstands and delivered by subscription, but only a few concentrated on weekly news. That world is gone forever in favor of a million news sources, many of them suspect. And it seems the Ground News strategy is to embrace that diversity instead of lamenting it, putting stories in the context of who’s delivering them and what their agendas might be, and most importantly, breaking us out of the bubbles we’re in.

Sukh Singh That’s true that we are embracing the diversity, as you mentioned, in moving from the print era and the TV era to the Internet era. The costs of having a news outlet or any kind of a media distribution outlet have dropped dramatically, to the point of a single-person operation becoming viable. That has had the positive benefit of allowing people to cater to very niche interests that were previously glossed over. But on the negative side, the explosion in the number of outlets out there certainly has a lot of drawbacks. Our approach at Ground News is to take as wide a swath as we meaningfully can and put it in one destination for our subscribers.

Steven Cherry So has the problem of filter bubbles gotten worse since that term was coined a decade ago?

Sukh Singh It has. It has certainly gotten much worse. In fact, I would say that by the time the term was even coined, the development of filter bubbles was well underway before the phenomenon was observed.

And that’s largely because it’s a natural outcome of the algorithms and, later, machine learning models used to determine what content is being served to people. In the age of the Internet, personalization of a news feed became possible. And that meant more and more individual personalization, with machines picking, virtually handpicking, what anybody gets served. By and large, we saw the upstream effect of that being that news publications found out what type of content appealed to a large enough number of people to make them viable. And that has resulted in an erosion of the aspiration of every news outlet to be the universal record of truth. Now many outlets, certainly not all of them, but many, embrace the fact that they’re not going to cater to everyone; they are going to cater to a certain set of people who agree with their worldview. And their mission then becomes reinforcing that worldview, that agenda, that specific set of beliefs through repeated content for everything that’s happening in the world.

Steven Cherry People complain about the filtering of social media. But that original TED talk was about Google and its search results, which seems even more insidious. Where is the biggest problem today?

Sukh Singh I would say that social media has shocked us all in terms of how bad the problem can get. If you think back 10 years, 15 years … social media as we have it today would not have been the prediction of most people. If we think back to the origin of, say, Facebook, it was very much in the social networking era, where most of the content you were receiving was from friends and family. It’s a relatively recent phenomenon, certainly in this decade and not the last one, that we saw this chimera of social network plus digital media becoming social media, with the news feed there, going back to the personalization, catering to one success metric: how much time-on-platform they could get the user to spend. Compare that against Google, where the success metric isn’t time spent; it’s getting you to the right page, or the most relevant page, as quickly as possible. But with social media, when that becomes a multi-hour-per-day activity, it certainly has had more wide-reaching and deeper consequences.

Steven Cherry So I gave just a little sketch of how Ground News works. Do you want to say a little bit more about that?

Sukh Singh Absolutely. I think you alluded to this in your earlier questions about going from the past age of print and TV media to the Internet age. What we’ve seen is that news delivery today has really bifurcated into two types, into two categories. We have the more traditional, legacy news outlets coming to the digital age, to the mobile age, with websites and mobile apps that are organized along the conventional sections: here’s the sports section, here’s the entertainment section, here’s the politics section. That was roughly how the world of information was divided up by these publications, and it has carried over. And on the other side, we see the social media feed, which has in many ways blown the legacy model out of the water.

It’s a single-drip feed where the user has to do no work and can just scroll and keep finding more and more and more … an infinite supply of engaging content. That divide doesn’t map exactly to education versus entertainment. Entertainment and sensationalism have been a part of media as far back as media goes. But there certainly is more affinity toward entertainment in a social media feed, which caters to engagement.

So we at Ground News serve both those needs, through both those models, with two clearly labeled and divided feeds. One is Top Stories and the other is My Feed. Top Stories is more the legacy model: here are universally important news events that you should know about, no matter which walk of life you come from, no matter where you are located, no matter where your interests lie. And the second, My Feed, is the recognition that ultimately people will care more about certain interests, certain topics, certain issues than others. So it is a nod to that personalization, within limits that avoid delving down into the same spiral of filter bubbles.

Steven Cherry There are only so many reporters at a newspaper, and only so many minutes we have in a day to read the news. So in all of the coverage, for example, of the protests this past month—coverage we should be grateful for, of an issue that deserves all the prominence it can get—a lot of other stories got lost. For example, there was a dramatic and sudden announcement of a reduction in our U.S. troop count in Germany. [Note: This episode was recorded June 18, 2020 — Ed.] I happened to catch that story in the New York Times myself. But it was a pretty small headline, buried pretty far down. It was covered in your weekly newsletter, though. I take it you see yourselves as having a second mission besides one-sided news: the problem of under-covered news.

Sukh Singh Yes, we do, and that’s been a realization as we’ve made the journey of Ground News. It wasn’t something that we recognized from the outset, but something that we discovered as our throughput of news increased. We spoke about the problem of filter bubbles, and we initially thought the problem was bias. The problem was that a news event happens, some real concrete event happens in the real world, and then it is passed on as information through various news outlets, each one spinning it, or at least wording it, in a way that aligns to either their core agenda or to the likings of their audience. More and more, we found that the problem isn’t just bias and spin; it’s also omission.

So if we look at the wide swath of left-leaning and right-leaning news publications in America today … if we were to go to the home pages of two publications fairly wide apart on the political spectrum, you would not just find the same news stories with different headlines or different lenses, but an entirely different set of news stories. So much so that—you mentioned our newsletter, the Blindspot report—in the Blindspot report we pick each week five to six stories that were covered massively on one side of the political spectrum but entirely omitted from the other.

So in this case—the event that you mentioned about the troop withdrawal from Germany—it did go very unnoticed by certain parts of the political spectrum. So as a consumer who wants to be informed, going to one or two news sources, no matter how valuable, no matter how rigorous they are, will inevitably result in very large parts of the news out there being omitted from your field of view. It’s a secondary conversation whether you’re going to the right set of publications or not. The more primary and more concerning conversation is: how do you communicate with your neighbor when they’re coming from a completely different set of news stories and a different worldview informed by them?

Steven Cherry The name Ground News seems to be a reference to the idea that there’s a ground truth. There are ground truths in science and engineering; it would be wonderful, for example, if we could do some truly random testing for coronavirus and get the ground truth on rates of infection. But are there ground truths in the news business anymore? Or are there only counterbalancings of partial truths?

Sukh Singh That’s a good question. I wouldn’t be so cynical as to say that there are no news publications out there reporting what they truly believe to be the ground truth. But we do find ourselves in a world of counterbalances. We do turn on the TV news networks and see a set of three talking heads with a moderator in the middle and differing opinions on either side. So what we do at Ground News—as you said, the reference to the name—is try to have that flat, even playing field where different perspectives can come and make their case.

So our aspiration is always to take the real event—whether you call that the ground truth, as in the world of science, or an atomic fact, as in the world of philosophy—and then have dozens of news perspectives covering the same news event; typically, on average, we have about 20: local, national, international, left, right, all across the board. And our central thesis is that the ultimate solution is reader empowerment, that no publication or technology can truly come to the conclusions for a person. And there’s perhaps a “shouldn’t” in there as well. So our mission really is to take the different news perspectives, present them on an even playing field to our subscribers, and then allow them to come to their own conclusions.

Steven Cherry So without getting entirely philosophical about this, it seems to me that—let’s say in the language of Plato’s Republic and the allegory of the cave—you’re able to help us look at more than just the shadows that are projected on the wall of the cave. We get to see all of the different people projecting the shadows, but we’re still not going to get to the Platonic forms of the actual truth of what’s happening in the world. Is that fair to say?

Sukh Singh Yes. Keeping with that allegory, I would say that our assertion is not that every single perspective is equally valid. That’s not a value judgment that we ever make. We don’t label the left-right-moderate biases on any news publication or platform ourselves. We actually source them from three arm’s-length nonprofit agencies whose mission is labeling news publications by their demonstrated bias. So we aggregate and use those as labels in our platform. We never pass a value judgment on any perspective. But my hope personally, and ours as a company, really is that some perspectives get you closer to a glimpse of the outside rather than just being another shadow on the wall. The onus really is on the reader to say which perspective or which coverage they think most closely resembles the ground truth.

Steven Cherry I think that’s fair enough. And I think it would also be fair to add that even for issues for which there really isn’t a pairing of two opposing sides—for example, climate change, where responsible journalists pretty much ignore the idea of there being no climate change—it’s still important for people politically to understand that there are people out there who have not accepted climate change, and that they’re still writing about it and sharing views and so forth. And so it seems to me that what you’re doing is shining a light on that aspect of it.

Sukh Singh Absolutely, and one of our key aspirations, part of our mission, is to enable people to have those conversations. So even if you are 100 percent convinced that you are going to credible news publications and getting the most vetted and journalistically rigorous news coverage available on the free market, it may still be that you’re not able to reach across the aisle, or just go next door and talk to your neighbor or your friend, who is living in a very different worldview. Better or worse, again, we won’t pass judgment. But just having a more expanded scope of news stories come into your field of view, onto your radar, does enable you to have those conversations, even if you feel some of your peers may be misguided.

Steven Cherry The fundamental problem in news is that there are financial incentives for aggregators like Google and Facebook and for the news sources themselves to keep us in the bubbles that we’re in, feeding us only stories that fit our world view and giving us extreme versions of the news instead of more moderate ones. You yourself noted that with Facebook and other social networks, the user does no work in those cases. Using Ground News is something you have to do actively. Do you ever fear that it’s just a sort of Band-Aid that we can place on this gaping social wound?

Sukh Singh So let me deal with that in two parts. The first part is the financial sustainability of journalism. There certainly is a crisis there, and I think we could have several more of these conversations about the financial sustainability of journalism and solutions to that crisis.

But one very easily identifiable problem is the reliance on advertising. I think a lot of news publications all too willingly started publicizing their content on the Internet to increase their reach, and any advertising revenue that they could get off of that, from Google and later Facebook, was incremental revenue on top of their print subscriptions. And they were, on the whole, very chipper to get incremental revenue by using the Internet. As we’ve seen, that reliance has become more and more of a stranglehold on news publications and media publications in general, as they fight for those ad dollars. And the natural end of that competition is sensationalism and clickbait. That’s speaking to the financial sustainability of journalism.

The path we’ve chosen to go down, exactly for that reason, is to charge subscriptions directly to our users. So we have thousands of paying subscribers now, paying a dollar a month or ten dollars a year to access the features on Ground News. That’s a nominal price point, but there’s an ulterior motive to it. It really is about habit-building and getting people to pay for news again. Many of us have forgotten, over the last couple of decades, that sense of having to pay for news, which used to be the same as paying for electricity or water; it has disappeared. We’re trying to revive that, which will hopefully pay dividends down the line for the financial sustainability of journalism.

In terms of being a Band-Aid solution, we do think there is more of a movement toward people accepting the responsibility to do the work to inform themselves, which stands in direct contrast to the social media feed, which I think most of us have come to distrust, especially in recent years. There was, I believe, a Reuters study two years ago showing that 2018 was the first year in a decade in which fewer people went to Facebook for their news than the year before. So I do think there’s a recognition that a social media feed is no longer a viable news delivery mechanism. So people do come willing to do that little bit of work, and on our part, we make it as accessible as possible. Your question reminds me of the adage that as a consumer, if you’re not the customer, you’re the product. And that really is the divide between using a free social media feed and paying for a news delivery mechanism.

Steven Cherry Ground News is actually a service of your earlier startup, Snapwise. Do you want to say a little bit about what it does?

Sukh Singh My co-founder is a former NASA satellite engineer who worked on Earth-observation satellites. She was working on a constellation of satellites that circled the planet every 24 hours and mapped every square foot of it—literally the ground truth of what was happening everywhere on the planet. Once she left her space career, and she and I started to talk about the impact of technology on journalism, we realized: if we can map the entire planet every 24 hours and have an undeniable record of what’s happening in the world, why can’t we have the same in the news industry? So our earliest iteration of what is now Ground News was much more focused on citizen journalism—getting folks to use their phones to communicate what was happening in the world around them and getting that firsthand data into the information stream that we consume as news consumers.

If this is starting to sound like Twitter, we ran into several of the same drawbacks, especially when it came to news integrity—verifying the facts and making sure that what people were submitting as information really was vetted to the same grade as professional journalism. More and more, we realized we couldn’t diminish the role of professional journalists in delivering the news. So we started to aggregate more and more vetted, credible news publications from across the world. And before we knew it, we had fifty thousand unique sources of news—local, national, international, left to right—from your town newspaper all the way up to a giant multinational press wire service like Thomson Reuters. We were taking all those different news sources and putting them on the same platform. So that’s really been our evolution, as people trying to solve some of these problems in the journalism industry.

Steven Cherry How do you identify publications as being on the left or on the right?

Sukh Singh As we started aggregating more and more news sources, we got over the 10,000 mark, and before we knew it we were up to the 50,000 news sources we cover today. It’s humanly impossible for our small team, or even a much, much larger team, to carefully go through and label each of them. So we’ve taken that from a number of news-monitoring agencies whose entire mission and purpose as organizations is to review and rate news publications.

So we use three different ones today: Media Bias/Fact Check, AllSides, and Ad Fontes Media. All three of these—I would call them rating agencies, if you want to use the stock market analogy—rate the political leanings and the demonstrated factuality of these news organizations. We take those as inputs and aggregate them, but we do make their original labels available on our platform. To use an analogy from the movie world, we’re sort of like Metacritic, aggregating ratings from IMDb and Rotten Tomatoes and different platforms and making them all transparently available for consumers.
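The Metacritic-style aggregation Singh describes could be sketched roughly as follows. This is an illustration, not Ground News’s actual code: the numeric scale, the label set, and the averaging rule are all assumptions; only the agency names come from the interview.

```python
# Map each agency's categorical label onto a common numeric scale
# (-2 = left, 0 = center, +2 = right). The scale itself is an assumption.
SCALE = {"left": -2, "lean left": -1, "center": 0, "lean right": 1, "right": 2}

def aggregate_bias(labels):
    """Average per-agency labels into one consensus score, while
    keeping the agencies' original labels available for display."""
    scores = [SCALE[label.lower()] for label in labels.values()]
    return {"consensus": sum(scores) / len(scores), "originals": labels}

# Hypothetical ratings for one outlet from the three agencies named above.
rating = aggregate_bias({
    "Media Bias/Fact Check": "Lean Left",
    "AllSides": "Center",
    "Ad Fontes Media": "Lean Left",
})
# consensus here is (-1 + 0 + -1) / 3
```

The point of keeping `originals` alongside the consensus mirrors what Singh says: the platform aggregates but still exposes each agency’s own label rather than substituting its own judgment.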

Steven Cherry You’re based in Canada, in Kitchener, which is a small city about an hour from Toronto. I think Americans think of Canada as having avoided some of the extremisms of the U.S. I mean, other than maybe burning down the White House a couple of centuries ago, it’s been a pretty easy-going get-along kind of place. Do you think being Canadian and looking at the U.S. from a bit of a distance contributed to what you’re doing?

Sukh Singh I don’t think we’ve had a belligerent reputation since the War of 1812. As Canadians, we do enjoy a generally nice-person kind of stereotype. We are, as you said, at arm’s length, sitting not quite a safe distance away, but across the border from everything that happens in the U.S. But with frequent trips down, and being deeply integrated with the United States as a country, we do get a very, very close view of what’s happening.

North of the border, we do have our own political system to deal with, with all of its workings and all of its ins and outs. But in terms of where we’ve really seen Ground News deliver value, it certainly has been in the United States. That is both our biggest market and our largest set of subscribers by far.

Steven Cherry Thank you so much for giving us this time today and explaining a service that’s really providing an essential function in this chaotic news and political world.

Sukh Singh Thanks, Steven.

Steven Cherry We’ve been speaking with Sukh Singh, CTO and co-founder of Ground News, an app that helps break us out of our filter bubbles and tries to provide a 360-degree view of the news.

Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

This interview was recorded June 18, 2020.





Ad Fontes Media

“Beware Online Filter Bubbles” (TED talk)

The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, by Eli Pariser (Penguin, 2012)

Indoor Air-Quality Monitoring Can Allow Anxious Office Workers to Breathe Easier

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/indoor-airquality-monitoring-can-allow-anxious-office-workers-to-breathe-easier

Although IEEE Spectrum’s offices reopened several weeks ago, I have not been back and continue to work remotely—as I suspect many, many other people are still doing. Reopening offices, schools, gyms, restaurants, or really any indoor space seems like a dicey proposition at the moment.

Generally speaking, masks are better than no masks, and outdoors is better than indoors, but short of remaining holed up somewhere with absolutely no contact with other humans, there’s always that chance of contracting COVID-19. One of the reasons I personally haven’t gone back to the office is that the uncertainty over whether the virus, exhaled by a coworker, is present in the air doesn’t seem worth the novelty of being able to work at my desk again for the first time in months.

Unfortunately, we can’t detect the virus directly, short of administering tests to people. There’s no sensor you can plug in and place on your desk that will detect virus nuclei hanging out in the air around you. What is possible, and might bring people some peace of mind, is installing IoT sensors in office buildings and schools to monitor air quality and ascertain how favorable the indoor environment is to COVID-19 transmission.

One such monitoring solution has been developed by wireless tech company eleven-x and environmental engineering consultant Pinchin. The companies have created a real-time monitoring system that measures carbon dioxide levels, relative humidity, and particulate levels to assess the quality of indoor air.

“COVID is the first time we’ve dealt with a source of environmental hazard that is the people themselves,” says David Shearer, the director of Pinchin’s indoor environmental quality group. But that also gives Pinchin and eleven-x some ideas on how to approach monitoring indoor spaces, in order to determine how likely transmission might be.

Again, because there’s no such thing as a sensor that can directly detect COVID-19 virus nuclei in an environment, the companies have resorted to measuring secondhand effects. The first measurement is carbon dioxide levels. The biggest contributor to CO2 in an indoor space is people exhaling. Higher levels in a room or building suggest more people, and past a certain level they mean either there are too many people in the space or the air is not being vented properly. Indirectly, elevated carbon dioxide suggests that if someone in the space is a carrier of the coronavirus, there’s a good chance there are more virus nuclei in the air.

The IoT system also measures relative humidity. Dan Mathers, the CEO of eleven-x, says that his company and Pinchin designed their monitoring system based on recommendations by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE’s guidelines [PDF] suggest that a space’s relative humidity should hover between 40 and 60 percent. More or less than that, Shearer explains, and it’s easier for the virus to be picked up by the tissue of the lungs.

A joint sensor measures carbon dioxide levels and relative humidity. A separate sensor measures the level of 2.5-micrometer particles in the air. 2.5 micrometers (the basis of the PM2.5 designation) is an established particulate-size threshold in air-quality measurement. And as circumstances would have it, it’s also slightly smaller than the size of a virus nucleus, which is about 3 µm. They’re still close enough in size that a high level of 2.5-µm particles in the air means there’s a greater chance that some COVID-19 nuclei are potentially in the space.
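The three measurements above lend themselves to simple per-measurement threshold checks. The sketch below is illustrative, not the eleven-x/Pinchin logic: the relative-humidity band follows the ASHRAE 40–60 percent guidance cited here, while the CO2 and PM2.5 cutoffs are assumed placeholder values.

```python
CO2_MAX_PPM = 1000        # assumed cutoff; suggests crowding or poor venting
RH_BAND = (40.0, 60.0)    # percent, per the ASHRAE guidance cited above
PM25_MAX_UGM3 = 35.0      # assumed cutoff for 2.5-micrometer particles

def flag_readings(co2_ppm, rh_percent, pm25_ugm3):
    """Return the list of measurements that fall outside their guideline
    ranges. An empty list means all three readings look acceptable."""
    flags = []
    if co2_ppm > CO2_MAX_PPM:
        flags.append("co2")
    if not (RH_BAND[0] <= rh_percent <= RH_BAND[1]):
        flags.append("humidity")
    if pm25_ugm3 > PM25_MAX_UGM3:
        flags.append("pm2.5")
    return flags
```

Note that the output is a list of out-of-range measurements, not a single verdict, which matches the article’s caution below that the system is not a “yes or no” index.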

According to Mathers, the eleven-x sensors are battery-operated and communicate using LoRaWAN, a wireless standard that is most notable for being extremely low power. Mathers says the batteries in eleven-x’s sensors can last for ten years or more without needing to be replaced.

LoRaWAN’s principal drawback is that it isn’t a good fit for transmitting data-heavy messages. But that’s not a problem for eleven-x’s sensors. Mathers says the average message they send is just 10 bytes. Where LoRa excels is in situations where simple updates or measurements need to be sent—measurements such as carbon dioxide, relative humidity, and particulate levels. The sensors communicate on what’s called an interrupt basis, meaning that once a measurement passes a particular threshold, the sensor wakes up and sends an update. Otherwise, it whiles away the hours in a very low power sleep mode.
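The interrupt-basis behavior described above can be sketched as a small state machine: transmit only when a reading crosses the threshold, and stay silent otherwise. This is a conceptual sketch, not eleven-x firmware; the threshold value and the `send_update` callback are assumptions, and a real device would hand the payload to a LoRaWAN stack.

```python
class InterruptReporter:
    """Send an update only when a reading crosses a threshold, mimicking
    the interrupt-basis reporting described in the article."""

    def __init__(self, threshold, send_update):
        self.threshold = threshold
        self.send_update = send_update  # e.g., hands a ~10-byte payload to the radio
        self.above = False              # current side of the threshold

    def on_reading(self, value):
        crossed = value > self.threshold
        if crossed != self.above:       # threshold crossing: wake up and transmit
            self.send_update(value)
            self.above = crossed
        # otherwise: nothing is sent; the device stays in low-power sleep
```

A usage sketch: feeding the CO2 readings 600, 700, 1200, 1300, 800 ppm through a reporter with a 1000-ppm threshold transmits only twice, once on the way up (1200) and once on the way back down (800), which is what keeps the radio duty cycle, and hence power draw, so low.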

Mathers explains that one particulate sensor is placed near a vent to get a good measurement of 2.5-micron-particle levels. Then, several carbon dioxide/relative humidity sensors are scattered around the indoor space to get multiple measurements. All of the sensors communicate their measurements to LoRaWAN gateways that relay the data to eleven-x and Pinchin for real-time monitoring.

Typically, indoor air quality tests are done in-person, maybe once or twice a year, with measuring equipment brought into the building. Shearer believes that the desire to reopen buildings, despite the specter of COVID-19 outbreaks, will trigger a sea change for real-time air monitoring.

It has to be stressed again that the monitoring system developed by eleven-x and Pinchin is not a “yes or no” style system. “We’re in discussions about creating an index,” says Shearer, “but it probably won’t ever be that simple.” Shifting regional guidelines, and the evolving understanding about how COVID-19 is transmitted will make it essentially impossible to create a hard-and-fast index for what qualities of indoor air are explicitly dangerous with respect to the virus.

Instead, like social distancing and mask wearing recommendations, the measurements are intended to give general guidelines to help people avoid unhealthy situations. What’s more, the sensors will notify building occupants should indoor conditions change from likely safe to likely problematic. It’s another way to make people feel safe if they have to return to work or school.