Tag Archives: Telecom/Wireless

Cuba Jamming Ham Radio? Listen For Yourself

Post Syndicated from Stephen Cass original https://spectrum.ieee.org/tech-talk/telecom/wireless/cuba-jamming-ham-radio-listen-for-yourself

As anti-government protests spilled onto the streets in Cuba on July 11, something strange was happening on the airwaves. Amateur radio operators in the United States found that suddenly parts of the popular 40-meter band were being swamped with grating signals. Florida operators reported that the signals were loudest there, enough to make communication with hams in Cuba impossible. Other operators in South America, Africa, and Europe also reported hearing the signals, and triangulation software that anyone with a web browser can try placed their source inside Cuba.

Cuba has a long history of interfering with broadcast signals, with several commercial radio stations in Florida allowed to operate at higher than normal power levels to combat jamming. But these new mystery signals appeared to be intentionally targeting amateur radio transmissions. A few hours after the protest broke out on the 11th, ham Alex Valladares (W7HU) says he was speaking with a Cuban operator on 7.130 megahertz in the 40-meter band, when their conversation was suddenly overwhelmed with interference. “We moved to 7170, and they jam the frequency there,” he says. Valladares gave up for the night, but the following morning, he says, “I realize that they didn’t turn off those jammers. [Then] we went to [7]140 the next day and they put jamming in there.”

Valladares explains that he escaped from Cuba to the United States on a raft in 2005. Like many hams in the large Cuban-American community in Florida, he frequently talks with operators in Cuba, and now he says the government there is “jamming the signal to prevent the Cuban people who listen to us and to prevent them from talking between them[selves].” Valladares has also heard reports that VHF 2-meter band repeaters have been shut down in Cuba.

Two-meter band radios are typically low-power handheld walkie-talkies used for short-range communications. Their range is often extended by fixed relay repeaters, which retransmit an incoming signal using a more powerful transmitter and a well-placed antenna. Because Florida and Cuba are so close—only about 175 kilometers separate them at their closest point—it’s possible for 2-meter communications to cross the distance. “It was possible to go between Miami and Havana … with an external antenna you can talk to Havana easy because it’s not that far, it’s like 230 miles away,” says Valladares.

The short distance between southern Florida (with its large Cuban American population) and Cuba may also be one reason why the 40-meter band is plagued with interference while other shortwave bands used for long-range ham radio, such as the 20-meter band, have been left untouched. Shortwave signals travel long distances by bouncing between the Earth and layers in the ionosphere high above. Signals in different bands are reflected by different layers, and these layers change depending on whether it is day or night. (Dealing with the vagaries of ionospheric propagation is one of the fun challenges of ham radio as a hobby.) The height of these layers also effectively sets a minimum range for a band, known as the skip distance. For the 20-meter band, the skip distance varies between 700 and 1600 kilometers, making it unsuitable for communications between southern Florida and Cuba.

During daylight hours, however, the skip distance of the 40-meter band allows much shorter range communications. “The 20-meter band…doesn’t allow you to work with Havana,” says Valladares. “But the 40-meter band… it is possible to work and talk all day long.”
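The band dependence described above can be sketched with the classic flat-Earth “mirror” model of ionospheric reflection and the secant law. The layer height and critical frequency below are assumed, illustrative daytime values, not measurements:

```python
import math

def min_skip_distance_km(freq_mhz, critical_mhz, layer_height_km):
    """Minimum skip distance under a flat-Earth 'mirror' model of the
    ionosphere. By the secant law, a wave at frequency f reflects only when
    f <= f_c * sec(theta), with theta measured from vertical, so the steepest
    usable angle sets the minimum skip. When f <= f_c, even vertically
    launched signals reflect, and there is no skip zone at all."""
    if freq_mhz <= critical_mhz:
        return 0.0
    tan_theta = math.sqrt((freq_mhz / critical_mhz) ** 2 - 1)
    return 2 * layer_height_km * tan_theta

# Assumed daytime conditions: reflecting layer near 300 km, critical frequency 7 MHz
h_km, f_c = 300.0, 7.0
skip_20m = min_skip_distance_km(14.2, f_c, h_km)  # 20-meter band: ~1,060 km
skip_40m = min_skip_distance_km(7.2, f_c, h_km)   # 40-meter band: ~145 km
```

With these assumed values, the 20-meter skip lands squarely inside the 700-to-1,600-kilometer range quoted above, while daytime 40-meter signals can come down well within the roughly 175 kilometers separating Florida and Cuba.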

Another factor is that 40-meter transceivers are relatively simple to build and maintain. “In Cuba, there are a lot of people that have homemade radios. And those homemade radios are [for the] 40-meter band, which is the easiest band to make a homemade radio,” says Valladares.

Valladares alerted other hams to the interference, and soon operators were comparing notes on a forum on the QRZ.com website for hams. Hams across the southern United States and as far away as Minnesota and Rhode Island, as well as Suriname in South America, reported picking up the signals. The hams soon turned to nailing down the source of the signal. In previous decades, locating the source of a distant shortwave signal would have required specialized direction-finding equipment, but modern hams have an ace up their sleeves in the form of the public KiwiSDR network.

A KiwiSDR is a software-defined radio board that attaches to a BeagleBone computer running Linux. It can receive signals from 10 kilohertz to 30 megahertz. Because the stations are network enabled, anyone with a web browser can access a public KiwiSDR station and tune in to whatever it is receiving. Crucially, the KiwiSDR software allows users to sample multiple stations around the world simultaneously. If at least three stations can hear a given signal, a time-difference-of-arrival (TDoA) algorithm can estimate the signal’s origin.
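The idea behind a TDoA estimate can be sketched in a few lines. This toy version assumes a flat 2-D geometry, straight-line propagation, and a brute-force grid search; real HF localization must cope with longer ionospheric paths and cross-correlates recorded IQ streams, so everything here (positions, speeds, grid sizes) is illustrative only:

```python
import numpy as np

C = 3.0e8  # propagation speed in m/s (real HF paths are longer: ionospheric hops)

def tdoa_error(candidate, stations, tdoas):
    """Squared mismatch between measured arrival-time differences and those
    a source at `candidate` (x, y in meters) would produce."""
    dists = np.linalg.norm(stations - candidate, axis=1)
    predicted = (dists - dists[0]) / C  # time differences relative to station 0
    return float(np.sum((predicted - tdoas) ** 2))

def locate(stations, tdoas, extent=2.0e6, steps=201):
    """Brute-force grid search for the position that best explains the TDoAs."""
    axis = np.linspace(-extent, extent, steps)
    best_point, best_err = None, np.inf
    for x in axis:
        for y in axis:
            err = tdoa_error(np.array([x, y]), stations, tdoas)
            if err < best_err:
                best_point, best_err = np.array([x, y]), err
    return best_point

# Three listening stations and a hidden transmitter (positions in meters)
stations = np.array([[0.0, 0.0], [1.0e6, 0.0], [0.0, 1.0e6]])
true_source = np.array([4.0e5, 3.0e5])

# Simulate the measurement: arrival-time differences relative to station 0
dists = np.linalg.norm(stations - true_source, axis=1)
tdoas = (dists - dists[0]) / C

estimate = locate(stations, tdoas)  # lands within a couple of grid cells
```

Each pair of stations constrains the source to a hyperbola of constant time difference; with three or more stations the hyperbolas intersect near one point, which is what the grid search finds.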

Josh Nass (KI6NAZ), who is based in California and hosts the Ham Radio Crash Course channel on YouTube, was one of the first to use the network to locate the source. The weekend when the Cuban protests started, he says, “I noticed the interference covering a lot of the [40-meter] band. Then I had individuals reach out to me, Cuban-Americans living around the country but a lot of them out of Miami, and they said ‘we think there’s a coordinated jamming effort going on…’” This made him turn to the KiwiSDR network and the TDoA algorithm. “Sure enough, in parallel a lot of my friends were also doing the same, and we largely had the same [result], that it looks like the signals are coming from the eastern side of Cuba.”

As of this writing, the interference can still be heard via the KiwiSDR network: you can easily use a map interface to pick a station in Florida, such as the one operated by W1NEJ in Boca Raton, and listen in and then try your hand at locating the source.

If you’re new to ham radio or KiwiSDR, here are a few pointers: Set the frequency to 7106.5 kilohertz in the control box to start with, and make sure to click the “LSB” button so you’re using the right kind of demodulation to best hear the signal, which sounds like the unfortunate offspring of a frog and a Dalek. You can then use the “extension” drop-down menu to select the “TDoA” option, which will give you a map of other stations you can combine with Boca Raton to localize the source.

“What you try to do is get a good sampling of SDRs that can all receive the signal that you want to detect, and that’s the tricky part of it, because you have to log in all these SDRs, go to the signal you are looking to triangulate, and make sure they can all hear it” before using the TDoA algorithm, advises Nass.

Alain Arocha (K4KKC) is a Florida-based ham who noticed the interference early and who has also been active on the QRZ forum. He agrees that the KiwiSDR software can give misleading results if not used correctly, so he went the extra mile: he verified the KiwiSDR stations he was using for his location hunting by transmitting his own test signal on another frequency and making sure the SDRs could receive it. Arocha says he’s been disappointed by the lack of response from some hams, who view the interfering signals as a curiosity because they can deal with the interference by simply shifting to other bands or using digital modes unavailable to Cuban operators with their more basic equipment.

“The response has been real strong from the people who talk to Cuba, but some people couldn’t care less [as long as it doesn’t affect them], but this is making that part of the spectrum unusable,” says Arocha, who has seen operators from Germany and Spain report the interference. “It gives me a sense of frustration for people not to give a damn about this.” In particular, Arocha is annoyed that the American Radio Relay League (ARRL) has not been vocal about the issue.

“We are aware of reports from radio amateurs of non-amateur signals observed in the amateur bands, and most likely originating in the direction of Cuba,” says the ARRL’s Bob Inderbitzen (NQ1R). However, because American hams have access to so much spectrum, the overall impact in the United States is limited; and because the ARRL is a national rather than an international organization, “There’s no real role for ARRL” in this situation, says Inderbitzen.

However, he adds, “there’s a mechanism for amateurs to report intruders on the amateur band through the network of amateur radio societies, as organized within the International Amateur Radio Union.” The IARU relies on national governments to enforce regulations, so if the interference is indeed being generated by the Cuban government, any notification is likely to have little effect. More pointed complaints would have to come from the US government, “and such bold and publicly reported interference is most certainly known by FCC and other government agencies,” says Inderbitzen.

New Technique Turns 5G’s Power Problem Into a PAPR Tiger

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/5gs-power-problem-papr

5G has a power efficiency problem. If you already have a 5G smartphone, you may have noticed that the latest generation of wireless drains your device’s battery, so much so that operators including T-Mobile and Verizon have included “turning off 5G” in their lists of battery-saving tips.

But 5G’s built-in inefficiencies also have implications for the wider network. The wireless industry group GSM Association predicts that nearly US $1 trillion will be spent on 5G deployments by 2025—a dollar amount driven significantly by the need to compensate for 5G’s power inefficiencies.

The core of the issue stems from deliberate choices made during the development of 3G networks decades ago. At the time, the wireless industry chose to use orthogonal frequency-division multiplexing (OFDM) as the transmission technique for wireless signals.

OFDM is great at reducing distortions caused by signals in neighboring frequency bands, meaning more bands can be used at the same time to increase the total amount of transmitted data. The trade-off is that OFDM requires transmitters that can handle occasional but inevitable spikes in signal power several times greater than the typical broadcast power.

This massive difference in what’s called the “peak to average power ratio” (PAPR) is what drains smartphone batteries and requires additional, expensive infrastructure deployments. “Since 3G became available, the world has been trying to solve inefficiency,” says Earl Lum, the founder of EJL Wireless Research.

But now there might be a solution to the PAPR problem. Start-up Spectral DSP has developed and tested a variation on OFDM signals that could significantly lower PAPR. If the results hold up, implementing the new technique in wireless networks would greatly reduce the cost of 5G deployments and extend the battery lives of 5G-enabled smartphones.

First, it’s important to understand how OFDM results in a high PAPR. As the name implies, OFDM divides the entire signal into portions sent on different frequencies (called subcarriers) within the frequency band the network is using. These subcarriers are still crammed close together, however, so to prevent them from interfering with one another, they are made orthogonal to one another: their frequencies are spaced so that when a receiver is tuned to one subcarrier, the others contribute nothing to it.
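The orthogonality condition is easy to verify numerically. In this sketch (generic OFDM math, not any particular standard), subcarrier k over one symbol interval of N samples is the complex exponential exp(j2πkn/N); the inner product of two distinct subcarriers sums to zero, while a subcarrier with itself sums to N:

```python
import numpy as np

N = 64               # samples per OFDM symbol (illustrative)
n = np.arange(N)

def subcarrier(k):
    """Subcarrier k over one symbol interval: a single complex exponential."""
    return np.exp(2j * np.pi * k * n / N)

# Gram matrix of the first few subcarriers: N on the diagonal, 0 elsewhere,
# meaning a correlator tuned to one subcarrier hears nothing from the others
gram = np.array([[np.vdot(subcarrier(k), subcarrier(m)) for m in range(4)]
                 for k in range(4)])
```

The zeros off the diagonal are the entire point of the “orthogonal” in OFDM: each subcarrier can be recovered independently even though their spectra overlap.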

Normally, because of the way data is encoded for OFDM signals, these orthogonal chunks don’t all peak in phase with one another. Occasionally, however, everything lines up, and the transmitter’s power amplifier is hit with each signal chunk’s peak power at the same time.

If the power amplifier can’t handle that combined peak, the signal distorts and bleeds over into nearby frequency bands. As a rule, regulatory bodies like the Federal Communications Commission don’t look kindly on wireless operators who saturate their neighbors’ frequency bands. Thus, transmitters’ power amplifiers are designed to handle that peak power, even though it doesn’t come about often. This is what drains smartphone batteries and drives up power costs for base stations and cell towers: for lower-power signals, the power amplifier has to bleed off its excess power as heat.
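These occasional power spikes are easy to reproduce in simulation. The sketch below (random QPSK on every subcarrier, no oversampling or cyclic prefix, which slightly understates real-world peaks) shows OFDM symbols averaging several dB of PAPR, while a plain single-carrier QPSK stream has a constant envelope:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

n_subcarriers, n_symbols = 1024, 200

# Random QPSK symbols on every subcarrier, one OFDM symbol per row
bits = rng.integers(0, 2, size=(n_symbols, n_subcarriers, 2))
qpsk = ((2 * bits[..., 0] - 1) + 1j * (2 * bits[..., 1] - 1)) / np.sqrt(2)

# The IFFT sums all 1,024 subcarriers; now and then their phases align
time_domain = np.fft.ifft(qpsk, axis=1)
ofdm_paprs = np.array([papr_db(symbol) for symbol in time_domain])

# Single-carrier QPSK for contrast: every sample has identical power
single_carrier_papr = papr_db(qpsk[0])
```

Lowering this ratio is exactly what shrinks the headroom a power amplifier must reserve, which is why PAPR reduction translates directly into battery life and infrastructure cost.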

“If you’re talking about a cell site, it doesn’t matter,” says fred harris (Note: harris has long preferred to spell his name in all lowercase), a professor of electrical engineering at the University of California San Diego. “Bigger things [like cell towers] only mean more heat, more air conditioning, but you’re plugged into Hoover Dam and you can afford to pay. But when you’re on a mobile phone, you don’t have the whole dam connected to your phone.”

The core idea of the new technique hinges on the fact that while each signal chunk has a primary peak on the specific frequency it’s using (and that peak is the data you care about), it also has smaller and smaller ripples spreading into the higher and lower frequencies on either side of it.

In fact, part of the reason each chunk’s peak is so high is to clearly differentiate between when the receiver is getting an actual signal, as opposed to one of these ripples. So what harris developed was a technique to suppress those ripples, which in turn means that each chunk doesn’t have to have such a large power peak to begin with. The result is a variation on OFDM with a lower PAPR because the maximum power peak isn’t as high, in turn because each chunk’s peak is lower and ultimately adds less to the total.

The technique can reduce the PAPR by 3 to 9 dB. That means more time on a single battery charge for smartphones, and less money spent by network operators on expensive power amplifiers and cooling equipment for cell towers. It also means operators need to install fewer towers overall: a lower PAPR means each tower’s average transmission power can be higher, which extends the tower’s range.

And the technique isn’t just theoretical. “You can prove something mathematically, and then you can prove in MATLAB that the math works,” says Gil Zussman, a professor at Columbia University and the principal investigator for the National Science Foundation’s COSMOS wireless testbed in Manhattan. “Now you also have to prove, using hardware, that the MATLAB results make sense” (Zussman and Lum are also both advisors for Spectral DSP).

Spectral DSP remotely used the COSMOS testbed during the pandemic to test harris’ theory on the testbed’s cutting-edge wireless network, which spans dozens of New York City blocks, and found that the mathematical results did indeed make sense.

The next step is to actually find network operators and cellphone manufacturers interested in using the technique. The technique doesn’t require any new hardware to be installed—because it’s a change in signal modulation, it can be easily accomplished with a software update.

Spectral DSP anticipates plenty of interest, as well—not only are lower deployment and operating costs sure to entice network operators on the cell tower side, but what company doesn’t want to promise its customers longer battery life on their smartphones? “There’s only two questions the customer wants to know,” harris says. “How much you’re going to charge me per minute, and how long’s my battery going to last?”

Study: 6G’s Haptic, Holographic Future?

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/wireless/6g-haptic-holography


Imagine a teleconference but with holograms instead of a checkerboard of faces. Or envision websites and media outlets across the Internet that allow you to make haptic connections (i.e. those involving touch as well as sight and sound). Researchers studying the future of sixth-generation (6G) wireless communications are now sketching out possibilities—though not certainties—for the kinds of technologies a 6G future could entail. 

Sixth-generation wireless technology—says Harsh Tataria, a communications engineering lecturer at Lund University, Sweden—will be characterized by low latencies and ultrahigh frequencies, with data transfer speeds potentially hitting 100 Gbps. Tataria, along with colleagues from Lund University, Spark New Zealand, University of Southern California (USC), and King’s College London, recently published a paper in Proceedings of the IEEE, presenting a holistic, top-down view of 6G wireless system design. Their study began by considering the challenges and technical requirements of next-generation networks—and forecasting some of the technological possibilities that could be practically realizable within that context.

Such future-casting is to be expected as 5G deployment picks up speed around the world, at which point subsequent generations of wireless technologies come more into focus. Tataria calls this “a natural progression,” to look at the emerging trends in both technology and consumer demands. “When we look at 6G, we’re really look[ing] at vastly connected societies,” he says, “even a step beyond what 5G is capable of doing, such as real-time holographic communications.”

The study outlines what it calls a “high-fidelity holographic society,” one in which “Holographic presence will enable remote users [to be represented] as a rendered local presence. For instance, technicians performing remote troubleshooting and repairs, doctors performing remote surgeries, and improved remote education in classrooms could benefit from hologram renderings.” The authors note that 4G and expected 5G data rates may not enable such technologies—but that 6G might—owing to the fact that “holographic images will need transmission from multiple viewpoints to account for variation in tilts, angles, and observer positions relative to the hologram.”

Even simple phone conversations could involve new levels of multimedia-rich experience. “For example, in this interview…we could be talking to the rendered presence [of each other],” says Mansoor Shafi, another study co-author. “And that would provide a much richer experience than the audio call we are having at the moment.”

Another promising possibility the study teases involves what they call a haptic Internet. “We believe that a variety of sensory experiences may get integrated with holograms,” the authors write. “To this end, using holograms as the medium of communication, emotion-sensing wearable devices capable of monitoring our mental health, facilitating social interactions, and improving our experience as users will become the building blocks of networks of the future.”

Mischa Dohler, another co-author, believes that 6G will consolidate the “Internet of skills,” or the ability to transmit skills over the Internet. “We can do it with audio and video, but we can’t touch [through] the Internet…[or] move objects.” The consolidation of edge computing, robotics, AI, augmented reality and 6G communications will make this possible, he says. “This next generation Internet…will democratize skills the very same way as the Internet has democratized information.”

Another co-author, Andreas Molisch of USC, also hopes that 6G will bring better chip-to-chip communication. “As we go to 200 Gbps or more…the cable connections are just not able to keep up,” he says. “As we are [moving] to higher data rates, higher processing speeds…wireless links are one way in which this bottleneck can be overcome.” This also means increased reliability, as wireless connections are not affected by shaking or vibration, and lower costs, because replacing cables “might be more expensive than just putting in wireless transceivers on to the chips.”

Other use cases mentioned in the paper involve what the authors call extremely high-rate “information showers”—hotspots where one can experience terabits-per-second data transfer rates—mobile edge computing, and space-terrestrial integrated networks. But, as Molisch cautions, “There is [still] a lot of research that needs to be done…before the actual standardization process can start.”

With 6G going up to terahertz frequencies, there will be tremendous challenges in building new hardware as well, the researchers say. Better semiconductor technologies will also be needed for faster devices. Other challenges remain as well, including power consumption.

With frequency bands moving up in the hundreds of gigahertz, “even fundamental things like circuits and substrates to develop circuits [are] extremely tricky,” says Tataria. “So getting all those things right, and going from the fundamental-level details all the way up to building a system is going to be substantially harder than what it first came across.” Their study, therefore, attempts to explore the trade-offs involved in each futuristic technology.

As the authors point out, this study is not a comprehensive or definitive account of 6G’s capabilities and limitations—but rather a documentation of the research conducted to date and the interesting directions for 6G technologies that future researchers could pursue.

How To Build a Radio That Ignores Its Own Transmissions

Post Syndicated from Joel Brand original https://spectrum.ieee.org/telecom/wireless/how-to-build-a-radio-that-ignores-its-own-transmissions

Wireless isn’t actually ubiquitous. We’ve all seen the effects: Calls are dropped and Web pages sometimes take forever to load. One of the most fundamental reasons why such holes in our coverage occur is that wireless networks today are overwhelmingly configured as star networks. This means there’s a centrally located piece of infrastructure, such as a cell tower or a router, that communicates with all of the mobile devices around it in a starburst pattern.

Ubiquitous wireless coverage will happen only when a different type of network, the mesh network, enhances these star networks. Unlike a star network, a mesh network consists of nodes that communicate with one another as well as end-user devices. With such a system, coverage holes in a wireless network can be filled by simply adding a node to carry the signal around the obstruction. A Wi-Fi signal, for example, can be strengthened in a portion of a building with poor reception by installing a node that communicates with the main router.

However, current wireless mesh-network designs have limitations. By far the biggest is that a node in a mesh network will interfere with itself as it relays data if it uses the same frequency to transmit and receive signals. So current designs send and receive on different frequency bands. But spectrum is a scarce resource, especially for the heavily trafficked frequencies used by cellular networks and Wi-Fi. It can be hard to justify devoting so much spectrum to filling in coverage holes when cell towers and Wi-Fi routers do a pretty good job of keeping people connected most of the time.

And yet, a breakthrough here could bring mesh networks into even the most demanding and spectrum-intensive networks, for example ones connecting assembly floor robots, self-driving cars, or drone swarms. And indeed, such a breakthrough technology is now emerging: self-interference cancellation (SIC). As the name implies, SIC makes it possible for a mesh-network node to cancel out the interference it creates by transmitting and receiving on the same frequency. The technology literally doubles a node’s spectral efficiency, by eliminating the need for separate transmit and receive frequencies.

There are now tens of billions of wireless devices in the world. At least 5 billion of them are mobile phones, according to the GSM Association. The Wi-Fi Alliance reports more than 13 billion Wi-Fi equipped devices in use, and the Bluetooth Special Interest Group predicts that more than 7.5 billion Bluetooth devices will be shipped between 2020 and 2024. Now is the time to bring wireless mesh networks to the mainstream, as wireless capability is built into more products—bathroom scales, tennis shoes, pressure cookers, and too many others to count. Consumers will expect them to work everywhere, and SIC will make that possible, by enabling robust mesh networks without coverage holes. Best of all, perhaps, they’ll do so by using only a modest amount of spectrum.

Cellphones, Wi-Fi routers, and other two-way radios are considered full-duplex radios. This means they are capable of both sending and receiving signals, oftentimes by using separate transmitters and receivers. Typically, radios will transmit and receive signals using either frequency-division duplexing, meaning transmit and receive signals use two different frequencies, or time-division duplexing, meaning transmit and receive signals use the same frequency but at different times. The downside of both duplexing techniques is that each frequency band is theoretically being used to only half of its potential at any given time—in other words, to either send or receive, not both.

It’s been a long-standing goal among radio engineers to develop full duplex on the same frequency, which would be able to make maximal use of spectrum by transmitting and receiving on the same band at the same time. You could think of other full-duplex measures as being like a two-lane highway, with traffic traveling in different directions on different lanes. Full-duplexing on the same frequency would be like building just a single lane with cars driving in both directions at once. That’s nonsensical for traffic, perhaps, but entirely possible for radio engineering.

To be clear, full duplex on the same frequency remains a goal that radio engineers are still working toward. Self-interference cancellation is bringing radios closer to that goal, by enabling a radio to cancel out its own transmissions and hear other signals on the same frequency at the same time, but it is not a perfected technology.

SIC is just now starting to emerge into mainstream use. In the United States, there are at least three startups bringing SIC to real-world applications: GenXComm, Lextrum, and Kumu Networks (where I am vice president of product management). There are also a handful of substantial programs developing self-cancellation techniques at universities, namely Columbia, Stanford (where Kumu Networks got its start), and the University of Texas at Austin.

Upon first consideration, SIC might seem simple. After all, the transmitting radio knows exactly what its transmit signal will be, before the signal is sent. Then, all the transmitting radio has to do is cancel out its own transmit signal from the mixture of signals its antenna is picking up in order to hear signals from other radios, right?

In reality, SIC is more complicated, because a radio signal must go through several steps before transmission that can affect the transmitted signal. A modern radio, such as the one in your smartphone, starts with a digital version of the signal to be transmitted in its software. However, in the process of turning the digital representation into a radio-frequency signal for transmission, the radio’s analog circuitry generates noise that distorts the RF signal, making it impossible to use the signal as-is for self-cancellation. This noise cannot be easily predicted because it partly results from ambient temperatures and subtle manufacturing imperfections.

The difference in magnitude of an interfering transmitted signal’s power in comparison with that of a desired received signal also confounds cancellation. The power transmitted by the radio’s amplifier is many orders of magnitude stronger than the power of received signals. It’s like trying to hear someone several feet away whispering to you while you’re simultaneously shouting at them.

Furthermore, the signal that arrives at the receiving antenna is not quite the same as it was when the radio sent it. By the time it returns, the signal also includes reflections from nearby trees, walls, buildings, or anything else in the radio’s vicinity. The reflections become even more complicated when the signal bounces off of moving objects, such as people, vehicles, or even heavy rain. This means that if the radio simply canceled out the transmit signal as it was when the radio sent it, the radio would fail to cancel out these reflections.

So to be done well, self-interference cancellation techniques depend on a mixture of algorithms and analog tricks to account for signal variations created by both the radio’s components and its local environment. Recall that the goal is to create a signal that is the inverse of the transmit signal. This inverse signal, when combined with the original receive signal, should ideally cancel out the original transmit signal entirely—even with the added noise, distortions, and reflections—leaving only the received signal. In practice, though, the success of any cancellation technique is still measured by how much cancellation it provides.

Kumu’s SIC technique attempts to cancel out the transmit signal at three different stages while the radio receives a signal. With this three-tier approach, Kumu’s technique reaches roughly 110 decibels of cancellation, compared with the 20 to 25 dB of cancellation achievable by a typical mesh Wi-Fi access point.

The first step, which is carried out in the analog realm, is at the radio frequency level. Here, a specialized SIC component in the radio samples the transmit signal, just before it reaches the antenna. By this point, the radio has already modulated and amplified the signal. What this means is that any irregularities caused by the radio’s own signal mixer, power amplifier, and other components are already present in the sample and can therefore be canceled out by simply inverting the sample taken and feeding it into the radio’s receiver.

The next step, also carried out in the analog realm, cancels out more of the transmitting signal at the intermediate-frequency (IF) level. Intermediate frequencies, as the name suggests, are a middle step between a radio’s creation of a digital signal and the actual transmitted signal. Intermediate frequencies are commonly used to reduce the cost and complexity of a radio. By using an intermediate frequency, a radio can reuse components like filters, rather than including separate filters for every frequency band and channel on which the radio may operate. Both Wi-Fi routers and cellphones, for example, first convert digital signals to intermediate frequencies in order to reuse components, and convert the signals to their final transmit frequency only later in the process.

Kumu’s SIC technique tackles IF cancellation in the same way it carries out the RF cancellation. The SIC component samples the IF signal in the transmitter before it’s converted to the transmit frequency, modulated, and amplified. The IF signal is then inverted and applied to the receive signal after the receive signal has been converted to the intermediate frequency. An interesting aspect of Kumu’s SIC technique to note here is that the sampling steps and cancellation steps happen as an inverse of one another. In other words, while the SIC component samples the IF signal before the RF signal in the transmitter, the component applies cancellation to the RF signal before the IF signal.

The third and final step in Kumu’s cancellation process applies an algorithm to the received signal after it has been converted to a digital format. The algorithm compares the remaining received signal with the original transmit signal from before the IF and RF steps. The algorithm essentially combs through the received signal for any lingering effects that might possibly have been caused by the transmitter’s components or the transmitted signal reflecting through the nearby environment, and cancels them.
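As an illustration of this digital step only (not Kumu’s actual, proprietary algorithm), the sketch below models the leftover self-interference as an unknown multitap echo channel, estimates that channel by least squares from the known transmit samples, and subtracts the reconstruction. The channel taps, power levels, and signal lengths are all made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n, taps = 5000, 8

# Known transmit samples (unit-power complex noise stands in for a waveform)
tx = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Unknown self-interference channel: direct leakage plus two weaker echoes
h_true = np.array([1.0, 0.0, 0.3 - 0.1j, 0.0, 0.05j])
self_interference = np.convolve(tx, h_true)[:n]

# The signal we actually want to hear is roughly 60 dB weaker than our own TX
desired = 1e-3 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
rx = self_interference + desired

# Digital cancellation: least-squares fit of delayed copies of the known
# transmit signal to the received samples, then subtract the reconstruction
X = np.column_stack([np.concatenate([np.zeros(k), tx[:n - k]])
                     for k in range(taps)])
h_est, *_ = np.linalg.lstsq(X, rx, rcond=None)
residual = rx - X @ h_est  # ideally only the desired signal remains

leftover = residual - desired  # interference that survived cancellation
cancellation_db = 10 * np.log10(np.mean(np.abs(self_interference) ** 2)
                                / np.mean(np.abs(leftover) ** 2))
```

In a real radio this estimation runs continuously, because the echo channel drifts as people and vehicles move; it is also why the analog RF and IF stages must do the heavy lifting first, so the strong transmit signal never swamps the receiver’s converters.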

None of these steps is 100 percent effective. But taken together, they can reach a level of cancellation sufficient to remove enough of the transmit signal to enable the reception of other reasonably strong signals on the same frequency. This cancellation is good enough for many key applications of interest, such as the Wi-Fi repeater described earlier.
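
To see how the stages combine, note that cancellation ratios multiply, so their decibel values add. The per-stage figures below are hypothetical round numbers, not Kumu's published specifications:

```python
# Hypothetical per-stage cancellation figures, in decibels; these are
# illustrative numbers, not Kumu's published specifications.
rf_analog_db = 25   # step 1: RF-level analog cancellation
if_analog_db = 20   # step 2: IF-level analog cancellation
digital_db = 65     # step 3: digital cancellation

# Cancellation ratios multiply, so their decibel values simply add.
total_db = rf_analog_db + if_analog_db + digital_db
print(total_db)   # 110 dB: transmit power knocked down by a factor of 10**11

# A 100-milliwatt (20-dBm) transmitter would then leave a residual of
# -90 dBm, comparable to a reasonably strong received signal.
tx_power_dbm = 20
residual_dbm = tx_power_dbm - total_db
print(residual_dbm)
```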

As I mentioned before, engineers still haven’t fully realized full-duplex-on-the-same-frequency radios. For now, SIC is being deployed in applications where the transmitter and receiver are close to one another, or even in the same physical chassis, but not sharing the same antenna. Let’s take a look at a few important examples.

Kumu’s technology is already commercially deployed in 4G Long Term Evolution (LTE) networks, where a device called a relay node can plug coverage holes, thanks to SIC. A relay node is essentially a pair of two-way radios connected back-to-back. The first radio of the pair, oriented toward a 4G cell tower, receives signals from the network. The second radio, oriented toward the coverage hole, passes on the signals, on the same frequency, to users in the coverage hole. The node also receives signals from users in the coverage hole and relays them—again, on the same frequency—to the cell tower. Relay nodes perform similarly to traditional repeaters and extenders that expand a coverage area by repeating a broadcast signal farther from its source. The difference is, relay nodes do so without amplifying noise, because they decode and regenerate the original signal rather than just boosting it.

Because a relay node fully retransmits a signal, the transmitter facing the 4G base station must not interfere with the receiver facing the coverage hole. Remember that a big problem in reusing spectrum is that transmit signals are orders of magnitude louder than receive signals. You don’t want the node’s own retransmissions to drown out the user signals it’s trying to relay. Likewise, you don’t want the transmitter facing the coverage hole to overwhelm signals coming in from the cell tower. SIC prevents each radio from drowning out the signals the other is listening for, by canceling its own transmissions.

Ongoing 5G network deployments offer an even bigger opportunity for SIC. 5G differs from previous cellular generations in its inclusion of small cells, essentially miniature cell towers placed 100 to 200 meters apart. 5G networks require small cells because this generation uses higher-frequency, millimeter-wave signals, which do not travel as far as other cellular frequencies. The Small Cell Forum predicts that by 2025, more than 13 million 5G small cells will be installed worldwide. Every single one of those small cells will require a dedicated link, called a backhaul link, connecting it to the rest of the network. The vast majority of those backhaul links will be wireless, because the alternative—fiber-optic cable—is more expensive. Indeed, the 5G industry is developing a set of standards called Integrated Access and Backhaul (IAB) to create more robust and efficient wireless backhaul links.

IAB, as the name suggests, has two components. The first is access, meaning the ability of local devices such as smartphones to communicate with the nearest small cell. The second is backhaul, meaning the ability of the small cell to communicate with the rest of the network. The first proposed schemes for 5G IAB were either to let access and backhaul communications take turns on the same high-speed channel, or to use separate channels for the two sets of communications. Both come with major drawbacks. The problem with sharing the same channel is that turn-taking introduces time delays, a problem for latency-sensitive applications like virtual reality and multiplayer gaming. Using separate channels, on the other hand, incurs a substantial cost: You’ve doubled the amount of often-expensive wireless spectrum you need to license for the network. In either case, you’re not making the most efficient use of the wireless capacity.
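
The trade-off between the two first-pass IAB schemes, and what full duplex via SIC buys, can be summarized with back-of-the-envelope arithmetic. All figures below are hypothetical round numbers, chosen only to make the comparison concrete:

```python
# Back-of-the-envelope comparison of the two first-pass IAB schemes and of
# full duplex with SIC. All figures are hypothetical round numbers.
channel_mbps = 1000     # capacity of one licensed high-speed channel
frame_ms = 1.0          # scheduling frame for turn-taking

# Scheme 1: access and backhaul take turns on one channel.
shared_access_mbps = channel_mbps / 2    # each direction gets half the airtime
worst_wait_ms = frame_ms / 2             # added delay while the other side talks

# Scheme 2: separate channels -- full rate, no waiting, double the spectrum.
separate_access_mbps = channel_mbps
separate_channels_licensed = 2

# Full duplex via SIC: full rate, no waiting, on a single channel.
sic_access_mbps = channel_mbps
sic_channels_licensed = 1

print(shared_access_mbps, worst_wait_ms, separate_channels_licensed, sic_channels_licensed)
```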

As in the LTE relay-node example, SIC can cancel the transmit signal from an access radio on a small cell at the backhaul radio’s receiver, and similarly, cancel the transmit signal from a backhaul radio on the same small cell at the access radio’s receiver. The end result is that the cell’s backhaul radio can receive signals from the wider network even as the cell’s access radio is talking to nearby devices.

Kumu’s technology is not yet commercially deployed in 5G networks using IAB, because IAB is still relatively new. The 3rd Generation Partnership Project, which develops protocols for mobile telecommunications, froze the first round of IAB standards in June 2020, and since then, Kumu has been refining its technology through industry trials.

One last technology worth mentioning is Wi-Fi, which is starting to make greater use of mesh networks. A home Wi-Fi network, for example, now needs to reach PCs, TVs, webcams, smartphones, and any smart-home devices, regardless of their location. A single router may be enough to cover a small house, but bigger houses, or a small office building, may require a mesh network with two or three nodes to provide complete coverage.

Current popular Wi-Fi mesh techniques allocate some of the available wireless bands for dedicated internal communication between the mesh nodes. By doing so, they give up on some of the capacity that otherwise could have been offered to users. Once again, SIC can improve performance by making it possible for internal communications and signals from devices to use the same frequencies simultaneously. Unfortunately, this application is still a way off compared with 4G and 5G applications. As it stands, it’s not currently cost effective to develop SIC technology for Wi-Fi mesh networks, because these networks typically handle much lower volumes of traffic than 4G and 5G base stations.

Mesh networks are being increasingly deployed in both cellular and Wi-Fi networks. The two technologies are becoming more and more similar in how they function and how they’re used, and mesh networks can address the coverage and backhaul issues experienced by both. Mesh networks are also easy to deploy and “self-healing,” meaning that data can be automatically routed around a failed node. Truly robust 4G LTE mesh networks are already being greatly improved by full duplex on the same frequency. I expect the same to happen in the near future with 5G and Wi-Fi networks.

And it will arrive just in time. The trend in wireless is to squeeze more and more performance out of the same amount of spectrum. By letting radios transmit and receive on a single frequency, SIC effectively doubles the capacity of the available spectrum, and in doing so it is helping to usher in entirely new categories of wireless applications.

This article appears in the March 2021 print issue as “The Radio That Can Hear Over Itself.”

About the Author

Joel Brand is the vice president of product management at Kumu Networks, in Sunnyvale, Calif.

When Our Connected Devices Lack Their Connections

Post Syndicated from Mark Pesce original https://spectrum.ieee.org/telecom/wireless/when-our-connected-devices-lack-their-connections

QR (Quick Response) codes have undergone something of a renaissance during the pandemic: Where I live in Australia, we use them to check in at a restaurant or cinema or classroom. They anchor the digital record of our comings and goings to better assist contact tracers. It’s annoying to have to check into such venues this way—and even more annoying when it doesn’t work.

Not long ago, I went to a local café, only to have the check-in process fail. I thought the problem might be my smartphone, so my companion gave it a go, using a different phone, OS, and carrier. No luck. We later learned the entire check-in infrastructure for our state had gone down. Millions of people were similarly vexed—I would argue completely needlessly.

Nearly all the apps on our smartphones—and on our desktop computers—rest on networked foundations. Over the last generation, network access has become ubiquitous, useful, cheap, and stable. While such connectivity has obvious benefits, it seems to have simultaneously encouraged a form of shortsightedness among app developers, who find it hard to imagine that our phones might ever be disconnected.

Before the arrival of the smartphone, we encountered connected devices only rarely. Now they number in the tens of billions—each designed around a persistently available network. Cut off from the network, these devices usually fail completely, as the contact-tracing app did for a time for so many Australians.

People who develop firmware or apps for connected devices tend to write code for two eventualities: One assumes perfect connectivity; the other assumes no connectivity at all. Yet our devices nearly always reside in a gray zone. Sometimes things work perfectly, sometimes they don’t work at all, and sometimes they only work intermittently.

It’s that third situation we need to keep front of mind. I could have checked into my café, only to have the data documenting my visit uploaded later (after some panicked IT administrator had located and rebooted the failed server). But the developers of this system weren’t thinking in those terms. They should have taken better care to design an app that could fail gracefully, retaining as much utility as possible, while patiently waiting for a network connection to be reestablished.

Later that same day, I tried to summon my smartphone-app-based loyalty card when checking out at the supermarket. The app wouldn’t launch. “Yeah,” the teenage cashier said, “The mobile signal is horrible in here.”

“But the app should work, even with a crappy signal,” I insisted.

“Everything’s connected,” he counseled, leaving it there.

Like fish in water, we now swim in a sea of our own connectivity. We can’t see it, except in those moments when the water suddenly recedes, leaving us flapping around on the bottom, gasping for breath.

Typically, such episodes are not much more than a temporary annoyance, but there are times when they become much more serious—such as when a smartwatch detects atrial fibrillation or a sudden fall and needs to signal that the wearer is in distress. In a life-or-death situation, an app needs to provide defense in depth to get its message out by every conceivable path: Bluetooth, mesh network—even an audio alarm.

As engineers, we need to remember that wireless networking is seldom perfect and design for the shadowy world of the sometimes connected. Our aim should be to build devices that do as well as they possibly can, connected or not.

This article appears in the March 2021 print issue as “Spotty Connections.”

Alphabet’s Loon Failed to Bring Internet to the World. What Went Wrong?

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/why-did-alphabets-loon-fail-to-bring-internet-to-the-world

Loon’s soaring promise to bring Internet access to the world via high-altitude balloons deflated last week, when the company announced that it will be shutting down. With the announcement, Loon became the latest in a list of tech companies that have been unable to realize the lofty goal of universal Internet access.

The company, a subsidiary of Alphabet (Google’s parent company), sought to bring the Internet to remote communities that were otherwise too difficult to connect.

It’s not entirely surprising that Loon wasn’t able to close the global connectivity gap, even though the shutdown announcement itself seemingly came out of the blue. While the company had experienced some success in early trials and initial deployments, the reality is that the inspiring mission to “connect the next (or last) billion users” touted by tech companies is more difficult than they often realize.

But first, why did Loon fail? According to the January 21 blog post by Loon CEO Alastair Westgarth that announced the company’s fate, Loon wasn’t able to find a way “to get the costs low enough to build a long-term, sustainable business.”

Loon, originally spun out of Alphabet’s research subsidiary X, seemed to have some good things going for it. For one, the balloon-based business model promised to skirt the need for heavy infrastructure investments. Instead, each balloon could provide 4G LTE coverage over 11,000 square kilometers—an area that would otherwise require 200 cell towers to cover. The balloons could communicate with each other using wireless millimeter-wave backhaul links. All that was needed was one or two robust ground connections to the Internet backbone, and the balloons would theoretically be able to cover an entire country wirelessly.

The Loon team even figured out how to get the balloons to more or less stay put once they were in the air, which Sal Candido, Loon’s chief technology officer, once told IEEE Spectrum was the biggest technical challenge facing the concept.

Loon first rose to prominence when its balloons provided wireless coverage to Puerto Rico after Hurricane Maria devastated the island’s telecommunications infrastructure in September 2017. The company also created mobile hotspots in Peru on several occasions in the wake of natural disasters. Most auspiciously, the company deployed a commercial network in Kenya in partnership with service provider Telkom Kenya in July 2020.

Any network deployment is going to cost money, of course. And part of the problem with companies like Loon, according to Sonia Jorge, is that they expect unreasonable returns. Jorge is the executive director of the Alliance for Affordable Internet (A4AI), a global initiative to reduce the costs of Internet access, particularly in developing areas. In the tech industry, “very fast growth has yielded expectations of very high and very fast returns,” Jorge says, but in reality, such returns are uncommon. That goes double for companies connecting poorer or more remote communities.

It’s also possible that Loon, while lowering the cost of delivering wireless coverage in a country like Kenya, failed to lower it enough. For example, if the typical monthly cost of Internet access is US $50 more than the average person can afford, and Loon manages to shrink that gap to $25, the service is still too expensive.

Which is not to say that Loon’s efforts were foolish or meaningless. “We absolutely need to experiment and innovate,” says Jorge. “People at Loon will take their knowledge and carry it elsewhere, in ways we can’t even imagine yet.”  Loon’s technological developments are already being carried over to another X project, Taara, which hopes to replace expensive fiber optic cable deployments with cheaper, over-the-air optical beams. Loon’s tech also has potential applications in satellite communications.

Technological innovation can only be part of the solution. According to Jorge, bringing affordable access to everyone also requires innovative policy, and being realistic about assumptions like how quickly you can reallocate spectrum for a new wireless use. It also means working with and taking inspiration from small-scale efforts by rural providers that are already making affordable Internet a reality.

And although tech companies like Loon may plummet because they underestimate the challenges facing them, the reality is that providing global Internet access is not impossible. According to a 2020 report from A4AI, the cost of providing affordable 4G-equivalent access to everyone over the age of 10 on the planet by 2030 is about $428 billion—about the same amount of money the world spends on soda per year.

The biggest piece of advice Jorge would give to tech companies like Loon that are looking to make a serious impact in expanding affordable access is to know what they’re getting into. In other words, don’t guess at how easily you’ll be able to access spectrum, or deploy in an area, or anything else. Work with local partners. “Just because you’re a wonderful inventor,” she says, “don’t think you know it all. Because you don’t.”

Smart Surface Sorts Signals—The “Metaprism” Evaluated

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/wireless/could-you-receive-data-using-a-metaprism

The search for better ways to optimize and direct wireless communication is never-ending. These days the uses of metamaterials—marvels of material science engineered to have properties not found in naturally occurring stuff—seem similarly boundless. Why not use the latter to attempt the former?

This was the basic idea behind a recent theoretical study, published earlier this month in IEEE Transactions on Wireless Communications. In it, Italian scientists describe a means of directing wireless communication signals using a reflective surface made from a metamaterial. This meta “prism” differently redirects microwave and radio signals depending on their frequencies, much the way conventional prisms bend light differently depending on a light beam’s color.

Earlier generations of wireless networks relied on lower-frequency electromagnetic waves that easily penetrate objects and are omnidirectional, meaning a signal can be sent across a general area and does not require much directional precision to be received.

But as these lower frequencies became overburdened with increasing demand, network providers have started using shorter wavelengths, which yield substantially higher rates of data transmission—but require more directional precision. These higher frequencies also must be relayed around objects that the waves cannot penetrate. 5G networks, now being rolled out, require such precision. Future networks that rely on even higher frequencies will too. 

Davide Dardari is an Associate Professor at the University of Bologna, Italy, who co-authored the new study. “The idea is rather simple,” he says. “The metaprism we proposed… [is] made of metamaterials capable of modifying the way an impinging electromagnetic field is reflected.” 

Just like a regular prism refracts white light to create a rainbow, this proposed metaprism could be used to create and direct an array of different frequencies, or sub-carrier channels. Sub-carrier frequencies would then be assigned to different users and devices based on their positioning.
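
This frequency-dependent steering can be sketched with the generalized law of reflection for a phase-gradient surface, sin θr = sin θi + (λ/2π)(dφ/dx). The phase gradient and sub-carrier frequencies below are arbitrary illustrations, not design values from the study:

```python
import numpy as np

# Generalized law of reflection for a phase-gradient surface:
#   sin(theta_r) = sin(theta_i) + (lambda / (2*pi)) * d(phi)/dx
# The phase gradient and frequencies here are arbitrary illustrations.
c = 3e8                      # speed of light, m/s
dphi_dx = 120.0              # surface phase gradient, radians per meter (hypothetical)
theta_i = np.deg2rad(20)     # incidence angle of the incoming beam

angles = []
for f_ghz in (24, 28, 32):   # three hypothetical sub-carrier frequencies
    lam = c / (f_ghz * 1e9)
    s = np.sin(theta_i) + lam * dphi_dx / (2 * np.pi)
    angles.append(np.degrees(np.arcsin(s)))
    print(f_ghz, "GHz ->", round(angles[-1], 1), "degrees")
```

Each frequency reflects at a different angle, which is exactly the prism-like behavior that lets sub-carriers be assigned to users by position.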

A major advantage of this approach is that metaprisms, if distributed properly around an area, could passively direct wireless signals, without requiring any power supply. “One or more surfaces designed to act as metaprisms can be deployed in the environment, for example on walls, to generate artificial reflections that can be exploited to enhance the coverage in obstructed areas,” explains Dardari.

He also notes that, because the metaprism would direct the multiple, parallel communication channels in different directions, this will help reduce latency when multiple users are present. “As a [result of these advantages], we expect the cost of the prisms will be much lower than that of the other solutions,” he says.

While this work is currently theoretical, Dardari plans to collaborate with other experts to develop the tailored metamaterials required for the metaprism. He also plans to explore the advantages of the metaprism as a means of controlling signal interference.

Ensure 5G Systems Integrity by using Multiphysics Analysis of Chips, Packages, and Systems

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/ensure-5g-systems-integrity-by-using-multiphysics-analysis-of-chips-packages-and-systems

The evolution of 5G systems is rooted in the consistently increasing need for more data.  Whether the application is future medical systems, autonomous vehicles, smart cities, AR/VR, IoT, or standard mobile communications systems, all require an ever-increasing amount of data.  For 5G systems to reliably work, the physical data pathways must be well understood and reliably designed.  Whether the pathway is in the chip or the wireless channel, large amounts of data must seamlessly flow unimpeded.  This presentation will discuss the various data paths in 5G systems and how Multiphysics analysis can help design these paths to allow for maximum data integrity.


Wade Smith, Manager, Application Engineering

Wade Smith is an Applications Engineer Manager at ANSYS, Inc. in the High Frequency Electromagnetics Division where he specializes in Signal Integrity applications via tools such as HFSS, SIwave and Q3D.  Prior to joining ANSYS, Wade worked with Sciperio, Inc. where he conducted multiple projects including antenna, FSS, and wireless sensor applications using 3D Printed fabrication techniques and electronic textiles.  Prior to Sciperio, Wade worked at Harris Corporation where he was involved in projects such as wireless sensor applications, high-speed data-rate systems, 3D embedded RF Filter designs, antennas, and SIP/MCM applications used in the miniaturization of Software Defined Radio systems.  Wade has an M.S.E.E. from the University of Central Florida.

5G Cellular Spectrum Auction—Can’t Tell the Players Without a Scorecard

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/wireless/5g-cellular-spectrum-auctioncant-tell-the-players-without-a-scorecard

Steven Cherry Today’s episode may confuse people and search engines alike—we’ve titled this podcast series Radio Spectrum, but our topic for today is the radio spectrum. Could you hear the capital letters the first time I said “radio spectrum” and all lowercase letters the second time? I thought not.

Anyway, let’s dive in. Here’s a quote and a 10-point quiz question—who said this?

“The most pressing communications problem at this particular time, however, is the scarcity of radio frequencies in relation to the steadily growing demand.”

Everyone who guessed that it was a U.S. President give yourselves a point. If you guessed it was President Harry Truman, in 1950, give yourself the other 9 points.

At the time, he was talking mainly about commercial radio, ham radio, and the nascent technology of television. We didn’t even yet have Telstar, the satellite link in the land-based telephone monopoly. That would come a decade later. A decade after that, Motorola researcher Martin Cooper would make the first mobile telephone call.

By the late ’70s, the Federal Communications Commission was set to allocate spectrum specifically for these new mobile devices. But its first cellular spectrum allocation was a messy affair. The U.S. was divided up into 120 cellular markets, with two licenses each, and in some cases, hundreds of bidders. By 1984, the FCC had switched over to a lottery system. Unsurprisingly, people gamed the system. The barriers to entering the lottery were low, and many of the 37,000 applicants—yes, 37,000 applications—simply wanted to flip the spectrum for a profit if they won.

The FCC would soon move to an auction system. Overnight, the barrier to entry went from very low to very high. One observer noted that these auctions were not “for the weak of heart or those with shallow pockets.”

Cellular adoption grew at a pace no one could anticipate. In 1990 there were 12 million mobile subscriptions worldwide and no data services. Twenty-five years later, there were more than 7 billion subscriber accounts sending and receiving about 50 exabytes per day and accounting for something like four percent of global GDP.

Historically, cellular has occupied a chunk of the radio spectrum that had television transmissions on the one side and satellite use on the other. It should come as no surprise that to meet all that demand, our cellular systems have been muscling out their neighbors for some time.

The FCC is on the verge of yet another auction, to start on December 8. Some observers think this will be the last great auction, for at least a while. It’s for the lower portion of what’s called the C-band, which stretches from 3.7 to 4.2 gigahertz.

Here to sort out the who, what, when, why, and a bit of the how of this auction is Mark Gibson, Senior Director for Business Development and Spectrum Policy at CommScope, a venerable North Carolina-based manufacturer of cellular and other telecommunications equipment—the parent company of Arris, which might be a more recognizable name to our listeners. And most importantly, he’s an EE and an IEEE member.

Mark, welcome to the podcast.

Mark Gibson Thank you, Steven. I love that intro. I learned something. I got one point, but I didn’t get the other nine. So …  but thank you, that was very good.

Steven Cherry Thank you. Mark, maybe we could start with why this band and this particular 280 MHz portion of it is so important that one observer called it the Louisiana Purchase of cellular?

Mark Gibson Well, that’s a great question. Probably the best reason is how much spectrum is being offered. I believe this is the largest contiguous spectrum in the mid-bands, not considering the millimeter wave, which is the spectrum around 28 gigahertz. Those bands were offered in large, gigahertz-size chunks, but they have propagation issues. This is the mid-band, and of course mid-band spectrum has superior propagation characteristics over millimeter wave. So you have 280 MHz of spectrum that’s in basically the sweet spot of the range—spectrum that Harry Truman didn’t think about when he talked about the dearth. And so that’s the primary reason: a large amount of spectrum in the middle—the upper middle of the mid-band range—is one of the reasons this is of so much interest.

Steven Cherry I’m going to ask you a question about 5G that’s maybe a bit cynical. 5G makes our phones more complex, it uses a ton of battery life, it will often increase latency, something we had a recent podcast episode about, it’s fabulously expensive for providers to roll out, and therefore it will probably increase costs for customers, or at least prevent our already-too-high costs from decreasing. When, if ever, will it be a greater plus than a minus?

Mark Gibson That’s a great question, because when you consider the C-band—the spectrum at 3700 to 3980 megahertz—that … Let me back up a minute. One of the reasons the issues exist for 5G has to do with the fact that, for all intents and purposes, 5G has been deployed in the millimeter-wave bands. If you, for example, read a lot of the consumer press, the Wall Street Journal, and others, when they talk about 5G, they talk about it in the context of millimeter wave.

They talk about small cells, indoor capability, and the like. When you think about 5G in the context of this spectrum, the C-band, a lot of the concerns you’re talking about, at least in terms of some of the complexity with respect to propagation, begin to diminish. Now, the inherent complexities with 5G still exist, and some of those are overcome with a lot of spectrum, especially the fact that the spectrum will be TDD, time-division duplex. But I’d say that for the most part, the fact that 5G will be deployed in this mid-band range will give it superior characteristics over millimeter wave, at least in that you’ve got a lot of spectrum and you have it in the lower mid-band, so signals can travel farther.

The other inherent concerns about 5G are, I guess you could say, the same things we heard about 4G versus 3G. So I think it’s just a question of getting used to the Gs as they get bigger. The one main thing that will help this out is this band of spectrum, as well as the band that’s right below it, which is the CBRS band that’s supposed to be a 5G band as well.

Steven Cherry There was already an auction this summer of the band just above this at 3.5 GHz. The process of December’s auction started back in 2018. The auction itself will be held at the end of 2020. What took so long?

Mark Gibson Well, part of the problem is that the band is occupied by—right now I think the number is around 16,500—Earth stations on the ground, and then twenty-some satellites in the air. So it took a long time to figure out how to accommodate them. They’re incumbents, and so comes the age-old question when new spectrum is made available: What to do with the incumbents? Do you share, do you relocate, or do you do both and call it transitional sharing? There was a lot of discussion in that regard around the owners of the spectrum.

The interesting thing here: you have an Earth station on the ground that transmits to the satellite. That could be anybody—a lot of these are broadcasters. NPR has a network, Disney, ESPN, the regular cable broadcasters, and whatnot. Then you have satellite owners, who are different, and then you have Earth station owners, who are different. So in this ecosystem, there are three different classes of owners, and trying to sort out their rights to the spectrum was complicated. And then, how do you deal with them in regard to sharing or relocation? In the end, the commission came down on the side of relocation, and so now the complexity is around “how do you relocate?” And “relocate” is sort of a generic term. It means to move in spectrum—basically, repack them. But how do you repack 16,000 Earth stations into the band above? That would be the 4.0 to 4.2 gigahertz band. So it’s taken a long time to sort out who has the rights to the spectrum; how do we make it equitable; how do we repack; and then how do we craft an auction? Those are some of the main reasons it’s taken so long.

Steven Cherry Auction rules have evolved over time, but there’s still a little bit of the problem of bidders bidding with the intention of flipping the spectrum instead of using it themselves. Can you say a word about the 2016 auction?

Mark Gibson Well, that’s a good question. With all the spectrum auctions the commission—the FCC—manages, what they try to do is establish what are called substantial-service requirements, to ensure that those that buy the spectrum put it to some use. But that doesn’t totally eliminate speculation. And so with the TV band auction, there was a fair amount of speculation, mostly because that 600 MHz spectrum is really very valuable. That auction went for $39 billion, if I’m not mistaken. But that auction was also complicated because of the situation of TV stations having to be moved. And we saw this in the AWS-3 auction, which is the 1.7 to 2.1 gigahertz bands. A lot of speculation went on, and we saw this to a great extent in the millimeter wave. In fact, there had been a lot of speculation in the millimeter-wave bands even before the auctions occurred, with companies having acquired the spectrum and hanging onto it, hoping that they could sell it. In fact, several of them did and made a lot of money.

But the way the commission tries to address that primarily is through substantial service requirements over the period of their license term, which is typically 10 years. And what it says loosely is that in order for a licensee to claim their spectrum, they have to put it to some substantial service, which is usually a percent of the rural and percent of the urban population over a period of time. Those changed somewhat, but mostly this is what they try to do.

Steven Cherry Yeah, the 2016 auction was gamed by hedge funds and others who figured out that by holding particular slices of spectrum—particular TV stations—they could keep another buyer from holding a contiguous span of spectrum. I should point out that the designers of the 2016 auction won a Nobel Memorial Prize in Economics for their innovations. But —.

Mark Gibson Yes.

Steven Cherry — I’m just curious, the hedge funds sold these slices at a premium so that there wouldn’t be a gap in a range of spectrum. What would have happened if that were to be those little gaps?

Mark Gibson Well, if you have gaps in spectrum, that’s not terrible. It’s just that you don’t then have a robust use of the spectrum. What our regulator, and most regulators, endeavor to do in any spectrum auction is to make the allocations, or the assignments if you will, contiguous. In other words, there are no gaps. If there are gaps, a couple of things happen. One is, if you have a gap in the spectrum for whatever reason and people are using that gap, then you have adjacent-channel concerns. And these are concerns that could give rise to adjacent-channel interference from dissimilar services.

With the way that spectrum auction was run, they were able to separate, bifurcate if you will, the auction so that there was a low and a high, and they sold them both at once. Base station frequencies were in the upper portion and the handset frequencies were in the lower portion of that band. And so if you have gaps, then you have problems with who’s in those gaps, and that gives rise to sharing issues. The other thing is, if there are gaps, the value of the spectrum tends to be diminished because it’s not contiguous. And we’ve seen that in some instances in some of the other bands, somewhat arcane bands like AWS-4 [Advanced Wireless Service-4], some of the bands that are owned by Dish and whatnot.

Steven Cherry There’s been a lot of consolidation in the cellular market. In fact, we have only three major players now that T-Mobile has acquired Sprint, a merger that finally closed earlier this year. So who are the buyers in this auction?

Mark Gibson Well, they run the gamut. I mean, there are a lot of them. A couple of interesting things have come out of it. The Tier 1s are well represented by AT&T, T-Mobile, Sprint, and Verizon, so we expect them all to participate. And interestingly enough, if you want to compare this to the spectrum right below it, the CBRS [Citizens Broadband Radio Service], for at least a spectrum comparison, T-Mobile only bought eight markets in that auction.

So anyhow, the Tier 1s are well represented, but so are the MVNOs [Mobile Virtual Network Operators], the cable operators. And what’s interesting is that Charter and Comcast are bidding together. Now, ostensibly, that’s because they want to make sure they de-conflict some of the spectrum situations at market area boundaries. In other words, they’re generally separate in terms of their coverage areas, but there were some concerns that there might have been some problems with interference de-confliction at the edges of the boundaries. So they decided to participate together, which is interesting.

You know, at the end of the day, there are 58 companies that qualified to bid, and there are a lot of wireless Internet service providers, or WISPs as we call them, for whom this would be a fairly expensive endeavor. The WISPs did participate in the CBRS auction, but that, by comparison, will probably be easily an order of magnitude below what the C-band will be.

But when you look at it, it’s the usual suspects. It’s most everybody that’s participated in a major spectrum auction going way back to probably the first ones, although their names have changed. You have a lot of rural telcos, rural telephone companies, and a lot of smaller cable operators, just generally people that could be speculating. There are some that look like they could be speculators, but I don’t see a ton of that in this band and this spectrum.

Steven Cherry I think people think of Verizon as the biggest cellular provider in the U.S., with the biggest, best coverage overall, but they’ve been playing catch-up when it comes to spectrum.

Mark Gibson Well, their position is interesting. They act like they’re playing catch-up, but they do have a lot of spectrum. They have a lot of spectrum in the millimeter-wave band; they have a lot of 28-gigahertz spectrum from the acquisition of LMDS [Local Multipoint Distribution Service], which is a defunct service. They got a lot of spectrum in the AWS [i.e., Advanced Wireless Service-3] auction. In fact, they were the second-largest spectrum acquirer in the CBRS auction. They got a lot of spectrum in the 600 MHz auction. So I don’t know that they’re spectrum-poor compared perhaps to some of the others; it looks like they just want to have a broader spectrum footprint, or spectrum position.

Steven Cherry You mentioned Dish Network before … The T-Mobile–Sprint merger requires that it give up some spectrum to help Dish become a bigger competitor.

Mark Gibson Right.

Steven Cherry I think the government’s intent was that it be a fourth major to replace Sprint. Did that happen? Is that happening? Is Dish heavily involved in this auction?

Mark Gibson Ah, Dish is involved … I’m not sure to what extent heavily. We haven’t seen anything come out yet with respect to down payments, and down payments give you some indication of their interest in the spectrum. I will say that they spent the most money, over a billion dollars, in the CBRS auction, so they were very interested in that. And their spectrum position is, like I said, really a mélange of spectrum across the bands. They have a bunch of spectrum in AWS-4, a bunch of spectrum in the old 1.4-gig band. Their spectrum holdings are really sort of all over the place, and they have some spectrum that they risk losing because of this whole substantial service situation. They haven’t built some of their networks out, and they’re in a unique situation because they’re a company that has not built a wireless network yet.

And so most of us expect that Dish again, will build their spectrum portfolio as needed, and then they can decide what they want to do. But you’re correct, they were considered by the commission as possibly being able to help with the 2.5 gig spectrum and some of the other spectrum that was made available through the Sprint-T-Mobile acquisition or merger.

Steven Cherry Auctions are about as exciting as the British game of cricket, which is incredibly boring if you don’t know the rules of the game —

Mark Gibson And going on about as long!

Steven Cherry — Right! But it is very exciting if you do understand the rules and the teams and the best players and all that, what else should we be looking out for to get maximum excitement out of this auction?

Mark Gibson Well, I don’t think anybody who’s not interested in spectrum auctions is going to find this exciting, other than the potential amount of money coming into the Treasury. And at the end of the day, $28 to $35 billion is not chump change. So there’s that interest there. And if you compare that to CBRS and other auctions, you know, the auctions generate a fair amount of money. The things to keep in mind as you watch the auction are the posturing by the various key players, certainly the Tier 1s and the cable companies.

And just watch what they do. It’ll be interesting to also watch what the WISPs do, and how the WISPs manage their bidding in some of their market regions. It was interesting in the CBRS auction that the one location that generated the highest per-POP [i.e., per-person] revenue was a place called Loving, Texas, which has a total population, depending upon whom you ask, of between 82 and 140 people. Yet that license went for many millions of dollars. So the dollar-per-MHz-POP, which is a measure of how much the spectrum is worth per megahertz per person covered, was in the hundreds of dollars. So there’s that stuff that happens; we all looked at that and kind of scratched our heads. For this one, like I said, it’ll be the typical watching of what the Tier 1s choose to do, and all the telcos for that matter, and certainly what Dish is going to do and how they do that.
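The dollar-per-MHz-POP figure Gibson describes is simple arithmetic: divide the winning bid by the bandwidth times the population covered. A quick sketch (the bid and bandwidth figures below are hypothetical, chosen only to show how a tiny population inflates the metric):

```python
# Hypothetical illustration of the dollar-per-MHz-POP metric Gibson describes.
# The bid and bandwidth below are made-up figures, not actual auction results.

def price_per_mhz_pop(winning_bid_usd: float, bandwidth_mhz: float, population: int) -> float:
    """Dollars paid per megahertz of spectrum, per person covered."""
    return winning_bid_usd / (bandwidth_mhz * population)

# A $4 million bid for a 10 MHz license covering a county of 100 people:
print(price_per_mhz_pop(4_000_000, 10, 100))  # → 4000.0 dollars per MHz-POP

# The same bid over a market of 1 million people is a fraction of a cent:
print(price_per_mhz_pop(4_000_000, 10, 1_000_000))  # → 0.4
```

The metric normalizes away market size, which is why a sparsely populated license can top the per-person charts even on a modest absolute bid.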

But you won’t know that during the auction because you don’t know who’s bidding on what per se. A lot of the bidding is closed, and so you won’t know, for example, that AT&T or T-Mobile or Dish or whoever are bidding on a given market. You only know that at the end. What you will know, though, is at the end of the rounds how much the markets are going for. Then you can start making educated guesses as to what’s happening there based on what you might know about a given entity.

It’s also interesting to look at the entity names, because there’s the applicant name and then there’s the name of the company, and there’s sort of a lot of—it’s not chicanery, but hidden stuff going on. For example, Dish is bidding as Little Bear Wireless. That’s interesting because they had a different name for the CBRS auction. So anyhow, I—and my cohorts—will be watching this auction to sort of see, first of all, how high it goes, how high some of the markets go. And then at the end, when the auction is all done, who bid for what? And then try to piece together what all that means.

Steven Cherry So what will Comcast’s role be in the auction and what would be a good outcome?

Mark Gibson Well, you know, it’s interesting. Comcast does not own spectrum per se; they do a lot of leasing. I think they may have won some spectrum, a little bit of spectrum, in the 600-megahertz band, but they participated in AWS—AWS-1—as part of a consortium called SpectrumCo, along with Cox and two other smaller cable companies. And they ended up getting a bunch of spectrum. I think they ended up bidding $4 billion in that auction and won a bunch but couldn’t figure out what to do with it. The consortium dissolved and the spectrum then went to various other companies.

So I think for Comcast it’ll be good if they get spectrum in the markets they want to participate in, obviously. As you’re probably aware, the spectrum is going to be awarded in two phases—and I think the auction is going to be done in two phases as well. Phase one is for the A Block, the lower 100 megahertz, which is 3700 to 3800 MHz, in the top 46 PEAs [Partial Economic Areas]. So that auction will happen, and then the rest of it, the remaining 180 megahertz, will be sold. So the question is, where will they be bidding? Will they be bidding in their DMAs, which are their market areas, overlaid with the PEAs? I don’t know that, but it will be interesting to watch. But I think to the extent that Comcast gets any spectrum anywhere, it’ll be good for them; I’m pretty sure they can turn around and put it to good use. They have interesting plans, you know, with strand-mounted approaches, which are really very interesting. So we’ll see. That will be interesting to watch.
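The phased structure Gibson outlines can be tallied from the band edges he mentions (a sketch; the PEA-by-PEA details are omitted):

```python
# C-band auction structure as described: the A Block (lower 100 MHz) is
# awarded first in the top 46 PEAs, then the remaining 180 MHz is sold.
a_block = (3700, 3800)     # Phase 1: 3700–3800 MHz
remainder = (3800, 3980)   # Phase 2: the rest of the 280 MHz being auctioned

phase1 = a_block[1] - a_block[0]
phase2 = remainder[1] - remainder[0]
print(phase1, phase2, phase1 + phase2)  # → 100 180 280
```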

Steven Cherry And what will your company’s role in the auction be—and what would be a good outcome for CommScope?

Mark Gibson Our role is … We don’t participate in the auction. My spectrum colleagues and I will be watching it closely, just to kind of play the parlor game of discerning who’s doing what. Of course, when the auction ends, we’ll find out we’re all wrong, but that’s okay. But we’ll be looking at who’s doing what, trying to anticipate who’s doing what. Obviously, CommScope will be selling all of the auction winners what we make, which is infrastructure equipment, base station antennas, and everything in our potpourri —

Steven Cherry What would be nice for you guys is if Dish and the cable companies do well, since they have to start from scratch when it comes to equipment.

Mark Gibson True. That’s true. Yeah, that would be good. And we work with all of these folks, so it’ll be interesting to see how that comes together.

Steven Cherry And other than the money in the Treasury, what would be some good outcomes for consumers and citizens?

Mark Gibson Well, that’s a good question. Having this spectrum now for 5G—and actually now we’re hearing 6G—in this band will be very useful for consumers. It’ll mean that your handsets will be able to operate without your putting your head in a weird position to accommodate the millimeter-wave antennas that are inherent in the handsets. It’s right above CBRS, so there’s some contiguity. There’s another band that’s being looked at—there’s a rulemaking open on it right now—that’s 3450 to 3550 MHz, right below CBRS. So that’s another 100 megahertz.

So when you consider that plus the CBRS band plus the C-band, you’re looking now at a possible 530 megahertz of mid-band spectrum. And for consumers, that will open up a lot of the use cases that are unique to 5G—things like IoT and vehicular technology use cases, which can only be partially addressed with the 5.9 GHz band, if you’ve been following that. C-V2X [Cellular Vehicle-to-Everything] capabilities to push more data to the vehicle will be much more doable across these swaths of spectrum—certainly in this band. We’ve seen interesting applications that were borne out of CBRS that should really see a good life in this band, things like precision agriculture and that sort of thing. So I think consumers will benefit from having the 280 contiguous megahertz of mid-band spectrum to enable all of the use cases that 5G claims to enable.
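The 530-megahertz total checks out if you add up the three mid-band segments named above (band edges in MHz; CBRS’s 3550–3700 range is from the FCC band plan rather than stated explicitly in the conversation):

```python
# Mid-band segments mentioned in the conversation, edges in MHz.
bands = {
    "3.45 GHz (proposed, below CBRS)": (3450, 3550),  # 100 MHz
    "CBRS":                            (3550, 3700),  # 150 MHz
    "C-band (being auctioned)":        (3700, 3980),  # 280 MHz
}

total = sum(hi - lo for lo, hi in bands.values())
print(total)  # → 530 MHz of potentially contiguous mid-band spectrum
```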

Steven Cherry Well, Mark, as the saying goes, in the American form of cricket—that is, baseball—you can’t tell the players without a scorecard. Thanks for giving us the scorecard and for joining us today.

Mark Gibson Well, my pleasure, Steven. Very good questions. I really enjoyed it. Thanks again.

Steven Cherry That’s very nice of you to say. We’ve been speaking with Mark Gibson of telecommunications equipment provider CommScope about the upcoming auction, December 8th, of more cellular spectrum in the C-band.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity. This interview was recorded November 30th, 2020. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on the Spectrum website—where you can sign up for alerts or for our newsletter—or on Spotify, Apple Podcast, and wherever else you get your podcasts. We welcome your feedback on the Web or in social media.

For Radio Spectrum about the radio spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.


We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

NYU Wireless Picks Up Its Own Baton to Lead the Development of 6G

Post Syndicated from NYU Wireless original https://spectrum.ieee.org/telecom/wireless/nyu_wireless_picks_up_its_own_baton_to_lead_the_development_of_6g

The fundamental technologies that have made 5G possible are unequivocally massive MIMO (multiple-input multiple-output) and millimeter wave (mmWave) technologies. Without these two technologies there would be no 5G network as we now know it.

The two men who were the key architects behind these fundamental technologies have been leading one of the premier research institutes in mobile telephony since 2012: NYU Wireless, a part of NYU’s Tandon School of Engineering.

Ted Rappaport is the founding director of NYU Wireless and one of the key researchers in the development of mmWave technology. Rappaport also served as a key thought leader for 5G, planting a flag in the ground nearly a decade ago by arguing that mmWave would be a key enabling technology for the next generation of wireless. His earlier work at two other wireless centers that he founded, at Virginia Tech and The University of Texas at Austin, laid the groundwork that helped NYU Wireless catapult into one of the premier wireless institutions in the world.

Thomas Marzetta, who now serves as the director of NYU Wireless, is the scientist who led the development of massive MIMO while at Bell Labs, and he championed its use in 5G to the point where it has become a key enabling technology.

These two researchers, who were so instrumental in developing the technologies that have enabled 5G, are now turning their attention to the next generation of mobile communications, and, according to them both, we are facing some pretty steep technical challenges to realizing a next generation of mobile communications.

“Ten years ago, Ted was already pushing mobile mmWave, and I at Bell Labs was pushing massive MIMO,” said Marzetta.  “So we had two very promising concepts ready for 5G. The research concepts that the wireless community is working on for 6G are not as mature at this time, making our focus on 6G even more important.”

This sense of urgency is reflected by both men, who are pushing against any complacency about starting the development of 6G technologies as soon as possible. With this aim in mind, Rappaport, just as he did 10 years ago, has planted a new flag in the world of mobile communications with last year’s publication of an IEEE article entitled “Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond.”

“In this paper, we said for the first time that 6G is going to be in the sub-terahertz frequencies,” said Rappaport. “We also suggested the idea of wireless cognition, where human thought and brain computation could be sent over wireless in real time. It’s a very visionary look at something. Our phones, which right now are flashlights, emails, TV browsers, calendars, are going to become much more.”

While Rappaport feels confident that they have the right vision for 6G, he is worried about the lack of awareness of how critical it is for US government funding agencies and companies to develop the enabling technologies needed for its realization. In particular, both Rappaport and Marzetta are concerned about the economic competitiveness of the US and the funding challenges that will persist if 6G is not properly recognized as a priority.

“These issues of funding and awareness are critical for research centers, like NYU Wireless,” said Rappaport. “The US needs to get behind NYU Wireless to foster these ideas and create these cutting-edge technologies.”

With this funding support, Rappaport argues, teaching research institutes like NYU Wireless can create the engineers that end up going to companies and making technologies like 6G become a reality. “There are very few schools in the world that are even thinking this far ahead in wireless; we have the foundations to make it happen,” he added.

Both Rappaport and Marzetta also believe that making national centers of excellence in wireless could help to create an environment in which students could be exposed constantly to a culture and knowledge base for realizing the visionary ideas for the next generation of wireless.

“The Federal government in the US needs to pick a few winners among university centers of excellence to be melting pots, to be places where things are brought together,” said Rappaport. “The Federal government has to get together and put money into these centers to allow them to hire talent, attract more faculty, and become comparable to what we see in other countries, where huge amounts of funding are going in to pick winners.”

While research centers like NYU Wireless get support from industry to conduct their research, Rappaport and Marzetta believe a bump in Federal funding could both amplify and leverage the contributions of industrial affiliates. NYU Wireless currently has 15 industrial affiliates, with a large number coming from outside the US, according to Rappaport.

“Government funding could get more companies involved by incentivizing them through a financial multiplier,” added Rappaport.

Of course, 6G is not simply about setting out a vision and attracting funding, but also tackling some pretty big technical challenges.

Both men believe that we will need to see the development of new forms of MIMO, such as holographic MIMO, to make more efficient use of the sub-6 GHz spectrum. Solutions will also need to be developed to overcome the blockage problems that occur at mmWave and higher frequencies.

Fundamental to these technology challenges is accessing new swaths of frequency spectrum so that a 6G network operating in the sub-terahertz frequencies can be achieved. Both Rappaport and Marzetta are confident that technology will enable us to access even more challenging frequencies.

“There’s nothing technologically stopping us right now from 30, and 40, and 50 gigahertz millimeter wave, even up to 700 gigahertz,” said Rappaport. “I see the fundamentals of physics and devices allowing us to take us easily over the next 20 years up to 700 or 800 gigahertz.”

Marzetta added that there is much more that can be done in the scarce and valuable sub-6 GHz spectrum. While massive MIMO is the most spectrally efficient wireless scheme ever devised, it is based on extremely simplified models of how antennas create electromagnetic signals that propagate to another location, according to Marzetta, who adds, “No existing wireless system or scheme is operating at all close to the limits imposed by nature.”

While expanding the spectrum of frequencies and making even better use of the sub-6 GHz spectrum are the foundation for the realization of future networks, Rappaport and Marzetta also expect that we will see increased leveraging of AI and machine learning. This will enable the creation of intelligent networks that can manage themselves with much greater efficiency than today’s mobile networks.

“Future wireless networks are going to evolve with greater intelligence,” said Rappaport. An example of this intelligence, according to Rappaport, is the new way in which the Citizens Broadband Radio Service (CBRS) spectrum is going to be used in a spectrum access server (SAS) for the first time ever.

“It’s going to be a nationwide mobile system that uses these spectrum access servers that mobile devices talk to in the 3.6 gigahertz band,” said Rappaport. “This is going to allow enterprise networks to be a cross of old licensed cellular and old unlicensed Wi-Fi. It’s going to be kind of somewhere in the middle. This serves as an early indication of how mobile communications will evolve over the next decade.”

These intelligent networks will become increasingly important when 6G moves towards so-called cell-less (“cell-free”) networks.

Currently, mobile network coverage is provided through hundreds of roughly circular cells spread out across an area. Now with 5G networks, each of these cells will be equipped with a massive MIMO array to serve the users within the cell. But with a cell-less 6G network the aim would be to have hundreds of thousands, or even millions, of access points, spread out more or less randomly, but with all the networks operating cooperatively together.

“With this system, there are no cell boundaries, so as a user moves across the city, there’s no handover or handoff from cell to cell, because the whole city essentially constitutes one cell,” explained Marzetta. “All of the people receiving mobile services in a city get them through these access points, and in principle, every user is served by every access point all at once.”

One of the obvious challenges of this cell-less architecture is simply the economics of installing so many access points all over a city. You also have to get all of the signals to and from each access point to and from a central point that does all the computing and number crunching.

While this all sounds daunting when thought of in terms of traditional mobile networks, it conceptually sounds far more approachable when you consider that the Internet of Things (IoT) will create this cell-less network.

“We’re going to go from 10 or 20 devices today to hundreds of devices around us that we’re communicating with, and that local connectivity is what will drive this cell-less world to evolve,” said Rappaport. “This is how I think a lot of 5G and 6G use cases in this wide swath of spectrum are going to allow these low-power local devices to live and breathe.”

To realize all of these technologies, including intelligent networks, cell-less networks, expanded radio frequencies, and wireless cognition, the key factor will be training future engineers. 

To this issue, Marzetta noted: “Wireless communications is a growing and dynamic field that is a real opportunity for the next generation of young engineers.”

4G on the Moon: One Small Leap, One Giant Step

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/wireless/4g-moon

Standing up a 4G/LTE network might seem well below the pay grade of a legendary innovation hub like Nokia Bell Labs. However, standing up that same network on the moon is another matter. 

Nokia and 13 other companies — including SpaceX and Lockheed Martin — have won five-year contracts totaling over US $370 million from NASA to demonstrate key infrastructure technologies on the lunar surface. All of this is part of NASA’s Artemis program to return humans to the moon.

NASA also wants the moon to be a stepping stone for further explorations in the solar system, beginning with Mars.

For that, says L.K. Kubendran, NASA’s lead for commercial space technology partnerships, a lunar outpost would need, at the very least, power, shelter, and a way to communicate.

And the moon’s 4G network will not only be carrying voice and data signals like those found in terrestrial applications, it’ll also be handling remote operations like controlling lunar rovers from a distance. The antennas and base stations will need to be ruggedized for the harsh, radiation-filled lunar environment. Plus, over the course of a typical lunar day-night cycle (28 Earth days), the network infrastructure will face temperature swings of more than 250 ºC between sunlight and shadow. Of course, every piece of hardware in the network must first be transported to the moon, too.

“The equipment needs to be hardened for environmental stresses such as vibration, shock and acceleration, especially during launch and landing procedures, and the harsh conditions experienced in space and on the lunar surface,” says Thierry Klein, head of the Enterprise & Industrial Automation Lab, Nokia Bell Labs. 

Oddly enough, it’ll probably all work better on the moon, too. 

“The propagation model will be different,” explains Dola Saha, assistant professor in Electrical and Computer Engineering at the University at Albany, SUNY. She adds that the lack of atmosphere as well as the absence of typical terrestrial obstructions like trees and buildings will likely mean better signal propagation. 
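Saha’s point can be sketched with the simplest propagation model there is: in free space, received power falls off with distance squared and frequency squared, the Friis free-space path loss. The figures below are illustrative only; a real lunar link budget would include antenna gains, surface scattering, and much else.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB (Friis): 20log10(d) + 20log10(f) + 20log10(4*pi/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# A 5 km LTE link at 1.8 GHz with nothing in the way loses about 111.5 dB;
# on Earth, buildings and foliage typically add tens of dB on top of this.
print(round(fspl_db(5_000, 1.8e9), 1))  # → 111.5
```

With no atmosphere and no obstructions, the lunar link stays close to this free-space floor, which is why the signal is expected to propagate better there.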

To ferry the equipment for their lunar 4G network, Nokia will be collaborating with Intuitive Machines — which is also developing a lunar ice drill for the moon’s south pole. “Intuitive Machines is building a rover that they want to land on the moon in late 2022,” says Kubendran. That rover mission now seems likely to be the one that begins hauling 4G infrastructure to the lunar surface.

For all the technological innovation a lunar 4G network might bring about, its signals could unfortunately also mean bad news for radio astronomy. Radio telescopes are notoriously vulnerable to interference (a.k.a. radio frequency interference, or RFI). For instance, even a stray phone signal transmitted from as far away as Mars would carry enough power to interfere with the Jodrell Bank radio observatory in Manchester, U.K. So, asks Emma Alexander, an astrophysicist writing for The Conversation, “how would it fare with an entire 4G network on the moon?”

That depends, says Saha. 

“Depends on which frequencies you’re using, and…the side lobes and…filters [at] the front end of these networks,” Saha says. On Earth, she continues, there are so many applications that the FCC in the US, and corresponding bodies in other countries, allocate frequencies to each of them.

“RFI can be mitigated at the source with appropriate shielding and precision in the emission of signals,” Alexander writes. “Astronomers are constantly developing strategies to cut RFI from their data. But this increasingly relies on the goodwill of private companies.” As well as regulations by governments, she adds, looking to shield earthly radio telescopes from human-generated interference from above. 

IoT Network Companies Have Cracked Their Chicken-and-Egg Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/iot-network-companies-have-cracked-their-chicken-and-egg-problem

Along with everything else going on, we may look back at 2020 as the year that companies finally hit upon a better business model for Internet of Things (IoT) networks. Established network companies such as the Things Network and Helium, and new players such as Amazon, have seemingly given up on the idea of making money from selling network connectivity. Instead, they’re focused on getting the network out there for developers to use, assuming in the process that they’ll benefit from the effort in the long run.

IoT networks have a chicken-and-egg problem. Until device makers see widely available networks, they don’t want to build products that run on the network. And customers don’t want to pay for a network, and thus, fund its development, if there aren’t devices available to use. So it’s hard to raise capital to build a new wide-area network (WAN) that provides significant coverage and supports a plethora of devices that are enticing to use.

It certainly didn’t help such network companies as Ingenu, MachineQ, Senet, and SigFox that they’re all marketing half a dozen similar proprietary networks. Even the cellular carriers, which are promoting both LTE-M for machine-to-machine networks and NB-IoT for low-data-rate networks, have historically struggled to justify their investments in IoT network infrastructure. After COVID-19 started spreading in Japan, NTT DoCoMo called it quits on its NB-IoT network, citing a lack of demand.

“Personally, I don’t believe in business models for [low-power WANs],” says Wienke Giezeman, the CEO and cofounder of the Things Network. His company does deploy long-range low-power WAN gateways for customers that use the LoRa Alliance’s LoRaWAN specifications. (“LoRa” is short for “long-range.”) But Giezeman sees that as the necessary element for later providing the sensors and software that deliver the real IoT applications customers want to buy. Trying to sell both the network and the devices is like running a restaurant that makes diners buy and set up the stove before it cooks their meals.

The Things Network sets up the “stove” and includes the cost of operating it in the “meal.” But, because Giezeman is a big believer in the value of open source software and creating a sense of abundance around last-mile connectivity, he’s also asking customers to opt in to turning their networks into public networks.

Senet does something similar, letting customers share their networks. Helium is using cryptocurrencies to entice people to set up LoRaWAN hotspots on their networks and rewarding them with tokens for keeping the networks operational. When someone uses data from an individual’s Helium node, that individual also gets tokens that might be worth something one day. I actually run a Helium hotspot in my home, although I’m more interested in the LoRa coverage than the potential for wealth.

And there’s Amazon, which plans to embed its own version of a low-power WAN into its Echo and its Ring security devices. Whenever someone buys one of these devices they’ll have the option of adding it as a node on the Amazon Sidewalk network. Amazon’s plan is to build out a decentralized network for IoT devices, starting with a deal to let manufacturer Tile use the network for its Bluetooth tracking devices.

After almost a decade of following various low-power IoT networks, I’m excited to see them abandon the idea that the network is the big value, and instead recognize that it’s the things on the network that entice people. Let’s hope this year marks a turning point for low-power WANs.

This article appears in the November 2020 print issue as “Network Included.”

Breaking the Latency Barrier

Post Syndicated from Shivendra Panwar original https://spectrum.ieee.org/telecom/wireless/breaking-the-latency-barrier

For communications networks, bandwidth has long been king. With every generation of fiber optic, cellular, or Wi-Fi technology has come a jump in throughput that has enriched our online lives. Twenty years ago we were merely exchanging texts on our phones, but we now think nothing of streaming videos from YouTube and Netflix. No wonder, then, that video now consumes up to 60 percent of Internet bandwidth. If this trend continues, we might yet see full-motion holography delivered to our mobiles—a techie dream since Princess Leia’s plea for help in Star Wars.

Recently, though, high bandwidth has begun to share the spotlight with a different metric of merit: low latency. The amount of latency varies drastically depending on how far in a network a signal travels, how many routers it passes through, whether it uses a wired or wireless connection, and so on. The typical latency in a 4G network, for example, is 50 milliseconds. Reducing latency to 10 milliseconds, as 5G and Wi-Fi are currently doing, opens the door to a whole slew of applications that high bandwidth alone cannot. With virtual-reality headsets, for example, a delay of more than about 10 milliseconds in rendering and displaying images in response to head movement is very perceptible, and it leads to a disorienting experience that is for some akin to seasickness.

Multiplayer games, autonomous vehicles, and factory robots also need extremely low latencies. Even as 5G and Wi-Fi make 10 milliseconds the new standard for latency, researchers, like my group at New York University’s NYU Wireless research center, are already working hard on another order-of-magnitude reduction, to about 1 millisecond or less.

Pushing latencies down to 1 millisecond will require reengineering every step of the communications process. In the past, engineers have ignored sources of minuscule delay because they were inconsequential to the overall latency. Now, researchers will have to develop new methods for encoding, transmitting, and routing data to shave off even the smallest sources of delay. And immutable laws of physics—specifically the speed of light—will dictate firm restrictions on what networks with 1-millisecond latencies will look like. There’s no one-size-fits-all technique that will enable these extremely low-latency networks. Only by combining solutions to all these sources of latency will it be possible to build networks where time is never wasted.

Until the 1980s, latency-sensitive technologies used dedicated end-to-end circuits. Phone calls, for example, were carried on circuit-switched networks that created a dedicated link between callers to guarantee minimal delays. Even today, phone calls need to have an end-to-end delay of less than 150 milliseconds, or it’s difficult to converse comfortably.

At the same time, the Internet was carrying delay-tolerant traffic, such as emails, using technologies like packet switching. Packet switching is the Internet equivalent of a postal service, where mail is routed through post offices to reach the correct mailbox. Packets, or bundles of data between 40 and 1,500 bytes, are sent from point A to point B, where they are reassembled in the correct order. Using the technology available in the 1980s, delays routinely exceeded 100 milliseconds, with the worst delays well over 1 second.
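The packet mechanics described here, splitting data into numbered bundles and reassembling them in order at the destination, can be sketched in a few lines of Python (a toy model, not a real IP stack):

```python
def to_packets(data: bytes, mtu: int = 1500):
    """Split a payload into sequence-numbered packets of at most mtu bytes."""
    return [(seq, data[i:i + mtu])
            for seq, i in enumerate(range(0, len(data), mtu))]

def reassemble(packets):
    """Receiver side: packets may arrive in any order; sort by sequence number."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"x" * 5120                 # 5,120-byte payload
packets = to_packets(message)         # four packets: 1500 + 1500 + 1500 + 620
packets.reverse()                     # simulate out-of-order arrival
assert reassemble(packets) == message
```

The sorting step is where queuing delay hides in a real network: a packet that arrives early must still wait for its slower siblings before the data is usable.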

Eventually, Voice over Internet Protocol (VoIP) technology supplanted circuit-switched networks, and now the last circuit switches are being phased out by providers. Since VoIP’s triumph, there have been further reductions in latency to get us into the range of tens of milliseconds.

Latencies below 1 millisecond would open up new categories of applications that have long been sought. One of them is haptic communications, or communicating a sense of touch. Imagine balancing a pencil on your fingertip. The reaction time between seeing the pencil beginning to tip over and moving your finger to keep it balanced is measured in milliseconds. A human-controlled teleoperated robot with haptic feedback would need a similar latency level.

Robots that aren’t human controlled would also benefit from 1-millisecond latencies. Just like a person, a robot can avoid falling over or dropping something only if it reacts within a millisecond. But the powerful computers that process real-time reactions and the batteries that run them are heavy. Robots could be lighter and operate longer if their “brains” were kept elsewhere on a low-latency wireless network.

Before I get into all the ways engineers might build ultralow-latency networks, it would help to understand how data travels from one device to another. Of course, the signals have to physically travel along a transmission link between the devices. That journey isn’t limited by just the speed of light, because delays are caused by switches and routers along the way. And even before that, data must be converted and prepped on the device itself for the journey. All of these parts of the process will need to be redesigned to consistently achieve submillisecond latencies.

I should mention here that my research is focused on what’s called the transport layer in the multiple layers of protocols that govern the exchange of data on the Internet. That means I’m concerned with the portion of a communications network that controls transmission rates and checks to make sure packets reach their destination in the correct order and without errors. While there are certainly small sources of delay on devices themselves, I’m going to focus on network delays.

The first issue is frame duration. A wireless access link—the link that connects a device to the larger, wired network—schedules transmissions within periodic intervals called frames. For a 4G link, the typical frame duration is 1 millisecond, so you could potentially lose that much time just waiting for your turn to transmit. But 5G has shrunk frame durations, lessening their contribution to delay.

Wi-Fi functions differently. Rather than using frames, Wi-Fi networks use random access, where a device transmits immediately and reschedules the transmission if it collides with another device's transmission in the link. The upside is that this method has shorter delays if there is no congestion, but the delays build up quickly as more device transmissions compete for the same channel. The latest version of Wi-Fi, Wi-Fi Certified 6 (based on the draft standard IEEE P802.11ax), addresses the congestion problem by introducing scheduled transmissions, just as 4G and 5G networks do.

When a data packet is traveling through the series of network links connecting its origin to its destination, it is also subject to congestion delays. Packets are often forced to queue up at links as both the amount of data traveling through a link and the amount of available bandwidth fluctuate naturally over time. In the evening, for example, more data-heavy video streaming could cause congestion delays through a link. Sending data packets through a series of wireless links is like drinking through a straw that is constantly changing in length and diameter. As delays and bandwidth change, one second you may be sucking up dribbles of data, while the next you have more packets than you can handle.

Congestion delays are unpredictable, so they cannot be avoided entirely. The responsibility for mitigating congestion delays falls to the Transmission Control Protocol (TCP), one part of the collection of Internet protocols that governs how computers communicate with one another. The most common implementations of TCP, such as TCP Cubic, measure congestion by sensing when the buffers in network routers are at capacity. At that point, packets are being lost because there is no room to store them. Think of it like a bucket with a hole in it, placed underneath a faucet. If the faucet is putting more water into the bucket than is draining through the hole, it fills up until eventually the bucket overflows. The bucket in this example is the router buffer, and if it “overflows,” packets are lost and need to be sent again, adding to the delay. The sender then adjusts its transmission rate to try to avoid flooding the buffer again.
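The bucket analogy maps directly onto a toy simulation; the rates and capacity below are arbitrary illustrative numbers, not any real router's parameters:

```python
def simulate_buffer(arrival_rate, drain_rate, capacity, steps):
    """Toy router-buffer model: packets arrive, drain, and overflow (loss)."""
    queued = 0.0
    lost = 0.0
    for _ in range(steps):
        queued += arrival_rate             # faucet: packets entering the buffer
        queued -= min(queued, drain_rate)  # hole: packets forwarded downstream
        if queued > capacity:              # bucket overflows: packets dropped
            lost += queued - capacity
            queued = capacity
    return queued, lost

# Sending faster than the link drains eventually fills, then overflows, the buffer.
queued, lost = simulate_buffer(arrival_rate=12, drain_rate=10, capacity=50, steps=100)
assert queued == 50 and lost == 150
```

Note that long before the first packet is lost, the queue is already full of waiting data, which is exactly the latency problem described next.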

The problem is that even if the buffer doesn’t overflow, data can still be stuck there queuing for its turn through the bucket “hole.” What we want to do is allow packets to flow through the network without queuing up in buffers. YouTube uses a variation of TCP developed at Google called TCP BBR, short for Bottleneck Bandwidth and Round-Trip propagation time, with this exact goal. It works by adjusting the transmission rate until it matches the rate at which data is passing through routers. Going back to our bucket analogy, it constantly adjusts the flow of water out of the faucet to keep it the same as the flow out of the hole in the bucket.

For further reductions in congestion, engineers have to deal with previously ignorable tiny delays. One example of such a delay is the minuscule variation in the amount of time it takes each specific packet to transmit during its turn to access the wireless link. Another is the slight differences in computation times of different software protocols. These can both interfere with TCP BBR’s ability to determine the exact rate to inject packets into the connection without leaving capacity unused or causing a traffic jam.

A focus of my research is how to redesign TCP to deliver low delays combined with sharing bandwidth fairly with other connections on the network. To do so, my team is looking at existing TCP versions, including BBR and others like the Internet Engineering Task Force’s L4S (short for Low Latency Low Loss Scalable throughput). We’ve found that these existing versions tend to focus on particular applications or situations. BBR, for example, is optimized for YouTube videos, where video data typically arrives faster than it is being viewed. That means that BBR ignores bandwidth variations because a buffer of excess data has already made it to the end user. L4S, on the other hand, allows routers to prioritize time-sensitive data like real-time video packets. But not all routers are capable of doing this, and L4S is less useful in those situations.

My group is figuring out how to take the most effective parts of these TCP versions and combine them into a more versatile whole. Our most promising approach so far has devices continually monitoring for data-packet delays and reducing transmission rates when delays are building up. With that information, each device on a network can independently figure out just how much data it can inject into the network. It’s somewhat like shoppers at a busy grocery store observing the checkout lines to see which ones are moving faster, or have shorter queues, and then choosing which lines to join so that no one line becomes too long.
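A minimal sketch of the general delay-based approach, with illustrative constants rather than the actual algorithm under development: each sender probes for more bandwidth while measured delay stays low and backs off sharply once delay builds.

```python
def adjust_rate(rate, measured_delay_ms, target_delay_ms=1.0,
                increase=1.05, decrease=0.8):
    """Delay-based control sketch: probe upward while queuing delay is below
    the target, and shed load multiplicatively once delay builds past it."""
    if measured_delay_ms <= target_delay_ms:
        return rate * increase   # network looks uncongested: probe for bandwidth
    return rate * decrease       # queues are building: back off quickly

rate = 100.0  # arbitrary units, e.g. packets per round trip
for delay in [0.4, 0.5, 0.6, 2.5, 3.0, 0.7]:
    rate = adjust_rate(rate, delay)
```

Three low-delay samples ratchet the rate up gently; two congested samples cut it by more than a third, which is the asymmetry that keeps queues, and hence latency, short.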

For wireless networks specifically, reliably delivering latencies below 1 millisecond is also hampered by connection handoffs. When a cellphone or other device moves from the coverage area of one base station (commonly referred to as a cell tower) to the coverage of a neighboring base station, it has to switch its connection. The network initiates the switch when it detects a drop in signal strength from the first station and a corresponding rise in the second station’s signal strength. The handoff occurs during a window of several seconds or more as the signal strengths change, so it rarely interrupts a connection.
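The switching decision can be sketched as a simple hysteresis check; the 3-decibel margin below is an illustrative value, not any carrier's actual setting:

```python
def should_hand_off(serving_dbm, neighbor_dbm, hysteresis_db=3.0):
    """Hand off only when the neighbor is stronger by a margin, so a phone
    near a cell edge doesn't ping-pong between the two base stations."""
    return neighbor_dbm > serving_dbm + hysteresis_db

# Serving signal fading, neighbor rising: the switch waits for a clear margin.
assert not should_hand_off(-90, -88)   # neighbor only 2 dB stronger: not yet
assert should_hand_off(-95, -88)       # neighbor 7 dB stronger: switch
```

The margin is what makes handoffs slow but stable on ordinary cellular bands; the millimeter-wave blockages discussed next defeat it because the signal collapses far faster than the window can react.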

But 5G is now tapping into millimeter waves—the frequencies above 20 gigahertz—which bring unprecedented hurdles. These frequency bands have not been used before because they typically don’t propagate as far as lower frequencies, a shortcoming that has only recently been addressed with technologies like beamforming. Beamforming works by using an array of antennas to point a narrow, focused transmission directly at the intended receiver. Unfortunately, obstacles like pedestrians and vehicles can entirely block these beams, causing frequent connection interruptions.

One option to avoid interruptions like these is to have multiple millimeter-wave base stations with overlapping coverage, so that if one base station is blocked, another base station can take over. However, unlike regular cell-tower handoffs, these handoffs are much more frequent and unpredictable because they are caused by the movement of people and traffic.

My team at NYU is developing a solution in which neighboring base stations are also connected by a fiber-optic ring network. All of the traffic to this group of base stations would be injected into, and then circulate around, the ring. Any base station connected to a cellphone or other wireless device can copy the data as it passes by on the ring. If one base station becomes blocked, no problem: The next one can try to transmit to the device. If it, too, is blocked, the one after that can have a try. Think of a baggage carousel at an airport, with a traveler standing near it waiting for her luggage. She may be “blocked” by others crowding around the carousel, so she might have to move to a less crowded spot. Similarly, the fiber-ring system would allow reception as long as the mobile device is anywhere in the range of at least one unblocked base station.

There’s one final source of latency that can no longer be ignored, and it’s immutable: the speed of light. Because light travels so quickly, it used to be possible to ignore it in the presence of other, larger sources of delay. It cannot be disregarded any longer.

Take for example the robot we discussed earlier, with its computational “brain” in a server in the cloud. Servers and large data centers are often located hundreds of kilometers away from end devices. But because light travels at a finite speed, the robot’s brain, in this case, cannot be too far away. Moving at the speed of light, wireless communications travel about 300 kilometers per millisecond. Signals in optical fiber are even slower, at about 200 km/ms. Finally, considering that the robot would need round-trip communications below 1 millisecond, the maximum possible distance between the robot and its brain is about 100 km. And that’s ignoring every other source of possible delay.
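That arithmetic is easy to check directly, using the rounded propagation speeds from the text:

```python
def max_one_way_distance_km(latency_budget_ms, propagation_km_per_ms):
    """Maximum server distance for a round trip inside the latency budget,
    ignoring all processing, queuing, and transmission delays."""
    return propagation_km_per_ms * latency_budget_ms / 2

# Fiber carries signals at roughly 200 km/ms (about two-thirds of c in vacuum).
assert max_one_way_distance_km(1.0, 200) == 100.0   # the ~100 km in the text
# Even line-of-sight wireless at 300 km/ms only stretches that to 150 km.
assert max_one_way_distance_km(1.0, 300) == 150.0
```

Every microsecond spent in a router or radio shrinks these distances further, which is why edge computing follows almost inevitably.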

The only solution we see to this problem is to simply move away from traditional cloud computing toward what is known as edge computing. Indeed, the push for submillisecond latencies is driving the development of edge computing. This method places the actual computation as close to the devices as possible, rather than in server farms hundreds of kilometers away. However, finding the real estate for these servers near users will be a costly challenge to service providers.

There’s no doubt that as researchers and engineers work toward 1-millisecond delays, they will find more sources of latency I haven’t mentioned. When every microsecond matters, they’ll have to get creative to whittle away the last few slivers on the way to 1 millisecond. Ultimately, the new techniques and technologies that bring us to these ultralow latencies will be driven by what people want to use them for.

When the Internet was young, no one knew what the killer app might be. Universities were excited about remotely accessing computing power or facilitating large file transfers. The U.S. Department of Defense saw value in the Internet’s decentralized nature in the event of an attack on communications infrastructure. But what really drove usage turned out to be email, which was more relevant to the average person. Similarly, while factory robots and remote surgery will certainly benefit from submillisecond latencies, it is entirely possible that neither emerges as the killer app for these technologies. In fact, we probably can’t confidently predict what will turn out to be the driving force behind vanishingly small delays. But my money is on multiplayer gaming.

This article appears in the November 2020 print issue as “Breaking the Millisecond Barrier.”

About the Author

Shivendra Panwar is director of the New York State Center for Advanced Technology in Telecommunications and a professor in the electrical and computer engineering department at New York University.

Indoor Air-Quality Monitoring Can Allow Anxious Office Workers to Breathe Easier

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/indoor-airquality-monitoring-can-allow-anxious-office-workers-to-breathe-easier

Although IEEE Spectrum’s offices reopened several weeks ago, I have not been back and continue to work remotely—as I suspect many, many other people are still doing. Reopening offices, schools, gyms, restaurants, or really any indoor space seems like a dicey proposition at the moment.

Generally speaking, masks are better than no masks, and outdoors is better than indoors, but short of remaining holed up somewhere with absolutely no contact with other humans, there’s always that chance of contracting COVID-19. One of the reasons I personally haven’t gone back to the office is that the uncertainty over whether a coworker has exhaled the virus into the air doesn’t seem worth the novelty of actually being able to work at my desk again for the first time in months.

Unfortunately, we can’t detect the virus directly, short of administering tests to people. There’s no sensor you can plug in and place on your desk that will detect virus nuclei hanging out in the air around you. What is possible, and might bring people some peace of mind, is installing IoT sensors in office buildings and schools to monitor air quality and ascertain how favorable the indoor environment is to COVID-19 transmission.

One such monitoring solution has been developed by wireless tech company eleven-x and environmental engineering consultant Pinchin. The companies have created a real-time monitoring system that measures carbon dioxide levels, relative humidity, and particulate levels to assess the quality of indoor air.

“COVID is the first time we’ve dealt with a source of environmental hazard that is the people themselves,” says David Shearer, the director of Pinchin’s indoor environmental quality group. But that also gives Pinchin and eleven-x some ideas on how to approach monitoring indoor spaces, in order to determine how likely transmission might be.

Again, because there’s no such thing as a sensor that can directly detect COVID-19 virus nuclei in an environment, the companies have resorted to measuring secondhand effects. The first measurement is carbon dioxide levels. The biggest contributor to CO2 in an indoor space is people exhaling. Higher levels in a room or building suggest more people, and past a certain point, they mean either that there are too many people in the space or that the air is not being vented properly. Indirectly, then, elevated carbon dioxide suggests that if someone present is a carrier of the coronavirus, there’s a good chance more virus nuclei are in the air.

The IoT system also measures relative humidity. Dan Mathers, the CEO of eleven-x, says that his company and Pinchin designed their monitoring system based on recommendations by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE’s guidelines [PDF] suggest that a space’s relative humidity should hover between 40 and 60 percent. More or less than that, Shearer explains, and it’s easier for the virus to be picked up by the tissue of the lungs.

A joint sensor measures carbon dioxide levels and relative humidity. A separate sensor measures the level of 2.5-micrometer particles in the air. 2.5 µm (the basis of the PM 2.5 rating) is an established particle-size threshold in air-quality measurement. And as circumstances would have it, it’s also slightly smaller than the size of a virus nucleus, which is about 3 µm. The two are close enough in size that a high level of 2.5-µm particles in the air means a greater chance that some COVID-19 nuclei are potentially in the space.

According to Mathers, the eleven-x sensors are battery-operated and communicate using LoRaWAN, a wireless standard that is most notable for being extremely low power. Mathers says the batteries in eleven-x’s sensors can last for ten years or more without needing to be replaced.

LoRaWAN’s principal drawback is that it isn’t a good fit for transmitting large messages. But that’s not a problem for eleven-x’s sensors: Mathers says the average message they send is just 10 bytes. Where LoRa excels is in situations where simple updates or measurements need to be sent—measurements such as carbon dioxide, relative humidity, and particulate levels. The sensors communicate on what’s called an interrupt basis, meaning that once a measurement passes a particular threshold, the sensor wakes up and sends an update. Otherwise, it whiles away the hours in a very low power sleep mode.
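The interrupt-style behavior described here can be sketched as follows; the 1,000-ppm CO2 threshold is an illustrative value, not eleven-x's actual configuration:

```python
def readings_to_report(readings_ppm, threshold_ppm=1000):
    """Sketch of interrupt-style reporting: the sensor stays asleep and
    transmits only when a reading crosses the alert threshold."""
    reports = []
    above = False
    for value in readings_ppm:
        crossed = value >= threshold_ppm
        if crossed != above:        # state changed: wake up and send an update
            reports.append(value)
            above = crossed
    return reports

# Six CO2 readings (ppm), but only the two threshold crossings cost any power.
print(readings_to_report([600, 800, 1100, 1200, 900, 650]))  # → [1100, 900]
```

Sending two 10-byte messages instead of six is the kind of economy that lets a LoRaWAN sensor run on one battery for a decade.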

Mathers explains that one particulate sensor is placed near a vent to get a good measurement of 2.5-micron-particle levels. Then, several carbon dioxide/relative humidity sensors are scattered around the indoor space to get multiple measurements. All of the sensors communicate their measurements to LoRaWAN gateways that relay the data to eleven-x and Pinchin for real-time monitoring.

Typically, indoor air quality tests are done in-person, maybe once or twice a year, with measuring equipment brought into the building. Shearer believes that the desire to reopen buildings, despite the specter of COVID-19 outbreaks, will trigger a sea change for real-time air monitoring.

It has to be stressed again that the monitoring system developed by eleven-x and Pinchin is not a “yes or no” style system. “We’re in discussions about creating an index,” says Shearer, “but it probably won’t ever be that simple.” Shifting regional guidelines, and the evolving understanding about how COVID-19 is transmitted will make it essentially impossible to create a hard-and-fast index for what qualities of indoor air are explicitly dangerous with respect to the virus.

Instead, like social distancing and mask wearing recommendations, the measurements are intended to give general guidelines to help people avoid unhealthy situations. What’s more, the sensors will notify building occupants should indoor conditions change from likely safe to likely problematic. It’s another way to make people feel safe if they have to return to work or school.

Apptricity Beams Bluetooth Signals Over 30 Kilometers

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/apptricity-beams-bluetooth-signals-over-20-miles

When you think about Bluetooth, you probably think about things like wireless headphones, computer mice, and other personal devices that utilize the short-range, low-power technology. That’s where Bluetooth has made its mark, after all—as an alternative to Wi-Fi, using unlicensed spectrum to make quick connections between devices.

But it turns out that Bluetooth can go much farther than the couple of meters for which most people rely on it. Apptricity, a company that provides asset and inventory tracking technologies, has developed a Bluetooth beacon that can transmit signals over 32 kilometers (20 miles). The company believes its beacon is a cheaper, secure alternative to established asset and inventory tracking technologies.

A quick primer, if you’re not entirely clear on asset tracking versus inventory tracking: There are some gray areas in the middle, but by and large, “asset tracking” refers to an IT department registering which employee has which laptop, or a construction company keeping tabs on where its backhoes are on a large construction site. “Inventory tracking” refers more to things like a retail store keeping correct product counts on the shelves, or a hospital noting how quickly it’s going through its store of gloves.

Asset and inventory tracking typically use labor-intensive techniques like barcode or passive RFID scanning, which are limited both by distance (a couple of meters at most, in both cases) and the fact that a person has to be directly involved in scanning. Alternatively, companies can use satellite or LTE tags to keep track of stuff. While such tags don’t require a person to actively track items, they are far more expensive, requiring a costly subscription to either a satellite or LTE network.

So, the burning question: How does one send a Bluetooth signal over 30-plus kilometers? Typically, Bluetooth’s distance is limited because large distances would require a prohibitive amount of power, and its use of unlicensed spectrum means that the greater the distance, the more likely it will interfere with other wireless signals.

The key new wrinkle, according to Apptricity’s CEO Tim Garcia, is precise tuning within the Bluetooth spectrum. Garcia says it’s the same principle as a tightly focused laser beam. A laser beam will travel farther without its signal weakening beyond recovery if the photons making up the beam are all as close to a specific frequency as possible. Apptricity’s Bluetooth beacons use firmware developed by the company to achieve such precise tuning, but with Bluetooth signals instead of photons. Thus, data can be sent and received by the beacons without interfering and without requiring unwieldy amounts of power.

Garcia says RFID tags and barcode scanning don’t actively provide information about assets or inventory. Bluetooth, however, can not only pinpoint where something is, it can send updates about a piece of equipment that needs maintenance or just requires a routine check-up.

By Apptricity’s own estimation, its Bluetooth beacons are 90 percent cheaper than LTE or satellite tags, specifically because Bluetooth devices don’t require paying for a subscription to an established network.

The company’s current transmission distance record for its Bluetooth beacons is 38 kilometers (23.6 miles). The company has also demonstrated non-commercial versions of the beacons for the U.S. Department of Defense with broadcast ranges between 80 and 120 kilometers.

Unlock Wireless Test Capabilities On Your RF Gear

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/unlock_wireless_test_capabilities_on_your_rf_gear

Discover how easy it is to update your instrument to test the latest wireless standards.

We’re offering 30-day software trials that evolve test capabilities on your signal analyzers and signal generators. Automatically generate or analyze signals for many wireless applications.

Choose from our more popular applications:

  • Bluetooth®
  • WLAN 802.11
  • Vector Modulation Analysis
  • And more

Building Your Own Cellphone Network Can Be Empowering, and Also Problematic

Post Syndicated from Roberto J. González original https://spectrum.ieee.org/telecom/wireless/building-your-own-cellphone-network-can-be-empowering-and-problematic

During the summer of 2013, news reports recounted how a Mexican pueblo launched its own do-it-yourself cellphone network. The people of Talea de Castro, population 2,400, created a mini-telecom company, the first of its kind in the world, without help from the government or private companies. They built it after Mexico’s telecommunications giants failed to provide mobile service to the people in the region. It was a fascinating David and Goliath story that pitted the country’s largest corporation against indigenous villagers, many of whom were Zapotec-speaking subsistence farmers with little formal education.

The reports made a deep impression on me. In the 1990s, I’d spent more than two years in Talea, located in the mountains of Oaxaca in southern Mexico, working as a cultural anthropologist.

Reading about Talea’s cellular network inspired me to learn about the changes that had swept the pueblo since my last visit. How was it that, for a brief time, the villagers became international celebrities, profiled by USA Today, BBC News, Wired, and many other media outlets? I wanted to find out how the people of this community managed to wire themselves into the 21st century in such an audacious and dramatic way—how, despite their geographic remoteness, they became connected to the rest of the world through the wondrous but unpredictably powerful magic of mobile technology.

I naively thought it would be the Mexican version of a familiar plot line that has driven many Hollywood films: Small-town men and women fearlessly take on big, bad company. Townspeople undergo trials, tribulations, and then…triumph! And they all lived happily ever after.

I discovered, though, that Talea’s cellphone adventure was much more complicated—neither fairy tale nor cautionary tale. And this latest phase in the pueblo’s centuries-long effort to expand its connectedness to the outside world is still very much a work in progress.

Sometimes it’s easy to forget that 2.5 billion people—one-third of the planet’s population—don’t have cellphones. Many of those living in remote regions want mobile service, but they can’t have it for reasons that have to do with geography, politics, or economics.

In the early 2000s, a number of Taleans began purchasing cellphones because they traveled frequently to Mexico City or other urban areas to visit relatives or to conduct business. Villagers began petitioning Telcel and Movistar, Mexico’s largest and second-largest wireless operators, for cellular service, and they became increasingly frustrated by the companies’ refusal to consider their requests. Representatives from the firms claimed that it was too expensive to build a mobile network in remote locations—this, in spite of the fact that Telcel’s parent company, América Móvil, is Mexico’s wealthiest corporation. The board of directors is dominated by the Slim family, whose patriarch, Carlos Slim, is among the world’s richest men.

But in November 2011, an opportunity arose when Talea hosted a three-day conference on indigenous media. Kendra Rodríguez and Abrám Fernández, a married couple who worked at Talea’s community radio station, were among those who attended. Rodríguez is a vivacious and articulate young woman who exudes self-confidence. Fernández, a powerfully built man in his mid-thirties, chooses his words carefully. Rodríguez and Fernández are tech savvy and active on social media. (Note: These names are pseudonyms.)

Also present were Peter Bloom, a U.S.-born rural-development expert, and Erick Huerta, a Mexican lawyer specializing in telecommunications policy. Bloom had done human rights work in Nigeria in rural areas where “it wasn’t possible to use existing networks because of security concerns and lack of finances,” he explains. “So we began to experiment with software that would enable communication between phones without relying upon any commercial companies.” Huerta had worked with the United Nations’ International Telecommunication Union.

The two men founded Rhizomatica, a nonprofit focused on expanding cellphone service in indigenous areas around the world. At the November 2011 conference, they approached Rodríguez and Fernández about creating a community cell network in Talea, and the couple enthusiastically agreed to help.

In 2012, Rhizomatica assembled a ragtag team of engineers and hackers to work on the project. They decided to experiment with open-source software called OpenBTS—Open Base Transceiver Station—which allows cell phones to communicate with each other if they are within range of a base station. Developed by David Burgess and Harvind Samra, cofounders of the San Francisco Bay Area startup Range Networks, the software also allows networked phones to connect over the Internet using VoIP, or Voice over Internet Protocol. OpenBTS had been successfully deployed at Burning Man and on Niue, a tiny Polynesian island nation.

Bloom’s team began adapting OpenBTS for Talea. The system requires electrical power and an Internet connection to work. Fortunately, Talea had enjoyed reliable electricity for nearly 40 years and had gotten Internet service in the early 2000s.

In the meantime, Huerta pulled off a policy coup: He convinced federal regulators that indigenous communities had a legal right to build their own networks. Although the government had granted telecom companies access to nearly the entire radio-frequency spectrum, small portions remained unoccupied. Bloom and his team decided to insert Talea’s network into these unused bands. They would become digital squatters.

As these developments were under way, Rodríguez and Fernández were hard at work informing Taleans about the opportunity that lay before them. Rodríguez used community radio to pique interest among her fellow villagers. Fernández spoke at town hall meetings and coordinated public information sessions at the municipal palace. Soon they had support from hundreds of villagers eager to get connected. Eventually, they voted to invest 400,000 pesos (approximately US $30,000) of municipal funding for equipment to build the cellphone network. By doing so, the village assumed a majority stake in the venture.

In March 2013, the cellphone network was finally ready for a trial run. Volunteers placed an antenna near the town center, in a location they hoped would provide adequate coverage of the mountainside pueblo. As they booted up the system, they looked for the familiar signal bars on their cellphone screens, hoping for a strong reception.

The bars lit up brightly—success!

The team then walked through the town’s cobblestone streets in order to detect areas with poor reception. As they strolled up and down the paths, they heard laughter and shouting from several houses. People emerged from their homes, stunned.

“We’ve got service!” exclaimed a woman in disbelief.

“It’s true, I called the state capital!” cried a neighbor.

“I just connected to my son in Guadalajara!” shouted another.

Although Rodríguez, Fernández, and Bloom knew that many Taleans had acquired cellphones over the years, they didn’t realize the extent to which the devices were being used every day—not for communications but for listening to music, taking photos and videos, and playing games. Back at the base station, the network computer registered more than 400 cellphones within the antenna’s reach. People were already making calls, and the number was increasing by the minute as word spread throughout town. The team shut down the system to avoid an overload.

As with any experimental technology, there were glitches. Outlying houses had weak reception. Inclement weather affected service, as did problems with Internet connectivity. Over the next six months, Bloom and his team returned to Talea every week, making the 4-hour trip from Oaxaca City in a red Volkswagen Beetle.

Their efforts paid off. The community network proved immensely popular, quickly attracting more than 700 subscribers. The service was remarkably inexpensive: Local calls and text messages were free, and calls to the United States cost only 1.5 cents per minute, approximately one-tenth the landline rate. By comparison, a 20-minute landline call to the United States from the town’s phone kiosk might cost a campesino a full day’s wages.
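The reported rates make the savings easy to quantify. Here is a minimal sketch; the landline rate is inferred from the "one-tenth" figure and is an assumption for illustration:

```python
# Cost of a 20-minute call to the United States, at the rates above.
VOIP_RATE = 0.015                 # dollars per minute on the community network
LANDLINE_RATE = VOIP_RATE * 10    # assumed: roughly ten times the VoIP rate

minutes = 20
voip_cost = minutes * VOIP_RATE          # about $0.30
landline_cost = minutes * LANDLINE_RATE  # about $3.00

print(f"VoIP: ${voip_cost:.2f}, landline: ${landline_cost:.2f}")
```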

Initially, the network could handle only 11 phone calls at a time, so villagers voted to impose a 5-minute limit to avoid overloads. Even with these controls, many viewed the network as a resounding success. And patterns of everyday life began to change, practically overnight: Campesinos who walked 2 hours to get to their corn fields could now phone family members if they needed provisions. Elderly women collecting firewood in the forest had a way to call for help in case of injury. Youngsters could send messages to one another without being monitored by their parents, teachers, or peers. And the pueblo’s newest company—a privately owned fleet of three-wheeled mototaxis—increased its business dramatically as prospective passengers used their phones to summon drivers.

Soon other villages in the region followed Talea’s lead and built their own cellphone networks—first in nearby Zapotec-speaking pueblos, and later in adjacent communities where Mixe and Mixtec languages are spoken. By late 2015, the villages had formed a cooperative, Telecomunicaciones Indígenas Comunitarias (TIC), to help organize their autonomous networks. Today TIC serves more than 4,000 people in 70 pueblos.

But some in Talea de Castro began asking questions about management and maintenance of the community network. Who could be trusted with operating this critical innovation upon which villagers had quickly come to depend? What forms of oversight would be needed to safeguard such a vital part of the pueblo’s digital infrastructure? And how would villagers respond if outsiders tried to interfere with the fledgling network?

On a chilly morning in May 2014, Talea’s Monday tianguis, or open-air market, began early and energetically. Some merchants descended upon the mountain village by foot alongside cargo-laden mules. But most arrived on the beds of clattering pickup trucks. By daybreak, dozens of merchants were displaying an astonishing variety of merchandise: produce, fresh meat, live farm animals, eyeglasses, handmade huaraches, electrical appliances, machetes, knockoff Nike sneakers, DVDs, electronic gadgets, and countless other items.

Hundreds of people from neighboring villages streamed into town on this bright spring morning, and the plaza came alive. By midmorning, the pueblo’s restaurants and cantinas were overflowing. The tianguis provided a welcome opportunity to socialize over coffee or a shot of fiery mezcal.

As marketgoers bustled through the streets, a voice blared over the speakers of the village’s public address system, inviting people to attend a special event—the introduction of a new, commercial cellphone network. Curious onlookers congregated around the central plaza, drawn by the sounds of a brass band.

As music filled the air, two young men in polo shirts set up inflatable “sky dancers,” then assembled tables in front of the stately municipal palace. The youths draped the tables with bright blue tablecloths, each imprinted with the logo of Movistar, Mexico’s second-largest wireless provider. Six men then filed out of the municipal palace and sat at the tables, facing the large audience.

What followed was a ceremony that was part political rally, part corporate event, part advertisement—and part public humiliation. Four of the men were politicians affiliated with the authoritarian Institutional Revolutionary Party, which has dominated Oaxaca state politics for a century. They were accompanied by two Movistar representatives. A pair of young, attractive local women stood nearby, wearing tight jeans and Movistar T-shirts.

The politicians and company men had journeyed to the pueblo to inaugurate a new commercial cellphone service, called Franquicias Rurales (literally, “Rural Franchises”). As the music faded, one of the politicians launched a verbal attack on Talea’s community network, declaring it to be illegal and fraudulent. Then the Movistar representatives presented their alternative plan.

A cellphone antenna would soon be installed to provide villagers with Movistar cellphone service—years after a group of Taleans had unsuccessfully petitioned the company to do precisely that. Movistar would rent the antenna to investors, and the franchisees would sell service plans to local customers in turn. Profits would be divided between Movistar and the franchise owners.

The grand finale was a fiesta, where the visitors gave away cheap cellphones, T-shirts, and other swag as they enrolled villagers in the company’s mobile-phone plans.

The Movistar event came at a tough time for the pueblo’s network. From the beginning, it had experienced technical problems. The system’s VoIP technology required a stable Internet connection, which was not always available in Talea. As demand grew, the problems continued, even after a more powerful system manufactured by the Canadian company Nutaq/NuRAN Wireless was installed. Sometimes, too many users saturated the network. Whenever the electricity went out, which is not uncommon during the stormy summer months, technicians would have to painstakingly reboot the system.

In early 2014, the network encountered a different threat: opposition from within the community. Popular support was waning amid a growing perception that those in charge of maintaining the network were mismanaging the enterprise. Soon, village officials demanded that the network repay the money granted by the municipal treasury. When the municipal government opted to become Talea’s Movistar franchise owner, it used these repaid funds to pay for the company’s antenna. The homegrown network lost more than half of its subscribers in the months following the company’s arrival. In March 2019, it shut down for good.

No one forced Taleans to turn their backs on the autonomous cell network that had brought them international fame. There were many reasons behind the pueblo’s turn to Movistar. Although the politicians who condemned the community network were partly responsible, the cachet of a globally recognized brand also helped Movistar succeed. The most widely viewed television events in the pueblo (as in much of the rest of the world) are World Cup soccer matches. As Movistar employees were signing Taleans up for service during the summer of 2014, the Mexican national soccer team could be seen on TV wearing jerseys that prominently featured the telecom company’s logo.

But perhaps most importantly, young villagers were drawn to Movistar because of a burning desire for mobile data services. Many teenagers from Talea attended high school in the state capital, Oaxaca City, where they became accustomed to the Internet and all that it offers.

Was Movistar’s goal to smash an incipient system of community-controlled communication, destroy a bold vision for an alternative future, and prove that villagers are incapable of managing their own affairs? It’s hard to say. More likely, the company was driven by long-term financial objectives: to increase market share by extending its reach into remote regions.

And yet, despite the demise of Talea’s homegrown network, the idea of community-based telecoms had taken hold. The pueblo had paved the way for significant legal and regulatory changes that made it possible to create similar networks in nearby communities, which continue to thrive.

The people of Talea have long been open to outside ideas and technologies, and have maintained a pragmatic attitude and a willingness to innovate. The same outlook that led villagers to enthusiastically embrace the foreign hackers from Rhizomatica also led them to give Movistar a chance to redeem itself after having ignored the pueblo’s earlier appeals. In this town, people greatly value convenience and communication. As I tell students at my university, located in the heart of Silicon Valley, villagers for many years have longed to be reliably and conveniently connected to each other—and to the rest of the world. In other words, Taleans want cellphones because they want to be like you.

This article is based on excerpts from Connected: How a Mexican Village Built Its Own Cell Phone Network (University of California Press, 2020).

About the Author

Roberto J. González is chair of the anthropology department at San José State University.

How To Use Wi-Fi Networks To Ensure a Safe Return to Campus

Post Syndicated from Jan Dethlefs original https://spectrum.ieee.org/view-from-the-valley/telecom/wireless/want-to-return-to-campus-safely-tap-wifi-network


This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Universities, together with other operators of large spaces like shopping malls or airports, are currently facing a dilemma: how to return to business as fast as possible, while at the same time providing a safe environment?

Given that the behavior of individuals is the driving factor for viral spread, key to answering this question will be understanding and monitoring the dynamics of how people move and interact while on campus, whether indoors or outdoors.

Fortunately, universities already have the perfect tool in place: the campus-wide Wi-Fi network. These networks typically cover every indoor and most outdoor spaces on campus, and users are already registered. All that needs to be added is data analytics to monitor on-campus safety.

We, a team of researchers from the University of Melbourne and the startup Nexulogy, have developed the necessary algorithms that, when fed data already gathered by campus Wi-Fi networks, can help keep universities safe. We have already tested these algorithms successfully at several campuses and other large yet contained environments.

To date, little attention has been paid to using Wi-Fi networks to track social distancing. Countries like Australia have rolled out smartphone apps to support contact tracing, typically using Bluetooth to determine proximity. A recent Google/Apple collaboration, also using Bluetooth, led to a decentralized protocol for contact monitoring.

Yet the success of these apps relies mainly on people voluntarily downloading them. A study by the University of Oxford estimated that more than 70 percent of smartphone users in the United Kingdom would have to install a contact-tracing app for it to be effective. But adoption is not happening at anything near that scale; the Australian COVIDSafe app, for example, released in April 2020, had been downloaded by only 6 million people as of mid-June 2020, about 24 percent of the population.

Furthermore, this kind of Bluetooth-based tracking does not relate the contacts to a physical location, such as a classroom. This makes it hard to satisfy the requirements of running a safe campus. And data collected by the Bluetooth tracking apps is generally not readily available to the campus owners, so it doesn’t help make their own spaces safer.

Our Wi-Fi based algorithms provide the least privacy-intrusive monitoring mechanisms thus far, because they use only anonymous device addresses; no individual user names are necessary to understand crowd densities and proximity. In the case of a student or campus staff member reporting a positive coronavirus test, the device addresses determined to belong to someone at risk can be passed on to authorities with the appropriate privacy clearance. Only then would names be matched to devices, with people at risk informed individually and privately.

Wi-Fi presents the best solution for universities for a couple of reasons: wireless coverage is already campus wide; it is a safe assumption that everyone on campus is carrying at least one Wi-Fi capable device; and virtually everyone living and working on campus registers their devices to have internet access. Such tracking is possible without onerous user-facing app downloads.

University executives often already have the right to use the information collected by the wireless system, granted as part of its terms and conditions. In the midst of this pandemic, they now also have a legal, or at least a moral, obligation to use such data to the best of their ability to improve the safety and well-being of everyone on campus.

The process starts by collecting the time and symbolic location (that is, the network access point) of Wi-Fi capable devices when they are first detected by the Wi-Fi infrastructure (for example, when a student enters the campus’s Wi-Fi environment), and then at regular intervals or whenever they change locations. Then, after consolidating any multiple devices of a single user, our algorithms calculate the number of occupants in a given area. That provides a quick insight into crowd density in any building or outdoor plaza.
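The density step can be sketched in a few lines of Python. The data shapes here—`(device, time, access_point)` observations and a device-to-user table—are assumptions for illustration, not our actual implementation:

```python
from collections import defaultdict

# Toy observations: (device_id, timestamp, access_point).
observations = [
    ("dev1", 100, "library"),
    ("dev2", 100, "library"),   # second device carried by the same user
    ("dev3", 100, "library"),
    ("dev4", 100, "plaza"),
]
# Registration table mapping devices to anonymous user IDs, so that one
# person carrying two devices is counted once.
device_to_user = {"dev1": "u1", "dev2": "u1", "dev3": "u2", "dev4": "u3"}

def occupancy(observations, device_to_user, at_time):
    """Count distinct users per access point at a given time."""
    users_by_zone = defaultdict(set)
    for dev, t, ap in observations:
        if t == at_time:
            users_by_zone[ap].add(device_to_user[dev])
    return {ap: len(users) for ap, users in users_by_zone.items()}

print(occupancy(observations, device_to_user, 100))
# Two distinct users in the library (dev1 and dev2 collapse to one), one in the plaza.
```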

Our algorithms can also reconstruct the journey of any user within the Wi-Fi infrastructure, and from that can derive exposure time to other users, spaces visited and transmission risk.
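Deriving exposure time from two reconstructed journeys then reduces to interval overlap. A minimal sketch, assuming journeys are represented as `(zone, start, end)` tuples (an illustrative representation, not our production format):

```python
def exposure_minutes(journey_a, journey_b):
    """Total minutes two users spent in the same zone.

    Journeys are lists of (zone, start_minute, end_minute) intervals.
    """
    total = 0
    for zone_a, s_a, e_a in journey_a:
        for zone_b, s_b, e_b in journey_b:
            if zone_a == zone_b:
                # Overlap of [s_a, e_a] and [s_b, e_b], if any.
                overlap = min(e_a, e_b) - max(s_a, s_b)
                if overlap > 0:
                    total += overlap
    return total

# An infected user in a conference room for minutes 0-60 and a second
# user there for minutes 20-90 share 40 minutes of exposure.
print(exposure_minutes([("conference", 0, 60)], [("conference", 20, 90)]))
```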

This information lets campus managers do several things. For one, our algorithms can easily identify high risk areas by time and location and flag areas where social distance limits are regularly exceeded. This helps campus managers focus resources on areas that may need more hand sanitizers or more frequent deep cleaning.

For another, imagine an infected staff member has been identified by public health authorities. Based on the health authorities’ request, the university can identify possible infection pathways by tracking the individual’s journeys through the campus. It is possible to backtrack the movement history since the start of the data collection, for days or even weeks if necessary. 

To illustrate the methodology, we assumed one of us had a COVID infection and we reconstructed his journey and exposure to other university visitors during a recent visit to a busy university campus. In the heat map below, you can see that only the conference building showed a significant number of contacts (red bar) with an exposure time of, in this example, more than 30 minutes. While we only detected other individuals by Wi-Fi devices, the university executives could now notify people that potentially have been exposed to an infectious user.

This system isn’t without technical challenges. The biggest problem is noise in the system that needs to be removed before the data is useful.

For example, depending on the building layout and wireless network configurations, wireless counts inside specific zones can include passing outside traffic or static devices (like desktop computers). We have developed algorithms to eliminate both without requiring access to any user information.
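One way to sketch such a filter: drop devices whose total dwell time is too short (likely passing traffic) or that sit at a single access point around the clock (likely static devices). The thresholds below are illustrative assumptions, not the values we use:

```python
def filter_noise(sightings, min_dwell=5, max_dwell=12 * 60):
    """Heuristic noise filter over anonymous device sightings.

    sightings maps device_id -> list of (minute, access_point).
    Devices seen for under min_dwell minutes are treated as passing
    traffic; devices parked at one access point for over max_dwell
    minutes are treated as static (e.g., desktop computers).
    """
    kept = {}
    for dev, obs in sightings.items():
        times = [t for t, _ in obs]
        access_points = {ap for _, ap in obs}
        dwell = max(times) - min(times)
        if dwell < min_dwell:
            continue                                   # passer-by
        if len(access_points) == 1 and dwell > max_dwell:
            continue                                   # static device
        kept[dev] = obs
    return kept

sightings = {
    "phone":   [(0, "hall"), (30, "lab")],       # normal user: kept
    "walker":  [(0, "gate"), (2, "gate")],       # 2-minute pass-through
    "desktop": [(0, "office"), (900, "office")], # parked 15 hours, one AP
}
print(sorted(filter_noise(sightings)))
```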

Even though the identification and management of infection risks is limited to the area covered by the wireless infrastructure, the timely identification of a COVID event naturally benefits areas beyond the campus. Campuses, and even residential campuses in lockdown, have a daily influx of external visitors —staff, contractors, and family members. Those individuals could be identified via telecommunication providers if requested by health authorities.

In the case of someone on campus testing positive for the virus, people they came in contact with, their contact times and places they went to can be identified within minutes. This allows the initiation of necessary measures (e.g. COVID testing or decontamination) in a targeted, timely and cost-effective way.

Since the analytics can happen in the cloud, the software can easily be updated to reflect on new or refined medical knowledge or health regulations, say a new exposure time threshold or physical distancing guidelines.

Privacy is paramount in this process. Understanding population densities and crowd management is done anonymously. Only in the case of someone on campus reporting a confirmed case of the coronavirus do authorities with the necessary privacy clearance need to connect the devices and the individuals. Our algorithm operates separately from the identification and notification process.

As universities around the world are eager to welcome students back to the campus, proactive plans need to be in place to ensure the safety and wellbeing of everyone. Wi-Fi is available and ready to help.

Jan Dethlefs is a data scientist and Simon Wei is a data engineer, both at Nexulogy in Melbourne, Australia.  Stephan Winter is a professor and Martin Tomko is a senior lecturer, both in the Department of Infrastructure Engineering at the University of Melbourne, where they work with geospatial data analytics.

With 5G Rollout Lagging, Research Looks Ahead to 6G

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/telecom/wireless/with-5g-rollout-lagging-research-looks-ahead-to-6g

Amid a 5G rollout that has faced its fair share of challenges, it might seem somewhat premature to start looking ahead at 6G, the next generation of mobile communications. But 6G development is happening now, and it’s being pursued in earnest by both industry and academia.

Much of the future landscape for 6G was mapped out in an article published in March of this year in IEEE Communications, titled “Toward 6G Networks: Use Cases and Technologies.” The article presents the requirements, enabling technologies, and use cases for a systematic approach to overcoming the research challenges of 6G.

“6G research activities are envisioning radically new communication technologies, network architectures, and deployment models,” said Michele Zorzi,  a professor at the University of Padua in Italy, and one of the authors of the IEEE Communications article. “Although some of these solutions have already been examined in the context of 5G, they were intentionally left out of initial 5G standards developments and will not be part of early 5G commercial rollout mainly because markets are not mature enough to support them.”

The foundational difference between 5G and 6G networks, according to Zorzi, will be the increased role of intelligence in 6G networks. It will go beyond the mere classification and prediction tasks found in legacy and 5G systems.

While machine-learning-driven networks are still in their infancy, they will likely represent a fundamental component of the 6G ecosystem, which will shift toward a fully user-centric architecture in which end terminals can make autonomous network decisions without supervision from centralized controllers.

This decentralization of control will enable the sub-millisecond latency required by several 6G services (below the already challenging 1-millisecond requirement of emerging 5G systems) and is expected to yield more responsive network management.

To achieve this new kind of performance, the underlying technologies of 6G will be fundamentally different from 5G. For example, says Marco Giordani, a researcher at the University of Padua and co-author of the IEEE Communications article, even though 5G networks have been designed to operate at extremely high frequencies in the millimeter-wave bands, 6G will exploit even higher-spectrum technologies—terahertz and optical communications being two examples.

At the same time, Giordani explains that 6G will have a new cell-less network architecture that is a clear departure from current mobile network designs. The cell-less paradigm can promote seamless mobility support, targeting interruption-free communication during handovers, and can provide quality of service (QoS) guarantees that are in line with the most challenging mobility requirements envisioned for 6G, according to Giordani.

Giordani adds: “While 5G networks (and previous generations) have been designed to provide connectivity for an essentially bi-dimensional space, future 6G heterogeneous architectures will provide three-dimensional coverage by deploying non-terrestrial platforms (e.g., drones, HAPs, and satellites) to complement terrestrial infrastructures.”

Key Industry and Academic Initiatives in 6G Development

Cognitive Radios Will Go Where No Deep-Space Mission Has Gone Before

Post Syndicated from Sven Bilén original https://spectrum.ieee.org/telecom/wireless/cognitive-radios-will-go-where-no-deepspace-mission-has-gone-before

Space seems empty and therefore the perfect environment for radio communications. Don’t let that fool you: There’s still plenty that can disrupt radio communications. Earth’s fluctuating ionosphere can impair a link between a satellite and a ground station. The materials of the antenna can be distorted as it heats and cools. And the near-vacuum of space is filled with low-level ambient radio emanations, known as cosmic noise, which come from distant quasars, the sun, and the center of our Milky Way galaxy. This noise also includes the cosmic microwave background radiation, a ghost of the big bang. Although faint, these cosmic sources can overwhelm a wireless signal over interplanetary distances.

Depending on a spacecraft’s mission, or even the particular phase of the mission, different link qualities may be desirable, such as maximizing data throughput, minimizing power usage, or ensuring that certain critical data gets through. To maintain connectivity, the communications system constantly needs to tailor its operations to the surrounding environment.

Imagine a group of astronauts on Mars. To connect to a ground station on Earth, they’ll rely on a relay satellite orbiting Mars. As the space environment changes and the planets move relative to one another, the radio settings on the ground station, the satellite orbiting Mars, and the Martian lander will need continual adjustments. The astronauts could wait 8 to 40 minutes—the duration of a round trip—for instructions from mission control on how to adjust the settings. A better alternative is to have the radios use neural networks to adjust their settings in real time. Neural networks maintain and optimize a radio’s ability to keep in contact, even under extreme conditions such as Martian orbit. Rather than waiting for a human on Earth to tell the radio how to adapt its systems—during which the commands may have already become outdated—a radio with a neural network can do it on the fly.

Such a device is called a cognitive radio. Its neural network autonomously senses the changes in its environment, adjusts its settings accordingly—and then, most important of all, learns from the experience. That means a cognitive radio can try out new configurations in new situations, which makes it more robust in unknown environments than a traditional radio would be. Cognitive radios are thus ideal for space communications, especially far beyond Earth orbit, where the environments are relatively unknown, human intervention is impossible, and maintaining connectivity is vital.

Worcester Polytechnic Institute and Penn State University, in cooperation with NASA, recently tested the first cognitive radios designed to operate in space and keep missions in contact with Earth. In our tests, even the most basic cognitive radios maintained a clear signal between the International Space Station (ISS) and the ground. We believe that with further research, more advanced, more capable cognitive radios can play an integral part in successful deep-space missions in the future, where there will be no margin for error.

Future crews to the moon and Mars will have more than enough to do collecting field samples, performing scientific experiments, conducting land surveys, and keeping their equipment in working order. Cognitive radios will free those crews from the onus of maintaining the communications link. Even more important is that cognitive radios will help ensure that an unexpected occurrence in deep space doesn’t sever the link, cutting the crew’s last tether to Earth, millions of kilometers away.

Cognitive radio as an idea was first proposed by Joseph Mitola III at the KTH Royal Institute of Technology, in Stockholm, in 1998. Since then, many cognitive radio projects have been undertaken, but most were limited in scope or tested just a part of a system. The most robust cognitive radios tested to date have been built by the U.S. Department of Defense.

When designing a traditional wireless communications system, engineers generally use mathematical models to represent the radio and the environment in which it will operate. The models try to describe how signals might reflect off buildings or propagate in humid air. But not even the best models can capture the complexity of a real environment.

A cognitive radio—and the neural network that makes it work—learns from the environment itself, rather than from a mathematical model. A neural network takes in data about the environment, such as what signal modulations are working best or what frequencies are propagating farthest, and processes that data to determine what the radio’s settings should be for an optimal link. The key feature of a neural network is that it can, over time, optimize the relationships between the inputs and the result. This process is known as training.

For cognitive radios, here’s what training looks like. In a noisy environment where a signal isn’t getting through, the radio might first try boosting its transmission power. It will then determine whether the received signal is clearer; if it is, the radio will raise the transmission power more, to see if that further improves reception. But if the signal doesn’t improve, the radio may try another approach, such as switching frequencies. In either case, the radio has learned a bit about how it can get a signal through its current environment. Training a cognitive radio means constantly adjusting its transmission power, data rate, signal modulation, or any other settings it has in order to learn how to do its job better.
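That trial-and-error loop can be sketched as a greedy search. Everything below—the `measure` callback, the settings dictionary, the toy quality model—is a stand-in for illustration, not the actual flight software:

```python
def adapt(radio, measure, steps=10):
    """Greedy sketch of the training loop described above: raise transmit
    power while the measured link improves; when it stops helping, try
    the next candidate frequency. measure(power, freq) stands in for an
    observed link-quality metric such as signal-to-noise ratio."""
    power, freq_idx = radio["power"], 0
    freqs = radio["freqs"]
    best = measure(power, freqs[freq_idx])
    for _ in range(steps):
        trial = measure(power + 1, freqs[freq_idx])
        if trial > best:
            power, best = power + 1, trial   # boosting power helped: keep going
        elif freq_idx + 1 < len(freqs):
            freq_idx += 1                    # power stopped helping: switch frequency
            best = measure(power, freqs[freq_idx])
        else:
            break                            # nothing left to try
    return power, freqs[freq_idx], best

# Toy environment: quality saturates at power 5, and frequency "B" is
# 3 units better than "A". The loop raises power, then hops to "B".
toy_quality = lambda p, f: min(p, 5) + (3 if f == "B" else 0)
print(adapt({"power": 1, "freqs": ["A", "B"]}, toy_quality))
```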

Any cognitive radio will require initial training before being launched. This training serves as a guide for the radio to improve upon later. Once the neural network has undergone some training and it’s up in space, it can autonomously adjust the radio’s settings as necessary to maintain a strong link regardless of its location in the solar system.

To control its basic settings, a cognitive radio uses a wireless system called a software-defined radio. Major functions that are implemented with hardware in a conventional radio are accomplished with software in a software-defined radio, including filtering, amplifying, and detecting signals. That kind of flexibility is essential for a cognitive radio.

There are several basic reasons why cognitive radio experiments are mostly still limited in scope. At their core, the neural networks are complex algorithms that need enormous quantities of data to work properly. They also require a lot of computational horsepower to arrive at conclusions quickly. The radio hardware must be designed with enough flexibility to adapt to those conclusions. And any successful cognitive radio needs to make these components work together. Our own effort to create a proof-of-concept cognitive radio for space communications was possible only because of the state-of-the-art Space Communications and Navigation (SCaN) test bed on the ISS.

NASA’s Glenn Research Center created the SCaN test bed specifically to study the use of software-defined radios in space. The test bed was launched by the Japan Aerospace Exploration Agency and installed on the main lattice frame of the space station in July 2012. Until its decommissioning in June 2019, the SCaN test bed allowed researchers to test how well software-defined radios could meet the demands expected of radios in space—such as real-time reconfiguration for orbital operations, the development and verification of new software for custom space networks, and, most relevant for our group, cognitive communications.

The test bed consisted of three software-defined radios broadcasting in the S-band (2 to 4 gigahertz) and Ka-band (26.5 to 40 GHz) and receiving in the L-band (1 to 2 GHz). The SCaN test bed could communicate with NASA’s Tracking and Data Relay Satellite System in low Earth orbit and a ground station at NASA’s Glenn Research Center, in Cleveland.

Nobody has ever used a cognitive radio system on a deep-space mission before—nor will they, until the technology has been thoroughly vetted. The SCaN test bed offered the ideal platform for testing the tech in a less hostile environment close to Earth. In 2017, we built a cognitive radio system to communicate between ground-based modems and the test bed. Ours would be the first-ever cognitive radio experiments conducted in space.

In our experiments, the SCaN test bed was a stand-in for the radio on a deep-space probe. It’s essential for a deep-space probe to maintain contact with Earth. Otherwise, the entire mission could be doomed. That’s why our primary goal was to prove that the radio could maintain a communications link by adjusting its radio settings autonomously. Maintaining a high data rate or a robust signal were lower priorities.

A cognitive radio at the ground station would decide on an “action,” or set of operating parameters for the radio, which it would send to the test-bed transmitter and to two modems at the ground station. The action dictated a specific data rate, modulation scheme, and power level for the test-bed transmitter and the ground station modems that would most likely be effective in maintaining the wireless link.

We completed our first tests during a two-week window in May 2017. That wasn’t much time, and we typically ended up with only two usable passes per day, each lasting just 8 or 9 minutes. The ISS doesn’t pass over the same points on Earth during each orbit, so there’s a limited number of opportunities to get a line-of-sight connection to a particular location. Despite the small number of passes, though, our radio system experienced plenty of dynamic and challenging link conditions, including fluctuations in the atmosphere and weather. Often, the solar panels and other protrusions on the ISS created large numbers of echoes and reflections that our system had to take into account.

During each pass, the neural network would compare the quality of the communications link with data from previous passes. The network would then select the previous pass with the conditions that were most similar to those of the current pass as a jumping-off point for setting the radio. Then, as if fiddling with the knob on an FM radio, the neural network would adjust the radio’s settings to best fit the conditions of the current pass. These settings included all of the elements of the wireless signal, including the data rate and modulation.

The neural network wasn’t limited to drawing on just one previous pass. If the best option seemed to be taking bits and pieces of multiple passes to create a bespoke solution, the network would do just that.
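The select-the-most-similar-pass step, including the blending of several past passes, resembles a nearest-neighbor lookup. The sketch below shows that idea under stated assumptions: the feature vectors, the Euclidean similarity metric, and the equal-weight averaging are all simplifications of whatever the flight experiment actually used.

```python
import math

# Illustrative sketch: seed the radio settings for a new ISS pass from
# the most similar previously logged passes. Each past pass is a dict
# with "features" (link conditions) and "settings" (radio parameters).
def initial_settings(current_features, past_passes, k=1):
    # Rank past passes by how closely their conditions match the
    # current pass (smaller Euclidean distance = more similar).
    ranked = sorted(past_passes,
                    key=lambda p: math.dist(p["features"], current_features))
    nearest = ranked[:k]
    # With k > 1, blend "bits and pieces" of several passes by
    # averaging their settings (equal weights, for simplicity).
    n = len(nearest)
    return {key: sum(p["settings"][key] for p in nearest) / n
            for key in nearest[0]["settings"]}
```

From this starting point, the learning loop would then fine-tune the settings against the live link, as the article describes.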

During the tests, the cognitive radio clearly showed that it could learn how to maintain a communications link. The radio autonomously selected settings to avoid losing contact, and the link remained stable even as the radio adjusted itself. It also maintained enough signal power to send data, even though that wasn’t a primary goal for us.

Overall, the success of our tests on the SCaN test bed demonstrated that cognitive radios could be used for deep-space missions. However, our experiments also uncovered several problems that will have to be solved before such radios blast off for another planet.

The biggest problem we encountered was something called “catastrophic forgetting.” This happens when a neural network receives too much new information too quickly and so forgets a lot of the information it already possessed. Imagine you’re learning algebra from a textbook and by the time you reach the end, you’ve already forgotten the first half of the material, so you start over. When this happened in our experiments, the cognitive radio’s abilities degraded significantly because it was basically retraining itself over and over in response to environmental conditions that kept getting overwritten.

Our solution was to implement a tactic called ensemble learning in our cognitive radio. Ensemble learning is a technique, still largely experimental, that uses a collection of “learner” neural networks, each of which is responsible for training under a limited set of conditions—in our case, on a specific type of communications link. One learner may be best suited for ISS passes with heavy interference from solar particles, while another may be best suited for passes with atmospheric distortions caused by thunderstorms. An overarching meta–neural network decides which learner networks to use for the current situation. In this arrangement, even if one learner suffers from catastrophic forgetting, the cognitive radio can still function.

To understand why, let’s say you’re learning how to drive a car. One way to practice is to spend 100 hours behind the wheel, assuming you’ll learn everything you need to know in the process. That is currently how cognitive radios are trained; the hope is that what they learn during their training will be applicable to any situation they encounter. But what happens when the environments the cognitive radio encounters in the real world differ significantly from the training environments? It’s like practicing to drive on highways for 100 hours but then getting a job as a delivery truck driver in a city. You may excel at driving on highways, but you might forget the basics of driving in a stressful urban environment.

Now let’s say you’re practicing for a driving test. You might identify what scenarios you’ll likely be tested on and then make sure you excel at them. If you know you’re bad at parallel parking, you may prioritize that over practicing right-of-way rules at a four-way stop. The key is to identify what you need to learn rather than assume you will practice enough to eventually learn everything. In ensemble learning, this is the job of the meta–neural network.

The meta–neural network may recognize that the radio is in an environment with a high amount of ionizing radiation, for example, and so it will select the learner networks for that environment. The neural network thus starts from a baseline that’s much closer to reality. It doesn’t have to replace information as quickly, making catastrophic forgetting much less likely.
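The routing role of the meta–neural network can be sketched in a few lines. In this simplified, hypothetical version, each “learner” carries a centroid describing the link conditions it was trained on, and the meta-selector routes the current conditions to the nearest specialist; real learners and the meta-network would be neural networks, which are omitted here.

```python
# Minimal sketch of ensemble learning as described above. All class
# names, the centroid representation, and the nearest-centroid routing
# rule are illustrative assumptions, not the experiment's actual design.
class Learner:
    """A specialist model for one class of link conditions."""
    def __init__(self, name, centroid):
        self.name = name
        self.centroid = centroid  # typical conditions it trained on

    def predict_settings(self, conditions):
        raise NotImplementedError  # in practice, a trained neural network

class MetaSelector:
    """Stands in for the meta-network that picks a learner per pass."""
    def __init__(self, learners):
        self.learners = learners

    def route(self, conditions):
        # Choose the learner whose training conditions best match the
        # current environment (smallest squared distance to centroid).
        return min(self.learners,
                   key=lambda l: sum((c - x) ** 2
                                     for c, x in zip(l.centroid, conditions)))
```

Because each learner only ever trains on its own slice of conditions, a burst of unfamiliar data overwrites at most one specialist, which is why the ensemble limits the damage from catastrophic forgetting.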

We implemented a very basic version of ensemble learning in our cognitive radio for a second round of experiments in August 2018. We found that the technique resulted in fewer instances of catastrophic forgetting. Nevertheless, there are still plenty of questions about ensemble learning. For one, how do you train the meta–neural network to select the best learners for a scenario? And how do you ensure that a learner, if chosen, actually masters the scenario it’s being selected for? There aren’t solid answers yet for these questions.

In May 2019, the SCaN test bed was decommissioned to make way for an X-ray communications experiment, and we lost the opportunity for future cognitive-radio tests on the ISS. Fortunately, NASA is planning a three-CubeSat constellation to further demonstrate cognitive space communications. If the mission is approved, the constellation could launch in the next several years. The goal is to use the constellation as a relay system to find out how multiple cognitive radios can work together.

Those planning future missions to the moon and Mars know they’ll need a more intelligent approach to communications and navigation than we have now. Astronauts won’t always have direct-to-Earth communications links. For example, signals sent from a radio telescope on the far side of the moon will require relay satellites to reach Earth. NASA’s planned orbiting Lunar Gateway, aside from serving as a staging area for surface missions, will be a major communications relay.

The Lunar Gateway is exactly the kind of space communications system that will benefit from cognitive radio. The round-trip delay for a signal between Earth and the moon is about 2.5 seconds. Handing off radio operations to a cognitive radio aboard the Lunar Gateway will save precious seconds in situations when those seconds really matter, such as maintaining contact with a robotic lander during its descent to the surface.
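The roughly 2.5-second figure follows directly from the Earth-moon distance and the speed of light; the quick check below uses the mean distance, so the exact delay varies a little over the moon’s orbit.

```python
# Back-of-the-envelope check of the ~2.5-second round-trip delay
# quoted above, using the mean Earth-moon distance.
EARTH_MOON_KM = 384_400       # mean Earth-moon distance
C_KM_PER_S = 299_792.458      # speed of light in vacuum

round_trip_s = 2 * EARTH_MOON_KM / C_KM_PER_S
print(f"Round-trip light delay: {round_trip_s:.2f} s")
```

That delay is short enough for many tasks, but during a time-critical event such as a landing, moving decisions onto a radio at the Gateway removes it entirely.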

Our experiments with the SCaN test bed showed that cognitive radios have a place in future deep-space communications. As humanity looks again at exploring the moon, Mars, and beyond, guaranteeing reliable connectivity between planets will be crucial. There may be plenty of space in the solar system, but there’s no room for dropped calls.

This article appears in the August 2020 print issue as “Where No Radio Has Gone Before.”

About the Authors

Alexander Wyglinski is a professor of electrical engineering and robotics engineering at Worcester Polytechnic Institute. Sven Bilén is a professor of engineering design, electrical engineering, and aerospace engineering at the Pennsylvania State University. Dale Mortensen is an electronics engineer and Richard Reinhart is a senior communications engineer at NASA’s Glenn Research Center, in Cleveland.