Tag Archives: Telecom/Wireless

NYU Wireless Picks Up Its Own Baton to Lead the Development of 6G

Post Syndicated from NYU Wireless original https://spectrum.ieee.org/telecom/wireless/nyu_wireless_picks_up_its_own_baton_to_lead_the_development_of_6g

The fundamental technologies that have made 5G possible are unequivocally massive MIMO (multiple-input multiple-output) and millimeter wave (mmWave) technologies. Without these two technologies there would be no 5G network as we now know it.

The two men who were the key architects of these fundamental 5G technologies have been leading one of the premier research institutes in mobile telephony since 2012: NYU Wireless, a part of NYU’s Tandon School of Engineering.

Ted Rappaport is the founding director of NYU Wireless and one of the key researchers in the development of mmWave technology. Rappaport also served as a key thought leader for 5G, planting a flag in the ground nearly a decade ago by arguing that mmWave would be a key enabling technology for the next generation of wireless. His earlier work at two other wireless centers he founded, at Virginia Tech and The University of Texas at Austin, laid the groundwork that helped NYU Wireless catapult into one of the premier wireless institutions in the world.

Thomas Marzetta, who now serves as the director of NYU Wireless, is the scientist who led the development of massive MIMO while at Bell Labs and championed its use in 5G, where it has become a key enabling technology.

These two researchers, who were so instrumental in developing the technologies that enabled 5G, are now turning their attention to the next generation of mobile communications, and, according to both, we face some pretty steep technical challenges in realizing it.

“Ten years ago, Ted was already pushing mobile mmWave, and I at Bell Labs was pushing massive MIMO,” said Marzetta.  “So we had two very promising concepts ready for 5G. The research concepts that the wireless community is working on for 6G are not as mature at this time, making our focus on 6G even more important.”

This sense of urgency is shared by both men, who are pushing against any complacency about starting the development of 6G technologies as soon as possible. With this aim in mind, Rappaport, just as he did 10 years ago, has planted a new flag in the world of mobile communications with the publication last year of an IEEE article entitled “Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond.”

“In this paper, we said for the first time that 6G is going to be in the sub-terahertz frequencies,” said Rappaport. “We also suggested the idea of wireless cognition, where human thought and brain computation could be sent over wireless in real time. It’s a very visionary look at something. Our phones, which right now are flashlights, emails, TV browsers, calendars, are going to become much more.”

While Rappaport feels confident that they have the right vision for 6G, he worries about the lack of awareness of how critical it is for US government funding agencies and companies to develop the enabling technologies to realize it. In particular, both Rappaport and Marzetta are concerned about the economic competitiveness of the US and the funding challenges that will persist if 6G research is not recognized as a priority.

“These issues of funding and awareness are critical for research centers, like NYU Wireless,” said Rappaport. “The US needs to get behind NYU Wireless to foster these ideas and create these cutting-edge technologies.”

With this funding support, Rappaport argues, teaching research institutes like NYU Wireless can create the engineers who end up going to companies and making technologies like 6G a reality. “There are very few schools in the world that are even thinking this far ahead in wireless; we have the foundations to make it happen,” he added.

Both Rappaport and Marzetta also believe that making national centers of excellence in wireless could help to create an environment in which students could be exposed constantly to a culture and knowledge base for realizing the visionary ideas for the next generation of wireless.

“The Federal government in the US needs to pick a few winners for university centers of excellence to be melting pots, to be places where things are brought together,” said Rappaport. “The Federal government has to get together and put money into these centers to allow them to hire talent, attract more faculty, and become comparable to what we see in other countries where huge amounts of funding is going in to pick winners.”

While research centers like NYU Wireless get support from industry to conduct their research, Rappaport and Marzetta believe a bump in Federal funding could amplify and leverage the contributions of industrial affiliates. NYU Wireless currently has 15 industrial affiliates, a large number of them from outside the US, according to Rappaport.

“Government funding could get more companies involved by incentivizing them through a financial multiplier,” added Rappaport.

Of course, 6G is not simply a matter of setting out a vision and attracting funding; it also means tackling some pretty big technical challenges.

Both men believe that we will need to see the development of new forms of MIMO, such as holographic MIMO, to enable more efficient use of the sub-6 GHz spectrum. Also, solutions will need to be developed to overcome the blockage problems that occur with mmWave and higher frequencies.

Fundamental to these technology challenges is access to new frequency bands, so that a 6G network operating at sub-terahertz frequencies can be achieved. Both Rappaport and Marzetta are confident that technology will enable us to access even more challenging frequencies.

“There’s nothing technologically stopping us right now from 30, and 40, and 50 gigahertz millimeter wave, even up to 700 gigahertz,” said Rappaport. “I see the fundamentals of physics and devices allowing us to take us easily over the next 20 years up to 700 or 800 gigahertz.”

Marzetta added that there is much more that can be done in the scarce and valuable sub-6 GHz spectrum. While massive MIMO is the most spectrally efficient wireless scheme ever devised, it is based on extremely simplified models of how antennas create electromagnetic signals that propagate to another location, according to Marzetta. “No existing wireless system or scheme is operating close at all to limits imposed by nature,” he added.

While expanding the spectrum of frequencies and making even better use of the sub-6 GHz spectrum are the foundation for the realization of future networks, Rappaport and Marzetta also expect that we will see increased leveraging of AI and machine learning. This will enable the creation of intelligent networks that can manage themselves with much greater efficiency than today’s mobile networks.

“Future wireless networks are going to evolve with greater intelligence,” said Rappaport. An example of this intelligence, according to Rappaport, is the new way in which the Citizens Broadband Radio Service (CBRS) spectrum is going to be used in a spectrum access server (SAS) for the first time ever.

“It’s going to be a nationwide mobile system that uses these spectrum access servers that mobile devices talk to in the 3.6 gigahertz band,” said Rappaport. “This is going to allow enterprise networks to be a cross of old licensed cellular and old unlicensed Wi-Fi. It’s going to be kind of somewhere in the middle. This serves as an early indication of how mobile communications will evolve over the next decade.”

These intelligent networks will become increasingly important when 6G moves towards so-called cell-less (“cell-free”) networks.

Currently, mobile network coverage is provided through hundreds of roughly circular cells spread out across an area. With 5G networks, each of these cells is being equipped with a massive MIMO array to serve the users within the cell. But with a cell-less 6G network, the aim would be to have hundreds of thousands, or even millions, of access points, spread out more or less randomly but all operating cooperatively.

“With this system, there are no cell boundaries, so as a user moves across the city, there’s no handover or handoff from cell to cell because the whole city essentially constitutes one cell,” explained Marzetta. “All of the people receiving mobile services in a city get it through these access points; in principle, every user is served by every access point all at once.”

One of the obvious challenges of this cell-less architecture is the economics of installing so many access points all over a city. Another is getting all of the signals to and from each access point to or from a central point that does all the computing and number crunching.

While this all sounds daunting in terms of traditional mobile networks, it becomes far more approachable when you consider that the Internet of Things (IoT) will help create this cell-less network.

“We’re going to go from 10 or 20 devices today to hundreds of devices around us that we’re communicating with, and that local connectivity is what will drive this cell-less world to evolve,” said Rappaport. “This is how I think a lot of 5G and 6G use cases in this wide swath of spectrum are going to allow these low-power local devices to live and breathe.”

To realize all of these technologies, including intelligent networks, cell-less networks, expanded radio frequencies, and wireless cognition, the key factor will be training future engineers. 

To this issue, Marzetta noted: “Wireless communications is a growing and dynamic field that is a real opportunity for the next generation of young engineers.”

4G on the Moon: One Small Leap, One Giant Step

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/wireless/4g-moon

Standing up a 4G/LTE network might seem well below the pay grade of a legendary innovation hub like Nokia Bell Labs. However, standing up that same network on the moon is another matter. 

Nokia and 13 other companies — including SpaceX and Lockheed Martin — have won five-year contracts totaling over US $370 million from NASA to demonstrate key infrastructure technologies on the lunar surface. The contracts are part of NASA’s Artemis program to return humans to the moon.

NASA also wants the moon to be a stepping stone for further explorations in the solar system, beginning with Mars.

For that, says L.K. Kubendran, lead for commercial space technology partnerships at NASA, a lunar outpost would need, at the very least, power, shelter, and a way to communicate.

And the moon’s 4G network will not only carry voice and data signals like those found in terrestrial applications, it’ll also handle remote operations like controlling lunar rovers from a distance. The antennas and base stations will need to be ruggedized for the harsh, radiation-filled lunar environment. Plus, over the course of a typical lunar day-night cycle (28 Earth days), the network infrastructure will face temperature swings of more than 250 °C between sunlight and shadow. Of course, every piece of hardware in the network must first be transported to the moon, too.

“The equipment needs to be hardened for environmental stresses such as vibration, shock and acceleration, especially during launch and landing procedures, and the harsh conditions experienced in space and on the lunar surface,” says Thierry Klein, head of the Enterprise & Industrial Automation Lab, Nokia Bell Labs. 

Oddly enough, it’ll probably all work better on the moon, too. 

“The propagation model will be different,” explains Dola Saha, assistant professor in Electrical and Computer Engineering at the University at Albany, SUNY. She adds that the lack of atmosphere as well as the absence of typical terrestrial obstructions like trees and buildings will likely mean better signal propagation. 

To ferry the equipment for its lunar 4G network, Nokia will be collaborating with Intuitive Machines — which is also developing a lunar ice drill for the moon’s south pole. “Intuitive Machines is building a rover that they want to land on the moon in late 2022,” says Kubendran. That rover mission now seems likely to be the one that begins hauling 4G infrastructure to the lunar surface.

For all the technological innovation a lunar 4G network might bring about, its signals could unfortunately also mean bad news for radio astronomy. Radio telescopes are notoriously vulnerable to interference (a.k.a. radio frequency interference, or RFI). For instance, a stray phone signal from Mars carries enough power to interfere with the Jodrell Bank radio observatory near Manchester, U.K. So, asks Emma Alexander, an astrophysicist writing for The Conversation, “how would it fare with an entire 4G network on the moon?”

That depends, says Saha. 

“Depends on which frequencies you’re using, and…the side lobes and…filters [at] the front end of these networks,” Saha says. On Earth, she continues, there are so many applications that the FCC in the US, and corresponding bodies in other countries, allocate frequencies to each of them.

“RFI can be mitigated at the source with appropriate shielding and precision in the emission of signals,” Alexander writes. “Astronomers are constantly developing strategies to cut RFI from their data. But this increasingly relies on the goodwill of private companies.” It also relies on regulation by governments, she adds, looking to shield earthly radio telescopes from human-generated interference from above.

IoT Network Companies Have Cracked Their Chicken-and-Egg Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/iot-network-companies-have-cracked-their-chicken-and-egg-problem

Along with everything else going on, we may look back at 2020 as the year that companies finally hit upon a better business model for Internet of Things (IoT) networks. Established network companies such as the Things Network and Helium, and new players such as Amazon, have seemingly given up on the idea of making money from selling network connectivity. Instead, they’re focused on getting the network out there for developers to use, assuming in the process that they’ll benefit from the effort in the long run.

IoT networks have a chicken-and-egg problem. Until device makers see widely available networks, they don’t want to build products that run on the network. And customers don’t want to pay for a network, and thus fund its development, if there aren’t devices available to use. So it’s hard to raise capital to build a new wide-area network (WAN) that provides significant coverage and supports a plethora of devices that are enticing to use.

It certainly didn’t help such network companies as Ingenu, MachineQ, Senet, and SigFox that they’re all marketing half a dozen similar proprietary networks. Even the cellular carriers, which are promoting both LTE-M for machine-to-machine networks and NB-IoT for low-data-rate networks, have historically struggled to justify their investments in IoT network infrastructure. After COVID-19 started spreading in Japan, NTT DoCoMo called it quits on its NB-IoT network, citing a lack of demand.

“Personally, I don’t believe in business models for [low-power WANs],” says Wienke Giezeman, the CEO and cofounder of the Things Network. His company does deploy long-range low-power WAN gateways for customers that use the LoRa Alliance’s LoRaWAN specifications. (“LoRa” is short for “long-range.”) But Giezeman sees that as the necessary element for later providing the sensors and software that deliver the real IoT applications customers want to buy. Trying to sell both the network and the devices is like running a restaurant that makes diners buy and set up the stove before it cooks their meals.

The Things Network sets up the “stove” and includes the cost of operating it in the “meal.” But, because Giezeman is a big believer in the value of open source software and creating a sense of abundance around last-mile connectivity, he’s also asking customers to opt in to turning their networks into public networks.

Senet does something similar, letting customers share their networks. Helium is using cryptocurrencies to entice people to set up LoRaWAN hotspots on their networks and rewarding them with tokens for keeping the networks operational. When someone uses data from an individual’s Helium node, that individual also gets tokens that might be worth something one day. I actually run a Helium hotspot in my home, although I’m more interested in the LoRa coverage than the potential for wealth.

And there’s Amazon, which plans to embed its own version of a low-power WAN into its Echo and its Ring security devices. Whenever someone buys one of these devices they’ll have the option of adding it as a node on the Amazon Sidewalk network. Amazon’s plan is to build out a decentralized network for IoT devices, starting with a deal to let manufacturer Tile use the network for its Bluetooth tracking devices.

After almost a decade of following various low-power IoT networks, I’m excited to see them abandon the idea that the network is the big value, and instead recognize that it’s the things on the network that entice people. Let’s hope this year marks a turning point for low-power WANs.

This article appears in the November 2020 print issue as “Network Included.”

Breaking the Latency Barrier

Post Syndicated from Shivendra Panwar original https://spectrum.ieee.org/telecom/wireless/breaking-the-latency-barrier

For communications networks, bandwidth has long been king. With every generation of fiber optic, cellular, or Wi-Fi technology has come a jump in throughput that has enriched our online lives. Twenty years ago we were merely exchanging texts on our phones, but we now think nothing of streaming videos from YouTube and Netflix. No wonder, then, that video now consumes up to 60 percent of Internet bandwidth. If this trend continues, we might yet see full-motion holography delivered to our mobiles—a techie dream since Princess Leia’s plea for help in Star Wars.

Recently, though, high bandwidth has begun to share the spotlight with a different metric of merit: low latency. The amount of latency varies drastically depending on how far in a network a signal travels, how many routers it passes through, whether it uses a wired or wireless connection, and so on. The typical latency in a 4G network, for example, is 50 milliseconds. Reducing latency to 10 milliseconds, as 5G and Wi-Fi are currently doing, opens the door to a whole slew of applications that high bandwidth alone cannot. With virtual-reality headsets, for example, a delay of more than about 10 milliseconds in rendering and displaying images in response to head movement is very perceptible, and it leads to a disorienting experience that is for some akin to seasickness.

Multiplayer games, autonomous vehicles, and factory robots also need extremely low latencies. Even as 5G and Wi-Fi make 10 milliseconds the new standard for latency, researchers, like my group at New York University’s NYU Wireless research center, are already working hard on another order-of-magnitude reduction, to about 1 millisecond or less.

Pushing latencies down to 1 millisecond will require reengineering every step of the communications process. In the past, engineers have ignored sources of minuscule delay because they were inconsequential to the overall latency. Now, researchers will have to develop new methods for encoding, transmitting, and routing data to shave off even the smallest sources of delay. And immutable laws of physics—specifically the speed of light—will dictate firm restrictions on what networks with 1-millisecond latencies will look like. There’s no one-size-fits-all technique that will enable these extremely low-latency networks. Only by combining solutions to all these sources of latency will it be possible to build networks where time is never wasted.

Until the 1980s, latency-sensitive technologies used dedicated end-to-end circuits. Phone calls, for example, were carried on circuit-switched networks that created a dedicated link between callers to guarantee minimal delays. Even today, phone calls need to have an end-to-end delay of less than 150 milliseconds, or it’s difficult to converse comfortably.

At the same time, the Internet was carrying delay-tolerant traffic, such as emails, using technologies like packet switching. Packet switching is the Internet equivalent of a postal service, where mail is routed through post offices to reach the correct mailbox. Packets, or bundles of data between 40 and 1,500 bytes, are sent from point A to point B, where they are reassembled in the correct order. Using the technology available in the 1980s, delays routinely exceeded 100 milliseconds, with the worst delays well over 1 second.

Eventually, Voice over Internet Protocol (VoIP) technology supplanted circuit-switched networks, and now the last circuit switches are being phased out by providers. Since VoIP’s triumph, there have been further reductions in latency to get us into the range of tens of milliseconds.

Latencies below 1 millisecond would open up new categories of applications that have long been sought. One of them is haptic communications, or communicating a sense of touch. Imagine balancing a pencil on your fingertip. The reaction time between when you see the pencil beginning to tip over and then moving your finger to keep it balanced is measured in milliseconds. A human-controlled teleoperated robot with haptic feedback would need a similar latency level.

Robots that aren’t human controlled would also benefit from 1-millisecond latencies. Just like a person, a robot can avoid falling over or dropping something only if it reacts within a millisecond. But the powerful computers that process real-time reactions and the batteries that run them are heavy. Robots could be lighter and operate longer if their “brains” were kept elsewhere on a low-latency wireless network.

Before I get into all the ways engineers might build ultralow-latency networks, it would help to understand how data travels from one device to another. Of course, the signals have to physically travel along a transmission link between the devices. That journey isn’t limited by just the speed of light, because delays are caused by switches and routers along the way. And even before that, data must be converted and prepped on the device itself for the journey. All of these parts of the process will need to be redesigned to consistently achieve submillisecond latencies.

I should mention here that my research is focused on what’s called the transport layer in the multiple layers of protocols that govern the exchange of data on the Internet. That means I’m concerned with the portion of a communications network that controls transmission rates and checks to make sure packets reach their destination in the correct order and without errors. While there are certainly small sources of delay on devices themselves, I’m going to focus on network delays.

The first issue is frame duration. A wireless access link—the link that connects a device to the larger, wired network—schedules transmissions within periodic intervals called frames. For a 4G link, the typical frame duration is 1 millisecond, so you could potentially lose that much time just waiting for your turn to transmit. But 5G has shrunk frame durations, lessening their contribution to delay.
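To put numbers on that wait, here’s a quick back-of-the-envelope sketch in Python (my own illustration, not part of any standard; the 0.125-millisecond 5G frame is just one of several durations the standard allows). A packet that becomes ready at a random moment waits half a frame on average, and a full frame in the worst case.

```python
def frame_alignment_delay_ms(frame_ms: float) -> tuple[float, float]:
    """Return (average, worst-case) wait in ms for the next frame boundary."""
    return frame_ms / 2, frame_ms

for label, frame in [("4G, 1 ms frames", 1.0), ("5G, 0.125 ms frames", 0.125)]:
    avg, worst = frame_alignment_delay_ms(frame)
    print(f"{label}: average {avg:.3f} ms, worst case {worst:.3f} ms")
```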

Wi-Fi functions differently. Rather than using frames, Wi-Fi networks use random access, where a device transmits immediately and reschedules the transmission if it collides with another device’s transmission in the link. The upside is that this method has shorter delays if there is no congestion, but the delays build up quickly as more device transmissions compete for the same channel. The latest version of Wi-Fi, Wi-Fi Certified 6 (based on the draft standard IEEE P802.11ax), addresses the congestion problem by introducing scheduled transmissions, just as 4G and 5G networks do.
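The way random access degrades under load can be seen in a toy simulation (my illustration, with made-up slot times and backoff parameters rather than the actual 802.11 procedure): each station picks a random upcoming slot, and a station that collides doubles its waiting window before trying again.

```python
import random

def mean_access_delay(n_stations: int, slot_ms: float = 0.1, trials: int = 2000) -> float:
    """Average delay for one tagged station to win a collision-free slot."""
    total = 0.0
    for _ in range(trials):
        windows = [4] * n_stations                  # initial contention windows
        picks = [random.randrange(w) for w in windows]
        waited = 0
        while picks.count(picks[0]) > 1:            # tagged station collided
            waited += picks[0] + 1                  # slots burned on the failed try
            windows[0] = min(windows[0] * 2, 1024)  # exponential backoff
            picks = [random.randrange(w) for w in windows]
        total += (waited + picks[0]) * slot_ms
    return total / trials

for n in (2, 10, 50):
    print(f"{n:3d} stations: ~{mean_access_delay(n):.2f} ms mean access delay")
```

With a handful of stations the mean delay stays near a slot or two; with dozens of contenders it climbs rapidly, which is exactly why scheduled transmissions were added.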

When a data packet is traveling through the series of network links connecting its origin to its destination, it is also subject to congestion delays. Packets are often forced to queue up at links as both the amount of data traveling through a link and the amount of available bandwidth fluctuate naturally over time. In the evening, for example, more data-heavy video streaming could cause congestion delays through a link. Sending data packets through a series of wireless links is like drinking through a straw that is constantly changing in length and diameter. As delays and bandwidth change, one second you may be sucking up dribbles of data, while the next you have more packets than you can handle.

Congestion delays are unpredictable, so they cannot be avoided entirely. The responsibility for mitigating congestion delays falls to the Transmission Control Protocol (TCP), one part of the collection of Internet protocols that governs how computers communicate with one another. The most common implementations of TCP, such as TCP Cubic, measure congestion by sensing when the buffers in network routers are at capacity. At that point, packets are being lost because there is no room to store them. Think of it like a bucket with a hole in it, placed underneath a faucet. If the faucet is putting more water into the bucket than is draining through the hole, it fills up until eventually the bucket overflows. The bucket in this example is the router buffer, and if it “overflows,” packets are lost and need to be sent again, adding to the delay. The sender then adjusts its transmission rate to try and avoid flooding the buffer again.
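The bucket dynamic is easy to sketch in code. Here is a minimal loss-based sender in the spirit described above (a simplified illustration, not the actual TCP Cubic algorithm): the rate creeps up until the buffer overflows, then halves.

```python
def loss_based_sender(link_rate=100.0, buffer_cap=50.0, steps=40):
    """Additive-increase, multiplicative-decrease against a droptail buffer."""
    rate, queue = 10.0, 0.0
    for t in range(steps):
        queue = max(0.0, queue + rate - link_rate)  # bucket fills or drains
        if queue > buffer_cap:                      # overflow: packets lost
            queue = buffer_cap
            rate /= 2                               # back off sharply after loss
        else:
            rate += 5.0                             # creep upward while all is well
        print(f"t={t:2d}  rate={rate:6.1f}  queue={queue:5.1f}")

loss_based_sender()
```

Notice that between overflows the queue still sits far from empty; packets are waiting even when nothing is being lost.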

The problem is that even if the buffer doesn’t overflow, data can still be stuck there queuing for its turn through the bucket “hole.” What we want to do is allow packets to flow through the network without queuing up in buffers. YouTube uses a variation of TCP developed at Google called TCP BBR, short for Bottleneck Bandwidth and Round-Trip propagation time, with this exact goal. It works by adjusting the transmission rate until it matches the rate at which data is passing through routers. Going back to our bucket analogy, it constantly adjusts the flow of water out of the faucet to keep it the same as the flow out of the hole in the bucket.
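Here is the same toy setting with a BBR-style sender (my simplification of the faucet-matching idea, not Google’s production code): it paces at its measured delivery rate, briefly probes above that rate to look for new bandwidth, and then paces below it to drain whatever queue the probe built.

```python
def rate_matching_sender(link_rate=100.0, steps=48):
    """Pace at the measured delivery rate; probe up, then drain the queue."""
    rate, queue, est = 50.0, 0.0, 50.0
    gains = [1.25, 0.75, 1, 1, 1, 1, 1, 1]        # BBR-style pacing-gain cycle
    for t in range(steps):
        delivered = min(queue + rate, link_rate)  # what the bottleneck drains
        queue = max(0.0, queue + rate - delivered)
        est = max(0.95 * est, delivered)          # decaying max of delivery rate
        rate = est * gains[t % len(gains)]
        print(f"t={t:2d}  rate={rate:6.1f}  queue={queue:5.1f}")

rate_matching_sender()
```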

For further reductions in congestion, engineers have to deal with previously ignorable tiny delays. One example of such a delay is the minuscule variation in the amount of time it takes each specific packet to transmit during its turn to access the wireless link. Another is the slight differences in computation times of different software protocols. These can both interfere with TCP BBR’s ability to determine the exact rate to inject packets into the connection without leaving capacity unused or causing a traffic jam.

A focus of my research is how to redesign TCP to deliver low delays combined with sharing bandwidth fairly with other connections on the network. To do so, my team is looking at existing TCP versions, including BBR and others like the Internet Engineering Task Force’s L4S (short for Low Latency Low Loss Scalable throughput). We’ve found that these existing versions tend to focus on particular applications or situations. BBR, for example, is optimized for YouTube videos, where video data typically arrives faster than it is being viewed. That means that BBR ignores bandwidth variations because a buffer of excess data has already made it to the end user. L4S, on the other hand, allows routers to prioritize time-sensitive data like real-time video packets. But not all routers are capable of doing this, and L4S is less useful in those situations.

My group is figuring out how to take the most effective parts of these TCP versions and combine them into a more versatile whole. Our most promising approach so far has devices continually monitoring for data-packet delays and reducing transmission rates when delays are building up. With that information, each device on a network can independently figure out just how much data it can inject into the network. It’s somewhat like shoppers at a busy grocery store observing the checkout lines to see which ones are moving faster, or have shorter queues, and then choosing which lines to join so that no one line becomes too long.
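As a concrete sketch of that idea, the controller below treats any rise in measured delay above the smallest delay it has ever seen as evidence of queuing and eases off; the thresholds and gains here are placeholders of my own, not our actual algorithm.

```python
def adjust_rate(rate: float, delay_ms: float, min_delay_ms: float) -> float:
    """Back off when delay rises above the empty-queue baseline."""
    queuing = delay_ms - min_delay_ms   # delay attributable to queuing
    if queuing > 1.0:                   # ~1 ms of queuing observed: ease off
        return rate * 0.9
    return rate * 1.02                  # otherwise probe gently for capacity

rate = 100.0
for delay in [5.0, 5.2, 7.5, 9.0, 6.0, 5.1]:   # hypothetical measurements
    rate = adjust_rate(rate, delay, min_delay_ms=5.0)
    print(f"measured {delay:.1f} ms -> rate {rate:.1f}")
```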

For wireless networks specifically, reliably delivering latencies below 1 millisecond is also hampered by connection handoffs. When a cellphone or other device moves from the coverage area of one base station (commonly referred to as a cell tower) to the coverage of a neighboring base station, it has to switch its connection. The network initiates the switch when it detects a drop in signal strength from the first station and a corresponding rise in the second station’s signal strength. The handoff occurs during a window of several seconds or more as the signal strengths change, so it rarely interrupts a connection.

But 5G is now tapping into millimeter waves—the frequencies above 20 gigahertz—which bring unprecedented hurdles. These frequency bands have not been used before because they typically don’t propagate as far as lower frequencies, a shortcoming that has only recently been addressed with technologies like beamforming. Beamforming works by using an array of antennas to point a narrow, focused transmission directly at the intended receiver. Unfortunately, obstacles like pedestrians and vehicles can entirely block these beams, causing frequent connection interruptions.

One option to avoid interruptions like these is to have multiple millimeter-wave base stations with overlapping coverage, so that if one base station is blocked, another base station can take over. However, unlike regular cell-tower handoffs, these handoffs are much more frequent and unpredictable because they are caused by the movement of people and traffic.

My team at NYU is developing a solution in which neighboring base stations are also connected by a fiber-optic ring network. All of the traffic to this group of base stations would be injected into, and then circulate around, the ring. Any base station connected to a cellphone or other wireless device can copy the data as it passes by on the ring. If one base station becomes blocked, no problem: The next one can try to transmit to the device. If it, too, is blocked, the one after that can have a try. Think of a baggage carousel at an airport, with a traveler standing near it waiting for her luggage. She may be “blocked” by others crowding around the carousel, so she might have to move to a less crowded spot. Similarly, the fiber-ring system would allow reception as long as the mobile device is anywhere in the range of at least one unblocked base station.
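In code, the carousel logic amounts to a simple scan around the ring (a toy model under my own assumptions, not our implementation): every station sees the packet, and the first unblocked one transmits it.

```python
def deliver(packet: str, blocked: list[bool]) -> int | None:
    """Walk the ring; the first unblocked base station transmits the packet."""
    for station, is_blocked in enumerate(blocked):
        if not is_blocked:
            print(f"station {station} transmits {packet!r}")
            return station
    return None   # every station blocked: delivery fails this pass of the ring

# Stations 0 and 1 are blocked by pedestrians and traffic; station 2 steps in.
deliver("video frame 42", blocked=[True, True, False, False])
```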

There’s one final source of latency that can no longer be ignored, and it’s immutable: the speed of light. Because light travels so quickly, it used to be possible to ignore it in the presence of other, larger sources of delay. It cannot be disregarded any longer.

Take for example the robot we discussed earlier, with its computational “brain” in a server in the cloud. Servers and large data centers are often located hundreds of kilometers away from end devices. But because light travels at a finite speed, the robot’s brain, in this case, cannot be too far away. Moving at the speed of light, wireless communications travel about 300 kilometers per millisecond. Signals in optical fiber are even slower, at about 200 km/ms. Finally, considering that the robot would need round-trip communications below 1 millisecond, the maximum possible distance between the robot and its brain is about 100 km. And that’s ignoring every other source of possible delay.
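That budget is simple enough to verify with arithmetic, using the propagation speeds quoted above:

```python
WIRELESS_KM_PER_MS = 300   # radio through air, roughly the speed of light
FIBER_KM_PER_MS = 200      # light is about a third slower in glass

round_trip_ms = 1.0
one_way_ms = round_trip_ms / 2          # the signal must go out and come back

print("wireless limit:", WIRELESS_KM_PER_MS * one_way_ms, "km")  # 150.0 km
print("fiber limit:   ", FIBER_KM_PER_MS * one_way_ms, "km")     # 100.0 km
```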

The only solution we see to this problem is to simply move away from traditional cloud computing toward what is known as edge computing. Indeed, the push for submillisecond latencies is driving the development of edge computing. This method places the actual computation as close to the devices as possible, rather than in server farms hundreds of kilometers away. However, finding the real estate for these servers near users will be a costly challenge to service providers.

There’s no doubt that as researchers and engineers work toward 1-millisecond delays, they will find more sources of latency I haven’t mentioned. When every microsecond matters, they’ll have to get creative to whittle away the last few slivers on the way to 1 millisecond. Ultimately, the new techniques and technologies that bring us to these ultralow latencies will be driven by what people want to use them for.

When the Internet was young, no one knew what the killer app might be. Universities were excited about remotely accessing computing power or facilitating large file transfers. The U.S. Department of Defense saw value in the Internet’s decentralized nature in the event of an attack on communications infrastructure. But what really drove usage turned out to be email, which was more relevant to the average person. Similarly, while factory robots and remote surgery will certainly benefit from submillisecond latencies, it is entirely possible that neither emerges as the killer app for these technologies. In fact, we probably can’t confidently predict what will turn out to be the driving force behind vanishingly small delays. But my money is on multiplayer gaming.

This article appears in the November 2020 print issue as “Breaking the Millisecond Barrier.”

About the Author

Shivendra Panwar is director of the New York State Center for Advanced Technology in Telecommunications and a professor in the electrical and computer engineering department at New York University.

Indoor Air-Quality Monitoring Can Allow Anxious Office Workers to Breathe Easier

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/indoor-airquality-monitoring-can-allow-anxious-office-workers-to-breathe-easier

Although IEEE Spectrum’s offices reopened several weeks ago, I have not been back and continue to work remotely—as I suspect many, many other people are still doing. Reopening offices, schools, gyms, restaurants, or really any indoor space seems like a dicey proposition at the moment.

Generally speaking, masks are better than no masks, and outdoors is better than indoors, but short of remaining holed up somewhere with absolutely no contact with other humans, there’s always that chance of contracting COVID-19. One of the reasons I personally haven’t gone back to the office is that the novelty of actually being able to work at my desk again for the first time in months doesn’t seem worth the uncertainty over whether the virus, exhaled by a coworker, is present in the air.

Unfortunately, we can’t detect the virus directly, short of administering tests to people. There’s no sensor you can plug in and place on your desk that will detect virus nuclei hanging out in the air around you. What is possible, and might bring people some peace of mind, is installing IoT sensors in office buildings and schools to monitor air quality and ascertain how favorable the indoor environment is to COVID-19 transmission.

One such monitoring solution has been developed by wireless tech company eleven-x and environmental engineering consultant Pinchin. The companies have created a real-time monitoring system that measures carbon dioxide levels, relative humidity, and particulate levels to assess the quality of indoor air.

“COVID is the first time we’ve dealt with a source of environmental hazard that is the people themselves,” says David Shearer, the director of Pinchin’s indoor environmental quality group. But that also gives Pinchin and eleven-x some ideas on how to approach monitoring indoor spaces, in order to determine how likely transmission might be.

Again, because there’s no such thing as a sensor that can directly detect COVID-19 virus nuclei in an environment, the companies have resorted to measuring secondhand effects. The first measurement is carbon dioxide levels. The biggest contributor to CO2 in an indoor space is people exhaling. Higher levels in a room or building suggest more people, and past a certain point they mean either that there are too many people in the space or that the air is not being vented properly. Indirectly, that carbon dioxide suggests that if someone were a carrier of the coronavirus, there’s a good chance more virus nuclei would be in the air.

The IoT system also measures relative humidity. Dan Mathers, the CEO of eleven-x, says that his company and Pinchin designed their monitoring system based on recommendations by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE’s guidelines [PDF] suggest that a space’s relative humidity should hover between 40 and 60 percent. More or less than that, Shearer explains, and it’s easier for the virus to be picked up by the tissue of the lungs.

A joint sensor measures carbon dioxide levels and relative humidity. A separate sensor measures the level of 2.5-micrometer particles in the air. 2.5 µm (or PM 2.5) is an established particulate size threshold in air-quality measurement. And as circumstances would have it, it’s also slightly smaller than the size of a virus nucleus, which is about 3 µm. The two are still close enough in size that a high level of 2.5-µm particles in the air means a greater chance that some COVID-19 nuclei are potentially in the space.
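A simple screening routine shows how such readings might be combined (a sketch of my own; the 40-to-60-percent humidity band is ASHRAE’s, as noted below, while the CO2 and particulate cutoffs here are placeholders rather than Pinchin’s actual thresholds):

```python
def screen_reading(co2_ppm: float, rh_percent: float, pm25_ugm3: float) -> list[str]:
    """Flag readings that suggest a riskier indoor environment."""
    flags = []
    if co2_ppm > 1000:              # placeholder ventilation/occupancy cutoff
        flags.append("high CO2: crowded or poorly vented space")
    if not 40 <= rh_percent <= 60:  # ASHRAE's recommended humidity band
        flags.append("humidity outside the 40-60 percent band")
    if pm25_ugm3 > 35:              # placeholder particulate cutoff
        flags.append("elevated 2.5-micron particulate level")
    return flags or ["no flags raised"]

print(screen_reading(co2_ppm=1250, rh_percent=35, pm25_ugm3=12))
```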

According to Mathers, the eleven-x sensors are battery-operated and communicate using LoRaWAN, a wireless standard that is most notable for being extremely low power. Mathers says the batteries in eleven-x’s sensors can last for ten years or more without needing to be replaced.

LoRaWAN’s principal drawback is that it isn’t a good fit for transmitting data-heavy messages. But that’s not a problem for eleven-x’s sensors. Mathers says the average message they send is just 10 bytes. Where LoRa excels is in situations where simple updates or measurements need to be sent—measurements such as carbon dioxide, relative humidity, and particulate levels. The sensors communicate on what’s called an interrupt basis, meaning once a measurement passes a particular threshold, the sensor wakes up and sends an update. Otherwise, it whiles away the hours in a very low power sleep mode.
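The interrupt pattern looks something like the sketch below (an illustration only; the function names and threshold are hypothetical, not eleven-x’s firmware): the device sleeps until a reading crosses the threshold, then wakes and sends a payload of just a few bytes.

```python
import random
import struct

CO2_THRESHOLD_PPM = 1000                    # hypothetical wake-up threshold

def read_co2_ppm() -> int:
    return random.randint(400, 1400)        # stand-in for the real sensor driver

def send_lorawan(payload: bytes) -> None:
    print(f"uplink {payload.hex()} ({len(payload)} bytes)")   # stand-in radio

above = False
for minute in range(10):    # real firmware would sleep between these readings
    co2 = read_co2_ppm()
    if (co2 > CO2_THRESHOLD_PPM) != above:  # threshold crossed in either direction
        above = not above                   # wake up and report the change
        send_lorawan(struct.pack("<HB", co2, above))
```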

Mathers explains that one particulate sensor is placed near a vent to get a good measurement of 2.5-micron-particle levels. Then, several carbon dioxide/relative humidity sensors are scattered around the indoor space to get multiple measurements. All of the sensors communicate their measurements to LoRaWAN gateways that relay the data to eleven-x and Pinchin for real-time monitoring.

Typically, indoor air quality tests are done in-person, maybe once or twice a year, with measuring equipment brought into the building. Shearer believes that the desire to reopen buildings, despite the specter of COVID-19 outbreaks, will trigger a sea change for real-time air monitoring.

It has to be stressed again that the monitoring system developed by eleven-x and Pinchin is not a “yes or no” style system. “We’re in discussions about creating an index,” says Shearer, “but it probably won’t ever be that simple.” Shifting regional guidelines, and the evolving understanding about how COVID-19 is transmitted will make it essentially impossible to create a hard-and-fast index for what qualities of indoor air are explicitly dangerous with respect to the virus.

Instead, like social distancing and mask wearing recommendations, the measurements are intended to give general guidelines to help people avoid unhealthy situations. What’s more, the sensors will notify building occupants should indoor conditions change from likely safe to likely problematic. It’s another way to make people feel safe if they have to return to work or school.

Apptricity Beams Bluetooth Signals Over 30 Kilometers

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/apptricity-beams-bluetooth-signals-over-20-miles

When you think about Bluetooth, you probably think about things like wireless headphones, computer mice, and other personal devices that utilize the short-range, low-power technology. That’s where Bluetooth has made its mark, after all—as an alternative to Wi-Fi, using unlicensed spectrum to make quick connections between devices.

But it turns out that Bluetooth can go much farther than the couple of meters for which most people rely on it. Apptricity, a company that provides asset and inventory tracking technologies, has developed a Bluetooth beacon that can transmit signals over 32 kilometers (20 miles). The company believes its beacon is a cheaper, secure alternative to established asset and inventory tracking technologies.

A quick primer, if you’re not entirely clear on asset tracking versus inventory tracking: There are some gray areas in the middle, but by and large, “asset tracking” refers to an IT department registering which employee has which laptop, or a construction company keeping tabs on where its backhoes are on a large construction site. “Inventory tracking” refers more to things like a retail store keeping correct product counts on the shelves, or a hospital noting how quickly it’s going through its store of gloves.

Asset and inventory tracking typically use labor-intensive techniques like barcode or passive RFID scanning, which are limited both by distance (a couple of meters at most, in both cases) and the fact that a person has to be directly involved in scanning. Alternatively, companies can use satellite or LTE tags to keep track of stuff. While such tags don’t require a person to actively track items, they are far more expensive, requiring a costly subscription to either a satellite or LTE network.

So, the burning question: How does one send a Bluetooth signal over 30-plus kilometers? Typically, Bluetooth’s distance is limited because large distances would require a prohibitive amount of power, and its use of unlicensed spectrum means that the greater the distance, the more likely it will interfere with other wireless signals.

The key new wrinkle, according to Apptricity’s CEO Tim Garcia, is precise tuning within the Bluetooth spectrum. Garcia says it’s the same principle as a tightly focused laser beam. A laser beam will travel farther without its signal weakening beyond recovery if the photons making up the beam are all as close to a specific frequency as possible. Apptricity’s Bluetooth beacons use firmware developed by the company to achieve such precise tuning, but with Bluetooth signals instead of photons. Thus, data can be sent and received by the beacons without interfering and without requiring unwieldy amounts of power.

Garcia says RFID tags and barcode scanning don’t actively provide information about assets or inventory. Bluetooth, however, can not only pinpoint where something is, it can send updates about a piece of equipment that needs maintenance or just requires a routine check-up.

By its own estimation, Apptricity’s Bluetooth beacons are 90 percent cheaper than LTE or satellite tags, specifically because Bluetooth devices don’t require paying for a subscription to an established network.

The company’s current transmission distance record for its Bluetooth beacons is 38 kilometers (23.6 miles). The company has also demonstrated non-commercial versions of the beacons for the U.S. Department of Defense with broadcast ranges between 80 and 120 kilometers.

Unlock Wireless Test Capabilities On Your RF Gear

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/unlock_wireless_test_capabilities_on_your_rf_gear

Discover how easy it is to update your instrument to test the latest wireless standards.

We’re offering 30-day software trials that evolve test capabilities on your signal analyzers and signal generators. Automatically generate or analyze signals for many wireless applications.

Choose from our most popular applications:

  • Bluetooth®
  • WLAN 802.11
  • Vector Modulation Analysis
  • And more

Building Your Own Cellphone Network Can Be Empowering, and Also Problematic

Post Syndicated from Roberto J. González original https://spectrum.ieee.org/telecom/wireless/building-your-own-cellphone-network-can-be-empowering-and-problematic

During the summer of 2013, news reports recounted how a Mexican pueblo launched its own do-it-yourself cellphone network. The people of Talea de Castro, population 2,400, created a mini-telecom company, the first of its kind in the world, without help from the government or private companies. They built it after Mexico’s telecommunications giants failed to provide mobile service to the people in the region. It was a fascinating David and Goliath story that pitted the country’s largest corporation against indigenous villagers, many of whom were Zapotec-speaking subsistence farmers with little formal education.

The reports made a deep impression on me. In the 1990s, I’d spent more than two years in Talea, located in the mountains of Oaxaca in southern Mexico, working as a cultural anthropologist.

Reading about Talea’s cellular network inspired me to learn about the changes that had swept the pueblo since my last visit. How was it that, for a brief time, the villagers became international celebrities, profiled by USA Today, BBC News, Wired, and many other media outlets? I wanted to find out how the people of this community managed to wire themselves into the 21st century in such an audacious and dramatic way—how, despite their geographic remoteness, they became connected to the rest of the world through the wondrous but unpredictably powerful magic of mobile technology.

I naively thought it would be the Mexican version of a familiar plot line that has driven many Hollywood films: Small-town men and women fearlessly take on big, bad company. Townspeople undergo trials, tribulations, and then…triumph! And they all lived happily ever after.

I discovered, though, that Talea’s cellphone adventure was much more complicated—neither fairy tale nor cautionary tale. And this latest phase in the pueblo’s centuries-long effort to expand its connectedness to the outside world is still very much a work in progress.

Sometimes it’s easy to forget that 2.5 billion people—one-third of the planet’s population—don’t have cellphones. Many of those living in remote regions want mobile service, but they can’t have it for reasons that have to do with geography, politics, or economics.

In the early 2000s, a number of Taleans began purchasing cellphones because they traveled frequently to Mexico City or other urban areas to visit relatives or to conduct business. Villagers began petitioning Telcel and Movistar, Mexico’s largest and second-largest wireless operators, for cellular service, and they became increasingly frustrated by the companies’ refusal to consider their requests. Representatives from the firms claimed that it was too expensive to build a mobile network in remote locations—this, in spite of the fact that Telcel’s parent company, América Móvil, is Mexico’s wealthiest corporation. The board of directors is dominated by the Slim family, whose patriarch, Carlos Slim, is among the world’s richest men.

But in November 2011, an opportunity arose when Talea hosted a three-day conference on indigenous media. Kendra Rodríguez and Abrám Fernández, a married couple who worked at Talea’s community radio station, were among those who attended. Rodríguez is a vivacious and articulate young woman who exudes self-confidence. Fernández, a powerfully built man in his mid-thirties, chooses his words carefully. Rodríguez and Fernández are tech savvy and active on social media. (Note: These names are pseudonyms.)

Also present were Peter Bloom, a U.S.-born rural-development expert, and Erick Huerta, a Mexican lawyer specializing in telecommunications policy. Bloom had done human rights work in Nigeria in rural areas where “it wasn’t possible to use existing networks because of security concerns and lack of finances,” he explains. “So we began to experiment with software that would enable communication between phones without relying upon any commercial companies.” Huerta had worked with the United Nations’ International Telecommunication Union.

The two men founded Rhizomatica, a nonprofit focused on expanding cellphone service in indigenous areas around the world. At the November 2011 conference, they approached Rodríguez and Fernández about creating a community cell network in Talea, and the couple enthusiastically agreed to help.

In 2012, Rhizomatica assembled a ragtag team of engineers and hackers to work on the project. They decided to experiment with open-source software called OpenBTS—Open Base Transceiver Station—which allows cell phones to communicate with each other if they are within range of a base station. Developed by David Burgess and Harvind Samra, cofounders of the San Francisco Bay Area startup Range Networks, the software also allows networked phones to connect over the Internet using VoIP, or Voice over Internet Protocol. OpenBTS had been successfully deployed at Burning Man and on Niue, a tiny Polynesian island nation.

Bloom’s team began adapting OpenBTS for Talea. The system requires electrical power and an Internet connection to work. Fortunately, Talea had enjoyed reliable electricity for nearly 40 years and had gotten Internet service in the early 2000s.

In the meantime, Huerta pulled off a policy coup: He convinced federal regulators that indigenous communities had a legal right to build their own networks. Although the government had granted telecom companies access to nearly the entire radio-frequency spectrum, small portions remained unoccupied. Bloom and his team decided to insert Talea’s network into these unused bands. They would become digital squatters.

As these developments were under way, Rodríguez and Fernández were hard at work informing Taleans about the opportunity that lay before them. Rodríguez used community radio to pique interest among her fellow villagers. Fernández spoke at town hall meetings and coordinated public information sessions at the municipal palace. Soon they had support from hundreds of villagers eager to get connected. Eventually, they voted to invest 400,000 pesos (approximately US $30,000) of municipal funding for equipment to build the cellphone network. By doing so, the village assumed a majority stake in the venture.

In March 2013, the cellphone network was finally ready for a trial run. Volunteers placed an antenna near the town center, in a location they hoped would provide adequate coverage of the mountainside pueblo. As they booted up the system, they looked for the familiar signal bars on their cellphone screens, hoping for a strong reception.

The bars lit up brightly—success!

The team then walked through the town’s cobblestone streets in order to detect areas with poor reception. As they strolled up and down the paths, they heard laughter and shouting from several houses. People emerged from their homes, stunned.

“We’ve got service!” exclaimed a woman in disbelief.

“It’s true, I called the state capital!” cried a neighbor.

“I just connected to my son in Guadalajara!” shouted another.

Although Rodríguez, Fernández, and Bloom knew that many Taleans had acquired cellphones over the years, they didn’t realize the extent to which the devices were being used every day—not for communications but for listening to music, taking photos and videos, and playing games. Back at the base station, the network computer registered more than 400 cellphones within the antenna’s reach. People were already making calls, and the number was increasing by the minute as word spread throughout town. The team shut down the system to avoid an overload.

As with any experimental technology, there were glitches. Outlying houses had weak reception. Inclement weather affected service, as did problems with Internet connectivity. Over the next six months, Bloom and his team returned to Talea every week, making the 4-hour trip from Oaxaca City in a red Volkswagen Beetle.

Their efforts paid off. The community network proved to be immensely popular, and more than 700 people quickly subscribed. The service was remarkably inexpensive: Local calls and text messages were free, and calls to the United States were only 1.5 cents per minute, approximately one-tenth of what it cost using landlines. For example, a 20-minute call to the United States from the town’s landline phone kiosk might cost a campesino a full day’s wages.

Initially, the network could handle only 11 phone calls at a time, so villagers voted to impose a 5-minute limit to avoid overloads. Even with these controls, many viewed the network as a resounding success. And patterns of everyday life began to change, practically overnight: Campesinos who walked 2 hours to get to their corn fields could now phone family members if they needed provisions. Elderly women collecting firewood in the forest had a way to call for help in case of injury. Youngsters could send messages to one another without being monitored by their parents, teachers, or peers. And the pueblo’s newest company—a privately owned fleet of three-wheeled mototaxis—increased its business dramatically as prospective passengers used their phones to summon drivers.

Soon other villages in the region followed Talea’s lead and built their own cellphone networks—first, in nearby Zapotec-speaking pueblos and later, in adjacent communities where Mixe and Mixtec languages are spoken. By late 2015, the villages had formed a cooperative, Telecomunicaciones Indígenas Comunitarias, to help organize the villages’ autonomous networks. Today TIC serves more than 4,000 people in 70 pueblos.

But some in Talea de Castro began asking questions about management and maintenance of the community network. Who could be trusted with operating this critical innovation upon which villagers had quickly come to depend? What forms of oversight would be needed to safeguard such a vital part of the pueblo’s digital infrastructure? And how would villagers respond if outsiders tried to interfere with the fledgling network?

On a chilly morning in May 2014, Talea’s Monday tianguis, or open-air market, began early and energetically. Some merchants descended upon the mountain village by foot alongside cargo-laden mules. But most arrived on the beds of clattering pickup trucks. By daybreak, dozens of merchants were displaying an astonishing variety of merchandise: produce, fresh meat, live farm animals, eyeglasses, handmade huaraches, electrical appliances, machetes, knockoff Nike sneakers, DVDs, electronic gadgets, and countless other items.

Hundreds of people from neighboring villages streamed into town on this bright spring morning, and the plaza came alive. By midmorning, the pueblo’s restaurants and cantinas were overflowing. The tianguis provided a welcome opportunity to socialize over coffee or a shot of fiery mezcal.

As marketgoers bustled through the streets, a voice blared over the speakers of the village’s public address system, inviting people to attend a special event—the introduction of a new, commercial cellphone network. Curious onlookers congregated around the central plaza, drawn by the sounds of a brass band.

As music filled the air, two young men in polo shirts set up inflatable “sky dancers,” then assembled tables in front of the stately municipal palace. The youths draped the tables with bright blue tablecloths, each imprinted with the logo of Movistar, Mexico’s second-largest wireless provider. Six men then filed out of the municipal palace and sat at the tables, facing the large audience.

What followed was a ceremony that was part political rally, part corporate event, part advertisement—and part public humiliation. Four of the men were politicians affiliated with the authoritarian Institutional Revolutionary Party, which has dominated Oaxaca state politics for a century. They were accompanied by two Movistar representatives. A pair of young, attractive local women stood nearby, wearing tight jeans and Movistar T-shirts.

The politicians and company men had journeyed to the pueblo to inaugurate a new commercial cellphone service, called Franquicias Rurales (literally, “Rural Franchises”). As the music faded, one of the politicians launched a verbal attack on Talea’s community network, declaring it to be illegal and fraudulent. Then the Movistar representatives presented their alternative plan.

A cellphone antenna would soon be installed to provide villagers with Movistar cellphone service—years after a group of Taleans had unsuccessfully petitioned the company to do precisely that. Movistar would rent the antenna to investors, and the franchisees would sell service plans to local customers in turn. Profits would be divided between Movistar and the franchise owners.

The grand finale was a fiesta, where the visitors gave away cheap cellphones, T-shirts, and other swag as they enrolled villagers in the company’s mobile-phone plans.

The Movistar event came at a tough time for the pueblo's network. From the beginning, it had experienced technical problems. The system's VoIP technology required a stable Internet connection, which was not always available in Talea. As demand grew, the problems continued, even after a more powerful system manufactured by the Canadian company Nutaq/NuRAN Wireless was installed. Sometimes, too many users saturated the network. Whenever the electricity went out, which was not uncommon during the stormy summer months, technicians would have to painstakingly reboot the system.

In early 2014, the network encountered a different threat: opposition from within the community. Popular support for the network was waning amid a growing perception that those in charge of maintaining it were mismanaging the enterprise. Soon, village officials demanded that the network repay the money granted by the municipal treasury. When the municipal government opted to become Talea's Movistar franchise owner, it used these repaid funds to pay for the company's antenna. The homegrown network lost more than half of its subscribers in the months following the company's arrival. In March 2019, it shut down for good.

No one forced Taleans to turn their backs on the autonomous cell network that had brought them international fame. There were many reasons behind the pueblo’s turn to Movistar. Although the politicians who condemned the community network were partly responsible, the cachet of a globally recognized brand also helped Movistar succeed. The most widely viewed television events in the pueblo (as in much of the rest of the world) are World Cup soccer matches. As Movistar employees were signing Taleans up for service during the summer of 2014, the Mexican national soccer team could be seen on TV wearing jerseys that prominently featured the telecom company’s logo.

But perhaps most importantly, young villagers were drawn to Movistar because of a burning desire for mobile data services. Many teenagers from Talea attended high school in the state capital, Oaxaca City, where they became accustomed to the Internet and all that it offers.

Was Movistar's goal to smash an incipient system of community-controlled communication, destroy a bold vision for an alternative future, and prove that villagers are incapable of managing affairs themselves? It's hard to say. The company was probably driven more by long-term financial objectives: increasing market share by extending its reach into remote regions.

And yet, despite the demise of Talea’s homegrown network, the idea of community-based telecoms had taken hold. The pueblo had paved the way for significant legal and regulatory changes that made it possible to create similar networks in nearby communities, which continue to thrive.

The people of Talea have long been open to outside ideas and technologies, and have maintained a pragmatic attitude and a willingness to innovate. The same outlook that led villagers to enthusiastically embrace the foreign hackers from Rhizomatica also led them to give Movistar a chance to redeem itself after having ignored the pueblo’s earlier appeals. In this town, people greatly value convenience and communication. As I tell students at my university, located in the heart of Silicon Valley, villagers for many years have longed to be reliably and conveniently connected to each other—and to the rest of the world. In other words, Taleans want cellphones because they want to be like you.

This article is based on excerpts from Connected: How a Mexican Village Built Its Own Cell Phone Network (University of California Press, 2020).

About the Author

Roberto J. González is chair of the anthropology department at San José State University.

How To Use Wi-Fi Networks To Ensure a Safe Return to Campus

Post Syndicated from Jan Dethlefs original https://spectrum.ieee.org/view-from-the-valley/telecom/wireless/want-to-return-to-campus-safely-tap-wifi-network


This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Universities, together with other operators of large spaces like shopping malls or airports, are currently facing a dilemma: how to return to business as fast as possible, while at the same time providing a safe environment?

Given that the behavior of individuals is the driving factor for viral spread, key to answering this question will be understanding and monitoring the dynamics of how people move and interact while on campus, whether indoors or outdoors.

Fortunately, universities already have the perfect tool in place: the campus-wide Wi-Fi network. These networks typically cover every indoor and most outdoor spaces on campus, and users are already registered. All that needs to be added is data analytics to monitor on-campus safety.

We, a team of researchers from the University of Melbourne and the startup Nexulogy, have developed the necessary algorithms that, when fed data already gathered by campus Wi-Fi networks, can help keep universities safe. We have already tested these algorithms successfully at several campuses and other large yet contained environments.

To date, little attention has been paid to using Wi-Fi networks to track social distancing. Countries like Australia have rolled out smartphone apps to support contact tracing, typically using Bluetooth to determine proximity. A recent Google/Apple collaboration, also using Bluetooth, led to a decentralized protocol for contact monitoring.

Yet the success of these apps relies mainly on people voluntarily downloading them. A study by the University of Oxford estimated that more than 70 percent of smartphone users in the United Kingdom would have to install the app for it to be effective. But adoption is not happening at anything near that scale; the Australian COVIDSafe app, for example, released in April 2020, had been downloaded by only 6 million people as of mid-June 2020, or about 24 percent of the population.

Furthermore, this kind of Bluetooth-based tracking does not relate the contacts to a physical location, such as a classroom. This makes it hard to satisfy the requirements of running a safe campus. And data collected by the Bluetooth tracking apps is generally not readily available to the campus owners, so it doesn’t help make their own spaces safer.

Our Wi-Fi-based algorithms provide the least privacy-intrusive monitoring mechanisms thus far, because they use only anonymous device addresses; no individual user names are necessary to understand crowd densities and proximity. In the case of a student or campus staff member reporting a positive coronavirus test, the device addresses determined to belong to someone at risk can be passed on to authorities with the appropriate privacy clearance. Only then would names be matched to devices, with people at risk informed individually and privately.

Wi-Fi presents the best solution for universities for several reasons: wireless coverage is already campus wide; it is a safe assumption that everyone on campus is carrying at least one Wi-Fi-capable device; and virtually everyone living and working on campus registers their devices to have Internet access. Such tracking is possible without onerous user-facing app downloads.

University executives often already have the right to use the information collected by the wireless system, granted as part of its terms and conditions. In the midst of this pandemic, they now also have a legal, or at least a moral, obligation to use such data to the best of their ability to improve the safety and well-being of everyone on campus.

The process starts by collecting the time and symbolic location (that is, the network access point) of Wi-Fi-capable devices when they are first detected by the Wi-Fi infrastructure (for example, when a student enters the campus Wi-Fi environment), and then at regular intervals or when they change locations. Then, after consolidating the multiple devices of a single user, our algorithms calculate the number of occupants in a given area. That provides a quick insight into crowd density in any building or outdoor plaza.
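
To make the consolidation-and-counting step concrete, here is a minimal Python sketch of how occupancy could be computed from such records. The log format, the device-to-user mapping, and all names are our own illustrative assumptions; the deployed algorithms are more involved.

```python
from collections import defaultdict

def occupancy_by_zone(records, device_owner, window_start, window_end):
    """Count distinct people per zone during a time window.

    records: iterable of (timestamp, device_id, access_point) tuples as
    exported from the Wi-Fi controller (format assumed). device_owner
    maps a device to an anonymous user token, so that one person
    carrying a phone and a laptop is counted once.
    """
    users_in_zone = defaultdict(set)
    for ts, device_id, access_point in records:
        if window_start <= ts < window_end:
            users_in_zone[access_point].add(device_owner.get(device_id, device_id))
    return {zone: len(users) for zone, users in users_in_zone.items()}
```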

Our algorithms can also reconstruct the journey of any user within the Wi-Fi infrastructure, and from that can derive exposure time to other users, spaces visited, and transmission risk.
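
A similarly simplified sketch of the exposure calculation, assuming journeys have already been reconstructed as per-interval (time slot, zone) pairs. The fixed polling interval and the shared-slot metric are assumptions for illustration, not the actual risk model.

```python
def exposure_minutes(journeys, at_risk_user, interval_minutes=5):
    """Minutes of shared presence between one at-risk user and everyone else.

    journeys: dict mapping an anonymous user token to a list of
    (time_slot, zone) pairs, one per polling interval. Counting shared
    slots is a crude proxy for exposure; a real model would also weight
    by room size, duration thresholds, and so on.
    """
    at_risk_slots = set(journeys[at_risk_user])
    exposure = {}
    for user, slots in journeys.items():
        if user == at_risk_user:
            continue
        shared = len(at_risk_slots.intersection(slots))
        if shared:
            exposure[user] = shared * interval_minutes
    return exposure
```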

This information lets campus managers do several things. For one, our algorithms can easily identify high risk areas by time and location and flag areas where social distance limits are regularly exceeded. This helps campus managers focus resources on areas that may need more hand sanitizers or more frequent deep cleaning.

For another, imagine an infected staff member has been identified by public health authorities. Based on the health authorities’ request, the university can identify possible infection pathways by tracking the individual’s journeys through the campus. It is possible to backtrack the movement history since the start of the data collection, for days or even weeks if necessary. 

To illustrate the methodology, we assumed one of us had a COVID infection, and we reconstructed his journey and exposure to other university visitors during a recent visit to a busy university campus. In the heat map below, you can see that only the conference building showed a significant number of contacts (red bar) with an exposure time of, in this example, more than 30 minutes. While we detected other individuals only by their Wi-Fi devices, university executives could now notify people who have potentially been exposed to an infectious user.

This system isn’t without technical challenges. The biggest problem is noise in the system that needs to be removed before the data is useful.

For example, depending on the building layout and wireless network configurations, wireless counts inside specific zones can include passing outside traffic or static devices (like desktop computers). We have developed algorithms to eliminate both without requiring access to any user information.
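
One plausible, deliberately simplified way to screen out both kinds of noise is sketched below. The thresholds are illustrative only, not values drawn from the deployed system.

```python
def filter_noise(sightings, min_dwell_slots=2, max_presence_fraction=0.95):
    """Drop pass-by and static devices before counting occupancy.

    sightings: dict mapping device_id -> ordered list of (time_slot, zone)
    pairs. A device seen for fewer than min_dwell_slots is treated as
    passing traffic; a device parked in a single zone for nearly the
    whole observation period is treated as static (say, a desktop PC).
    """
    observation_len = max(len(s) for s in sightings.values())
    kept = {}
    for device, slots in sightings.items():
        zones = {zone for _, zone in slots}
        if len(zones) == 1 and len(slots) >= max_presence_fraction * observation_len:
            continue  # static device: one zone, present almost always
        if len(slots) < min_dwell_slots:
            continue  # passing traffic: barely seen
        kept[device] = slots
    return kept
```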

Even though the identification and management of infection risks are limited to the area covered by the wireless infrastructure, the timely identification of a COVID event naturally benefits areas beyond the campus. Campuses, even residential campuses in lockdown, have a daily influx of external visitors: staff, contractors, and family members. Those individuals could be identified via telecommunication providers if requested by health authorities.

In the case of someone on campus testing positive for the virus, the people they came in contact with, their contact times, and the places they visited can be identified within minutes. This allows the initiation of necessary measures (e.g., COVID testing or decontamination) in a targeted, timely, and cost-effective way.

Since the analytics can happen in the cloud, the software can easily be updated to reflect new or refined medical knowledge or health regulations, say, a new exposure-time threshold or new physical-distancing guidelines.

Privacy is paramount in this process. Understanding population densities and crowd management is done anonymously. Only in the case of someone on campus reporting a confirmed case of the coronavirus do authorities with the necessary privacy clearance need to connect the devices and the individuals. Our algorithm operates separately from the identification and notification process.

As universities around the world are eager to welcome students back to the campus, proactive plans need to be in place to ensure the safety and wellbeing of everyone. Wi-Fi is available and ready to help.

Jan Dethlefs is a data scientist and Simon Wei is a data engineer, both at Nexulogy in Melbourne, Australia.  Stephan Winter is a professor and Martin Tomko is a senior lecturer, both in the Department of Infrastructure Engineering at the University of Melbourne, where they work with geospatial data analytics.

With 5G Rollout Lagging, Research Looks Ahead to 6G

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/telecom/wireless/with-5g-rollout-lagging-research-looks-ahead-to-6g

Amid a 5G rollout that has faced its fair share of challenges, it might seem somewhat premature to start looking ahead to 6G, the next generation of mobile communications. But 6G development is happening now, and it's being pursued in earnest by both industry and academia.

Much of the future landscape for 6G was mapped out in an article published in March of this year in IEEE Communications Magazine, titled "Toward 6G Networks: Use Cases and Technologies." The article presents the requirements, the enabling technologies, and the use cases for adopting a systematic approach to overcoming the research challenges of 6G.

“6G research activities are envisioning radically new communication technologies, network architectures, and deployment models,” said Michele Zorzi,  a professor at the University of Padua in Italy, and one of the authors of the IEEE Communications article. “Although some of these solutions have already been examined in the context of 5G, they were intentionally left out of initial 5G standards developments and will not be part of early 5G commercial rollout mainly because markets are not mature enough to support them.”

The foundational difference between 5G and 6G networks, according to Zorzi, will be the increased role that intelligence will play in 6G networks, going beyond the mere classification and prediction tasks of legacy and 5G systems.

While machine-learning-driven networks are still in their infancy, they will likely represent a fundamental component of the 6G ecosystem, which will shift toward a fully user-centric architecture where end terminals will be able to make autonomous network decisions without supervision from centralized controllers.

This decentralization of control will enable the sub-millisecond latency required by several 6G services, below the already challenging 1-millisecond target of emerging 5G systems, and is expected to yield more responsive network management.

To achieve this new kind of performance, the underlying technologies of 6G will be fundamentally different from 5G. For example, says Marco Giordani, a researcher at the University of Padua and co-author of the IEEE Communications article, even though 5G networks have been designed to operate at extremely high frequencies in the millimeter-wave bands, 6G will exploit even higher-spectrum technologies—terahertz and optical communications being two examples.

At the same time, Giordani explains that 6G will have a new cell-less network architecture that is a clear departure from current mobile network designs. The cell-less paradigm can promote seamless mobility support, targeting interruption-free communication during handovers, and can provide quality of service (QoS) guarantees that are in line with the most challenging mobility requirements envisioned for 6G, according to Giordani.

Giordani adds: “While 5G networks (and previous generations) have been designed to provide connectivity for an essentially bi-dimensional space, future 6G heterogeneous architectures will provide three-dimensional coverage by deploying non-terrestrial platforms (e.g., drones, HAPs, and satellites) to complement terrestrial infrastructures.”


Cognitive Radios Will Go Where No Deep-Space Mission Has Gone Before

Post Syndicated from Sven Bilén original https://spectrum.ieee.org/telecom/wireless/cognitive-radios-will-go-where-no-deepspace-mission-has-gone-before

Space seems empty and therefore the perfect environment for radio communications. Don’t let that fool you: There’s still plenty that can disrupt radio communications. Earth’s fluctuating ionosphere can impair a link between a satellite and a ground station. The materials of the antenna can be distorted as it heats and cools. And the near-vacuum of space is filled with low-level ambient radio emanations, known as cosmic noise, which come from distant quasars, the sun, and the center of our Milky Way galaxy. This noise also includes the cosmic microwave background radiation, a ghost of the big bang. Although faint, these cosmic sources can overwhelm a wireless signal over interplanetary distances.

Depending on a spacecraft’s mission, or even the particular phase of the mission, different link qualities may be desirable, such as maximizing data throughput, minimizing power usage, or ensuring that certain critical data gets through. To maintain connectivity, the communications system constantly needs to tailor its operations to the surrounding environment.

Imagine a group of astronauts on Mars. To connect to a ground station on Earth, they’ll rely on a relay satellite orbiting Mars. As the space environment changes and the planets move relative to one another, the radio settings on the ground station, the satellite orbiting Mars, and the Martian lander will need continual adjustments. The astronauts could wait 8 to 40 minutes—the duration of a round trip—for instructions from mission control on how to adjust the settings. A better alternative is to have the radios use neural networks to adjust their settings in real time. Neural networks maintain and optimize a radio’s ability to keep in contact, even under extreme conditions such as Martian orbit. Rather than waiting for a human on Earth to tell the radio how to adapt its systems—during which the commands may have already become outdated—a radio with a neural network can do it on the fly.

Such a device is called a cognitive radio. Its neural network autonomously senses the changes in its environment, adjusts its settings accordingly—and then, most important of all, learns from the experience. That means a cognitive radio can try out new configurations in new situations, which makes it more robust in unknown environments than a traditional radio would be. Cognitive radios are thus ideal for space communications, especially far beyond Earth orbit, where the environments are relatively unknown, human intervention is impossible, and maintaining connectivity is vital.

Worcester Polytechnic Institute and Penn State University, in cooperation with NASA, recently tested the first cognitive radios designed to operate in space and keep missions in contact with Earth. In our tests, even the most basic cognitive radios maintained a clear signal between the International Space Station (ISS) and the ground. We believe that with further research, more advanced, more capable cognitive radios can play an integral part in successful deep-space missions in the future, where there will be no margin for error.

Future crews to the moon and Mars will have more than enough to do collecting field samples, performing scientific experiments, conducting land surveys, and keeping their equipment in working order. Cognitive radios will free those crews from the onus of maintaining the communications link. Even more important is that cognitive radios will help ensure that an unexpected occurrence in deep space doesn’t sever the link, cutting the crew’s last tether to Earth, millions of kilometers away.

Cognitive radio as an idea was first proposed by Joseph Mitola III at the KTH Royal Institute of Technology, in Stockholm, in 1998. Since then, many cognitive radio projects have been undertaken, but most were limited in scope or tested just a part of a system. The most robust cognitive radios tested to date have been built by the U.S. Department of Defense.

When designing a traditional wireless communications system, engineers generally use mathematical models to represent the radio and the environment in which it will operate. The models try to describe how signals might reflect off buildings or propagate in humid air. But not even the best models can capture the complexity of a real environment.

A cognitive radio—and the neural network that makes it work—learns from the environment itself, rather than from a mathematical model. A neural network takes in data about the environment, such as what signal modulations are working best or what frequencies are propagating farthest, and processes that data to determine what the radio’s settings should be for an optimal link. The key feature of a neural network is that it can, over time, optimize the relationships between the inputs and the result. This process is known as training.

For cognitive radios, here’s what training looks like. In a noisy environment where a signal isn’t getting through, the radio might first try boosting its transmission power. It will then determine whether the received signal is clearer; if it is, the radio will raise the transmission power more, to see if that further improves reception. But if the signal doesn’t improve, the radio may try another approach, such as switching frequencies. In either case, the radio has learned a bit about how it can get a signal through its current environment. Training a cognitive radio means constantly adjusting its transmission power, data rate, signal modulation, or any other settings it has in order to learn how to do its job better.
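
The loop just described behaves like a simple reinforcement-learning scheme. The toy sketch below uses a table of action values rather than a neural network, and the radio object with apply() and link_quality() methods is hypothetical:

```python
import random

ACTIONS = ["power_up", "power_down", "switch_frequency", "change_modulation"]

def train_step(radio, q_values, epsilon=0.1, lr=0.2):
    """One trial-and-error update of the kind described above.

    `radio` is a hypothetical object exposing apply(action) and
    link_quality(); the flight experiments used a neural network
    rather than this simple table of action values.
    """
    before = radio.link_quality()
    if random.random() < epsilon:
        action = random.choice(ACTIONS)          # explore a new setting
    else:
        action = max(ACTIONS, key=q_values.get)  # exploit the best known one
    radio.apply(action)
    reward = radio.link_quality() - before       # did reception improve?
    # Nudge the stored value for this action toward the observed reward.
    q_values[action] += lr * (reward - q_values[action])
    return action, reward

# Typical use: q = {a: 0.0 for a in ACTIONS}, then call
# train_step(radio, q) repeatedly while the link is active.
```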

Any cognitive radio will require initial training before being launched. This training serves as a guide for the radio to improve upon later. Once the neural network has undergone some training and it’s up in space, it can autonomously adjust the radio’s settings as necessary to maintain a strong link regardless of its location in the solar system.

To control its basic settings, a cognitive radio uses a wireless system called a software-defined radio. Major functions that are implemented with hardware in a conventional radio are accomplished with software in a software-defined radio, including filtering, amplifying, and detecting signals. That kind of flexibility is essential for a cognitive radio.

There are several basic reasons why cognitive radio experiments are mostly still limited in scope. At their core, the neural networks are complex algorithms that need enormous quantities of data to work properly. They also require a lot of computational horsepower to arrive at conclusions quickly. The radio hardware must be designed with enough flexibility to adapt to those conclusions. And any successful cognitive radio needs to make these components work together. Our own effort to create a proof-of-concept cognitive radio for space communications was possible only because of the state-of-the-art Space Communications and Navigation (SCaN) test bed on the ISS.

NASA’s Glenn Research Center created the SCaN test bed specifically to study the use of software-defined radios in space. The test bed was launched by the Japan Aerospace Exploration Agency and installed on the main lattice frame of the space station in July 2012. Until its decommissioning in June 2019, the SCaN test bed allowed researchers to test how well ­software-defined radios could meet the demands expected of radios in space—such as real-time reconfiguration for orbital operations, the development and verification of new software for custom space networks, and, most relevant for our group, cognitive communications.

The test bed consisted of three software-defined radios broadcasting in the S-band (2 to 4 gigahertz) and Ka-band (26.5 to 40 GHz) and receiving in the L-band (1 to 2 GHz). The SCaN test bed could communicate with NASA’s Tracking and Data Relay Satellite System in low Earth orbit and a ground station at NASA’s Glenn Research Center, in Cleveland.

Nobody has ever used a cognitive radio system on a deep-space mission before—nor will they, until the technology has been thoroughly vetted. The SCaN test bed offered the ideal platform for testing the tech in a less hostile environment close to Earth. In 2017, we built a cognitive radio system to communicate between ground-based modems and the test bed. Ours would be the first-ever cognitive radio experiments conducted in space.

In our experiments, the SCaN test bed was a stand-in for the radio on a deep-space probe. It’s essential for a deep-space probe to maintain contact with Earth. Otherwise, the entire mission could be doomed. That’s why our primary goal was to prove that the radio could maintain a communications link by adjusting its radio settings autonomously. Maintaining a high data rate or a robust signal were lower priorities.

A cognitive radio at the ground station would decide on an “action,” or set of operating parameters for the radio, which it would send to the test-bed transmitter and to two modems at the ground station. The action dictated a specific data rate, modulation scheme, and power level for the test-bed transmitter and the ground station modems that would most likely be effective in maintaining the wireless link.

We completed our first tests during a two-week window in May 2017. That wasn’t much time, and we typically ended up with only two usable passes per day, each lasting just 8 or 9 minutes. The ISS doesn’t pass over the same points on Earth during each orbit, so there’s a limited number of opportunities to get a line-of-sight connection to a particular location. Despite the small number of passes, though, our radio system experienced plenty of dynamic and challenging link conditions, including fluctuations in the atmosphere and weather. Often, the solar panels and other protrusions on the ISS created large numbers of echoes and reflections that our system had to take into account.

During each pass, the neural network would compare the quality of the communications link with data from previous passes. The network would then select the previous pass with the conditions that were most similar to those of the current pass as a jumping-off point for setting the radio. Then, as if fiddling with the knob on an FM radio, the neural network would adjust the radio’s settings to best fit the conditions of the current pass. These settings included all of the elements of the wireless signal, including the data rate and modulation.

The neural network wasn’t limited to drawing on just one previous pass. If the best option seemed to be taking bits and pieces of multiple passes to create a bespoke solution, the network would do just that.
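
In spirit, that selection step is a nearest-neighbor search over past passes. Here is a toy sketch with a made-up feature encoding; the actual flight software was considerably more sophisticated:

```python
import numpy as np

def initial_settings(current_features, past_passes, k=3):
    """Blend the settings of the k most similar previous passes.

    past_passes: list of (feature_vector, settings_vector) pairs from
    earlier passes; features might encode weather, pass geometry, and
    measured noise (this encoding is our own illustration). Inverse-
    distance weighting mimics the "bits and pieces" blending described.
    """
    feats = np.array([f for f, _ in past_passes], dtype=float)
    setts = np.array([s for _, s in past_passes], dtype=float)
    dists = np.linalg.norm(feats - np.asarray(current_features, dtype=float), axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)  # closer passes count more
    return (weights[:, None] * setts[nearest]).sum(axis=0) / weights.sum()
```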

During the tests, the cognitive radio clearly showed that it could learn how to maintain a communications link. The radio autonomously selected settings to avoid losing contact, and the link remained stable even as the radio adjusted itself. It also managed a signal power strong enough to send data, even though that wasn’t a primary goal for us.

Overall, the success of our tests on the SCaN test bed demonstrated that cognitive radios could be used for deep-space missions. However, our experiments also uncovered several problems that will have to be solved before such radios blast off for another planet.

The biggest problem we encountered was something called "catastrophic forgetting." This happens when a neural network receives too much new information too quickly and so forgets a lot of the information it already possessed. Imagine you're learning algebra from a textbook, and by the time you reach the end you've already forgotten the first half of the material, so you have to start over. When this happened in our experiments, the cognitive radio's abilities degraded significantly, because it was in effect retraining itself over and over as new environmental conditions overwrote what it had already learned.

Our solution was to implement a tactic called ensemble learning in our cognitive radio. Ensemble learning is a technique, still largely experimental, that uses a collection of “learner” neural networks, each of which is responsible for training under a limited set of conditions—in our case, on a specific type of communications link. One learner may be best suited for ISS passes with heavy interference from solar particles, while another may be best suited for passes with atmospheric distortions caused by thunderstorms. An overarching meta–neural network decides which learner networks to use for the current situation. In this arrangement, even if one learner suffers from catastrophic forgetting, the cognitive radio can still function.

To understand why, let’s say you’re learning how to drive a car. One way to practice is to spend 100 hours behind the wheel, assuming you’ll learn everything you need to know in the process. That is currently how cognitive radios are trained; the hope is that what they learn during their training will be applicable to any situation they encounter. But what happens when the environments the cognitive radio encounters in the real world differ significantly from the training environments? It’s like practicing to drive on highways for 100 hours but then getting a job as a delivery truck driver in a city. You may excel at driving on highways, but you might forget the basics of driving in a stressful urban environment.

Now let’s say you’re practicing for a driving test. You might identify what scenarios you’ll likely be tested on and then make sure you excel at them. If you know you’re bad at parallel parking, you may prioritize that over practicing rights-of-way at a 4-way stop sign. The key is to identify what you need to learn rather than assume you will practice enough to eventually learn everything. In ensemble learning, this is the job of the meta–neural network.

The meta–neural network may recognize that the radio is in an environment with a high amount of ionizing radiation, for example, and so it will select the learner networks for that environment. The neural network thus starts from a baseline that’s much closer to reality. It doesn’t have to replace information as quickly, making catastrophic forgetting much less likely.
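
Conceptually, the meta-network is a router in front of the specialists. A skeletal sketch, in which meta_model and the learner objects expose hypothetical methods:

```python
import numpy as np

def select_learner(meta_model, learners, link_features):
    """Route current link conditions to the best specialist.

    meta_model.predict(link_features) is assumed to return one score
    per learner; each learner proposes settings for the environment it
    trained on (both interfaces are hypothetical). If one learner has
    degraded through forgetting, the rest of the ensemble still works.
    """
    scores = meta_model.predict(link_features)
    return learners[int(np.argmax(scores))].propose_settings(link_features)
```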

We implemented a very basic version of ensemble learning in our cognitive radio for a second round of experiments in August 2018. We found that the technique resulted in fewer instances of catastrophic forgetting. Nevertheless, there are still plenty of questions about ensemble learning. For one, how do you train the meta–neural network to select the best learners for a scenario? And how do you ensure that a learner, if chosen, actually masters the scenario it’s being selected for? There aren’t solid answers yet for these questions.

In June 2019, the SCaN test bed was decommissioned to make way for an X-ray communications experiment, and we lost the opportunity for future cognitive-radio tests on the ISS. Fortunately, NASA is planning a three-CubeSat constellation to further demonstrate cognitive space communications. If the mission is approved, the constellation could launch in the next several years. The goal is to use the constellation as a relay system to find out how multiple cognitive radios can work together.

Those planning future missions to the moon and Mars know they’ll need a more intelligent approach to communications and navigation than we have now. Astronauts won’t always have direct-to-Earth communications links. For example, signals sent from a radio telescope on the far side of the moon will require relay satellites to reach Earth. NASA’s planned orbiting Lunar Gateway, aside from serving as a staging area for surface missions, will be a major communications relay.

The Lunar Gateway is exactly the kind of space communications system that will benefit from cognitive radio. The round-trip delay for a signal between Earth and the moon is about 2.5 seconds. Handing off radio operations to a cognitive radio aboard the Lunar Gateway will save precious seconds in situations when those seconds really matter, such as maintaining contact with a robotic lander during its descent to the surface.

Our experiments with the SCaN test bed showed that cognitive radios have a place in future deep-space communications. As humanity looks again at exploring the moon, Mars, and beyond, guaranteeing reliable connectivity between planets will be crucial. There may be plenty of space in the solar system, but there’s no room for dropped calls.

This article appears in the August 2020 print issue as “Where No Radio Has Gone Before.”

About the Authors

Alexander Wyglinski is a professor of electrical engineering and robotics engineering at Worcester Polytechnic Institute. Sven Bilén is a professor of engineering design, electrical engineering, and aerospace engineering at the Pennsylvania State University. Dale Mortensen is an electronics engineer and Richard Reinhart is a senior communications engineer at NASA’s Glenn Research Center, in Cleveland.

The Uncertain Future of Ham Radio

Post Syndicated from Julianne Pepitone original https://spectrum.ieee.org/telecom/wireless/the-uncertain-future-of-ham-radio

Will the amateur airwaves fall silent? Since the dawn of radio, amateur operators—hams—have transmitted on tenaciously guarded slices of spectrum. Electronic engineering has benefited tremendously from their activity, from the level of the individual engineer to the entire field. But the rise of the Internet in the 1990s, with its ability to easily connect billions of people, captured the attention of many potential hams. Now, with time taking its toll on the ranks of operators, new technologies offer opportunities to revitalize amateur radio, even if in a form that previous generations might not recognize.

The number of U.S. amateur licenses has held at an anemic 1 percent annual growth for the past few years, with about 7,000 new licensees added every year for a total of 755,430 in 2018. The U.S. Federal Communications Commission doesn’t track demographic data of operators, but anecdotally, white men in their 60s and 70s make up much of the population. As these baby boomers age out, the fear is that there are too few young people to sustain the hobby.

“It’s the $60,000 question: How do we get the kids involved?” says Howard Michel, former CEO of the American Radio Relay League (ARRL). (Since speaking with IEEE Spectrum, Michel has left the ARRL. A permanent replacement has not yet been appointed.)

This question of how to attract younger operators also reveals deep divides in the ham community about the future of amateur radio. Like any large population, ham enthusiasts are no monolith; their opinions and outlooks on the decades to come vary widely. And emerging digital technologies are exacerbating these divides: Some hams see them as the future of amateur radio, while others grouse that they are eviscerating some of the best things about it.

No matter where they land on these battle lines, however, everyone understands one fact. The world is changing; the amount of spectrum is not. And it will be hard to argue that spectrum reserved for amateur use and experimentation should not be sold off to commercial users if hardly any amateurs are taking advantage of it.

Before we look to the future, let’s examine the current state of play. In the United States, the ARRL, as the national association for hams, is at the forefront, and with more than 160,000 members it is the largest group of radio amateurs in the world. The 106-year-old organization offers educational courses for hams; holds contests where operators compete on the basis of, say, making the most long-distance contacts in 48 hours; trains emergency communicators for disasters; lobbies to protect amateur radio’s spectrum allocation; and more.

Michel led the ARRL between October 2018 and January 2020, and he easily fits the profile of the "average" American ham: The 66-year-old from Dartmouth, Mass., credits his career in electrical and computer engineering to an early interest in amateur radio. He received his call sign, WB2ITX, 50 years ago and has loved the hobby ever since.

“When our president goes around to speak to groups, he’ll ask, ‘How many people here are under 20 [years old]?’ In a group of 100 people, he might get one raising their hand,” Michel says.

ARRL does sponsor some child-centric activities. The group runs twice-annual Kids Day events, fosters contacts with school clubs across the country, and publishes resources for teachers to lead radio-centric classroom activities. But Michel readily admits “we don’t have the resources to go out to middle schools”—which are key for piquing children’s interest.

Sustained interest is essential because potential hams must clear a particular barrier before they can take to the airwaves: a licensing exam. Licensing requirements vary—in the United States no license is required to listen to ham radio signals—but every country requires operators to demonstrate some technical knowledge and an understanding of the relevant regulations before they can get a registered call sign and begin transmitting.

For the younger people who are drawn to ham radio, those up through their 30s and 40s, the primary motivating factor is different from that of their predecessors. With the Internet and social media services like WhatsApp and Facebook, they don't need a transceiver to talk with someone halfway around the world (a big attraction in the days before email and cheap long-distance phone calls). Instead, many are interested in the capacity for public service, such as providing communications in the wake of a disaster, or handling event communications for activities like city marathons.

“There’s something about this post-9/11 group, having grown up with technology and having seen the impact of climate change,” Michel says. “They see how fragile cellphone infrastructure can be. What we need to do is convince them there’s more than getting licensed and putting a radio in your drawer and waiting for the end of the world.”

New Frontiers

The future lies in operators like Dhruv Rebba (KC9ZJX), who won Amateur Radio Newsline’s 2019 Young Ham of the Year award. He’s the 15-year-old son of immigrants from India and a sophomore at Normal Community High School in Illinois, where he also runs varsity cross-country and is active in the Future Business Leaders of America and robotics clubs. And he’s most interested in using amateur radio bands to communicate with astronauts in space.

Rebba earned his technician class license when he was 9, after having visited the annual Dayton Hamvention with his father. (In the United States, there are currently three levels of amateur radio license, issued after completing a written exam for each—technician, general, and extra. Higher levels give operators access to more radio spectrum.)

“My dad had kind of just brought me along, but then I saw all the booths and the stalls and the Morse code, and I thought it was really cool,” Rebba says. “It was something my friends weren’t doing.”

He joined the Central Illinois Radio Club of Bloomington, experimented with making radio contacts, participated in ARRL’s annual Field Days, and volunteered at the communications booths at local races.

But then Rebba found a way to combine ham radio with his passion for space: He learned about the Amateur Radio on the International Space Station (ARISS) program, managed by an international consortium of amateur radio organizations, which allows students to apply to speak directly with crew members onboard the ISS. (There is also an automated digital transponder on the ISS that allows hams to ping the station as it orbits.)

Rebba rallied his principal, science teacher, and classmates at Chiddix Junior High, and on 23 October 2017, they made contact with astronaut Joe Acaba (KE5DAR). For Rebba, who served as lead control operator, it was a crystallizing moment.

“The younger generation would be more interested in emergency communications and the space aspect, I think. We want to be making an impact,” Rebba says. “The hobby aspect is great, but a lot of my friends would argue it’s quite easy to talk to people overseas with texting and everything, so it’s kind of lost its magic.”

That statement might break the hearts of some of the more experienced hams recalling their tinkering time in their childhood basements. But some older operators welcome the change.

Take Bob Heil (K9EID), the famed sound engineer who created touring systems and audio equipment for acts including the Who, the Grateful Dead, and Peter Frampton. His company Heil Sound, in Fairview Heights, Ill., also manufactures amateur radio technology.

“I’d say wake up and smell the roses and see what ham radio is doing for emergencies!” Heil says cheerfully. “Dhruv and all of these kids are doing incredible things. They love that they can plug a kit the size of a cigar box into a computer and the screen becomes a ham radio…. It’s all getting mixed together and it’s wonderful.”

But there are other hams who think that the amateur radio community needs to be much more actively courting change if it is to survive. Sterling Mann (N0SSC), himself a millennial at age 27, wrote on his blog that “Millennials Are Killing Ham Radio.”

It’s a clickbait title, Mann admits: His blog post focuses on the challenge of balancing support for the dominant, graying ham population while pulling in younger people too. “The target demographic of every single amateur radio show, podcast, club, media outlet, society, magazine, livestream, or otherwise, is not young people,” he wrote. To capture the interest of young people, he urges that ham radio give up its century-long focus on person-to-person contacts in favor of activities where human to machine, or machine to machine, communication is the focus.

These differing interests are manifesting in something of an analog-to-digital technological divide. As Spectrum reported in July 2019, one of the key debates in ham radio is its main function in the future: Is it a social hobby? A utility to deliver data traffic? And who gets to decide?

Those questions have no definitive or immediate answers, but they cut to the core of the future of ham radio. Loring Kutchins, president of the Amateur Radio Safety Foundation, Inc. (ARSFi)—which funds and guides the “global radio email” system Winlink—says the divide between hobbyists and utilitarians seems to come down to age.

“Younger people who have come along tend to see amateur radio as a service, as it’s defined by FCC rules, which outline the purpose of amateur radio—especially as it relates to emergency operations,” Kutchins (W3QA) told Spectrum last year.

Kutchins, 68, expanded on the theme in a recent interview: “The people of my era will be gone—the people who got into it when it was magic to tune into Radio Moscow. But Grandpa’s ham radio set isn’t that big a deal compared to today’s technology. That doesn’t have to be sad. That’s normal.”

Gramps' radios are certainly still around, however. "Ham radio is really a social hobby, or it has been a very social hobby—the rag-chewing has historically been the big part of it," says Martin F. Jue (K5FLU), founder of radio accessories maker MFJ Enterprises, in Starkville, Miss. "Here in Mississippi, you get to 5 or 6 o'clock and you have a big network going on and on—some of them are half-drunk chattin' with you. It's a social group, and they won't even talk to you unless you're in the group."

But Jue, 76, notes the ham radio space has fragmented significantly beyond rag-chewing and DXing (making very long-distance contacts), and he credits the shift to digital. That’s where MFJ has moved with its antenna-heavy catalog of products.

“Ham radio is connected to the Internet now, where with a simple inexpensive handheld walkie-talkie and through the repeater systems connected to the Internet, you’re set to go,” he says. “You don’t need a HF [high-frequency] radio with a huge antenna to talk to people anywhere in the world.”

To that end, last year MFJ unveiled the RigPi Station Server: a control system made up of a Raspberry Pi paired with open-source software that allows operators to control radios remotely from their iPhones or Web browser.

“Some folks can’t put up an antenna, but that doesn’t matter anymore because they can use somebody else’s radio through these RigPis,” Jue says.

He’s careful to note the RigPi concept isn’t plug and play—“you still need to know something about networking, how to open up a port”—but he sees the space evolving along similar lines.

“It’s all going more and more toward digital modes,” Jue says. “In terms of equipment I think it’ll all be digital at some point, right at the antenna all the way until it becomes audio.”

The Signal From Overseas

Outside the United States, there are some notable bright spots, according to Dave Sumner (K1ZZ), secretary of the International Amateur Radio Union (IARU). This collective of national amateur radio associations around the globe represents hams’ interests to the International Telecommunication Union (ITU), a specialized United Nations agency that allocates and manages spectrum. In fact, in China, Indonesia, and Thailand, amateur radio is positively booming, Sumner says.

China's advancing technology and growing middle class, with disposable income, have led to a "dramatic" increase in operators, Sumner says. As an island nation, Indonesia is subject to natural disasters, spurring interest in emergency communication, and its president is a licensed operator. Trends in Thailand are less clear, Sumner says, but he believes here, too, that a desire to build community response teams is driving curiosity about ham radio.

“So,” Sumner says, “you have to be careful not to subscribe to the notion that it’s all collapsing everywhere.”

China is also changing the game in other ways, putting cheap radios on the market. A few years ago, an entry-level handheld UHF/VHF radio cost around US $100. Now, thanks to Chinese manufacturers like Baofeng, you can get one for under $25. HF radios are changing, too, with the rise of software-defined radio.

“It’s the low-cost radios that have changed ham radio and the future thereof, and will continue to do so,” says Jeff Crispino, CEO of Nooelec, a company in Wheatfield, N.Y., that makes test equipment and software-defined radios, where demodulating a signal is done in code, not hardwired electronics. “SDR was originally primarily for military operations because they were the only ones who could afford it, but over the past 10 years, this stuff has trickled down to become $20 if you want.” Activities like plane and boat tracking, and weather satellite communication, were “unheard of with analog” but are made much easier with SDR equipment, Crispino says.
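
To make "demodulation in code" concrete, here is the core of a software FM discriminator: a few lines of math standing in for what a conventional receiver does in analog circuitry. A minimal sketch, assuming complex baseband samples from any SDR front end:

```python
import numpy as np

def fm_demodulate(iq):
    """Recover an FM signal from complex baseband I/Q samples.

    The phase rotation between consecutive samples is proportional to
    the instantaneous frequency, which for FM is the message itself.
    Scale to the deviation and low-pass filter before playing as audio.
    """
    return np.angle(iq[1:] * np.conj(iq[:-1]))
```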

Nooelec often hears from customers about how they're leveraging the company's products. For example, about 120 members of the group Space Australia came together to collect data from the Milky Way as a community project. They are using an SDR and a low-noise amplifier from Nooelec with a homemade horn antenna to detect the radio signal from interstellar clouds of hydrogen gas.

“We will develop products from that feedback loop—like for hydrogen line detection, we’ve developed accessories for that so you can tap into astronomical events with a $20 device and a $30 accessory,” Crispino says.

Looking ahead, the Nooelec team has been talking about how to “flatten the learning curve” and lower the bar to entry, so that the average user—not only the technically adept—can explore and develop their own novel projects within the world of ham radio.

“It is an increasingly fragmented space,” Crispino says. “But I don’t think that has negative connotations. When you can pull in totally unique perspectives, you get unique applications. We certainly haven’t thought of it all yet.”

The ham universe is affected by the world around it—by culture, by technology, by climate change, by the emergence of a new generation. And amateur radio enthusiasts are a varied and vibrant community of millions of operators, new and experienced and old and young, into robotics or chatting or contesting or emergency communications, excited or nervous or pessimistic or upbeat about what ham radio will look like decades from now.

As Michel, the former ARRL CEO, puts it: “Every ham has [their] own perspective. What we’ve learned over the hundred-plus years is that there will always be these battles—AM modulation versus single-sideband modulation, whatever it may be. The technology evolves. And the marketplace will follow where the interests lie.”

About the Author

Julianne Pepitone is a freelance technology, science, and business journalist and a frequent contributor to IEEE Spectrum. Her work has appeared in print, online, and on television outlets such as Popular Mechanics, CNN, and NBC News.

Intel, NSF Invest $9 Million in Machine Learning for Wireless Networks Projects

Post Syndicated from Fahmida Y Rashid original https://spectrum.ieee.org/tech-talk/telecom/wireless/intel-nsf-invest-9-million-machine-learning-wireless-networks

Intel and the National Science Foundation (NSF) have awarded a three-year grant to a research team studying how to deliver distributed machine-learning computations over wireless edge networks, to enable a broad range of new wireless applications. The team is a joint group from the University of Southern California (USC) and the University of California, Berkeley. The award is part of Intel's and the NSF's Machine Learning for Wireless Networking Systems effort, a multi-university research program to accelerate "fundamental, broad-based research" on developing wireless-specific machine-learning techniques that can be applied to new wireless systems and architecture designs.

The hope is that machine learning can manage the size and complexity of next-generation wireless networks. Intel and the NSF are focusing the effort on harnessing discoveries in machine learning to design new algorithms, schemes, and communication protocols that can handle the density, latency, and throughput demands of complex networks. In total, US $9 million has been awarded to 15 research teams.

FCC Approves 5G Upgrade Order in an Effort to Speed Rollouts

Post Syndicated from Neil Savage original https://spectrum.ieee.org/tech-talk/telecom/wireless/fcc-speeds-up-5g-approval-process

The path to rolling out new 5G wireless service in the United States may have gotten a bit smoother. This week, the Federal Communications Commission clarified some of the rules for approving new 5G equipment installations, although two dissenting commissioners argued the FCC should have waited until local governments were no longer contending with a pandemic and widespread demonstrations.

The commission voted 3-2 on Tuesday to clarify rules for streamlined approval that it had written in 2014. The commission's two Democrats voted no.

How Off-Grid, Lights-Out Cell Sites Will Aid the Effort to Bring the Next Billion People Online

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/500-offgrid-lightsout-cell-sites-will-be-deployed-in-multiple-african-countries-by-the-end-of-the-year

Rural connectivity has always been a headache for service providers expanding their networks. Smaller, more spread-out communities, by their nature, require more investment and infrastructure for fewer customers and smaller returns. Lighting up a street of apartment buildings in a major city can bring in dozens if not hundreds of new customers with just one fiber cable, for example, while miles of cable might be needed for each new customer in a sparsely populated area.

This problem is especially acute in many parts of Africa, where a higher percentage of the population is rural. Connecting the people in those regions has been a tough task to date.

In an effort to change that, Clear Blue Technologies, a Canadian company developing off-grid power systems, will be working with South African provider MTN and others to roll out 500 new off-grid, no-maintenance 2G and 3G cell towers in Africa. The $3.5 million CAD contract is the first step in a 3- to 5-year program to expand cellular networks in a dozen countries in a way that’s suited to much of Africa’s large size and rural demographics.

“What’s unique about Africa,” says Miriam Tuerk, the CEO of Clear Blue, “is the pent-up demand.” People there have smartphones, but they don’t necessarily have the networks to use them to their full potential.

Tuerk outlines two key challenges faced by any rural network buildout. The first is the cost of the cell towers, specifically the software and hardware. Because rural networks inherently require a lot of towers to cover a wide area, the cost of licensing or purchasing software and hardware can be staggering. Tuerk suggests that projects like OpenCellular, which are developing open-source software, can help bring down those costs.

More critically, cell towers require power. Traditionally, a cell tower receives power through the grid or, often in cases of emergency, from diesel generators. Of course, building out power-grid infrastructure is just as expensive as building out wired communications networks. When it comes to refilling diesel generators—well, you can imagine how time-consuming it could be to regularly drive out and refill generators in person.

“If you want to move into rural areas, you really have to have a completely lights-off, off-the-grid solution,” says Tuerk. That means no grid connections and no regular maintenance visits to the cell sites to cut down on installation and maintenance costs. Tuerk says Clear Blue’s power tech makes it possible for a cell site to be installed for about US $10,000, whereas a traditional installation can start at between $30,000 and $50,000 per site.

Clear Blue was selected by the Telecom Infra Project to supply tech for off-grid cell towers. The Telecom Infra Project is a collaborative effort by companies to expand telecom network infrastructure, with the stated goal of bringing the next billion people online. For those currently living in places without a grid, something like Clear Blue’s tech is essential.

Clear Blue provides power to cell sites through solar, but Tuerk says it’s more than just plugging some solar panels into a cell tower and calling it a day. “If you’re solar, it has to work through weather,” she says. Lights-out, off-grid cell towers require energy forecasting, to ensure that service doesn’t drop overnight or during a string of cloudy days. Tuerk compares it to driving an electric car between two cities, with no charging stations in between: You want to make sure you have enough juice to make it to the other side.

The sites to be installed will conduct energy forecasting on site using data analytics and weather forecasts to determine energy usage. The sites will also report data back to Clear Blue’s cloud, so that the company can refine energy forecasts using aggregate data.
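
At its heart, the forecasting problem is an energy-budget calculation: will the battery outlast the predicted weather? A deliberately simplified sketch follows; the interface and the numbers are illustrative, not Clear Blue's model:

```python
def hours_of_autonomy(battery_wh, load_w, hourly_solar_wh):
    """Walk the forecast hour by hour: add predicted harvest, subtract
    the site's steady draw, and report when the battery would die.

    battery_wh: energy currently stored; load_w: average site load in
    watts (so Wh per hour); hourly_solar_wh: list of predicted harvest
    per hour from the weather feed.
    """
    stored = battery_wh
    for hour, harvest in enumerate(hourly_solar_wh):
        stored += harvest - load_w
        if stored <= 0:
            return hour  # forecast says the site goes dark here
    return len(hourly_solar_wh)  # survives the whole forecast window

# Example: 2,400 Wh stored, a 60 W load, three heavily overcast days
# hours_of_autonomy(2400, 60, [10] * 72) -> 47, i.e., dark within two days
```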

The initial rollouts have been delayed because of the coronavirus pandemic, but Tuerk is confident that they’ll still meet their goal of installing at least 500 sites by the end of the year. As Tuerk points out, it’s almost expected that projects in Africa get delayed because of some unforeseen hiccup. As the saying goes, “This is Africa.”

Build Wireless Networks and Systems with Software Defined Radio

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/build_wireless_networks_and_systems_with_software_defined_radio

Wireless research handbook

Download the latest edition of NI’s Wireless Research Handbook, which includes research examples from around the world and across a wide range of advanced wireless research topics. This comprehensive look at next-generation wireless systems will offer you a more in-depth view of how prototyping can enhance research results.

Applications include:

  • Flexible Waveform Shaping Based on UTW-OFDM for 5G and Beyond
  • Flexible Real-Time Waveform Generator for Mixed-Service Scenarios
  • In-Band Full-Duplex SDR for MAC Protocol with Collision Detection
  • Bandwidth-Compressed Spectrally Efficient Communication System
  • World-Leading Parallel Channel Sounder Platform
  • Distributed Massive MIMO: Algorithm for TDD Reciprocity Calibration
  • Wideband/Opportunistic Map-Based Full-Duplex Radios
  • An Experimental SDR Platform for In-Band D2D Communications in 5G
  • Wideband Multichannel Signal Recorder for Radar Applications
  • Passive and Active Radar Imaging
  • Multiantenna Technology for Reliable Wireless Communication
  • Radio Propagation Analysis for the Factories of the Future

About the SDR and Wireless Research Team

With a common goal of rapidly moving from theory to prototype, NI established lead user programs to accelerate next-generation research in controls, mechatronics, robotics, and wireless communications. Established in 2010, the wireless communications lead user program includes numerous research institutions examining advanced wireless system concepts. Many researchers around the world are making significant contributions in advancing wireless technologies through standardization and commercialization based on the foundational work completed by the lead user program.

In the Future, AIs—Not Humans—Will Design Our Wireless Signals

Post Syndicated from Joe Downey original https://spectrum.ieee.org/telecom/wireless/in-the-future-aisnot-humanswill-design-our-wireless-signals

The era of telecommunications systems designed solely by humans is coming to an end. From here on, artificial intelligence will play a pivotal role in the design and operation of these systems. The reason is simple: rapidly escalating complexity.

Each new generation of communications system strives to improve coverage areas, bit rates, number of users, and power consumption. But at the same time, the engineering challenges grow more difficult. To keep innovating, engineers have to navigate an increasingly tangled web of technological trade-offs made during previous generations.

In telecommunications, a major source of complexity comes from what we’ll call impairments. Impairments include anything that deteriorates or otherwise interferes with a communications system’s ability to deliver information from point A to point B. Radio hardware itself, for example, impairs signals when it sends or receives them by adding noise. The paths, or channels, that signals travel over to reach their destinations also impair signals. This is true for a wired channel, where a nearby electrical line can cause nasty interference. It’s equally true for wireless channels, where, for example, signals bouncing off and around buildings in an urban area create a noisy, distortive environment.

These problems aren’t new. In fact, they’ve been around since the earliest days of radio. What’s different now is the explosive pace of wireless proliferation, which is being driven increasingly by the rise of the Internet of Things. The upshot is that the combined effect of all these impairments is growing much more acute, just when the need for higher bit rates and lower latencies is surging.

Is there a way out? We believe there is, and it’s through machine learning. The breakthroughs in artificial intelligence in general, and machine learning in particular, have given engineers the ability to thrive, rather than drown, in complex situations involving huge amounts of data. To us, these advances posed a compelling question: Can neural networks (a type of machine-learning model), given enough data, actually design better communications signals than humans? In other words, can a machine learn how to talk to another machine, wirelessly, and do so better than if a human had designed its signals?

Based on our work on space communications systems with NASA, we are confident the answer is yes. Starting in 2018, we began carrying out experiments using NASA’s Tracking Data Relay Satellite System (TDRSS), also known as the Space Network, in which we applied machine learning to enable radios to communicate in an extremely complex environment. The success of these experiments points to a likely future in which communications engineers are less focused on developing wireless signals and more focused on building the machine-learning systems that will design such signals.

Over the years, communications engineers have invented countless techniques to minimize impairments in wireless communications. For example, one option is to send a signal over multiple channels so that it is resilient to interference in any one channel. Another is to use multiple antennas: as a signal bounces off obstacles in the environment, the antennas can receive copies of it along different paths and at different times, so a burst of unexpected interference is unlikely to corrupt every copy at once. But these techniques also make radios more complex.
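A quick simulation shows why receiving copies along independent paths helps. Under a standard Rayleigh-fading assumption (a textbook model, not a measurement of any particular system), the odds that both copies of a signal are deeply faded at once are far lower than the odds for a single copy:

```python
# Why path diversity helps: the chance that two independently fading
# copies of a signal are BOTH too weak is much smaller than the chance
# that one copy is. Rayleigh fading and the 10 percent power threshold
# are illustrative textbook assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1_000_000

# Under Rayleigh fading, received power on each path is exponentially
# distributed (here with mean power 1).
path_a = rng.exponential(scale=1.0, size=n_trials)
path_b = rng.exponential(scale=1.0, size=n_trials)

THRESHOLD = 0.1  # received power below this counts as an outage

outage_one_path = np.mean(path_a < THRESHOLD)
outage_two_paths = np.mean(np.maximum(path_a, path_b) < THRESHOLD)

print(f"one path:  outage {outage_one_path:.2%}")   # roughly 9.5%
print(f"two paths: outage {outage_two_paths:.2%}")  # roughly 0.9%
```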

Completely accounting for impairments has never been practical, because the radio systems and environments that create them are so complicated. As a result, communications engineers have developed statistical models that approximate the effects of impairments on a channel. These models give communications engineers a generally good idea of how to design and build the equipment for a particular wireless system in order to reduce impairments as much as possible.

Nevertheless, using statistical models to guide the design of communications signals can’t last forever. The status quo is already struggling with the latest telecom systems, such as 5G cellular networks. The systems are too complicated and the number of connected devices is too high. For communications engineering to meet the demands of the current and future generations of wireless systems, a new approach, like artificial intelligence, is needed.

To be clear, using AI in communications systems isn’t a new concept. Adaptive radios, smart radios, and cognitive radios, increasingly used in military and other applications, all use AI to improve performance in challenging environments.

But these existing technologies have always been about adapting how a radio system behaves. For example, 4G LTE wireless networks include AI techniques that reduce data rates when the connection between the transmitter and receiver degrades. A lower data rate avoids overloading a low-bandwidth channel and causing data to be lost. In another example, AI techniques in Bluetooth systems shift a signal’s frequency to avoid an interfering signal, should one appear.
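As a toy version of that adaptive behavior, the sketch below steps down to a sturdier, lower-rate modulation as the measured link quality drops. The signal-to-noise thresholds are invented for illustration; LTE’s real adaptation tables are far more detailed.

```python
# Toy link adaptation: fall back to a more robust, lower-rate
# modulation as the measured signal-to-noise ratio (SNR) drops. The
# SNR cutoffs are illustrative assumptions, not LTE's actual tables.

def pick_modulation(snr_db: float) -> str:
    if snr_db >= 20:
        return "64-QAM, 6 bits/symbol"   # fast but fragile
    if snr_db >= 12:
        return "16-QAM, 4 bits/symbol"
    if snr_db >= 6:
        return "QPSK,   2 bits/symbol"
    return "BPSK,   1 bit/symbol"        # slow but rugged

for snr_db in (25, 14, 8, 2):            # a steadily degrading link
    print(f"SNR {snr_db:>2} dB -> {pick_modulation(snr_db)}")
```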

The takeaway here is that AI has been used in the past to modify the settings within a communications system. What hasn’t happened before is using AI to design the signal itself.

One of us, Tim O’Shea, researched how to apply deep learning to wireless signal processing as part of his doctoral work at Virginia Tech from 2013 to 2018. In late 2016, O’Shea cofounded DeepSig with Jim Shea, a veteran engineer and entrepreneur, to build on his research and create prototypes of the technology. The goal of the company, based in Arlington, Va., is to identify where human engineering in communications systems is hitting its limits, and how neural networks might enable us to push past that threshold (more on that later).

Before we go further, it will help to know a bit about how communications engineers design the physical components of a radio that are responsible for creating a signal to be transmitted. The traditional approach starts with a statistical model that resembles the real-world communications channel you’re designing for. For example, if you’re designing cell towers to go into a dense, urban area, you might pick a model that accounts for how signals propagate in an environment with lots of buildings.

The model is backed by channel soundings, which are actual physical measurements taken with test signals in that environment. Engineers then design a radio modem—which modulates and demodulates a radio signal to encode the ones and zeroes of binary code—that performs well against that model. Any design has to be tested in simulations and real-world experiments, then adjusted and retested until it performs as expected. It’s a slow and laborious process, and it often forces design compromises, such as in filter quality: a radio operating on a narrow band of frequencies can filter out noise very well, but a wider-band radio will have a less effective filter.
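Compressed into code, that workflow looks something like the sketch below: a hand-designed modem (QPSK here) is scored against a statistical channel model. Plain additive Gaussian noise stands in for the much richer, sounding-backed models engineers actually use, and the operating point is an arbitrary assumption.

```python
# The traditional design loop in miniature: evaluate a hand-designed
# modulation (QPSK) against a statistical channel model. Additive
# white Gaussian noise stands in for richer, sounding-backed models;
# the 6 dB Eb/N0 operating point is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_bits = 200_000
ebn0_db = 6.0

bits = rng.integers(0, 2, size=n_bits)

# Modulator: map bit pairs to unit-energy QPSK symbols.
symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

# Channel model: complex Gaussian noise scaled for the chosen Eb/N0
# (2 bits per symbol, unit symbol energy).
ebn0 = 10 ** (ebn0_db / 10)
noise_std = np.sqrt(1 / (4 * ebn0))
received = symbols + noise_std * (rng.standard_normal(symbols.size)
                                  + 1j * rng.standard_normal(symbols.size))

# Demodulator: hard decisions, then bit error rate against the model.
bits_hat = np.empty_like(bits)
bits_hat[0::2] = (received.real < 0).astype(int)
bits_hat[1::2] = (received.imag < 0).astype(int)
print(f"BER at {ebn0_db} dB Eb/N0: {np.mean(bits != bits_hat):.4f}")
```

In the traditional process, a designer would now tweak the modem, rerun the simulation, confirm with over-the-air tests, and repeat until the numbers hold up.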

DeepSig’s efforts resulted in a new technique, which we call the channel autoencoder, to create signals. It works by training two deep neural networks in tandem, an encoder and a decoder, which together effectively act as the channel’s modem. The encoder converts the data to be transmitted into a radio signal, and on the other side of the channel—the other side of the corrupting impairments—the decoder reconstructs its best estimate of the transmitted data from the received radio signal.

Let’s take a moment here to break down what the channel autoencoder does, step by step. At the autoencoder’s core are the two neural networks. You’ve probably heard of neural networks for image recognition. As a basic example, a researcher might “show” a neural network thousands of images of dogs and of animals that are not dogs. The network’s algorithms can then distinguish between “dog” and “not-dog” and are optimized to recognize future images of dogs, even if those images are novel to the network. In this example, “dog” is the category the neural network is trained to recognize.

In other words, the neural network is trained to identify features of the input data, and a new input with similar features will produce a similar output. By “feature,” we mean a pattern that exists in the data. In image recognition, it may be an aspect of the images seen; in speech recognition, a particular sound in the audio; in natural-language processing, a paragraph’s sentiment.

You may remember that we said the channel autoencoder uses deep neural networks. What that means is that each neural network is made up of many more layers (often numbering in the hundreds) that enable it to make more detailed decisions about the input data than a simpler neural network could. Each layer uses the results of the previous layers to make progressively more complex insights. For example, in computer vision, a simpler neural network could tell you whether an image is a dog, but a deep neural network could tell you how many dogs there are or where they are located in the image.

You’ll also need to know what an autoencoder is. Autoencoders were invented in 1986 by Geoffrey Hinton, a pioneer in machine learning, to address the challenge of data compression. In an autoencoder, one of the two neural networks is a compressor and the other is a decompressor. The compressor, as the name suggests, learns how to effectively compress data based on its type—it would compress PDFs differently than it would JPGs, for example. The decompressor does the opposite. The key is that neither the compressor nor the decompressor can function alone—both are needed for the autoencoder to work.
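In code, a bare-bones autoencoder of the compression sort looks like the sketch below. The layer sizes, synthetic data, and training recipe are arbitrary choices for illustration; the point is the division of labor between the two networks.

```python
# A bare-bones autoencoder in the data-compression sense: an encoder
# squeezes 16-dimensional inputs through a 4-number bottleneck, and a
# decoder reconstructs them. Sizes and training choices are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data with only 4 true degrees of freedom, so a
# 4-dimensional bottleneck can capture it.
data = torch.randn(1024, 4) @ torch.randn(4, 16)

encoder = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 4))  # compressor
decoder = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 16))  # decompressor
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for step in range(3000):
    reconstructed = decoder(encoder(data))
    loss = nn.functional.mse_loss(reconstructed, data)  # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"reconstruction error after training: {loss.item():.4f}")
```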

So now let’s put all of this in the context of wireless signals. The channel autoencoder does the same thing as more traditional autoencoders. But instead of optimizing for different types of data, the autoencoder is optimizing for different wireless channels. The autoencoder consists of two deep neural networks, one on each side of the channel, which learn how to modulate and demodulate types of wireless signals, thus jointly forming a modem. The takeaway is that the channel autoencoder can create better signals for a wireless channel than the generic, one-size-fits-all signals that are typically used in communications.

Previously we mentioned channel soundings—if you recall, those are the test signals sent through a wireless channel to measure interference and distortion. These soundings are key for the channel autoencoder because they give an understanding of what obstacles a signal will face as it moves across a channel. For example, a lot of activity in the 2.4-gigahertz band would indicate there’s a Wi-Fi network nearby. Or if the radio receives many echoes of the test signal, the environment is likely filled with many reflective surfaces.
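A toy sounding shows the principle: send a known probe sequence through an unknown channel, then correlate what comes back against the probe, and the channel’s echoes stand out. The three-tap channel below is an invented example.

```python
# Toy channel sounding: cross-correlate the received signal against a
# known probe sequence to expose the channel's echoes. The three-tap
# channel (direct path plus two reflections) is an invented example.
import numpy as np

rng = np.random.default_rng(2)
probe = rng.choice([-1.0, 1.0], size=511)   # known wideband test signal

channel = np.zeros(40)                      # unknown to the receiver
channel[[0, 7, 19]] = [1.0, 0.5, 0.25]      # direct path + two echoes

received = np.convolve(probe, channel) + 0.05 * rng.standard_normal(len(probe) + 39)

# Correlation with the probe approximates the channel impulse response.
corr = np.correlate(received, probe, mode="full") / len(probe)
impulse_estimate = corr[len(probe) - 1 : len(probe) - 1 + 40]

for delay in sorted(np.argsort(impulse_estimate)[-3:]):
    print(f"echo at delay {delay}: amplitude ~ {impulse_estimate[delay]:.2f}")
```

The same idea, scaled up, is what reveals whether a channel is dominated by Wi-Fi interference, a hall of reflective surfaces, or both.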

With the soundings complete, the deep neural networks can get to work. The first, the encoder, uses the information gleaned from the soundings to encode the data being modulated into a wireless signal. That means the neural network on this side is taking advantage of effects from analog-to-digital signal converters and power amplifiers in the radio itself, as well as known reflective surfaces and other impairments in the soundings. In so doing, the encoder creates a wireless signal that is resilient to the interference and distortion in the channel, and it may develop a complex solution that is not easily arrived at by traditional means.

On the other side of the channel, the neural network designated as the decoder does the same thing, only in reverse. When it receives a signal, it will take what it knows about the channel to strip away the effects of interference. In this case, the network will reverse distortions and reflections along with any redundancy that was encoded to estimate what sequence of bits was transmitted. Error-correction techniques may also come into play to help clean up the signal. At the end of the process, the decoder has restored the original information.

During the training process, the neural networks get feedback on their current performance relative to whatever metric the engineers want to prioritize, whether that’s the error rate in the reconstructed data, the radio system’s power consumption, or something else. The neural networks use that feedback to improve their performance against those metrics without direct human intervention.
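Putting the pieces together, below is a minimal channel autoencoder in the textbook style. It is a sketch, not DeepSig’s system: a simple additive-noise channel stands in for real impairments, and every size and training choice is an assumption. But it contains all of the elements just described: an encoder, a corrupting channel, a decoder, and a feedback signal that trains both networks without direct human intervention.

```python
# A minimal channel autoencoder, in the textbook style rather than
# DeepSig's actual system: the encoder learns to turn each 4-bit
# message into 8 channel uses, a differentiable additive-noise channel
# stands in for real impairments, and the decoder learns to recover
# the message. All sizes and training choices are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
M, N_USES = 16, 8                 # 16 messages (4 bits) sent in 8 channel uses
EBN0_DB = 4.0                     # assumed training signal-to-noise ratio
ebn0 = 10 ** (EBN0_DB / 10)
noise_std = (N_USES / (2 * 4 * ebn0)) ** 0.5   # 4 information bits per block

encoder = nn.Sequential(nn.Linear(M, 64), nn.ReLU(), nn.Linear(64, N_USES))
decoder = nn.Sequential(nn.Linear(N_USES, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def transmit(messages):
    x = encoder(nn.functional.one_hot(messages, M).float())
    x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()  # average-power constraint
    y = x + noise_std * torch.randn_like(x)            # the impairing channel
    return decoder(y)                                  # estimate of the message

for step in range(4000):                               # training loop
    msgs = torch.randint(0, M, (256,))
    loss = nn.functional.cross_entropy(transmit(msgs), msgs)  # the feedback signal
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                                  # evaluate block error rate
    msgs = torch.randint(0, M, (100_000,))
    bler = (transmit(msgs).argmax(dim=1) != msgs).float().mean()
print(f"block error rate after training: {bler:.4f}")
```

Nothing in this loop names a modulation scheme; whatever signal design emerges is discovered by the networks, not specified by an engineer.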

Among the channel autoencoder’s advantages is its ability to treat all impairments the same way, regardless of their source. It doesn’t matter if the impairment is distortion from nearby hardware inside the radio or from interference over the air from another radio. What this means is that the neural networks can take all impairments into account at once and create a signal that works best for that specific channel.

The DeepSig team believed that training neural networks to manage signal processing in a pair of modems would be a titanic shift in how communications systems are designed. So we knew we would have to test the system thoroughly if we were going to prove that this shift was not only possible, but desirable.

Fortunately, at NASA, coauthor Joe Downey and his colleague Aaron Smith had taken notice of DeepSig’s ideas, and had just such a test in mind.

Since the early 1980s, NASA’s TDRSS has been providing communication and tracking services for near-Earth satellites. TDRSS is itself a collection of ground stations and a constellation of satellites that stays in continuous contact with Earth-orbiting satellites and the International Space Station. TDRSS satellites act as relays, transmitting signals between other satellites and ground-station antennas around the world. This system eliminates the need to build more ground stations to guarantee that satellites can stay in contact. Today, 10 TDRSS satellites provide service for the International Space Station, commercial resupply missions, and NASA’s space- and earth-science missions.

When TDRSS first came on line, spacecraft used lower data-rate signals that were robust and resilient to noise. More recent science and human-spaceflight missions have required higher data throughputs, however. To keep up with demand, TDRSS now uses signals that can cram more bits of information into the same amount of bandwidth. The trade-off is that these types of signals are more sensitive to impairments. By the early 2010s, NASA’s demands on TDRSS had become so large that it was difficult to design signals that worked well among the growing impairments. Our hope was that it would not be so difficult for neural networks to do so on their own.

A key characteristic of TDRSS for our purposes is that its satellites don’t perform any signal processing. They simply receive a signal from either a ground station or another satellite, amplify the signal, and then retransmit it to its destination. That means the main impairments to a signal transmitted through TDRSS come from the radio’s own amplifiers and filters, and the distortion from interference among simultaneous signals. As you may recall, our neural networks don’t distinguish among forms of interference, treating them all as part of the same external channel that the signal has to travel through.

TDRSS provided an ideal scenario for testing how well AI could develop signals for complex, real-world situations. Communicating with a satellite through TDRSS is rife with interference, but it’s a thoroughly tested system. That means we have a good understanding of how well signals currently perform. That made it easy to check how well our system was doing by comparison. Even better, the tests wouldn’t require modifying the existing TDRSS equipment at all. Because the channel autoencoders already include modems, they could be plugged into the TDRSS equipment and be ready to transmit.

In late July of 2018, after months of preparation, the DeepSig team headed to NASA’s Cognitive Radio Laboratory at Glenn Research Center in Cleveland. There, they would test modems using signals created through neural networks in a live, over-the-air experiment. The goal of the test was to run signal modulations used by the TDRSS system side by side with our channel autoencoder system, enabling us to directly compare their performances in a live channel.

At Glenn, the DeepSig team joined NASA research scientists and engineers to replace the proven, human-designed modems at NASA ground stations in Ohio and New Mexico with neural networks created with a channel autoencoder. During the tests, traditional TDRSS signals, as well as signals generated by our autoencoders, would travel from one ground station, up to one of the satellites, and back down to the second ground station. Because we kept the bandwidth and frequency usage the same, the existing TDRSS system and the channel autoencoder faced the exact same environment.

When the test was over, we saw that the traditional TDRSS system had a bit error rate just over 5 percent, meaning about one in every 20 bits of information failed to arrive correctly because of some impairment along the way. The channel autoencoder’s bit error rate was just under 3 percent. (To allow a direct comparison, neither system used the standard after-the-fact error correction; with it, both would see much lower bit error rates.) In this test, then, the channel autoencoder reduced TDRSS’s bit error rate by roughly 42 percent.

The TDRSS test was an early demonstration of the technology, but it was an important validation of using machine-learning algorithms to design radio signals that must function in challenging environments. The most exciting aspect of all this is that the neural networks were able to develop signals that are not easily or obviously conceived by people using traditional methods. What that means is the signals may not resemble any of the standard signal modulations used in wireless communications. That’s because the autoencoder is building the signals from the ground up—the frequencies, the modulations, the data rates, every aspect—for the channel in question.

Today’s techniques for signal creation and processing are a double-edged sword: traditional signal-modulation approaches grow more complex as the available data about the system increases. A machine-learning approach, by contrast, thrives as data becomes more abundant. Nor is it hindered by complex radio equipment, so the problem of the double-edged sword goes away.

And here’s the best part: Presented with a new communications channel, machine-learning systems are able to train autoencoders for that channel in just a few seconds. For comparison, it typically takes a team of seasoned experts months to develop a new communications system.

To be clear, machine learning does not remove the need for communications engineers to understand wireless communications and signal processing. What it does is introduce a new approach to designing future communications systems. This approach is too powerful and effective to be left out of future systems.

Since the TDRSS experiment and subsequent research, we’ve seen increasing research interest in channel autoencoders, as well as promising applications, especially in fields with very hard-to-model channels. Designing communications systems with AI has also become a hot topic at major wireless conferences, including Asilomar, the GNU Radio Conference, and the IEEE Global Communications Conference.

Future communications engineers will not be pure signal-processing and wireless engineers. Rather, their skills will need to span a mixture of wireless engineering and data science. Already, some universities, including the University of Texas at Austin and Virginia Tech, are introducing data science and machine learning to their graduate and undergraduate curricula for wireless engineering.

Channel autoencoders aren’t a plug-and-play technology yet. There’s plenty yet to be done to further develop the technology and underlying computer architectures. If channel autoencoders are ever to become a part of existing widespread wireless systems, they’ll have to undergo a rigorous standardization process, and they’ll also need specially designed computer architectures to maximize their performance.

The impairments of TDRSS are hard to optimize against. This raises a final question: If channel autoencoders can work so well for TDRSS, why not for many other wireless systems? Our answer is that there’s no reason to believe they can’t.

This article appears in the May 2020 print issue as “Machine Learning Remakes Radio.”

About the Author

Ben Hilburn was until recently director of engineering at the wireless deep-learning startup DeepSig. He’s now with Microsoft. Tim O’Shea is CTO at DeepSig and a research assistant professor at Virginia Tech. Nathan West is director of machine learning at DeepSig. Joe Downey works on space-based software-defined radio technology at the NASA Glenn Research Center.

The Pandemic Has Slowed Wireless Network Buildouts

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/the-pandemic-has-slowed-wireless-network-buildouts

When cities and states in the United States first began lockdowns due to the COVID-19 pandemic, a big concern initially was whether the Internet and other communications networks would hold up under the strain. Thankfully, aside from some minor hiccups and latency problems, the Internet’s infrastructure seems to be doing just fine.

It’s one thing to know that our current communications infrastructure is up to the task, but what about future networks? After all, plenty of networks were under construction when social distancing and shelter-in-place measures were instituted across the U.S.

The gist is that lockdowns have made it difficult, though not impossible, for companies to complete their network buildouts on schedule. For a wireless network operator, that can mean more than just deadlines—it could mean the loss of its license to operate a network over the frequencies it has been assigned.

Here’s how network buildout typically works in the U.S.: When a company wants to build and operate a wireless network, it must receive a license from the Federal Communications Commission to operate over a specific set of frequencies in a given area. That license also includes a deadline: By a specific date, the company must have finished its network buildout and started sending signals. Otherwise, the frequencies are up for grabs.

But now, companies are struggling to meet their deadlines. “It’s not like these guys are trying to skirt the rules,” says Mark Crosby, CEO of the Enterprise Wireless Alliance, an industry trade association. “It’s not that they don’t want to finish their construction, it’s just that things are difficult right now.”

Crosby says he’s heard plenty of stories from wireless companies that are members of EWA. Some can’t get cranes out to sites for tower construction, while others have had delays in getting technicians and utility workers on site. Crosby’s favorite story, so far, is of a company whose workers were told by a sheriff that they could finish their work, but that, because some of them might be carrying COVID-19, they shouldn’t plan on coming back into town and should leave as soon as they were done. “It’s like it’s straight out of the movies, the sheriff saying ‘I thought I told you to get out of town,’” says Crosby.

Because of these problems, the EWA petitioned the FCC for a waiver of buildout deadlines until 31 August 2020. The FCC responded by giving all companies with a deadline between 15 March and 15 May an additional 60 days to complete their buildouts [PDF]. Although it’s not as much as the EWA initially asked for, Crosby believes companies will be okay, and says that the EWA will continue to monitor the situation, should more time be needed.

Prior to the FCC’s decision, the agency was granting deadline waivers on a case-by-case basis. Although companies are traditionally able to apply for deadline extensions—and Crosby notes that such extensions are almost always granted, provided the company is actually working on its buildout—the EWA’s position was that a case-by-case approach was too cumbersome during the pandemic. “The FCC is all working from home,” Crosby says. “It’s just as difficult for them.” To avoid headaches and paperwork, it makes more sense to grant a blanket extension.

The FCC did not respond to a request for comment on the current state of buildout delays and why its response was not the same as the EWA’s original request.

While the pandemic is undoubtedly disrupting network buildouts, Crosby for one is equally certain it won’t cause lingering problems. He’s confident that when state and local governments begin lifting lockdown measures, companies will have no trouble wrapping up their delayed buildouts and getting new construction done on schedule. “The EWA wouldn’t support [taking undue advantage of the situation],” he says. “If everything’s normal, don’t roll deadlines. We do not want people sitting on the spectrum.”

How Facebook and Google Track Public’s Movement in Effort to Fight COVID-19

Post Syndicated from Emily Waltz original https://spectrum.ieee.org/the-human-os/telecom/wireless/facebook-google-data-publics-movement-covid19

How we move about in our communities—where we go and how often—greatly affects the spread of COVID-19. And few know our whereabouts better than Facebook and Google.

So, in an effort to help researchers combat the pandemic, the two companies say they are now making their troves of GPS-based mobility data available. The data comes from users who opt in to location services on the companies’ platforms and is provided for public health use in an aggregated, anonymized way.

Such data is vital to public health researchers’ efforts to understand trends in population movement and predict the spread of the disease, which is caused by the novel coronavirus SARS-CoV-2. Local government officials can use the data to make informed decisions on travel and social distancing interventions.

Data for Good

Both Facebook and Google are providing information about where people are going, but the companies differ in the way they are releasing the information.

Facebook, through its Data for Good program, provides mobility datasets and maps directly to researchers upon request. Facebook generates the data in file formats that support epidemiological models and case data.

“We’re sharing the data in a way that public health researchers can use,” says Laura McGorman, policy lead for Facebook’s Data for Good program. “Once a researcher signs a license agreement, they can request data through our mapping portal and get it the next day,” she says.

The mobility datasets let researchers look at population movement between two points, movement patterns such as commutes, and whether people are staying close to home or visiting many parts of town. Facebook’s is the only source of mobility data in machine-readable format that is global and free of charge, says McGorman.

Data for Good started three years ago as an initiative to help track evacuations and displacement after natural disasters. It has since expanded to address disease and, most recently, COVID-19. The company gathers its information from people using Facebook on their mobile phones with the location history feature enabled. Data is aggregated to protect individual privacy.

Scientists have used Facebook’s data in several ways over the last few weeks to study the pandemic. For example, scientists at the Institute for Disease Modeling in Bellevue, Washington, used Facebook’s mobility data to study how social distancing measures and stay-at-home orders have affected movement near Seattle. They found that population movement indeed declined, which led to reduced transmission of the virus.

Separately, researchers in Italy used Facebook’s mobility data to analyze how lockdown orders affect economic conditions and create an economic segregation effect. A report from the National Tsing Hua University in Taiwan used Facebook’s data to show that travel restrictions reduced the spread of the virus. 

Facebook’s program is also supplying the bulk of the data for the COVID-19 Mobility Data Network. The recently formed group, a network of epidemiologists, uses mobility data to generate daily situation reports for decision makers who are implementing social distancing interventions.

Google’s mobility tracking tool

Separately, Google on April 3 announced that it had launched a mobility tracking tool called COVID-19 Community Mobility Reports. The web-based tool is available freely to the public and provides insights on how communities have reduced or increased their visits to certain types of places. 

The public can go to the website and choose a region, such as a state or country. The tool then generates graphs on a downloadable PDF displaying the percentage change in visits over the last few weeks to places such as retail stores, pharmacies, parks, workplaces, and public transportation hubs in that region.

In the county where Indianapolis is located, for example, people have reduced their visits to grocery stores and pharmacies by 17% and to other retail locations by 45% since February 23. Visits to parks, however, have increased 54%.
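At its core, the computation behind reports like these is a percentage change against a pre-pandemic baseline. The sketch below uses invented visit counts, chosen to mirror the figures above; Google’s actual pipeline and baseline definitions are more involved.

```python
# The core computation behind a mobility report: percentage change in
# visits versus a pre-pandemic baseline. Visit counts are invented to
# mirror the Indianapolis-area figures above; Google's real pipeline
# and baseline definitions are more involved.
baseline_visits = {"grocery & pharmacy": 1200, "retail": 2600, "parks": 450}
recent_visits   = {"grocery & pharmacy":  996, "retail": 1430, "parks": 693}

for place, baseline in baseline_visits.items():
    change = 100 * (recent_visits[place] - baseline) / baseline
    print(f"{place:>18}: {change:+.0f}% vs. baseline")
```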

In a blog post highlighting the resource, Google executives wrote that they believe the mobility reports could help shape business hours, inform delivery service offerings, or indicate a need for additional buses or trains at a particular public transportation hub.

The company pulls the data from Google users who have opted in to location tracking services. The information is aggregated and anonymized, and does not provide real-time data in an effort to protect privacy.

Mobility data similar to that from Facebook and Google has already informed government decisions. Tennessee Governor Bill Lee on April 2 issued a statewide order for residents to stay at home after he reviewed mobility data released by tech startup Unacast. The information, gleaned from mobile phone location data, showed that people in some regions, such as Nashville, had significantly reduced their daily travel, but people in many other Tennessee counties had not. This convinced Lee that a statewide order was necessary.

Beyond mobility

Both Facebook and Google are releasing other kinds of data to coronavirus researchers and the public. Data for Good offers population density maps as well as social connectedness indices. The latter relies on aggregated, anonymous friendship connections on Facebook to measure the general connectedness of two geographic regions.

That type of information can help predict the spread of the virus and where to put resources. Researchers at NYU used the social connectedness data to show that geographic regions with strong social ties to two early COVID-19 hotspots—in New York and Italy—had more cases of the illness. Separately, an organization funded by the World Bank used Facebook’s population density data to help determine where coronavirus testing facilities and extra beds should be located.

Google and Apple last week announced an ambitious effort to provide the technological support for digital contact tracing. The strategy would allow people with certain Bluetooth-enabled apps to find out if they have been in the vicinity of people who have tested positive for the novel coronavirus.

Digital contact tracing has been touted by public health specialists as a strategy to help reopen the economy in a safe way, but privacy and ethical considerations have been hotly debated. 

MicroLEDs Transmit Whopping Amounts of Data

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/wireless/micro-led-lights-transmit-whopping-amounts-data

Modern wireless communications most often occur at radio frequencies invisible to the human eye—but that isn’t stopping some scientists from eyeing visible light as a means to transmit data. In a recent advance, researchers in the United Kingdom succeeded in transmitting data at 1.61 gigabits per second across 20 meters using microLEDs. Their work was published 18 March in IEEE Photonics Technology Letters.