Beam management, a defining feature of mmWave communications, entails highly directional and steerable beams in single- and multi-user scenarios. It will play a crucial role in future 5G wireless designs: the 3GPP Release 15 specification outlines the basics of beam management, and Release 16 builds on that foundation.
mmWave systems with large antenna counts enable narrow beam patterns, which makes antenna performance and beam characteristics important considerations when choosing beam management algorithms. This article will examine the real-world performance of phased array antennas in the context of beam management. It will also provide a design example of hardware prototyping for 5G system beamforming and beamsteering.
Starting with the beamforming basics: traditional analog beamforming creates a single beam by applying a phase delay, or time delay, to each antenna element. To form multiple simultaneous beams, designers must apply a separate set of phase delays for each beam and then sum the weighted signals.
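The per-element phase delays mentioned above are straightforward to compute for a uniform linear array. A minimal sketch, assuming half-wavelength element spacing; the function name and parameters are illustrative, not from any particular library:

```python
import numpy as np

def steering_phases(n_elements, spacing_wavelengths, steer_deg):
    """Phase delay (radians) applied to each element of a uniform
    linear array to steer its beam toward steer_deg from broadside."""
    theta = np.radians(steer_deg)
    n = np.arange(n_elements)
    return 2 * np.pi * n * spacing_wavelengths * np.sin(theta)

# Example: an 8-element, half-wavelength-spaced array steered to 30 degrees
phases = steering_phases(8, 0.5, 30.0)
```

Each element's phase increases linearly across the array; in an analog beamformer these values would be programmed into the per-element phase shifters.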
On the other hand, in full digital beamforming, each antenna mandates a dedicated analog baseband channel, which, in turn, calls for a digital transceiver for each antenna. This adds to both cost and power consumption. Enter hybrid beamforming, which allows designers to keep the overall cost and power consumption lower.
Why Hybrid Beamforming
Hybrid beamforming combines analog beamforming with digital precoding to intelligently form the patterns transmitted from a large antenna array (Figure 2); the same technique is used at the receive end to create the desired receive pattern.
Hybrid transceivers use a combination of analog beamformers in the RF domain and digital beamformers in the baseband domain, which leads to fewer RF chains than transmit elements. The result is an architecture that is properly partitioned between the analog and digital domains.
The fact that hybrid beamforming uses a smaller number of RF chains, which otherwise have large power consumption, allows designers to use a larger number of antenna array elements while reducing energy consumption and system design complexity.
However, hybrid beamforming must include the precoding weights and RF phase shifts to meet the goal of improving the virtual connection between the base station and the user equipment (UE). Moreover, the precoder design mandates massive calculations such as the singular value decomposition (SVD) of the channel.
Here, a low-complexity hybrid precoding algorithm employed for beamsteering operations utilizes the array response vectors of the channel. The algorithm selects from a set of array response vectors out of which the channel is composed, avoiding the complicated operations used in traditional precoding algorithms.
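The idea can be sketched as follows: rather than computing an SVD of the channel, pick analog beams by correlating the channel against a dictionary of array response vectors. This is a simplified illustration of the general approach, not the specific algorithm the article references; all names and sizes are assumptions:

```python
import numpy as np

def array_response(n, angle):
    """Array response vector of an n-element half-wavelength ULA."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(angle)) / np.sqrt(n)

def select_analog_beams(H, n_rf, n_grid=64):
    """Pick n_rf analog beamforming vectors from a grid of array
    response vectors, keeping those most aligned with channel H."""
    n_tx = H.shape[1]
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_grid)
    dictionary = np.stack([array_response(n_tx, a) for a in angles], axis=1)
    gains = np.linalg.norm(H @ dictionary, axis=0)  # channel gain per beam
    best = np.argsort(gains)[-n_rf:]                # strongest beams win
    return dictionary[:, best]

# Example: 4 RF chains serving a 2 x 16 random channel
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 16)) + 1j * rng.standard_normal((2, 16))
F_rf = select_analog_beams(H, n_rf=4)
```

The dictionary search replaces the SVD with simple matrix-vector correlations, which is what keeps the complexity low.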
The beamsteering algorithms are also critical in the capacity-based optimal beam set selection and in detecting the presence of unknown interference and noise. Additionally, they facilitate higher throughput by overcoming problems such as random beam blockage and misalignment.
mmWave Beamsteering Algorithms
In high-speed mmWave communications with large antenna arrays, frequent channel estimation is required because channel conditions vary rapidly. A vital part of channel estimation is efficient beam searching, which allows more time for data transmission.
Then, there is multi-beam selection, a crucial element in hybrid beamforming systems for frequency-selective fading channels. The impact of interference on network capacity due to highly narrow beams also poses serious challenges. It is therefore imperative that designers carefully examine the network capacity from both node capacity and antenna size standpoints.
Here, unlike the trial-and-error approach, which is time-consuming and does not easily adapt to change, beamsteering algorithms help engineers understand design constraints and account for them in the optimization process. By averaging signal observations, they reduce the effect of noise and provide more accurate sparse multipath delay information.
These observations, for instance, regarding estimated multipath delay indices, are crucial in the selection of the analog beams. The beamsteering algorithms also play a vital role in phase control and beam tuning. They analyze the computational complexity and compare numerical results to deliver the best signal propagation and reception for mmWave communication channels.
Take the Hybrid Beamforming Testbed from National Instruments (NI), which moves signals from the analog to the digital domain using an mmWave Transceiver System (Figure 3). This off-the-shelf prototyping system allows engineers to validate radio performance by efficiently implementing the mmWave beamsteering algorithms.
Hybrid Beamforming Testbed
The 5G NR standard for mmWave frequencies uses a combination of computationally intensive algorithms to encode, decode, modulate, demodulate and multiplex signals. Here, NI’s mmWave Transceiver System, a software-defined radio (SDR) solution, can help implement beamforming algorithms for a variety of 5G prototyping use cases.
This modular prototyping system comprises a PXI Express chassis, controllers, a clock distribution module, high-performance FPGA modules, high-speed DACs and ADCs, LO and IF modules, and mmWave radio heads (Figure 4). The FPGA modules in the PXI chassis handle communication rules, error correction, decoding and encoding, and signal mapping.
Next, the baseband transceiver modules — along with the IF/LO modules — transmit and receive signals to and from the mmWave heads, which can be connected to the phased arrays via a single SMK cable. Here, the LabVIEW software allows developers to configure different connectivity features in the digital domain of beamforming.
So, network designers can combine the transceiver system with a phased array to create off-the-shelf phased array prototype solutions. And they can map different computational tasks on multiple FPGAs and thus design and test beam algorithms for a wide variety of mmWave channels and network configurations.
A new antenna that uses saltwater and plastic instead of metal to shape radio signals could make it easier to build networks that use VHF and UHF signals.
Being able to focus the energy of a radio signal towards a given receiver means you can increase the range and efficiency of transmissions. If you know the location of the receiver, and are sure that it’s going to stay put, you can simply use an antenna that is shaped to emit energy mostly in one direction and point it. But if the receiver’s location is uncertain, or if it’s moving, or if you’d like to switch to a different receiver, then things get tricky. In this case, engineers often fall back on a technique called beam-steering or beamforming, and doing it at a large scale is one of the key underlying mechanisms behind the rollout of 5G networks.
Beam-steering lets you adjust the focus of an antenna without having to physically move it to point in different directions. It involves adjusting the relative phases of a set of radio waves at the antenna: these waves interfere constructively and destructively, canceling out in unwanted directions and reinforcing the signal in the direction you want to send it. Different beam patterns, or states, are also possible—for example, you might want a broader beam if you are sending the same signal to multiple receivers in a given direction, or a tighter beam if you are talking to just one.
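The constructive and destructive interference described above can be checked numerically. A minimal sketch, assuming a uniform linear array with half-wavelength spacing; the function and variable names are illustrative:

```python
import numpy as np

def array_factor(weights, spacing, theta_deg):
    """|Array factor| of a uniform linear array at angles theta_deg
    (degrees from broadside), for given complex element weights."""
    theta = np.radians(np.atleast_1d(theta_deg))
    n = np.arange(len(weights))
    steering = np.exp(2j * np.pi * spacing * np.outer(n, np.sin(theta)))
    return np.abs(weights.conj() @ steering)

# Weights that steer an 8-element, half-wavelength array toward 20 degrees
n = np.arange(8)
w = np.exp(2j * np.pi * 0.5 * n * np.sin(np.radians(20.0)))
# The waves add in phase at 20 degrees (array factor = 8, the element
# count) and largely cancel at other angles
```

Evaluating the array factor across angles shows a sharp peak at the steered direction and deep nulls elsewhere, which is exactly the interference pattern the text describes.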
Now, researchers have developed an advanced liquid-based antenna system that relies on a readily available ingredient: saltwater.
To be sure, this is not the first liquid antenna: these antennas, which use fluid to transmit and receive radio signals, can be useful in situations where VHF or UHF frequencies are required (frequencies between 30 megahertz and 3 gigahertz). They tend to be small, transparent, and more reconfigurable than conventional metal antennas. For these reasons, they are being explored for some internet of things (IoT) and 5G applications.
Liquid antennas that depend on salty water have even more benefits, since the substance is readily available, low-cost and eco-friendly. Several saltwater-based antennas have been developed to date, but these designs are limited in how easily the beam can be steered and reconfigured.
However, in a recent publication in IEEE Antennas and Wireless Propagation Letters, Lei Xing and her colleagues at the College of Electronic and Information Engineering at Nanjing University of Aeronautics and Astronautics in China have proposed a new saltwater-based antenna that achieves 12 directional beam-steering states and one omnidirectional state. Its circular configuration allows for complete 360-degree beam-steering and works at frequencies from 334 to 488 MHz.
The proposed design consists of a circular ground plane, with 13 transparent acrylic tubes that can be filled with (or emptied of) salt water on demand. One tube is located in the center to act as a driven monopole (the radio signal is fed in via a copper disk at the base of the tube). Surrounding it are 12 so-called parasitic monopoles. When only the driven monopole is excited, this creates an omnidirectional signal. But the 12 remaining monopoles, when filled with water, work together to act as reflectors and give the broadcasted signal direction.
“The most challenging part of designing this antenna is how to effectively and efficiently control the water parasitic monopoles,” Xing explains. To do so, her team developed a liquid control system using micropumps, which she says can be applied to other liquid antennas or antenna arrays.
“The attractive feature of using water monopoles is that both the water height and activating status can be dynamically tuned through microfluidic techniques, which has a higher degree of design flexibility than metal antennas,” explains Xing. “More importantly, the antenna can be totally ‘turned off’ when not in use.”
When the antenna is switched completely off and drained, it is nearly undetectable by radar. In contrast, this effect is hard to achieve with metal antennas.
The new antenna’s operating range of 334 MHz to 488 MHz makes it a promising candidate for very-high frequency applications such as IoT and maritime applications, says Xing. One limitation of saltwater-based antennas, she notes, is that the permittivity of saline water (a measure of how it interacts with electric fields) is sensitive to temperature variation. Xing says she plans to continue to explore various liquid-based designs for antennas moving forward.
As carriers roll out 5G, industry group 3GPP is considering other ways to modulate radio signals
In 2017, members of the mobile telephony industry group 3GPP were bickering over whether to speed the development of 5G standards. One proposal, originally put forward by Vodafone and ultimately agreed to by the rest of the group, promised to deliver 5G networks sooner by developing more aspects of 5G technology simultaneously.
Adopting that proposal may have also meant pushing some decisions down the road. One such decision concerned how 5G networks should encode wireless signals. 3GPP’s Release 15, which laid the foundation for 5G, ultimately selected orthogonal frequency-division multiplexing (OFDM), a holdover from 4G, as the encoding option.
But Release 16, expected by year’s end, will include the findings of a study group assigned to explore alternatives. Wireless standards are frequently updated, and in the next 5G release, the industry could address concerns that OFDM may draw too much power in 5G devices and base stations. That’s a problem, because 5G is expected to require far more base stations to deliver service and connect billions of mobile and IoT devices.
“I don’t think the carriers really understood the impact on the mobile phone, and what it’s going to do to battery life,” says James Kimery, the director of marketing for RF and software-defined radio research at National Instruments Corp. “5G is going to come with a price, and that price is battery consumption.”
And Kimery notes that these concerns apply beyond 5G handsets. China Mobile has “been vocal about the power consumption of their base stations,” he says. A 5G base station is generally expected to consume roughly three times as much power as a 4G base station. And more 5G base stations are needed to cover the same area.
So how did 5G get into a potentially power-guzzling mess? OFDM plays a large part. OFDM transmits data by chopping it into portions and sending them simultaneously at different frequencies chosen so that the portions are “orthogonal” (meaning they do not interfere with each other).
The trade-off is that OFDM has a high peak-to-average power ratio (PAPR). Generally speaking, the orthogonal portions of an OFDM signal deliver energy constructively—that is, the very quality that prevents the signals from canceling each other out also prevents each portion’s energy from canceling out the energy of other portions. That means any receiver needs to be able to take in a lot of energy at once, and any transmitter needs to be able to put out a lot of energy at once. Those high-energy instances cause OFDM’s high PAPR and make the method less energy efficient than other encoding schemes.
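The effect is easy to reproduce: summing many independently modulated subcarriers (which is what the IFFT at the heart of an OFDM transmitter does) occasionally produces large peaks well above the average power. A sketch, assuming a toy symbol of 64 QPSK subcarriers:

```python
import numpy as np

def papr_db(signal):
    """Peak-to-average power ratio of a time-domain signal, in dB."""
    power = np.abs(signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

# One OFDM symbol: 64 random QPSK subcarriers combined via an IFFT
rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(qpsk)

# The raw QPSK burst has a constant envelope (0 dB PAPR), while the
# OFDM symbol's occasional in-phase peaks push its PAPR several dB higher
```

Those several-dB peaks are what force the power amplifier to be backed off from its efficient operating point, which is the energy cost the article describes.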
Yifei Yuan, ZTE Corp.’s chief engineer of wireless standards, says there are a few emerging applications for 5G that make a high PAPR undesirable. In particular, Yuan, who is also the rapporteur for 3GPP’s study group on nonorthogonal multiple-access possibilities for 5G, points to massive machine-type communications, such as large-scale IoT deployments.
Typically, when multiple users, such as a cluster of IoT devices, communicate using OFDM, their communications are organized using orthogonal frequency-division multiple access (OFDMA), which allocates a chunk of spectrum to each user. (To avoid confusion, remember that OFDM is how each device’s signals are encoded, and OFDMA is the method that ensures, overall, one device’s signals don’t interfere with any others.) The logistics of using distinct spectrum for each device could quickly spiral out of control for large IoT networks, but Release 15 established OFDMA for 5G-connected machines, largely because it’s what was used on 4G.
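The per-device bookkeeping can be pictured as a grant table in which each device owns a disjoint slice of subcarriers. A toy sketch; all names and numbers here are illustrative assumptions, not part of any standard:

```python
# Toy OFDMA grant table: 64 subcarriers handed out in fixed-size chunks.
def ofdma_allocate(n_subcarriers, users, chunk_size):
    """Assign each user a disjoint block of subcarrier indices."""
    grants = {}
    for i, user in enumerate(users):
        start = i * chunk_size
        if start + chunk_size > n_subcarriers:
            raise ValueError(f"no spectrum left for {user}")
        grants[user] = list(range(start, start + chunk_size))
    return grants

grants = ofdma_allocate(64, ["sensor-a", "sensor-b", "sensor-c"], 16)
# Every device owns its own slice; with thousands of IoT devices,
# exactly this per-device accounting is what spirals out of control
```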
One promising alternative that Yuan’s group is considering, non-orthogonal multiple access (NOMA), could deliver the advantages of OFDM while also overlapping users on the same spectrum.
For now, Yuan believes OFDM and OFDMA will suit 5G’s early needs. He sees 5G first being used by smartphones, with applications like massive machine-type communications not arriving for at least another year or two, after the completion of Release 16, currently scheduled for December 2019.
But if network providers want to update their equipment to provide NOMA down the line, there could very well be a cost. “This would not come for free,” says Yuan. “Especially for the base station sites.” At the very least, base stations would need software updates to handle NOMA, but they might also require more advanced receivers, more processing power, or other hardware upgrades.
Kimery, for one, isn’t optimistic that the industry will adopt any non-OFDMA options. “It is possible there will be an alternative,” he says. “The probability isn’t great. Once something gets implemented, it’s hard to shift.”
Demand for wireless communication presents multiple industry challenges at once
As the technology curve grows inevitably steeper, solutions that solve multiple problems across a variety of industries become more necessary. The race toward 5G has challenged every wireless researcher, hardware manufacturer, and operator to evaluate what tools they can use for the next generation of communication technology.
The physical layer as we knew it for 4G LTE and previous standards is being pushed to new limits that include integrating multiple input, multiple output (MIMO) technologies; moving to mmWave; and using unlicensed bands for coexistence between protocols. But the physical layer isn’t the only piece of the puzzle anymore.
As the standards become increasingly complex and stringent across multiple layers, the MAC, data link, and network layers need to be developed. Timing requirements begin to tighten and latency becomes a bigger issue. The need for processing inside a single node begins to scale far beyond what has previously been met with application-specific integrated circuits (ASICs). To take it a step further, the deployability and scale of systems are growing. There is an increasing need for remote radio head applications, where base stations may evolve beyond a single antenna system to remote nodes across a city. The demand for wireless communication has placed multiple advanced challenges on the industry at once.
Challenges in Determinism

Applications such as 5G New Radio (5G NR) introduce timing constraints that make the relationship between the processor and RF front end even more critical than in previous communication protocols like LTE or 802.11. Ultra-reliable machine-type communication creates a need for upper-layer functionality to occur in much more deterministic and precise timing intervals, which forces technologies like schedulers to be implemented more deterministically. New 802.11 standards such as 802.11ax depend on strict timing requirements as access points determine channel models dynamically through trigger frames and a trigger-based physical layer protocol data unit (PPDU). All these interactions must happen within a tight 16 µs window; otherwise the communication breaks down. Determinism is becoming increasingly necessary as more and more intelligence is provided to the PHY layer through the MAC. Time-critical operations can no longer be implemented strictly with a PC. Technologies like real-time OSs and FPGAs must be used to handle these sub-1 ms timing requirements.
Challenges in Processing

Increasing processing power brings its own difficulties, such as the mobility of a processing unit, the data pipes that move data from one processing resource to another, and the flexibility of the configuration. With complex developments in MAC layer functionality, added focus on software-defined networks, and more robust and complex schedulers, the need for time-accurate processing and parallel computing becomes greater.
Let’s specifically explore the software-defined networking case. With complex requirements from a software-defined communication network, any node may need to completely reconfigure its function at short notice. Integrating a single node with the processing power to handle those decision-making tasks requires transceivers to scale beyond the RF. The RF node may need to have either the ability to process decisions from a scheduler at a central processing point or run its own decision engine, which both require advancements in processing that scales beyond typical ASICs.
However, raw processing power is still not enough. As the need for environment emulation grows, whether from the channel, base station, or user equipment perspective, increasingly lower latencies between processor and circuitry are required. Any application that integrates heterogeneous processor architectures (GPP, GPU, FPGA, and so on) requires that the data be transferred over high-speed serial interfaces with low latency and wide data bandwidth.
Challenges in Scale

It’s no surprise that technology is shrinking, whether looking at the trend of the off-the-shelf device side of technology or the silicon that the devices are built with. The technology is not only shrinking but it’s also changing scale. Instead of the base station of old, multiple deployed radio nodes may act as the new infrastructure for 5G and beyond. This introduces an entirely new range of needs including servicing the infrastructure and implementing software updates remotely.
However, these changes affect more than operators. Wireless communications researchers must be able to move out of the lab and into the field for real-world trials. The technology can no longer just be demonstrated at the lab level. Multiple universities, operators, and vendors are collaborating to deploy testbeds throughout cities to demonstrate the capabilities of new physical, data link, and network layers in a real-world environment. Deploying hardware for field trials and maintaining accessibility to the project from remote campuses introduce a variety of new issues.
The Stand-Alone USRP RIO: USRP-2974

The USRP (Universal Software Radio Peripheral) solution has been the benchmark of industry and academic software-defined radio (SDR) technology for the past decade.
The USRP-2974 is the first stand-alone SDR from NI and will be the first NI USRP to use the power of LabVIEW software, the LabVIEW FPGA Module, and the LabVIEW Real-Time Module, all in a single device. It is designed around the existing USRP RIO hardware solution, but now integrates an x86 processor connected to the USRP through high-speed PCI Express and Ethernet connections for data streaming between an x86 target and FPGA target.
An x86 processor that’s incorporated into the design of an SDR provides many benefits. First, the LabVIEW Real-Time programmable processor is now the ideal target to test scheduler algorithms, allowing for prioritization on the processor and deterministically operating on and sending commands to the FPGA, which in turn can handle the physical layer RF signals. Second, because data can be processed onboard the device, every radio node can run additional computation beyond what previously was done on only the FPGA. Finally, with an Ethernet connection to a development machine, any number of USRP-2974 devices can be deployed with duplicate or unique code bases in a flash, which improves system and code management at the individual module level or across a large-scale testbed.
The USRP-2974 opens the doors for new research and advanced use cases previously limited by traditional SDRs. Systems become more scalable, are more easily managed, and can now integrate decision making and deterministic selection through the onboard processor. The USRP-2974 provides the high performance to tackle even the most complex of communications challenges.
NASA’s Orion and Gateway will try out optical communications gear for a high-speed connection to Earth
With NASA making serious moves toward a permanent return to the moon, it’s natural to wonder whether human settlers—accustomed to high-speed, ubiquitous Internet access—will have to deal with mind-numbingly slow connections once they arrive on the lunar surface. The vast majority of today’s satellites and spacecraft have data rates measured in kilobits per second. But long-term lunar residents might not be as satisfied with the skinny bandwidth that, say, the Apollo astronauts contended with.
To meet the demands of high-definition video and data-intensive scientific research, NASA and other space agencies are pushing the radio bands traditionally allocated for space research to their limits. For example, the Orion spacecraft, which will carry astronauts around the moon during NASA’s Artemis 2 mission in 2022, will transmit mission-critical information to Earth via an S-band radio at 50 megabits per second. “It’s the most complex flight-management system ever flown on a spacecraft,” says Jim Schier, the chief architect for NASA’s Space Communications and Navigation program. Still, barely 1 Mb/s will be allocated for streaming video from the mission. That’s about one-fifth the speed needed to stream a high-definition movie from Netflix.
To boost data rates even higher means moving beyond radio and developing optical communications systems that use lasers to beam data across space. In addition to its S-band radio, Orion will carry a laser communications system for sending ultrahigh-definition 4K video back to Earth. And further out, NASA’s Gateway will create a long-term laser communications hub linking our planet and its satellite.
Laser communications are a tricky proposition. The slightest jolt to a spacecraft could send a laser beam wildly off course, while a passing cloud could interrupt it. But if they work, robust optical communications will allow future missions to receive software updates in minutes, not days. Astronauts will be sheltered from the loneliness of working in space. And the scientific community will have access to an unprecedented flow of data between Earth and the moon.
Today, space agencies prefer to use radios in the S band (2 to 4 gigahertz) and Ka band (26.5 to 40 GHz) for communications between spacecraft and mission control, with onboard radios transmitting course information, environmental conditions, and data from dozens of spaceflight systems back to mission control. The Ka band is particularly prized—Don Cornwell, who oversees radio and optical technology development at NASA, calls it “the Cadillac of radio frequencies”—because it can transmit up to gigabits per second and propagates well in space.
Any spacecraft’s ability to transmit data is constrained by some unavoidable physical truths of the electromagnetic spectrum. For one, radio spectrum is finite, and the prized bands for space communications are equally prized by commercial applications. Bluetooth and Wi-Fi use the S band, and 5G cellular networks use the Ka band.
The second big problem is that radio signals disperse in the vacuum of space. By the time a Ka-band signal from the moon reaches Earth, it will have spread out to cover an area about 2,000 kilometers in diameter—roughly the size of India. By then, the signal is a lot weaker, so you’ll need either a sensitive receiver on Earth or a powerful transmitter on the moon.
Laser communications systems also have dispersion issues, and beams that intersect can muddle up the data. But a laser beam sent from the moon would cover an area only 6 km across by the time it arrives on Earth. That means it’s much less likely for any two beams to intersect. Plus, they won’t have to contend with an already crowded chunk of spectrum. You can transmit a virtually limitless quantity of data using lasers, says Cornwell. “The spectrum for optical is unconstrained. Laser beams are so narrow, it’s almost impossible [for them] to interfere with one another.”
Higher frequencies also mean shorter wavelengths, which bring more benefits. Ka-band signals have wavelengths from 7.5 millimeters to 1 centimeter, but NASA plans to use lasers that have a 1,550-nanometer wavelength, the same wavelength used for terrestrial optical-fiber networks. Indeed, much of the development of laser communications for space builds on existing optical-fiber engineering. Shorter wavelengths (and higher frequencies) mean that more data can be packed into every second.
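Those wavelengths drive the footprint sizes quoted earlier: to first order, beam divergence scales as wavelength divided by aperture diameter. A back-of-the-envelope sketch; the aperture sizes below are illustrative assumptions chosen to reproduce the article's numbers, not mission specifications:

```python
import math

def footprint_km(wavelength_m, aperture_m, distance_km):
    """Approximate diffraction-limited beam footprint diameter at a
    given range. Uses divergence ~ wavelength / aperture; small-angle,
    order-of-magnitude only (real antennas add efficiency factors)."""
    divergence_rad = wavelength_m / aperture_m
    return distance_km * divergence_rad

MOON_KM = 384_400  # average Earth-moon distance

# Assumed apertures: a ~2 m Ka-band dish vs. a ~10 cm laser telescope
ka = footprint_km(0.01, 2.0, MOON_KM)         # on the order of 2,000 km
laser = footprint_km(1.55e-6, 0.10, MOON_KM)  # on the order of 6 km

assert math.isfinite(ka) and math.isfinite(laser)
```

The roughly 300-fold difference in footprint diameter comes almost entirely from the roughly 6,000-fold difference in wavelength, partially offset by the laser's much smaller aperture.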
The advantages of laser communications have been known for many years, but it’s only recently that engineers have been able to build systems that outperform radio. In 2013, for example, NASA’s Lunar Laser Communications Demonstration proved that optical signals can reliably send information from lunar orbit back to Earth. The month-long experiment used a transmitter on the Lunar Atmosphere and Dust Environment Explorer to beam data back to Earth at speeds of 622 Mb/s, more than 10 times as fast as Orion’s S-band radio will.
“I was shocked to learn [Orion was] going back to the moon with an S-band radio,” says Bryan Robinson, an optical communications expert at MIT Lincoln Laboratory in Lexington, Mass. Lincoln Lab has played an important role in developing many of the laser communications systems on NASA missions, starting with the early optical demonstrations of the classified GeoLITE satellite in 2001. “Humans have gotten used to so much more, here on Earth and in low Earth orbit. I was glad they came around and put laser comm back on the mission.”
As a complement to its S-band radio, during the Artemis 2 mission Orion will carry a laser system called Optical to Orion, or O2O. NASA doesn’t plan to use O2O for any mission-critical communications. Its main task will be to stream 4K ultrahigh-definition video from the moon to a curious public back home. O2O will receive data at 80 Mb/s and transmit at 20 Mb/s while in lunar orbit. If you’re wondering why O2O will transmit at 20 Mb/s when a demonstration project six years ago was able to transmit at 622 Mb/s, it’s simply because the Orion developers “never asked us to do 622,” says Farzana Khatri, a senior staff member in Lincoln Lab’s optical communications group. Cornwell confirms that O2O’s link from Earth will deliver a minimum of 80 Mb/s, though the system is capable of higher data rates.
If successful, O2O will open the door for data-heavy communications on future crewed missions, allowing for video chats with family, private consultations with doctors, or even just watching a live sports event during downtime. The more time people spend on the moon, the more important all of these connections will be to their mental well-being. And eventually, video will become mission critical for crews on board deep-space missions.
Before O2O can even be tested in space, it first has to survive the journey. Laser systems mounted on spacecraft use telescopes to send and receive signals. Those telescopes rely on a fiddly arrangement of mirrors and other moving parts. O2O’s telescope will use an off-axis Cassegrain design, a type of telescope with two mirrors to focus the captured light, mounted on a rotating gimbal. Lincoln Lab researchers selected the design because it will allow them to separate the telescope from the optical transceiver, making the entire system more modular. The engineers must ensure that the Space Launch System rocket carrying Orion won’t shake the whole delicate arrangement apart. The researchers at Lincoln Lab have developed clasps and mounts that they hope will reduce vibrations and keep everything intact during the tumultuous launch.
Once O2O is in space, it will have to be precisely aimed. It’s hard to miss a receiver when your radio signal has a cross section the size of a large country. A 6-km-diameter signal, on the other hand, could miss Earth entirely with just a slight bump from the spacecraft. “If you [use] a laser pointer when you’re nervous and your hand is shaking, it’s going to go all over the place,” says Cornwell.
Orion’s onboard equipment will also generate constant minuscule vibrations, any one of which would be enough to throw off an optical signal. So engineers at NASA and Lincoln Lab will place the optical system on an antijitter platform. The platform measures the jitters from the spacecraft and produces an opposite pattern of vibrations to cancel them out—“like noise-canceling headphones,” Cornwell says.
One final hurdle for O2O will be dealing with any cloud cover back on Earth. Infrared wavelengths, like the O2O’s 1,550 nm, are easily absorbed by clouds. A laser beam might travel the nearly 400,000 km from the moon without incident, only to be blocked just above Earth’s surface. Today, the best defense against losing a signal to a passing stratocumulus is to beam transmissions to multiple receivers. O2O, for example, will use ground stations at Table Mountain, Calif., and White Sands, N.M.
The Gateway, scheduled to be built in the 2020s, will present a far bigger opportunity for high-speed laser communications in space. NASA, with help from its Canadian, European, Japanese, and Russian counterparts, will place this space station in orbit around the moon; the station will serve as a staging area and communications relay for lunar research.
NASA’s Schier suspects that research and technology demonstrations on the Gateway could generate 5 to 8 Gb/s of data that will need to be sent back to Earth. That data rate would dwarf the transmission speed of anything in space right now—the International Space Station (ISS) sends data to Earth at 25 Mb/s. “[Five to 8 Gb/s is] the kind of thing that if you turned everything on in the [ISS], you’d be able to run it for 2 seconds before you overran the buffers,” Schier says.
The Gateway offers an opportunity to build a permanent optical trunk line between Earth and the moon. One thing NASA would like to use the Gateway for is transmitting positioning, navigation, and timing information to vehicles on the lunar surface. “A cellphone in your pocket needs to see four GPS satellites,” says Schier. “We’re not going to have that around the moon.” Instead, a single beam from the Gateway could provide a lunar rover with accurate distance, azimuth, and timing to find its exact position on the surface.
What’s more, using optical communications could free up radio spectrum for scientific research. Robinson points out that the far side of the moon is an optimal spot to build a radio telescope, because it would be shielded from the chatter coming from Earth. (In fact, radio astronomers are already planning such an observatory: Our article “Rovers Will Unroll a Telescope on the Moon’s Far Side” explains their scheme.) If all the communication systems around the moon were optical, he says, there’d be nothing to corrupt the observations.
Beyond that, scientists and engineers still aren’t sure what else they’ll do with the Gateway’s potential data speeds. “A lot of this, we’re still studying,” says Cornwell.
In the coming years, other missions will test whether laser communications work well in deep space. NASA’s mission to the asteroid Psyche, for instance, will help determine how precisely an optical communications system can be pointed and how powerful the lasers can be before they start damaging the telescopes used to transmit the signals. But closer to home, the communications needed to work and live on the moon can be provided only by lasers. Fortunately, the future of those lasers looks bright.
This article appears in the July 2019 print issue as “Phoning Home, With Lasers.”
Some think automated radio emails are mucking up the spectrum reserved for amateur radio, while others say these new offerings provide a useful service
Like many amateur radio fans his age, Ron Kolarik, 71, still recalls the “pure magic” of his first ham experience nearly 60 years ago. Lately, though, encrypted messages have begun to infiltrate the amateur bands in ways that he says are antithetical to the spirit of this beloved hobby.
So Kolarik filed a petition, RM-11831 [PDF], to the U.S. Federal Communications Commission (FCC) proposing a rule change to “Reduce Interference and Add Transparency to Digital Data Communications.” And as the proposal makes its way through the FCC’s process, it has stirred up heated debate that goes straight to the heart of what ham radio is, and ought to be.
The core questions: Should amateur radio—and its precious spectrum—be protected purely as a hobby, or is it a utility that delivers data traffic? Or is it both? And who gets to decide?
AI, 5G, and the IoT will allow factories to produce new goods on the fly
The future of manufacturing is software defined. You need look no further than ABB to understand why companies are turning to 5G networks, artificial intelligence, and computer vision. The Swiss company is using these new tools to boost reliability and agility in its nearly 300 factories around the world, which produce a host of goods, from simple plastic zip ties to complex robotic arms.
For ABB and other companies pushing software-defined networking, it’s all about being safer while adapting to a growing clamor for personalized products.
When it comes to safety, adding more sensors to machines and deploying AI can make the end product more consistently reliable. At its Heidelberg factory, for example, ABB makes circuit breakers. But even at 99.999 percent reliability, the rare faulty circuit breakers leaving ABB’s factory would still kill 3,000 people a year, according to Guido Jouret, the company’s chief digital officer.
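The scale of that claim is easier to appreciate with a back-of-envelope calculation. The production volume below is an assumption for illustration only, not a figure from ABB:

```python
# Back-of-envelope: how many defective units slip through at a given
# reliability level. The annual volume here is assumed for illustration,
# not an actual ABB figure.
def faulty_units(reliability, units_per_year):
    """Expected number of defective units shipped per year."""
    return units_per_year * (1.0 - reliability)

# At 99.999 percent ("five nines") reliability, 1 in 100,000 units is
# faulty -- roughly 10 defective breakers for every million produced.
print(faulty_units(0.99999, 1_000_000))
```

Multiply those escapes across millions of installed breakers protecting people from electrical faults, and even five-nines reliability leaves a meaningful number of dangerous devices in the field.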
People can’t achieve 100 percent reliability when making and inspecting the completed circuit breakers—but a camera with machine learning can. When that camera detects any sort of variation, factory managers can go back to the machine to figure out what’s causing that defect.
Boosting safety and reliability isn’t exactly news to anyone who has followed the adoption of Japanese Kaizen or Six Sigma manufacturing, efforts to improve reliability and reduce waste, in the automotive industry. But adding automation and robots is becoming more important today as our culture increasingly wants customized products. These so-called lean-manufacturing methods allow factory managers to reconfigure their production lines on the fly.
“It’s not always about being more efficient,” Jouret says. “It’s about being more agile.”
Kiva Allgood, the head of Internet of Things and automotive at Ericsson, calls the shift from efficiency to agility a move away from economies of scale toward an economy of one. In other words, the inefficiencies traditionally associated with making low quantities of goods will no longer apply. She saw this change coming as an executive at General Electric. Now she’s working on the wireless technology that will help make this possible.
But before we can reprogram the factory floor, we have to understand it. That starts with individual machines. We’ll need manufacturing equipment with sensors measuring both the machine’s work and the machine’s health. This is the stage where many manufacturers are today.
The factory should also have sensors that provide context to the overall environment, including temperature, workers’ movements, and more. Armed with that understanding as well as computer-vision algorithms designed to detect flaws in the manufactured product, it will become possible to quickly repurpose robots to make something new.
Perhaps more interestingly, future agile factories will remove the wires littering factory floors. Historically, factory automation has meant building a rigidly defined manufacturing line dictated by the robots making the product. But with developing tech, factories will free those robots from their data and power wires, and replace the wires with low-latency wireless 5G networks. Then, factories can turn days-long reconfiguration efforts into an overnight project.
By emphasizing agility over efficiency, the factories of the future will be able to turn on a dime to meet the demands of our fast-paced society.
This article appears in the July 2019 print issue as “One Factory Fits All.”
The company was able to respond quickly because it had already begun tests in the country
When a magnitude 8.0 earthquake struck Peru on Sunday, it wreaked havoc on the country’s communications infrastructure. Within 48 hours, though, people in affected regions could use their mobile phones again. Loon, the Alphabet company, was there delivering wireless service by balloon.
Such a rapid response was possible because Loon happened to be in the country, testing its equipment while working out a deal with provider Telefonica. Both terrestrial infrastructure and the balloons themselves were already in place, and Loon simply had to reorganize its balloons to deliver service on short notice.
Modern wireless tech isn’t just for communications. It can also sense a person’s breathing and heart rate, even gauge emotions
When I was a boy, I secretly hoped I would discover latent within me some amazing superpower—say, X-ray vision or the ability to read people’s minds. Lots of kids have such dreams. But even my fertile young brain couldn’t imagine that one day I would help transform such superpowers into reality. Nor could I conceive of the possibility that I would demonstrate these abilities to the president of the United States. And yet two decades later, that’s exactly what happened.
There was, of course, no magic or science fiction involved, just new algorithms and clever engineering, using wireless technology that senses the reflections of radio waves emanating from a nearby transmitter. The approach my MIT colleagues and I are pursuing relies on inexpensive equipment that is easy to install—no more difficult than setting up a Wi-Fi router.
These results are now being applied in the real world, helping medical researchers and clinicians to better gauge the progression of diseases that affect gait and mobility. And devices based on this technology will soon become commercially available. In the future, they could be used for monitoring elderly people at home, sending an alert if someone has fallen. Unlike users of today’s medical alert systems, the people being monitored won’t have to wear a radio-equipped bracelet or pendant. The technology could also be used to monitor the breathing and heartbeat of newborns, without having to put sensors in contact with the infants’ fragile skin.
You’re probably wondering how this radio-based sensing technology works. If so, read on, and I will explain by telling the story of how we managed to push our system to increasing levels of sensitivity and sophistication.
It all started in 2012, shortly after I became a graduate student at MIT. My faculty advisor, Dina Katabi, and I were working on a way to allow Wi-Fi networks to carry data faster. We mounted a Wi-Fi device on a mobile robot and let the robot navigate itself to the spot in the room where data throughput was highest. Every once in a while, our throughput numbers would mysteriously plummet. Eventually we realized that when someone was walking in the adjacent hallway, the person’s presence disrupted the wireless signals in our room.
We should have seen this coming. Wireless communication systems are notoriously vulnerable to electromagnetic noise and interference, which engineers work hard to combat. But seeing these effects firsthand got us thinking along completely different lines about our research. We wondered whether this “noise,” caused by passersby, could serve as a new source of information about the nearby environment. Could we take a Wi-Fi device, point it at a wall, and see on a computer screen how someone behind the wall was moving?
That should be possible, we figured. After all, walls don’t block wireless signals. You can get a Wi-Fi connection even when the router is in another room. And if there’s a person on the other side of a wall, the wireless signal you send out on this side will reflect off his or her body. Naturally, after the signal traverses the wall, gets reflected back, and crosses through the wall again, it will be very attenuated. But if we could somehow register these minute reflections, we would, in some sense, be able to see through the wall.
Using radio waves to detect what’s on the other side of a wall has been done before, but with sophisticated radar equipment and expensive antenna arrays. We wanted to use equipment not much different from the kind you’d use to create a Wi-Fi local area network in your home.
As we started experimenting with this idea, we discovered a host of practical complications. The first came from the wall itself, whose reflections were 10,000 to 100,000 times as strong as any reflection coming from beyond it. Another challenge was that wireless signals bounce off not only the wall and the human body but also other objects, including tables, chairs, ceilings, and floors. So we had to come up with a way to cancel out all these other reflections and keep only those from someone behind the wall.
To do this, we initially used two transmitters and one receiver. First, we sent off a signal from one transmitter and measured the reflections that came back to our receiver. The received signal was dominated by a large reflection coming off the wall.
We did the same with the second transmitter. The signal it received was also dominated by the strong reflection from the wall, but the magnitude of the reflection and the delay between transmitted and reflected signals were slightly different.
Then we adjusted the signal given off by the first transmitter so that its reflections would cancel the reflections created by the second transmitter. Once we did that, the receiver didn’t register the giant reflection from the wall. Only the reflections that didn’t get canceled, such as those from someone moving behind the wall, would register. We could then boost the signal sent by both transmitters without overloading the receiver with the wall’s reflections. Indeed, we now had a system that canceled reflections from all stationary objects.
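The cancellation idea can be sketched with complex baseband gains. The numbers below are illustrative, not the MIT team's actual hardware parameters:

```python
import numpy as np

# Sketch of the two-transmitter nulling idea (illustrative numbers):
# each static reflector contributes a complex gain h (attenuation plus
# phase shift) from each transmitter to the receiver. Scaling
# transmitter 1 by a = -h2 / h1 makes the two wall reflections cancel
# at the receiver, while a person's different, time-varying reflection
# does not cancel.
h1 = 0.8 * np.exp(1j * 0.3)   # wall path: transmitter 1 -> receiver
h2 = 0.7 * np.exp(1j * 1.1)   # wall path: transmitter 2 -> receiver
a = -h2 / h1                  # complex weight applied to transmitter 1

wall = a * h1 + h2            # combined wall reflection after nulling
print(abs(wall))              # ~0: the wall reflection is canceled

# A person's reflection arrives via different (and changing) paths,
# so it survives the cancellation:
g1 = 0.001 * np.exp(1j * 0.5)
g2 = 0.0012 * np.exp(1j * 2.0)
person = a * g1 + g2
print(abs(person) > 0)        # True: the moving target still registers
```

Because the wall's gains are fixed, the weight `a` needs to be found only once during calibration; anything whose gains drift from the calibrated values, such as a moving person, leaks through the null.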
Next, we concentrated on detecting a person as he or she moved around the adjacent room. For that, we used a technique called inverse synthetic aperture radar, which is sometimes used for maritime surveillance and radar astronomy. With our simple equipment, that strategy was reasonably good at detecting whether there was a person moving behind a wall and even gauging the direction in which the person was moving. But it didn’t show the person’s location.
As the research evolved, and our team grew to include fellow graduate student Zachary Kabelac and Professor Robert Miller, we modified our system so that it included a larger number of antennas and operated less like a Wi-Fi router and more like conventional radar.
People often think that radar equipment emits a brief radio pulse and then measures the delay in the reflections that come back. It can, but that’s technically hard to do. We used an easier method, called frequency-modulated continuous-wave radar, which measures distance by comparing the frequency of the transmitted and reflected waves. Our system operated between about 5 and 7 gigahertz, transmitted signals that were just 0.1 percent the strength of Wi-Fi, and could determine the distance to an object to within a few centimeters.
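The arithmetic behind FMCW ranging is compact enough to sketch. A chirp sweeps a bandwidth B over a duration T; the echo arrives delayed by the round trip, so it differs from the outgoing chirp by a "beat" frequency proportional to distance. The parameters below are illustrative, not the actual device's:

```python
# FMCW ranging sketch (illustrative parameters): the reflected chirp is
# delayed by tau = 2R/c, producing a beat frequency f_b = (B/T) * tau.
# Inverting that relation recovers the range R.
C = 3e8          # speed of light, m/s

def range_from_beat(f_beat, bandwidth, chirp_time):
    """Distance to a reflector from the measured beat frequency."""
    tau = f_beat * chirp_time / bandwidth   # round-trip delay, s
    return C * tau / 2.0                    # one-way distance, m

# A 2-GHz sweep (roughly spanning the 5-7 GHz band mentioned above)
# lasting 1 ms; a reflector 4 m away delays the echo by about 26.7 ns,
# giving a beat frequency of about 53.3 kHz:
B, T = 2e9, 1e-3
f_b = (B / T) * (2 * 4.0 / C)
print(range_from_beat(f_b, B, T))   # ~4.0 m
```

The centimeter-level precision quoted above follows from the same relation: the wider the swept bandwidth, the finer the resolvable difference in beat frequency, and hence in range.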
Using one transmitting antenna and multiple receiving antennas mounted at different positions allowed us to measure radio reflections for each transmit-receive antenna pair. Those measurements showed the amount of time it took for a radio signal to leave the transmitter, reach the person in the next room, and reflect back to the receiver—typically several tens of nanoseconds. Multiplying that very short time interval by the speed of light gives the distance the signal traveled from the transmitter to the person and back to the receiver. As you might recall from middle-school geometry class, that distance defines an ellipse, with the two antennas located on the ellipse’s two foci. The person generating the reflection in the next room must be located somewhere on that ellipse.
With two receiving antennas, we could map out two such ellipses, which intersected at the person’s location. With more than two receiving antennas, it was possible to locate the person in 3D—you could tell whether a person was standing up or lying on the floor, for example. Things get trickier if you want to locate multiple people this way, but as we later showed, with enough antennas, it’s doable.
It’s easy to think of applications for such a system. Smart homes could track the location of their occupants and adjust the heating and cooling of different rooms. You could monitor elderly people, to be sure they hadn’t fallen or otherwise become immobilized, without requiring these seniors to wear a radio transmitter. We even developed a way for our system to track someone’s arm gestures, enabling the user to control lights or appliances by pointing at them.
The natural next step for our research team—which by this time also included graduate students Chen-Yu Hsu and Hongzi Mao, and Professor Frédo Durand—was to capture a human silhouette through the wall.
The fundamental challenge here was that at Wi-Fi frequencies, the reflections from some body parts would bounce back at the receiving antenna, while other reflections would go off in other directions. So our wireless imaging device would capture some body parts but not others, and we didn’t know which body parts they were.
Our solution was quite simple: We aggregated the measurements over time. That works because as a person moves, different body parts as well as different perspectives of the same body part get exposed to the radio waves. We designed an algorithm that uses a model of the human body to stitch together a series of reflection snapshots. Our device was then able to reconstruct a coarse human silhouette, showing the location of a person’s head, chest, arms, and feet.
While this isn’t the X-ray vision of Superman fame, the low resolution might be considered a feature rather than a bug, given the concerns people have about privacy. And we later showed that the ghostly images were of sufficient resolution to identify different people with the help of a machine-learning classifier. We also showed that the system could be used to track the palm of a user to within a couple of centimeters, which means we might someday be able to detect hand gestures.
Initially, we assumed that our system could track only a person who was moving. Then one day, I asked the test subject to remain still during the initialization stage of our device. Our system accurately registered his location despite the fact that we had designed it to ignore static objects. We were surprised.
Looking more closely at the output of the device, we realized that a crude radio image of our subject was appearing and disappearing—with a periodicity that matched his breathing. We had not realized that our system could capture human breathing in a typical indoor setting using such low-power wireless signals. The reason this works, of course, is that the slight movements associated with chest expansion and contraction affect the wireless signals. Realizing this, we refined our system and algorithms to accurately monitor breathing.
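Extracting a breathing rate from that periodicity amounts to finding the dominant low-frequency peak in the reflected signal. The sketch below uses a synthetic sinusoid in place of real radio reflections:

```python
import numpy as np

# Sketch: estimate breathing rate from the periodic rise and fall of
# the reflection magnitude. The input here is synthetic; the real
# system derives this waveform from radio reflections off the chest.
fs = 20.0                                  # samples per second
t = np.arange(0, 60, 1 / fs)               # one minute of data
breaths_per_min = 14
signal = 1.0 + 0.05 * np.sin(2 * np.pi * (breaths_per_min / 60) * t)

# FFT of the zero-mean signal; the strongest peak is the breathing rate.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(60 * freqs[np.argmax(spectrum)])     # ~14 breaths per minute
```

In practice the chest-motion signal is buried in noise and mixed with other movement, which is why the refined algorithms mentioned above were needed; but the underlying spectral-peak idea is the same.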
Earlier research in the literature had shown the possibility of using radar to detect breathing and heartbeats, and after scrutinizing the received signals more closely, we discovered that we could also measure someone’s pulse. That’s because, as the heart pumps blood, the resulting forces cause different body parts to oscillate in subtle ways. Although the motions are tiny, our algorithms could zoom in on them and track them with high accuracy, even in environments with lots of radio noise and with multiple subjects moving around, something earlier researchers hadn’t achieved.
By this point it was clear that there were important real-world applications, so we were excited to spin off a company, which we called Emerald, to commercialize our research. Wearing our Emerald hats, we participated in the MIT $100K Entrepreneurship Competition and made it to the finals. While we didn’t win, we did get invited to showcase our device in August 2015 at the White House’s “Demo Day,” an event that President Barack Obama organized to show off innovations from all over the United States.
It was thrilling to demonstrate our work to the president, who watched as our system detected my falling down and monitored my breathing and heartbeat. He noted that the system would make for a good baby monitor. Indeed, one of the most interesting tests we had done was with a sleeping baby.
Video on a typical baby monitor doesn’t show much. But outfitted with our device, such a monitor could measure the baby’s breathing and heart rate without difficulty. This approach could also be used in hospitals to monitor the vital signs of neonatal and premature babies. These infants have very sensitive skin, making it problematic to attach traditional sensors to them.
About three years ago, we decided to try sensing human emotions with wireless signals. And why not? When a person is excited, his or her heart rate increases; when blissful, the heart rate declines. But we quickly realized that breathing and heart rate alone would not be sufficient. After all, our heart rates are also high when we’re angry and low when we’re sad.
Looking at past research in affective computing—the field of study that tries to recognize human emotions from such things as video feeds, images, voice, electroencephalography (EEG), and electrocardiography (ECG)—we learned that the most important vital sign for recognizing human emotions is the millisecond variation in the intervals between heartbeats. That’s a lot harder to measure than average heart rate. And in contrast to ECG signals, which have very sharp peaks, the shape of a heartbeat signal on our wireless device isn’t known ahead of time, and the signal is quite noisy. To overcome these challenges, we designed a system that learns the shape of the heartbeat signal from the pattern of wireless reflections and then uses that shape to recover the length of each individual beat.
Using features from these heartbeat signals as well as from the person’s breathing patterns, we trained a machine-learning system to classify them into one of four fundamental emotional states: sadness, anger, pleasure, and joy. Sadness and anger are both negative emotions, but sadness is a calm emotion, whereas anger is associated with excitement. Pleasure and joy, both positive emotions, are similarly associated with calm and excited states.
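The four emotions form a two-by-two grid along a calm/excited ("arousal") axis and a negative/positive ("valence") axis. The toy classifier below is a schematic stand-in for that structure, not the published machine-learning system:

```python
# Schematic version of the four-emotion layout described above: features
# derived from heartbeat and breathing are assumed to have been reduced
# to a valence score (negative vs. positive) and an arousal score
# (calm vs. excited). Both the feature extraction and this quadrant
# lookup are stand-ins, not the published classifier.
EMOTIONS = {
    (-1, +1): "anger",     # negative, excited
    (-1, -1): "sadness",   # negative, calm
    (+1, +1): "joy",       # positive, excited
    (+1, -1): "pleasure",  # positive, calm
}

def classify(valence, arousal):
    """Nearest-quadrant label for a (valence, arousal) score pair."""
    key = (1 if valence >= 0 else -1, 1 if arousal >= 0 else -1)
    return EMOTIONS[key]

print(classify(-0.7, 0.9))   # anger
print(classify(0.4, -0.2))   # pleasure
```

The hard part, as the article notes, is not the final quadrant lookup but recovering millisecond-accurate heartbeat intervals from noisy wireless reflections to place a subject on those two axes.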
Testing our system on different people, we demonstrated that it could accurately sense emotions 87 percent of the time when the testing and training were done on the same subject. When it hadn’t been trained on the subject’s data, it still recognized the person’s emotions with more than 73 percent accuracy.
In October 2016, fellow graduate student Mingmin Zhao, Katabi, and I published a scholarly article [PDF] about these results, which got picked up in the popular press. A few months later, our research inspired an episode of the U.S. sitcom “The Big Bang Theory.” In the episode, the characters supposedly borrow the device we’d developed to try to improve Sheldon’s emotional intelligence.
While it’s unlikely that a wireless device could ever help somebody in this way, using wireless signals to recognize basic human mental states could have other practical applications. For example, it might help a virtual assistant like Amazon’s Alexa recognize a user’s emotional state.
There are many other possible applications that we’ve only just started to explore. Today, Emerald’s prototypes are in more than 200 homes, where they are monitoring test subjects’ sleep and gait. Doctors at Boston Medical Center, Brigham and Women’s Hospital, Massachusetts General Hospital, and elsewhere will use the data to study disease progression in patients with Alzheimer’s, Parkinson’s, and multiple sclerosis. We hope that in the not-too-distant future, anyone will be able to purchase an Emerald device.
When someone asks me what’s next for wireless sensing, I like to answer by asking them what their favorite superpower is. Very possibly, that’s where this technology is going.
This article appears in the June 2019 print issue as “Seeing With Radio.”
About the Author
Fadel Adib is an assistant professor at MIT and founding director of the Signal Kinetics research group at the MIT Media Lab.
DARPA’s Spectrum Collaboration Challenge demonstrates that autonomous radios can manage spectrum better than humans can
In the early 2000s, Bluetooth almost met an untimely end. The first Bluetooth devices struggled to avoid interfering with Wi-Fi routers, a higher-powered, more-established cohort on the radio spectrum, with which Bluetooth devices shared frequencies. Bluetooth engineers eventually modified their standard—and saved their wireless tech from early extinction—by developing frequency-hopping techniques for Bluetooth devices, which shifted operation to unoccupied bands upon detecting Wi-Fi signals.
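The fix Bluetooth adopted can be sketched in a few lines: hop pseudo-randomly, but only over channels not flagged as occupied by a neighboring system. The channel numbering is Bluetooth-like, but the sensing step is simplified for illustration:

```python
import random

# Sketch of adaptive frequency hopping in the spirit of Bluetooth's fix:
# hop pseudo-randomly over the channels NOT flagged as occupied by a
# neighbor such as Wi-Fi. How "bad" channels get detected is abstracted
# away here; real devices classify channels from error statistics.
CHANNELS = list(range(79))            # Bluetooth-style 1-MHz channels

def next_hop(bad_channels, rng):
    """Pick the next channel, avoiding those where interference was heard."""
    good = [ch for ch in CHANNELS if ch not in bad_channels]
    return rng.choice(good)

rng = random.Random(42)
wifi_occupied = set(range(0, 22))     # e.g. one ~20-MHz-wide Wi-Fi channel
hops = [next_hop(wifi_occupied, rng) for _ in range(1000)]
print(all(ch >= 22 for ch in hops))   # True: the occupied band is avoided
```

Note what this scheme presumes: that most of the band is idle. That assumption is exactly what breaks down as spectrum fills up, which is the problem SC2 set out to address.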
Frequency hopping is just one way to avoid interference, a problem that has plagued radio since its beginning. Long ago, regulators learned to manage spectrum so that in the emerging wireless ecosystem, different radio users were allocated different frequencies for their exclusive use. While this practice avoids the challenges of detecting transmissions and shifting frequencies on the fly, it makes very inefficient use of spectrum, as portions lie fallow.
Today, demand is soaring for the finite resource of radio spectrum. Over the last several years, wireless data transmission has grown by roughly 50 percent per year, driven largely by people streaming videos and scrolling through social media on their smartphones. To meet this demand, we must allocate spectrum as efficiently as possible. Increasingly, that means that wireless technologies cannot have exclusive frequencies, but rather must share available spectrum. Frequency hopping, which Bluetooth uses, will be part of the solution, but to cope with the surging demand we are going to have to go far beyond it.
To tackle spectrum scarcity, I created the Spectrum Collaboration Challenge (SC2) at the U.S. Defense Advanced Research Projects Agency (DARPA), where I am a program manager. SC2 is a three-year open competition in which teams from around the world are rethinking the spectrum-management problem with a clean slate. Teams are designing new radios that use artificial intelligence (AI) to learn how to share spectrum with their competitors, with the ultimate goal of increasing overall data throughput. These teams are vying for nearly US $4 million in prizes to be awarded at the SC2 championship this coming October in Los Angeles. Thanks to two years of competition, we have witnessed, for the first time, autonomous radios collectively sharing wireless spectrum to transmit far more data than would be possible by assigning exclusive frequencies to each radio.
Before SC2, various DARPA projects had demonstrated that a handful of radios could autonomously manage spectrum by frequency hopping, as Bluetooth does, in order to avoid one another. So why can’t we just extend the use of the frequency-hopping technique to a wider array of radios, and solve the problem of limited spectrum that way?
Unfortunately, frequency hopping works only up to a point. It depends on the availability of unused spectrum, and if there are too many radios trying to send signals, there won’t be much, if any, unused spectrum available. To make SC2 work, we realized, we would need to test competing teams on scenarios with dozens of radios trying to share a spectrum band simultaneously. That way, we could ensure that each radio couldn’t have its own dedicated channel, because there wouldn’t be enough spectrum to go around.
With that in mind, we developed scenarios that would be played out in a series of round-robin matches, in which three, four, or five independent radio networks all broadcast together in a roughly one-square-kilometer area. The radio networks would be permitted access to the same frequencies, and each network would use an AI system to figure out how to share those frequencies with the other networks. We would determine how successful a given match was based on how many tasks, such as phone calls and video streams, were completed. A group of radio networks completing more tasks than another group would be crowned the winner for that match. However, our main goal was to see teams develop AI-managed radio networks that would be capable of completing more tasks collectively than would be possible if each radio was using an exclusive frequency band.
We realized quickly that placing these radios in the real world would have been impractical. We would never be able to guarantee that the wireless conditions would be the same for each team that competed. Also, moving individual radios around to set up each scenario and each match would have been far too complicated and time consuming.
So we built Colosseum, the world’s largest radio-frequency emulation test-bed. Currently housed in Laurel, Md., at the Johns Hopkins University Applied Physics Laboratory, Colosseum occupies 21 server racks, consumes 65 kilowatts, and requires roughly the same amount of cooling as 10 large homes. It can emulate more than 65,000 unique interactions, such as text messages or video streams, between 128 radios at once. There are 64 field-programmable gate arrays that handle the emulation by together performing more than 150 trillion floating-point operations per second (150 teraflops).
For each match, we plug in radios so that they can “broadcast” radio-frequency signals straight into Colosseum. This test-bed has enough computing power to calculate how those signals will behave, according to a detailed mathematical model of a given environment. For example, within Colosseum are emulated walls, off which signals “bounce.” There are emulated rainstorms and ponds, within which signals are partly “absorbed.”
The emulation provides all the information necessary for the teams’ AIs to make appropriate decisions based on their observations during each emulated scenario. Faced with a cellphone jammer that is flooding a frequency with meaningless noise, for example, an AI might choose to change its frequency to one not affected by the jammer.
It’s one thing to build an environment for AIs to collaboratively manage spectrum, but it’s another thing entirely to create those AIs. To understand how the teams competing in SC2 are building these AI systems, you need a bit of background on how AI has developed in the past several decades.
Broadly speaking, researchers have advanced AI in a couple of “waves” that have redefined how these systems learn. The first wave of AI was expert systems. These AIs are created by interviewing experts in a particular area and deriving a set of rules from them that an autonomous system can use to make decisions while trying to accomplish something. These AIs excel at problems, such as chess, where the rules can be written down in a straightforward fashion. In fact, one of the best-known examples of first-wave AI is IBM’s Deep Blue, which first beat chess master Garry Kasparov in 1997.
There’s a newer, second wave of AI that relies on huge amounts of data, rather than human expertise, to learn the rules of a given task. Second-wave AI is particularly good at problems where humans have trouble writing down all the nuances of a problem and where there often seem to be more exceptions than rules. Recognizing speech is an example of such a problem. These systems ingest complex raw data, such as audio signals, and then make decisions about the data, such as what words were spoken. This wave of AI is the type we find in the speech recognition used by digital assistants like Siri and Alexa.
Today, neither first- nor second-wave AI is used for managing wireless spectrum. That meant that we could consider both waves of AI and the ways in which researchers teach those AIs how to solve problems, to find the best solution to our problem. Ultimately, it is easiest to treat spectrum management as a reinforcement-learning problem, in which we reward the AI when it succeeds and penalize it when it fails. For example, the AI may receive one point for successfully transmitting data, or lose one point for a transmission that was dropped. By accumulating points during a training period, the AI remembers successes and tries to repeat them, while also moving away from unsuccessful tactics.
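The reward scheme described above can be illustrated with a minimal reinforcement-learning sketch: a radio earning +1 per successful transmission and -1 per drop, learning channel values with an epsilon-greedy bandit. The channel success probabilities are invented for the demo, and real SC2 radios face a far richer state space:

```python
import random

# Minimal reinforcement-learning sketch of the +1/-1 reward scheme: an
# epsilon-greedy bandit learns a running value estimate per channel and
# gravitates toward the channel where transmissions succeed most often.
def train(success_prob, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    n_ch = len(success_prob)
    value = [0.0] * n_ch              # running reward estimate per channel
    count = [0] * n_ch
    for _ in range(steps):
        if rng.random() < eps:        # explore: try a random channel
            ch = rng.randrange(n_ch)
        else:                         # exploit: use the best-looking channel
            ch = max(range(n_ch), key=lambda c: value[c])
        reward = 1 if rng.random() < success_prob[ch] else -1
        count[ch] += 1
        value[ch] += (reward - value[ch]) / count[ch]  # incremental mean
    return value

# Channel 2 is the least congested, so its learned value ends up highest:
values = train([0.3, 0.5, 0.9, 0.4])
print(values.index(max(values)))      # 2
```

A single radio optimizing its own reward this way is only half the story; as the next paragraph explains, the SC2 setting rewards radios that maximize their points while leaving room for others to do the same.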
In our competition, a dropped transmission often happens because of interference from another radio’s transmission. So we also have to think of wireless management as a collaborative challenge, because there are multiple radios broadcasting at the same time. The key to AI-managed radios performing better than traditional, static allocation is developing AIs that can maximize their own points while leaving room for the other AIs to do the same. Teams are rewarded when they make as many successful transmissions as possible without constantly bumping into one another in the pursuit of available spectrum, which would prevent them all from maximizing use of that spectrum.
As if that’s not difficult enough, there’s one additional wrinkle that makes spectrum collaboration harder than many similar problems. Imagine playing a game of pickup basketball with people you’ve never met before: Your team’s ability to play together is not going to be anywhere near as good as that of teammates who have trained together for years. To date, the most successful challenges involving multiple agents have been ones where AIs have been trained together. A recent example was a project in 2018, in which the nonprofit AI research company OpenAI demonstrated that a group of five AIs could beat a team of human players in the video game Dota 2.
It’s 9 December 2018, and my DARPA colleagues and I are finally getting our chance to learn if a group of AIs can succeed at such a complex multiagent problem. We’re huddled around a set of computers in a hotel conference room, just a block away from where Colosseum is installed. The hotel has been our command center for a week now, and we’ve analyzed more than 300 matches to determine the top-scoring teams. In three days, we expect to award up to eight $750,000 prizes, one for each of the top teams. But for the moment, we don’t actually know how many prizes we’re going to be handing out.
In the first qualifying event a year earlier, teams were judged solely on their relative rankings. This time, however, to win an award, the top teams must also demonstrate that their radios can manage spectrum better than by using traditional dedicated channels.
To compare autonomous radios against exclusive-frequency management, we designed one last set of matches. First, we took a baseline, in which each team was assigned exclusive frequencies, to measure how much data they could transmit. Then we removed the restrictions to see whether a team’s network could transmit more data without hampering the four other radio networks sharing the spectrum.
In the hotel room, we’re waiting anxiously for the last set of matches to complete. If no one is able to clear the bar that we’ve set for them, two years of hard work could be dashed. It occurs to us that in our zeal, we have no backup plan should everyone fail. And it doesn’t necessarily soothe our nerves that by this point in SC2’s existence, we’ve started to see the limitations of some approaches.
Fortunately, we’ve also begun to spot some of the keys to success. When the competition began, nearly all of the teams started with first-wave AI approaches. This made sense as a starting point—remember that there are no AI systems being used to manage spectrum. In this first-wave approach, the teams are trying to write the general rules for collaboratively using spectrum.
Of course, each team has written slightly different rules, but every system they developed has some general principles in common. First, the systems should listen for which frequencies each network has asked to use. Second, from the remaining frequency bands, only one radio should be assigned to each band—and teams should be good neighbors by not claiming more than their fair share. And third, if there are no empty frequency bands, radios should select the ones with the least interference.
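Those three rules are simple enough to sketch directly. The function below is a hypothetical illustration of that rule set, not any team's actual code; the channel counts, the fair-share formula, and the interference scores are invented for the example.

```python
def pick_channels(n_channels, claimed, interference, n_networks, need):
    """Rule-based channel selection: avoid claimed bands, take no more than a
    fair share, and fall back to the least-interfered channels."""
    fair_share = n_channels // n_networks            # rule 2: don't hog spectrum
    budget = min(need, fair_share)
    free = [c for c in range(n_channels) if c not in claimed]   # rule 1: listen
    if len(free) >= budget:
        free.sort(key=lambda c: interference[c])     # prefer the quietest free bands
        return free[:budget]
    # rule 3: not enough empty bands, so take the least-interfered ones overall
    ranked = sorted(range(n_channels), key=lambda c: interference[c])
    return ranked[:budget]

# Ten channels, three already claimed, five networks sharing. We want three
# channels, but a fair share is only two, so we get the two quietest free bands.
picked = pick_channels(10, {0, 1, 2}, [5, 4, 3, 2, 1, 0, 1, 2, 3, 4], 5, 3)
```

Even this tidy version hints at the failure mode described next: the fair-share cap is applied unconditionally, whether or not anyone else wants the leftover spectrum.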
Unfortunately, these rules fail to catch all the idiosyncrasies of wireless management, which result in unintended consequences that hamper the radios’ ability to work together. During SC2, we’ve seen plenty of examples where these seemingly straightforward rules fail.
For example, remember the second rule, to be a good neighbor and not hog frequencies? In principle, this cooperative approach should provide opportunities for other radios to use more spectrum if they need it. In practice, we saw how this strategy goes awry: In one instance, three teams left a large amount of the spectrum completely unused.
Looking at the results, we realized that one team insisted on using no more than one-third of the spectrum. While this strategy was very altruistic, it also limited the connections they could make to complete their own tasks—and therefore limited their score as well. It got worse when another system noticed that the first system was not scoring enough points, so it limited its own spectrum use to allow the first system to use more, which the first system, bound by its own rule, would never do. Basically, the systems were being too deferential, and the result was wasted spectrum.
To fix that problem with a first-wave AI, the teams have to write another rule. And when that new rule results in another unexpected outcome, they deal with that by writing another rule. And so on. These constant surprises and consequent new rules are the main shortcoming of first-wave AIs. What can seem like a straightforward problem might end up being more difficult than it appeared to be.
Rather than rely on a few hard-and-fast rules, it seems that a better approach is for each radio to adapt its strategy based on the other radios with which it is sharing spectrum. In effect, the radio should develop an ever-growing series of rules by mining them from a large volume of data—the kind of data that Colosseum is good at generating. That’s why now, during this trial on 9 December 2018, we’re seeing teams shift to a second-wave AI approach. Several teams have built fledgling second-wave AI networks that can quickly characterize how the other networks are playing a match, and use this information to change their own radios’ rules on the fly.
When SC2 started, we suspected that many teams would take the simple approach of employing a “sense and avoid” strategy. This is what a Bluetooth device does when it discovers that the spectrum it wants is being used by a Wi-Fi router: It jumps to a new frequency. But Bluetooth’s frequency hopping works, in part, because Wi-Fi acts in a predictable way (that is, it broadcasts on a specific frequency and won’t change that behavior). However, in our competition, each team’s radios behave very differently and not at all predictably, making a sense-and-avoid strategy, well, senseless.
Instead, we’re seeing that a better approach is to predict what the spectrum will look like in the future. Then, a radio could use those predictions to decide which frequencies might open up—even if only for a moment or two, just enough to push through even a small amount of data. More precise predictions will allow collaborating radios to capitalize on every opportunity to transmit more data, without interfering by grabbing for the same frequency at the same time. Now our hope is that second-wave AIs can learn to predict the spectrum environment with enough precision to not let a single hertz go to waste.
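One crude way to make such predictions is to estimate, from recent sensing snapshots, how often each band has been idle, and bet on the band where idleness is most likely. The sketch below is our own toy illustration of the idea, far simpler than anything the competing teams built; the window size and the 0/1 busy-idle encoding are assumptions.

```python
def idle_probability(history, window=20):
    """Estimate, per channel, how often the band has been idle over the last
    `window` sensing snapshots (each snapshot: 1 = busy, 0 = idle)."""
    recent = history[-window:]
    n_ch = len(recent[0])
    return [sum(1 for snap in recent if snap[ch] == 0) / len(recent)
            for ch in range(n_ch)]

# Channel 1 has been idle in three of the last four snapshots, so a radio
# betting on the near future would try to squeeze its next burst in there.
hist = [[1, 0, 1], [1, 0, 1], [0, 0, 1], [1, 1, 1]]
probs = idle_probability(hist)
best = max(range(3), key=lambda c: probs[c])
```

A second-wave system would replace this simple frequency count with a learned model of how each neighboring network behaves, but the payoff is the same: a probability, per band and per moment, that a transmission will get through.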
Of course, all of this theory is useless if the AI-managed systems can’t outperform traditional allocation. That’s why we’re delighted to see, that night in the hotel room with the results rolling out of Colosseum, that six of the top eight teams have succeeded! The teams demonstrated that their radios, when they collaborated to share spectrum, could collectively deliver more data than if they had used exclusive frequencies. Three weeks later, four additional teams would do the same, bringing the total to 10.
We were, of course, ecstatic. But encouraging though the results are, it’s too soon to say when we might see radios using AI actively to manage their use of radio spectrum. The important thing to understand about the DARPA grand challenges is that they’re not about the state of technology at the end of the competition. Rather, the challenges are designed to determine whether a fundamental shift is possible. Look at DARPA’s autonomous driving Grand Challenge in 2004: It took another decade for autonomous technology to start being used in a very limited way in commercial cars.
That said, the results from our initial tournaments are promising. So far, we’ve found that when three radio networks share the spectrum, their predictions are much better than when four or five teams try to share the same amount. But we’re not done yet, and our teams are currently building even better systems. Perhaps, on 23 October 2019 at SC2’s live championship event at Mobile World Congress Americas, in Los Angeles, those systems will demonstrate, more successfully than ever before, that AI-operated radios can work together to create a new era of wireless communications.
This article appears in the June 2019 print issue as “AI Will Rule the Airwaves.”
About the Author
Paul Tilghman is a program manager in the Microsystems Technology Office of the Defense Advanced Research Projects Agency, where he’s been overseeing DARPA’s Spectrum Collaboration Challenge.
With more data, constructing buildings can look more like factory manufacturing
The construction industry can be a mess. When constructing any building, there are several steps you must take in a specific order, so a snag in one step tends to snowball into more problems down the line. You can’t start on drywall until the plumbers and electricians complete their work, for example, and if the drywall folks are behind, the crew working on the interior finish gets delayed even more.
Developers and general contractors hope that by adopting Internet of Things (IoT) solutions to cut costs, build faster, and use a limited labor pool more efficiently, they can turn the messy, fragmented world of building construction into something more closely resembling what it actually is—a manufacturing process.
“We’re looking at the most fragmented and nonstructured process ever, but it is still a manufacturing process,” says Meirav Oren, the CEO of Versatile Natures, an Israeli company that provides on-site data-collection technology to construction sites. The more you understand the process, says Oren, the better you are at automating it. In other conversations I’ve had with people in the construction sector, the focus isn’t on prefabricated housing or cool bricklaying robots. The focus is on turning construction into a regimented process that can be better understood and optimized.
Like agriculture—which has undergone its own revolution, thanks to connected tech—construction is labor intensive, dependent on environmental factors, and highly regulated. In farming today, lidar identifies insects while robots pick weeds with the aid of computer vision. The goal in agricultural tech is to make workers more efficient, rather than eliminating them. Construction technology startups, using artificial intelligence and the IoT, have a similar goal.
Oren’s goal, for example, is to make construction work more like a typical manufacturing process by using a sensor-packed device that’s mounted to a crane to track the flow of materials on a site. Versatile Natures’ devices also monitor environmental factors, such as wind speed, to make sure the crane isn’t pushed beyond its capabilities.
Another construction-tech startup, Pillar Technologies of New York City, aims to reduce the impact of on-site environments on workers’ safety and construction schedules. Pillar makes sensors that measure eight environmental metrics, including temperature, humidity, carbon monoxide, and particulates. The company then uses the gathered data to evaluate what is happening at the site and to make predictions about delays, such as whether the air is too humid to properly drywall a house.
Because many work crews sign up for multiple job sites, a delay at one site often means a delay at others. Alex Schwarzkopf, cofounder and CEO of Pillar, hopes that one day Pillar’s devices will use data to monitor construction progress and then inform general contractors in advance that the plumbers are behind, for example. That way, the contractor can reschedule the drywall group or help the plumbers work faster.
Construction is full of fragmented processes, and understanding each fragment can lead to an improvement of the whole. As Oren says, there is a lot of low-hanging fruit in the construction industry, which means that startups can attack a small individual problem and still make a big impact.
This article appears in the June 2019 print issue as “Deconstructing the Construction Industry.”
Validation of embedded connectivity in V2X designs and self-driving cars mark an inflection point in automotive wireless designs, opening the door to modular solutions based on off-the-shelf hardware and flexible software
Automotive OEMs and Tier-1 suppliers are now in the thick of the technology world’s two gigantic engineering challenges: connected cars and autonomous vehicles. That inevitably calls for more flexible development, test, validation, and verification programs that can quickly adapt to the changing technologies and standards.
For a start, the vehicle-to-everything (V2X) technology, which embodies the connected car movement, is also the point where the wireless industry most intersects with autonomous vehicles (AVs). Collision detection and avoidance is a classic example of this technology crossover between the AV and connected car technologies.
It shows how vehicles and infrastructure work in tandem for the creation of a smart motoring network. Here, at this technology crossroads, where V2X technology converges and collides with AV connectivity, reliability and low latency become even more critical requirements.
It is worth mentioning here that the demand for extreme reliability in stringent environments is already a precondition in automotive designs. When added to the connected car and self-driving vehicle design realms, well-tested connectivity becomes a major stepping stone.
The immensely complex hardware and software in a highly automated vehicle connected to the outside world also opens the door to malicious attacks from hackers and spoofs. And that calls for future-proof design solutions that demonstrate safeguards against hacking and spoofing attacks.
Not surprisingly, the convergence of connected cars and self-driving vehicles significantly expands development, test, validation and verification requirements. And that makes it imperative for engineers to employ highly integrated development frameworks covering multiple system aspects such as RF quality and protocol conformance.
Modular Test Solutions
Take the example of a V2X system that requires certification for radio frequency identification tags and readers in electronic toll collection systems. Here, design engineers must also ensure that this connected car application protects data privacy and prevents unauthorized access.
However, because V2X is a new technology, validation could entail higher costs at different development stages. The testing of communication equipment supporting different regional V2X standards could also require purchasing measurement instruments for each standard and design layer.
That is why test and prototype solutions based on modular hardware and software building blocks can prove more efficient and cost-effective: they let engineers explore new technologies, standards and architectural options (see Figure 2). Software-defined radio (SDR)-based test solutions, in particular, are flexible and cheaper in the long run.
The following sections will show how modular hardware and flexible software can help create RF calibration and validation solutions for autonomous and connected vehicles. It will explain how these highly customized systems can validate embedded wireless technology in V2X communications designed to save lives on the road.
URLLC Experimental Testbed
The immense amount of compute power involved in autonomous and connected car designs may lead to the expanded use of flexible SDR platforms for tackling the increasing complexity of embedded software and the rising number of usage scenarios. That is especially true when automotive engineers must carefully balance extreme-reliability demands with low-latency requirements.
It is a vital design consideration amid the massive breadth of inputs and outputs for multiple RF streams serving a diverse array of cameras and sensors in autonomous vehicles. Moreover, SDR platforms can efficiently validate connected vehicles’ RF links to each other and to roadside units for information regarding traffic and construction work.
That is why the ultra-reliable low-latency communications (URLLC) mechanism is becoming so critical in both V2X systems and autonomous vehicles. It boosts system capacity and network coverage while targeting 99.999% reliability with a latency of 1 ms.
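The tension between those two numbers can be made concrete with a back-of-the-envelope retransmission model. Assuming independent attempts with per-attempt success probability p (the 99 percent figure and the 0.3 ms per-attempt cost below are our own illustrative assumptions, not drawn from any standard), end-to-end reliability after k attempts is 1 - (1 - p)^k, and the latency budget caps how large k can be.

```python
import math

def attempts_for_reliability(p_attempt, target=0.99999):
    """Smallest k with 1 - (1 - p_attempt)**k >= target,
    assuming independent transmission attempts."""
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p_attempt))

# A link that succeeds 99% of the time per attempt needs 3 attempts for
# five-nines reliability; at an assumed 0.3 ms per attempt, that still
# fits inside a 1 ms latency budget.
k = attempts_for_reliability(0.99)
fits_budget = k * 0.3 <= 1.0
```

The arithmetic shows why the two requirements pull against each other: a less reliable link needs more retransmissions, and every retransmission eats into the 1 ms budget.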
The URLLC reference design, for instance, enables engineers to create physical-layer mechanisms for different driving environments and then compare them via simulation to analyze trade-offs between latency and reliability. So, a real-time experimental testbed like this one can substitute for expensive and cumbersome on-road testing to prove that the vehicle is safe for autonomous driving and V2X communications.
Shanghai University has joined with National Instruments to create a URLLC experimental testbed for advanced V2X services like vehicle platooning. The URLLC reference design and vehicle-to-vehicle communication are built around National Instruments’ SDR-based hardware for rapid prototyping of mobile communication channels.
Vector Signal Transceiver
Another design platform worth mentioning in the context of AV connectivity and connected car technology is the vector signal transceiver (VST). It is a customizable platform that combines an RF and baseband vector signal analyzer and generator with a user-programmable field-programmable gate array (FPGA) and high-speed interfaces for real-time signal processing and control.
That enables comprehensive RF characteristic measurements and features like dynamic obstacle generation in a variety of road conditions. A VST system, for example, can simulate Doppler effect velocity from multiple angles or simulate scenarios such as a pedestrian walking across the street and a vehicle changing lanes.
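For a sense of the scale involved in such Doppler simulation, a first-order (non-relativistic) calculation is enough. The 5.9 GHz carrier and 60 m/s closing speed below are our own example values, not a test specification.

```python
def doppler_shift_hz(closing_speed_mps, f_carrier_hz, c_mps=3.0e8):
    """First-order Doppler shift for a receiver closing on the transmitter."""
    return (closing_speed_mps / c_mps) * f_carrier_hz

# Two vehicles approaching head-on at 30 m/s each (60 m/s closing speed)
# on the 5.9 GHz DSRC band see a carrier shift of roughly 1.2 kHz.
shift = doppler_shift_hz(60.0, 5.9e9)
```

Shifts of that magnitude are small relative to the carrier but large enough to matter for subcarrier spacing and synchronization, which is why a VST emulating them from multiple angles is useful in validation.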
It is another customized system that combines flexible, off-the-shelf hardware with a software development environment like LabVIEW to create user-defined prototyping and test solutions. That allows design engineers to transform VST into what they need it to be at the firmware level and address the most demanding development, test and validation challenges.
National Instruments introduced the first VST system in 2012 with an FPGA programmable with LabVIEW to accelerate design time and lower validation cost. Fast forward to 2019, the second-generation VST is ready to serve the autonomous and connected car designs where bandwidth and latency are crucial factors.
Automotive Testing’s Inflection Point
Industry observers call 2019 the year of V2X communications, while self-driving cars are still a work in progress. In the connected car realm, engineers are busy testing and validating the dedicated short-range communications-based vehicle-to-vehicle and vehicle-to-infrastructure devices to ensure that the V2X communication will work all the time and in all possible scenarios.
It is clear that both connected cars and self-driving vehicles share similar imperatives: they must be trustworthy and they must be credible. What is also evident by now is that these high-tech vehicles require high-tech prototyping and validation tools.
That marks an inflection point in automotive design and validation where two manifestations of smart mobility are striving to make traffic safer and more efficient. And the industry demands efficient and cost-effective test and verification solutions for these rapidly expanding automotive markets.
If these systems are based on modular hardware and flexible software, they can be efficiently customized for the autonomous and connected car designs. More importantly, these verification arrangements can significantly lower the design cost at different development stages.
At the Brooklyn 5G summit, experts said terahertz waves could fix some of the problems that may arise with millimeter-wave networks
It may be the sixth year for the Brooklyn 5G Summit, but in the minds of several speakers, 2019 is also Year Zero for 6G. The annual summit, hosted by Nokia and NYU Wireless, is a four-day event that covers all things 5G, including deployments, lessons learned, and what comes next.
This year, that meant preliminary research into terahertz waves, the frequencies that some researchers believe will make up a key component of the next next generation of wireless. In back-to-back talks, Gerhard Fettweis, a professor at TU Dresden, and Ted Rappaport, the founder and director of NYU Wireless, talked up the potential of terahertz waves.
A single Wi-Fi 6 access point can deliver high-speed, low-latency service to hundreds of users at once
An airport operator in California and a British aerospace manufacturer are among the first organizations to put Wi-Fi 6 networks to the test. Boingo Wireless has begun testing Wi-Fi 6 access points and compatible smartphones at John Wayne Airport in Santa Ana, California, it announced earlier this month, while Mettis Aerospace says it will begin its own trial of the new protocol at its manufacturing facility in Redditch, England, in the second half of this year.
“We’re still defining what the success criteria will be,” says Mettis IT head Dave Green, but the firm plans to test the next generation of Wi-Fi for a few applications: collecting sensor data from machines on the factory floor, allowing staff to use augmented reality-enabled tablets to troubleshoot problems, and transmitting live video feeds from harder-to-reach areas on the factory floor. Green says they know these applications will require low latency—one of Wi-Fi 6’s improvements over its predecessor—and the ability to handle hundreds of connections at once—another of the standard’s selling points.