Tag Archives: Telecom

With New Tech, Panasonic Aims to Revive Interest in Delivering Broadband Over Power Lines

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/telecom/standards/could-nextgeneration-broadband-over-power-lines-revive-interest-in-the-technology

Using radio frequencies to transmit data over existing power lines both inside and outside of homes has long promised to turn legacy cabling into a more attractive asset by delivering two essential services on a single wire. But broadband over power lines (BPL) has never achieved its potential, due in part to initially low speeds and unreliability, as well as concerns about radio interference and electromagnetic radiation.

One company that has continued to invest in and improve BPL since 2000 is Panasonic, a multinational electronics and appliance manufacturer with headquarters in Osaka, Japan. In March of this year, the IEEE Standards Association approved the IEEE 1901a standard for BPL that covers IoT applications, and which is based on Panasonic’s upgraded HD-PLC technology.

HD-PLC (high-definition power line communications) is backward compatible with IEEE’s 1901 standard for BPL ratified in 2010. The 1901a standard implements new functions based on Panasonic’s Wavelet orthogonal frequency-division multiplexing (OFDM) technology that is also incorporated in the 2010 standard. 

A Machine Learning Classifier Can Spot Serial Hijackers Before They Strike

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/mit-and-caida-researchers-want-to-use-machine-learning-to-plug-one-of-the-internets-biggest-holes

How would you feel if, every time you had to send sensitive information somewhere, you relied on a chain of people playing the telephone game to get that information to where it needs to go? Sounds like a terrible idea, right? Well, too bad, because that’s how the Internet works.

Data is routed through the Internet’s various metaphorical tubes using what’s called the Border Gateway Protocol (BGP). Any data moving over the Internet needs a physical path of networks and routers to make it from A to B. BGP is the protocol that moves information through those paths—though the downside, like a person in a game of telephone, is that each junction in the path knows only what it’s been told by its immediate neighbors.
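
To see why that telephone-game structure matters, here is a minimal path-vector sketch (my own illustration, not the researchers’ classifier): every network accepts whatever route a neighbor announces, with no way to verify it.

```python
# A minimal path-vector sketch (my own illustration, not the MIT/CAIDA
# classifier): each autonomous system (AS) learns routes only from its
# direct neighbors and has no way to verify what it is told.
from collections import deque

# Hypothetical topology: AS number -> set of neighboring AS numbers.
TOPOLOGY = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}

def propagate_routes(origin):
    """Flood a route announcement for `origin` through the topology.
    Each AS keeps the first path it hears and re-announces that path
    with itself prepended, exactly like a game of telephone."""
    paths = {origin: [origin]}
    frontier = deque([origin])
    while frontier:
        current = frontier.popleft()
        for neighbor in TOPOLOGY[current]:
            if neighbor not in paths:      # first announcement heard wins
                paths[neighbor] = [neighbor] + paths[current]
                frontier.append(neighbor)
    return paths

for asn, path in propagate_routes(1).items():
    print(f"AS{asn} reaches AS1 via {path}")
# A hijacker at AS3 could simply announce the bogus path [3, 1];
# AS4 would believe it, because AS4 only hears what AS3 tells it.
```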

Stevens’ Prototype ‘Quantum Lock’ May Foreshadow the Next Super-Secure Applications

Post Syndicated from Stevens Institute of Technology original https://spectrum.ieee.org/telecom/security/stevens-prototype-quantum-lock-may-foreshadow-the-next-supersecure-applications

A line of onlookers stands before a video camera in Stevens Institute of Technology’s S.C. Williams Library, hopeful. Their task: crack a lock that could open, say, a safety deposit box or a bank account or a social security record. Each visitor stares intently into the camera lens. Nothing.

Then physics professor Yuping Huang strides up to the mark.

Click. The lock opens instantly.

“Fortunately,” smiles Huang, “I am one of only three people whose face is allowed by this system to unlock this lock.”

Facial locks are nothing new. Your phone probably has one, or soon will.

But what is remarkable is how this lock works: it involves simultaneously creating twin particles of energy that somehow communicate with one another across distances.

“These quantum properties are going to change the internet,” predicts Huang, who directs the university’s Center for Quantum Science & Engineering and works with graduate students including Lac Nguyen and Jeeva Ramanathan on the quantum lock project. “One big way it will do that is in the enabling of security applications like this one, except on much larger scales.

“If it turns out this technology can be deployed in our homes and offices, as we believe it can be, eavesdroppers will be nearly powerless to sneak into the ever-more connected networks of devices that help run our lives but also hold much of our personal data.”

Welcome to the weird world of quantum communications.

But how does the system work?

When people stand in front of the camera attached to the lock, the Stevens setup captures information about each person’s face and sends it over the internet to a server housed in a different part of the university. There, facial-recognition computations and matches are done using open-source software. (Huang’s team is working on bringing quantum physics to that step, too; stay tuned.)

While this may seem like a pretty standard computation so far, a key distinction occurs in its networking security: the data exchanged between the two parties is secured by fundamental laws of physics.

As facial photos are taken by the video camera, lasers in Huang’s physics lab create twin photons — tiny, power-packed particles of energy — by splitting beams of light with special crystals.

The twin photons are then separated. One photon is kept in the lab while the other is sent through fiber-optic lines back to the library. Complex, secret “keys” are instantly generated as the photons are detected at each site; this process ensures that the secure information meets up with a trusted partner at the other end of the transaction.

The keys serve as what’s known in cryptography as a “one-time pad”: a temporary, uncrackable code shared between the parties that encrypts the images and communications, preventing any eavesdropper from reading them.
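
The one-time-pad idea itself fits in a few lines of code. Here is a minimal sketch (illustrative only, not the Stevens implementation): XOR the message with a random key of equal length, and the same operation decrypts:

```python
# A minimal one-time-pad sketch (illustrative only, not the Stevens code).
# Security rests on the key being truly random, as long as the message,
# and never reused -- which is exactly what quantum key distribution
# is meant to supply.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"open the lock"
key = secrets.token_bytes(len(message))   # stand-in for a QKD-derived key

ciphertext = xor_bytes(message, key)      # encrypt
recovered = xor_bytes(ciphertext, key)    # decrypt with the same pad
assert recovered == message
```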

“We don’t know why quantum properties work this way,” explains Huang, shaking his head. “Even Einstein didn’t know. But they work, always, so far as we can measure. And a whole host of computing, financial and security applications will be coming down the road in our lifetime to leverage the power of those properties.”

“This prototype demonstrates the drop-in compatibility of our quantum key-distribution system for secure networked devices,” agrees Nguyen. “Wider adoption of this technology could protect the communications of corporations, governments and intelligence services.

“Some of the potential applications we already see include corporate and government data centers; military base communications; voting processes; and smart-city monitors, to name just a few.”

The system could also bring secure privacy to the individual, adds the Stevens team, including for such applications as remotely controlling systems in one’s home, communicating with a corporate office from home, or securing a home wireless network.

The Case for Hybrid Beamforming in 5G mmWave Prototypes

Post Syndicated from National Instruments original https://spectrum.ieee.org/telecom/wireless/the-case-for-hybrid-beamforming-in-5g-mmwave-prototypes

Beam management, a defining feature of mmWave communications, entails highly directional, steerable beams in single- and multi-user scenarios. It will play a crucial role in future 5G wireless designs: the 3GPP Release 15 specification outlines the basics of beam management, and 3GPP Release 16 builds on that foundation.

mmWave systems with large antenna counts enable narrow beam patterns, which makes antenna performance and beam characteristics important considerations when choosing beam-management algorithms. This article will examine the real-world performance of phased-array antennas in the context of beam management. It will also provide a design example of hardware prototyping for 5G beamforming and beamsteering.

Starting with the beamforming basics: Traditional analog beamforming creates a single beam by applying a phase delay, or time delay, to each antenna element. To form multiple simultaneous beams, designers must apply a separate set of phase delays for each signal and then sum the results.
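
To make the phase-delay idea concrete, here is a minimal sketch (my own illustration, not NI’s code) that computes per-element weights for a uniform linear array, assuming half-wavelength spacing and ideal isotropic elements:

```python
# Minimal analog-beamforming sketch for a uniform linear array (ULA).
# Assumptions (not from the article): half-wavelength element spacing,
# ideal isotropic elements, narrowband signals.
import numpy as np

N = 16            # antenna elements
d = 0.5           # element spacing in wavelengths

def steering_vector(theta_deg):
    """Per-element phase weights that point the beam at theta_deg."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * d * n * np.sin(np.radians(theta_deg)))

def array_gain(weights, theta_deg):
    """Normalized response of the weighted array toward theta_deg."""
    return abs(weights.conj() @ steering_vector(theta_deg)) / N

w = steering_vector(30)        # steer the beam toward 30 degrees
print(array_gain(w, 30))       # ~1.0 at the steering angle
print(array_gain(w, 0))        # ~0.0 off-axis: the narrow-beam effect
```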

On the other hand, in full digital beamforming, each antenna requires a dedicated analog baseband channel, which, in turn, calls for a digital transceiver for each antenna. This adds to both cost and power consumption. Enter hybrid beamforming, which allows designers to keep the overall cost and power consumption lower.

Why Hybrid Beamforming

Hybrid beamforming combines analog beamforming with digital precoding to intelligently form the patterns transmitted from a large antenna array (Figure 2); the same technique is used at the receive end to create the desired receive pattern.

Hybrid transceivers use a combination of analog beamformers in the RF domain and digital beamformers in the baseband domain, which leads to fewer RF chains than transmit elements. The result is an architecture properly partitioned between the analog and digital domains.

Because hybrid beamforming uses fewer of the power-hungry RF chains, designers can use a larger number of antenna array elements while reducing energy consumption and system design complexity.

However, hybrid beamforming must jointly design the precoding weights and RF phase shifts to meet the goal of improving the virtual connection between the base station and the user equipment (UE). Moreover, the precoder design demands massive calculations, such as the singular value decomposition (SVD) of the channel.

Here, a low-complexity hybrid precoding algorithm employed for beamsteering operations utilizes the array response vectors of the channel. Because the algorithm selects from the set of array response vectors that compose the channel, it avoids the complicated operations used in traditional precoding algorithms.
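
The sketch below illustrates that idea in generic form (it is not the specific algorithm the article refers to): the analog beam is simply the codebook array-response vector that correlates best with the channel, so no SVD is needed:

```python
# Simplified beam-selection sketch (generic, not the article's algorithm).
# The analog precoder is chosen from a codebook of array response vectors
# by correlation with the channel -- no SVD required.
import numpy as np

N = 16                                   # transmit antennas
n = np.arange(N)

def response(theta_deg):                 # ULA response, half-wave spacing
    return np.exp(-2j * np.pi * 0.5 * n *
                  np.sin(np.radians(theta_deg))) / np.sqrt(N)

codebook = [response(t) for t in np.arange(-60, 61, 7.5)]  # beam candidates

rng = np.random.default_rng(0)
# Channel with one dominant path near 22 degrees, plus a little noise.
h = response(22.0) + 0.05 * (rng.standard_normal(N) +
                             1j * rng.standard_normal(N))

best = max(codebook, key=lambda f: abs(h.conj() @ f))  # analog beam choice
print("beamforming gain:", abs(h.conj() @ best))       # close to 1.0
```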

Beamsteering algorithms are also critical for capacity-based optimal beam-set selection and for detecting the presence of unknown interference and noise. Additionally, they facilitate higher throughput by overcoming problems such as random beam blockage and misalignment.

mmWave Beamsteering Algorithms

In high-speed mmWave communications with large antenna arrays, frequent channel estimation is required because channel conditions vary rapidly. A vital part of channel estimation is efficient beam searching, which leaves more time for data transmission.

Then, there is multi-beam selection, a crucial element in hybrid beamforming systems for frequency-selective fading channels. The impact of interference on network capacity due to highly narrow beams also poses serious challenges. It is therefore imperative that designers carefully examine the network capacity from both node capacity and antenna size standpoints.

Here, unlike the trial-and-error approach, which is time-consuming and does not easily adapt to change, beamsteering algorithms help engineers understand design constraints and account for them in the optimization process. They facilitate averaging of signal measurements, which, in turn, refines observations by suppressing minor noise effects and provides more accurate sparse multipath delay information.

These observations, for instance regarding estimated multipath delay indices, are crucial in selecting the analog beams. The beamsteering algorithms also play a vital role in phase control and beam tuning. They analyze the computational complexity and compare numerical results to deliver the best signal propagation and reception for mmWave communication channels.

Take the Hybrid Beamforming Testbed from National Instruments (NI), which moves signals from the analog to the digital domain using an mmWave Transceiver System (Figure 3). This off-the-shelf prototyping system allows engineers to validate radio performance by efficiently implementing the mmWave beamsteering algorithms.

Hybrid Beamforming Testbed

The 5G NR standard for mmWave frequencies uses a combination of computationally intensive algorithms to encode, decode, modulate, demodulate and multiplex signals. Here, NI’s mmWave Transceiver System, a software-defined radio (SDR) solution, can help implement beamforming algorithms for a variety of 5G prototyping use cases.

This modular prototyping system comprises a PXI Express chassis, controllers, a clock distribution module, high-performance FPGA modules, high-speed DACs and ADCs, LO and IF modules, and mmWave radio heads (Figure 4). The FPGA modules in the PXI chassis handle communication rules, error correction, decoding and encoding, and signal mapping.

Next, the baseband transceiver modules, along with the IF/LO modules, transmit and receive signals to and from the mmWave heads, which can be connected to the phased arrays via a single SMK cable. Here, the LabVIEW software allows developers to configure different connectivity features in the digital domain of beamforming.

So, network designers can combine the transceiver system with a phased array to create off-the-shelf phased array prototype solutions. And they can map different computational tasks on multiple FPGAs and thus design and test beam algorithms for a wide variety of mmWave channels and network configurations.

For more on this, go to National Instruments’ website.

Making the Ultimate Software Sandbox

Post Syndicated from Mark Anderson original https://spectrum.ieee.org/tech-talk/telecom/security/making-the-ultimate-software-sandbox

Is it possible to process highly sensitive data (say, a person’s genome or their tax returns) on an untrusted, remote system without ever exposing that data to that system? Some of the biggest names in tech are now working together to figure out a way to do just that.

Last month, the Linux Foundation announced the Confidential Computing Consortium (CCC)—a cross-industry effort to develop and standardize the safeguarding of data even when it’s in use by a potentially untrusted system. Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom, and Tencent are all CCC founding members.

Encrypting data at rest and in transit, the Foundation said in a press statement, presents familiar challenges for cloud computing providers and users today. But the new challenge the Consortium is taking up concerns allowing sensitive, encrypted data to be processed by a system that otherwise would have no access to that data.

They may want to take a look at an open-source project first launched in 2016 called Ryoan.

“In our model, not only is the code untrusted, the platform is also untrusted,” says Emmett Witchel, professor of computer science at the University of Texas at Austin.

Witchel and co-authors developed Ryoan as a piece of code that would use security features in Intel CPUs to effectively sandbox encrypted data—and allow computation on that data without requiring that either the software or the hardware (other than the CPU) be secure or trusted.

Witchel says a key inspiration for his group was a 1973 paper by Butler Lampson of Xerox’s Palo Alto Research Center. “He talked about how difficult it is to confine untrusted code,” Witchel says. “That means if you have code that wasn’t written by you, you don’t know the motivations of the people who wrote it, and you want to make sure that code isn’t stealing any secrets [from your data], it’s very, very difficult.”

Witchel says one of his graduate students at the time argued that confining data completely within a sandbox—in the face of potentially adversarial code and hardware—was practically impossible.

Witchel adds that he agreed with his student, up to a point. “But it’s not 1973. And people are doing different things with computers now,” he says. “They’re recognizing images. They’re processing genome data. These are very specific tasks that have properties that we can take advantage of.”

According to Witchel, Ryoan uses what’s called a “one-shot data model.” That means the program looks at a user’s sensitive data only once in passing—a scheme that might be applicable to rapid-fire video or image streams that run image recognition software.

It was this transitory, one-time nature of the data processing that simplified Lampson’s Confinement Problem and made it solvable, Witchel says.

On the other hand, you couldn’t use Ryoan if you were processing a sensitive image on a remote, untrusted system that was storing and processing—and then storing and further processing—your data. Such a process may be necessary if, say, you were running Photoshop remotely on a sensitive image.

Ryoan relies on an Intel hardware security feature called Software Guard Extensions (SGX), which allowed its creators to begin addressing the problem. However, SGX is only a first step, says Tyler Hunt, a University of Texas at Austin computer science graduate student and co-developer of Ryoan.

“SGX only allows you to have 128 megabytes of physical memory, so there’s some challenge in finding applications that are small enough,” Hunt says. “And genomes are very large, so if you actually wanted to process an entire genome, you’d need some larger hardware to do it.”

Witchel and Hunt say there are other efforts now afoot to make “enclaves” like Ryoan (in other words, isolated, trusted computing environments) in larger and more diverse systems than just a CPU.

Microsoft researchers last year proposed a software environment called Graviton, which would run trusted computing enclaves on GPUs. RISC-V architecture hardware systems may soon be adopting a trusted computing enclave standard called Keystone. And ARM—a CCC member—offers its own trusted computing environment called TrustZone.

Witchel says trusted enclaves like Ryoan and its descendants could be important as users become more savvy about the permissions they give for use of their personal data.

People today could be better educated, Witchel says, so they can understand and assert their rights. They need to know “that their data is out there and is being used and monetized. And that they should have some control and some assurance that their data is only being used for the purposes that they released it for. I want my genome data to tell me about my ancestry. I don’t want you to keep that genome data to use it to develop a drug that you then sell to me at an inflated price. That seems unfair.”

How Language Shapes Password Security

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/security/how-language-shapes-chinese-and-english-password-security

No matter the differences in language and culture, both Chinese- and English-language Internet users apparently find common ground in using easily guessable password variants of “123456.” But a recent study comparing password patterns among the two languages also found notable and unique features in Chinese passwords that have big implications for Internet security beyond China.

Where’s My Stuff? Now, Bluetooth and Ultrawideband Can Tell You

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/standards/wheres-my-stuff-now-bluetooth-and-ultrawideband-can-tell-you

We all lose things. Think about how much time you’ve spent searching for your keys or your wallet. Now imagine how much time big companies spend searching for lost items. In a hospital, for example, the quest for a crash cart can slow a response team during an emergency, while on a construction site, the hunt for the right tool can lead to escalating delays.

According to a recent study funded by Microsoft, roughly 33 percent of companies utilizing the Internet of Things are using it for tracking their stuff. Quality location data is important for more than tracking misplaced tools; it’s also necessary for robotics in manufacturing and in autonomous vehicles, so they can spot nearby humans and avoid them.

The growing interest in locating things is reflected in updated wireless standards. The Bluetooth Special Interest Group estimates that with the updated 5.1 standard, the wireless technology can now locate devices to within a few inches. Elsewhere, Texas Instruments has built a radar chip using 60-gigahertz signals that can help robots “see” where things are in a factory by bouncing radio waves off their surroundings.

But for me, the real excitement is in a newcomer to the scene. In August, NXP, Bosch, Samsung, and access company Assa Abloy launched the FiRa Consortium to handle location tracking using ultrawideband radios (FiRa stands for “fine-ranging”). This isn’t the ultrawideband of almost 20 years ago, which offered superfast wireless data transfers over short distances much like Wi-Fi does today. FiRa uses a wide band of spectrum in the 6- to 9-GHz range and relies on the new IEEE 802.15.4z standard. The base standard is used for other IoT network technologies, including Zigbee, Wi-SUN, 6LoWPAN, and Thread radios, but the z formulation is designed specifically for securely ascertaining the location of a device.

FiRa delivers location data based on a time-of-flight measurement—the time it takes a quick signal pulse to make a round trip to the device. This is different from Bluetooth’s method, which opens a connection between radios and then broadcasts the location. Charles Dachs, vice chair of the FiRa Consortium and vice president of mobile transactions at NXP, says FiRa’s pulselike data transmissions allow location data to be gleaned for items within 100 to 200 meters of a node without sucking up a lot of power. Time-of-flight measurements allow for additional security, since they make it harder to spoof a location, and they’re so accurate, it’s obvious that a person is right there, not even a few meters away. Also, because the radio transmissions aren’t constant, it’s possible for hundreds of devices to ping a node without overwhelming it. By comparison, Bluetooth nodes can handle only about 50 devices.
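
The underlying arithmetic is simple; a toy calculation (my own illustration, not code from the FiRa specification) shows why time-of-flight ranging is both precise and hard to spoof:

```python
# Toy time-of-flight ranging calculation (illustrative, not FiRa code).
C = 299_792_458.0          # speed of light, m/s

def distance_m(round_trip_s, reply_delay_s=0.0):
    """Distance implied by a round-trip time, after subtracting the
    responder's known processing delay."""
    return C * (round_trip_s - reply_delay_s) / 2

print(distance_m(66.7e-9))    # a 66.7-ns round trip puts the tag ~10 m away
# A full 1-nanosecond timing error shifts the estimate by only ~15 cm,
# which is why faking a distance convincingly is so difficult.
print(distance_m(67.7e-9))
```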

FiRa’s location-tracking feature is likely to be the application that entices many companies to adopt the standard, but it can do more. The consortium also hopes that automotive companies will use it for securely unlocking car doors or front doors wirelessly. However, there is a downside: Widespread FiRa use for locks would require either a separate fob or new radios on our smartphones.

I think it’s far more likely that FiRa will find its future in enterprise and industrial asset tracking. Historically, Bluetooth has struggled in this space because of the limited number of connections that can be made. Other radios have been a bit too niche, or not well designed for enterprise use. As for location tracking for us consumers? Apple and Google are both betting on Bluetooth, so that’s where I’d place my bets, too.

This article appears in the October 2019 print issue as “Where’s My Stuff?.”

New Antenna Uses Saltwater and Plastic to Steer Radio Beams

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/wireless/new-antenna-uses-saltwater-to-achieve-multiple-beamsteering-states

A new antenna that uses saltwater and plastic instead of metal to shape radio signals could make it easier to build networks that use VHF and UHF signals. 

Being able to focus the energy of a radio signal towards a given receiver means you can increase the range and efficiency of transmissions. If you know the location of the receiver, and are sure that it’s going to stay put, you can simply use an antenna that is shaped to emit energy mostly in one direction and point it. But if the receiver’s location is uncertain, or if it’s moving, or if you’d like to switch to a different receiver, then things get tricky. In this case, engineers often fall back on a technique called beam-steering or beamforming, and doing it at a large scale is one of the key underlying mechanisms behind the rollout of 5G networks.

Beam-steering lets you adjust the focus of an antenna without having to move it around to point in different directions. It involves adjusting the relative phases of a set of radio waves at the antenna: these waves interfere constructively and destructively, cancelling out in unwanted directions and reinforcing the signal in the direction you want to send it. Different beam patterns, or states, are also possible—for example, you might want a broader beam if you are sending the same signal to multiple receivers in a given direction, or a tighter beam if you are talking to just one.

Now, researchers have developed an advanced liquid-based antenna system that relies on a readily available ingredient: saltwater. 

To be sure, this is not the first liquid antenna: these antennas, which use fluid to transmit and receive radio signals, can be useful in situations where VHF or UHF frequencies are required (frequencies between 30 megahertz and 3 gigahertz). They tend to be small, transparent, and more reconfigurable than conventional metal antennas. For these reasons, they are being explored for some internet of things (IoT) and 5G applications.

Liquid antennas that depend on salty water have even more benefits, since the substance is readily available, low-cost and eco-friendly. Several saltwater-based antennas have been developed to date, but these designs are limited in how easily the beam can be steered and reconfigured.

However, in a recent publication in IEEE Antennas and Wireless Propagation Letters, Lei Xing and her colleagues at the College of Electronic and Information Engineering at Nanjing University of Aeronautics and Astronautics in China have proposed a new saltwater-based antenna that achieves 12 directional beam-steering states and one omnidirectional state. Its circular configuration allows for complete 360-degree beam-steering and works for frequencies from 334 to 488 MHz.

The proposed design consists of a circular ground plane, with 13 transparent acrylic tubes that can be filled with (or emptied of) salt water on demand. One tube is located in the center to act as a driven monopole (the radio signal is fed in via a copper disk at the base of the tube). Surrounding it are 12 so-called parasitic monopoles. When only the driven monopole is excited, this creates an omnidirectional signal. But the 12 remaining monopoles, when filled with water, work together to act as reflectors and give the broadcast signal direction.
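
As a toy model (my own speculation, not the paper’s control logic), steering toward one of the 12 directional states amounts to filling an arc of parasitic tubes on the side opposite the desired beam, so that the filled tubes act as reflectors behind the driven element:

```python
# Toy controller for the 12-state circular design (my own illustration;
# the actual control scheme from Xing et al. is not reproduced here).
# Assumption: filled tubes act as reflectors, so the beam points away
# from the filled arc; leaving every tube empty (and exciting only the
# driven monopole) gives the omnidirectional state.
N_TUBES = 12                        # parasitic monopoles, 30 degrees apart

def tubes_to_fill(beam_deg, arc=5):
    """Indices of parasitic tubes to fill so the beam steers toward
    beam_deg, rounded to the nearest of the 12 directional states."""
    state = round(beam_deg / 30) % N_TUBES
    opposite = (state + N_TUBES // 2) % N_TUBES   # tube behind the beam
    half = arc // 2
    return sorted((opposite + k) % N_TUBES for k in range(-half, half + 1))

print(tubes_to_fill(0))    # [4, 5, 6, 7, 8]: reflector arc behind the beam
print(tubes_to_fill(90))   # the arc shifts a quarter turn around the ring
```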

“The most challenging part of designing this antenna is how to effectively and efficiently control the water parasitic monopoles,” Xing explains. To do so, her team developed a liquid control system using micropumps, which she says can be applied to other liquid antennas or antenna arrays.

“The attractive feature of using water monopoles is that both the water height and activating status can be dynamically tuned through microfluidic techniques, which has a higher degree of design flexibility than metal antennas,” explains Xing. “More importantly, the antenna can be totally ‘turned off’ when not in use.”

When the antenna is switched completely off and drained, it is nearly undetectable by radar, an effect that is hard to achieve with metal antennas.

The new antenna’s operating range of 334 MHz to 488 MHz makes it a promising candidate for very-high-frequency applications such as IoT and maritime applications, says Xing. One limitation of saltwater-based antennas, she notes, is that the permittivity of saline water (a measure of how it interacts with electric fields) is sensitive to temperature variation. Xing says she plans to continue to explore various liquid-based designs for antennas moving forward.

Remaking the World for Robots

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/standards/remaking-the-world-for-robots

Over time, we will design physical spaces to accommodate robots and augmented reality

Every time I’m in a car in Europe and bumping along a narrow, cobblestone street, I am reminded that our physical buildings and infrastructure don’t always keep up with our technology. Whether we’re talking about cobblestone roads or the lack of Ethernet cables in the walls of old buildings, much of our established architecture stays the same while technology moves forward.

But embracing augmented reality, autonomous vehicles, and robots gives us new incentives to redevelop our physical environments. To really get the best experience from these technologies, we’ll have to create what Carla Diana, an established industrial designer and author, calls the “robot-readable world.”

Diana works with several businesses that make connected devices and robots. One such company is Diligent Robotics, of Austin, Texas, which is building Moxi, a one-handed robot designed for hospitals. Moxi will help nurses and orderlies by taking on routine tasks, such as fetching supplies and lab results, that don’t require patient interaction. However, many hospitals weren’t designed with rolling robots with pinchers for hands in mind.

Moxi can’t open every kind of door or use the stairs, so its usefulness is limited in the average hospital. For now, Diligent sends a human helper for Moxi during test runs. But the company’s thinking is that if hospitals see the value in an assistive robot, they might change their door handles and organize supplies around ramps, not stairs. The bonus is that these changes would make hospitals more accessible to the elderly and those with disabilities.

This design philosophy doesn’t have to be limited to the hospital, however. Autonomous cars will likely need road signs that are different from the ones we’ve grown accustomed to. Current road signs are easily read by humans, but they could be vandalized so as to trick autonomous vehicles into interpreting them incorrectly. Delivery drones will need markers to navigate as well as places to land, if Amazon wants to get serious about delivering packages this way.

Google has already developed one solution. Back in 2014, the company invented plus codes. These are short codes for places that don’t traditionally have street names and numbers, such as a residence in a São Paulo favela or a point along an oil pipeline. These codes are readable by humans and machines, thus making the world a little more bot friendly.
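
For a sense of how plus codes work, here is a simplified encoder that follows the published Open Location Code algorithm (my own sketch; Google’s reference library adds padding, code shortening, and more careful integer arithmetic):

```python
# Simplified "plus code" encoder following the published Open Location
# Code algorithm (illustrative; Google's reference library handles edge
# cases and uses integer arithmetic to avoid floating-point drift).
ALPHABET = "23456789CFGHJMPQRVWX"     # 20 symbols, chosen to avoid
                                      # lookalike characters and words

def encode(lat, lng, digits=10):
    lat = min(max(lat + 90, 0), 180)  # shift both axes to start at zero
    lng = (lng + 180) % 360
    code = ""
    resolution = 20.0                 # degrees per symbol at this step
    for _ in range(digits // 2):      # one latitude + one longitude digit
        code += ALPHABET[int(lat // resolution)]
        code += ALPHABET[int(lng // resolution)]
        lat %= resolution
        lng %= resolution
        resolution /= 20              # each pair is 20x more precise
    return code[:8] + "+" + code[8:]  # '+' separates the final pair

print(encode(47.365590, 8.524997))    # "8FVC9G8F+6X", a spot near Zurich
```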

Augmented reality (AR) also stands to benefit from this new design philosophy. Mark Rolston is the founder and chief creative officer of ArgoDesign, a company that helps tech companies design their products. Rolston has found that bringing AR—such as Magic Leap’s head-mounted virtual retinal display—into offices and homes can be tough, depending on the environment. For example, the Magic Leap reads glass walls as blank space, which results in AR images that are too faint to show up on the surface.

AR also struggles with white or dark walls. Rolston says the ideal wall is painted a light gray and has curved edges rather than sharp corners. While he doesn’t expect every room in an office or home to follow these guidelines, he does think we’ll start seeing a shift in design to accommodate AR needs.

In other words, we’ll still see the occasional cobblestone street and white wall, but more and more we’ll see our physical structures accommodate our tech-focused society.

5G’s Waveform Is a Battery Vampire

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/wireless/5gs-waveform-is-a-battery-vampire

As carriers roll out 5G, industry group 3GPP is considering other ways to modulate radio signals

In 2017, members of the mobile telephony industry group 3GPP were bickering over whether to speed the development of 5G standards. One proposal, originally put forward by Vodafone and ultimately agreed to by the rest of the group, promised to deliver 5G networks sooner by developing more aspects of 5G technology simultaneously.

Adopting that proposal may have also meant pushing some decisions down the road. One such decision concerned how 5G networks should encode wireless signals. 3GPP’s Release 15, which laid the foundation for 5G, ultimately selected orthogonal frequency-division multiplexing (OFDM), a holdover from 4G, as the encoding option.

But Release 16, expected by year’s end, will include the findings of a study group assigned to explore alternatives. Wireless standards are frequently updated, and in the next 5G release, the industry could address concerns that OFDM may draw too much power in 5G devices and base stations. That’s a problem, because 5G is expected to require far more base stations to deliver service and connect billions of mobile and IoT devices.

“I don’t think the carriers really understood the impact on the mobile phone, and what it’s going to do to battery life,” says James Kimery, the director of marketing for RF and software-defined radio research at National Instruments Corp. “5G is going to come with a price, and that price is battery consumption.”

And Kimery notes that these concerns apply beyond 5G handsets. China Mobile has “been vocal about the power consumption of their base stations,” he says. A 5G base station is generally expected to consume roughly three times as much power as a 4G base station. And more 5G base stations are needed to cover the same area.

So how did 5G get into a potentially power-guzzling mess? OFDM plays a large part. Data is transmitted using OFDM by chopping the data into portions and sending the portions simultaneously and at different frequencies so that the portions are “orthogonal” (meaning they do not interfere with each other).

The trade-off is that OFDM has a high peak-to-average power ratio (PAPR). Generally speaking, the orthogonal portions of an OFDM signal deliver energy constructively—that is, the very quality that prevents the signals from canceling each other out also prevents each portion’s energy from canceling out the energy of other portions. That means any receiver needs to be able to take in a lot of energy at once, and any transmitter needs to be able to put out a lot of energy at once. Those high-energy instances cause OFDM’s high PAPR and make the method less energy efficient than other encoding schemes.
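
You can check this numerically with a short sketch (my own illustration, not from the article): summing hundreds of independently modulated subcarriers produces occasional peaks far above the average power:

```python
# Quick numerical check (my own sketch) of OFDM's high peak-to-average
# power ratio: build one OFDM symbol from random-phase subcarriers and
# compare the waveform's peak power to its mean power.
import numpy as np

rng = np.random.default_rng(1)
n_subcarriers = 256

# Random QPSK symbols, one per subcarrier; the IFFT builds the
# time-domain OFDM symbol from them.
phases = rng.choice([0.25, 0.75, 1.25, 1.75], n_subcarriers) * np.pi
waveform = np.fft.ifft(np.exp(1j * phases)) * np.sqrt(n_subcarriers)

power = np.abs(waveform) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of one OFDM symbol: {papr_db:.1f} dB")   # typically ~8-11 dB

# A single-carrier constant-envelope signal, by contrast, sits at 0 dB.
```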

Yifei Yuan, ZTE Corp.’s chief engineer of wireless standards, says there are a few emerging applications for 5G that make a high PAPR undesirable. In particular, Yuan, who is also the rapporteur for 3GPP’s study group on nonorthogonal multiple-access possibilities for 5G, points to massive machine-type communications, such as large-scale IoT deployments.

Typically, when multiple users, such as a cluster of IoT devices, communicate using OFDM, their communications are organized using orthogonal frequency-division multiple access (OFDMA), which allocates a chunk of spectrum to each user. (To avoid confusion, remember that OFDM is how each device’s signals are encoded, and OFDMA is the method to make sure that overall, one device’s signals don’t interfere with any others.) The logistics of using distinct spectrum for each device could quickly spiral out of control for large IoT networks, but Release 15 established OFDMA for 5G-connected machines, largely because it’s what was used on 4G.

One promising alternative that Yuan’s group is considering, non-orthogonal multiple access (NOMA), could deliver the advantages of OFDM while also overlapping users on the same spectrum.
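
The power-domain flavor of NOMA works roughly like this (a simplified sketch of my own, not ZTE’s or 3GPP’s design): users are superposed at different power levels on the same spectrum and separated at the receiver by successive interference cancellation:

```python
# Power-domain NOMA sketch (a simplified generic illustration, not a
# 3GPP design): two users share one subcarrier at different power
# levels, and the receiver applies successive interference
# cancellation (SIC) to separate them.
import numpy as np

def bpsk(bits):
    return 2.0 * np.asarray(bits) - 1.0      # map 0/1 to -1/+1

bits_far, bits_near = [1, 0, 1, 1], [0, 1, 1, 0]
p_far, p_near = 0.8, 0.2                     # far user gets more power

# The base station superposes both users' symbols on the same spectrum.
tx = np.sqrt(p_far) * bpsk(bits_far) + np.sqrt(p_near) * bpsk(bits_near)

# The near user first decodes the stronger, far-user signal...
far_est = np.sign(tx)
# ...then subtracts it and decodes what remains (the SIC step).
residual = tx - np.sqrt(p_far) * far_est
near_bits = ((np.sign(residual) + 1) // 2).astype(int)

print(near_bits.tolist())                    # [0, 1, 1, 0]: recovered
```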

For now, Yuan believes OFDM and OFDMA will suit 5G’s early needs. He sees 5G first being used by smartphones, with applications like massive machine-type communications not arriving for at least another year or two, after the completion of Release 16, currently scheduled for December 2019.

But if network providers want to update their equipment to provide NOMA down the line, there could very well be a cost. “This would not come for free,” says Yuan. “Especially for the base station sites.” At the very least, base stations would need software updates to handle NOMA, but they might also require more advanced receivers, more processing power, or other hardware upgrades.

Kimery, for one, isn’t optimistic that the industry will adopt any non-OFDMA options. “It is possible there will be an alternative,” he says. “The probability isn’t great. Once something gets implemented, it’s hard to shift.”

Google’s Equiano Cable Will Extend to the Remote Island of Saint Helena, Flooding It With Data

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/googles-equiano-cable-will-extend-to-the-remote-island-of-saint-helena-flooding-it-with-data

The tiny island will need to turn itself into a data hub to make use of the expected bandwidth

If you know anything about the South Atlantic island of Saint Helena, that’s probably because it was the island where the British government exiled Napoleon until he died in 1821. It was actually the second time Britain attempted to exile Napoleon, and the island was chosen for a very specific reason: It’s incredibly remote.

Napoleon is long gone, but the island’s remoteness continues to pose challenges for its 4,500-odd residents. Until recently, the only way to reach St. Helena was by boat, once every three weeks, though it’s now possible to catch the occasional flight from Johannesburg, South Africa, to what’s been called “the world’s most useless airport.” Residents’ Internet prospects are even worse—the island’s entire population shares a single 50-megabit-per-second satellite link.

That’s about to change, however, as the St. Helena government has shared a letter of intent describing a plan to connect the island to Google’s recently announced Equiano cable. The cable will be capable of delivering orders of magnitude more data than anything the island has experienced. It will create so much capacity, in fact, that St. Helena could use the opportunity to transform itself from an almost unconnected island to a South Atlantic data hub.

How YouTube Paved the Way for Google’s Stadia Cloud Gaming Service

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/how-the-youtube-era-made-cloud-gaming-possible

Google’s vision is that any device that can play YouTube videos will also have access to cloud gaming through Stadia

When Google’s executives floated a vision for the Stadia cloud gaming service that could make graphically intensive gaming available on any device, they knew the company wouldn’t have to build all the necessary technology from scratch. Instead, the tech giant planned to leverage its expertise in shaping Internet standards and installing infrastructure to support YouTube video streaming for more than a billion people worldwide.

Key Challenges in 5G NR

Post Syndicated from National Instruments original https://spectrum.ieee.org/telecom/wireless/key-challenges-in-5g-nr

Demand for wireless communication presents multiple industry challenges at once

Overview

As the technology curve grows inevitably steeper, solutions that solve multiple problems across a variety of industries become more necessary. The race toward 5G has challenged every wireless researcher, hardware manufacturer, and operator to evaluate what tools they can use for the next generation of communication technology. 

The physical layer as we knew it for 4G LTE and previous standards is being pushed to new limits that include integrating multiple-input, multiple-output (MIMO) technologies; moving to mmWave; and using unlicensed bands for coexistence between protocols. But the physical layer isn’t the only piece of the puzzle anymore.

As the standards become increasingly complex and stringent across multiple layers, the MAC, data link, and network layers need to be developed. Timing requirements begin to tighten, and latency becomes a bigger issue. The need for processing inside a single node begins to scale far beyond what has previously been met with application-specific integrated circuits (ASICs). To take it a step further, the deployability and scale of systems are growing. There is an increasing need for remote radio head applications, where base stations may evolve beyond a single antenna system to remote nodes across a city. The demand for wireless communication has placed multiple advanced challenges on the industry at once.

Challenges in Determinism

Applications such as 5G New Radio (5G NR) introduce timing constraints that make the relationship between the processor and RF front end even more critical than in previous communication protocols like LTE or 802.11. Ultra-reliable machine-type communication creates a need for upper-layer functionality to occur in much more deterministic and precise timing intervals, which forces technologies like schedulers to be implemented more deterministically. New 802.11 standards such as 802.11ax depend on strict timing requirements as access points determine channel models dynamically through trigger frames and a trigger-based physical layer protocol data unit (PPDU). All these interactions must happen within tight 16-µs timing requirements; otherwise, the communication breaks down. Determinism is becoming increasingly necessary as more and more intelligence is provided to the PHY layer through a MAC. Time-critical operations can no longer be implemented strictly with a PC. Technologies like real-time OSs and FPGAs must be used to handle these sub-1-ms timing requirements.

Challenges in Processing

Increasing processing power brings recurring difficulties, such as the mobility of a processing unit, the data pipes that move data from one processing resource to another, and the flexibility of the configuration. With complex developments in MAC-layer functionality, added focus on software-defined networks, and more robust and complex schedulers, the need for time-accurate processing and parallel computing becomes greater.

Let’s specifically explore the software-defined networking case. With complex requirements from a software-defined communication network, any node may need to completely reconfigure its function at short notice. Integrating a single node with the processing power to handle those decision-making tasks requires transceivers to scale beyond the RF. The RF node may need either the ability to process decisions from a scheduler at a central processing point or to run its own decision engine, both of which require advancements in processing that scale beyond typical ASICs.

However, raw processing power is still not enough. As the need for environment emulation grows, whether from the channel, base station, or user equipment perspective, increasingly lower latencies between processor and circuitry are required. Any application that integrates heterogeneous processor architectures (GPP, GPU, FPGA, and so on) requires that the data be transferred over high-speed serial interfaces with low latency and wide data bandwidth.

Challenges in Scale

It’s no surprise that technology is shrinking, whether you look at off-the-shelf devices or the silicon they are built with. But technology is not only shrinking; it’s also changing scale. Instead of the base station of old, multiple deployed radio nodes may act as the new infrastructure for 5G and beyond. This introduces an entirely new range of needs, including servicing the infrastructure and implementing software updates remotely.

However, these changes affect more than operators. Wireless communications researchers must be able to move out of the lab and into the field for real-world trials. The technology can no longer just be demonstrated at the lab level. Multiple universities, operators, and vendors are collaborating to deploy testbeds throughout cities to demonstrate the capabilities of new physical, data link, and network layers in a real-world environment. Deploying hardware for field trials and maintaining accessibility to the project from remote campuses introduce a variety of new issues.

The Stand-Alone USRP RIO: USRP-2974

The USRP (Universal Software Radio Peripheral) solution has been the benchmark of industry and academic software defined radio (SDR) technology for the past decade.

The USRP-2974 is the first stand-alone SDR from NI and will be the first NI USRP to use the power of LabVIEW software, the LabVIEW FPGA Module, and the LabVIEW Real-Time Module, all in a single device. It is designed around the existing USRP RIO hardware solution, but now integrates an x86 processor connected to the USRP through high-speed PCI Express and Ethernet connections for data streaming between an x86 target and FPGA target.

An x86 processor that’s incorporated into the design of an SDR provides many benefits. First, the LabVIEW Real-Time programmable processor is now the ideal target to test scheduler algorithms, allowing for prioritization on the processor and deterministically operating on and sending commands to the FPGA, which in turn can handle the physical layer RF signals. Second, because data can be processed onboard the device, every radio node can run additional computation beyond what previously was done on only the FPGA. Finally, with an Ethernet connection to a development machine, any number of USRP-2974 devices can be deployed with duplicate or unique code bases in a flash, which improves system and code management at the individual module level or at the large-scale testbed.

The USRP-2974 opens the doors for new research and advanced use cases previously limited by traditional SDRs. Systems become more scalable, are more easily managed, and can now integrate decision making and deterministic selection through the onboard processor. The USRP-2974 provides the high performance to tackle even the most complex of communications challenges.

 

The Internet Is Coming to the Rest of the Animal Kingdom

Post Syndicated from Elie Dolgin original https://spectrum.ieee.org/tech-talk/telecom/internet/internet-of-living-things-can-communication-tools-break-down-the-interspecies-divide

A new Doolittlesque initiative aims to promote Internet communication among smart animals

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Lunar Pioneers Will Use Lasers to Phone Home

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/wireless/lunar-pioneers-will-use-lasers-to-phone-home

NASA’s Orion and Gateway will try out optical communications gear for a high-speed connection to Earth

With NASA making serious moves toward a permanent return to the moon, it’s natural to wonder whether human settlers—accustomed to high-speed, ubiquitous Internet access—will have to deal with mind-numbingly slow connections once they arrive on the lunar surface. The vast majority of today’s satellites and spacecraft have data rates measured in kilobits per second. But long-term lunar residents might not be as satisfied with the skinny bandwidth that, say, the Apollo astronauts contended with.

To meet the demands of high-definition video and data-intensive scientific research, NASA and other space agencies are pushing the radio bands traditionally allocated for space research to their limits. For example, the Orion spacecraft, which will carry astronauts around the moon during NASA’s Artemis 2 mission in 2022, will transmit mission-critical information to Earth via an S-band radio at 50 megabits per second. “It’s the most complex flight-management system ever flown on a spacecraft,” says Jim Schier, the chief architect for NASA’s Space Communications and Navigation program. Still, barely 1 Mb/s will be allocated for streaming video from the mission. That’s about one-fifth the speed needed to stream a high-definition movie from Netflix.

To boost data rates even higher means moving beyond radio and developing optical communications systems that use lasers to beam data across space. In addition to its S-band radio, Orion will carry a laser communications system for sending ultrahigh-definition 4K video back to Earth. And further out, NASA’s Gateway will create a long-term laser communications hub linking our planet and its satellite.

Laser communications are a tricky proposition. The slightest jolt to a spacecraft could send a laser beam wildly off course, while a passing cloud could interrupt it. But if they work, robust optical communications will allow future missions to receive software updates in minutes, not days. Astronauts will be sheltered from the loneliness of working in space. And the scientific community will have access to an unprecedented flow of data between Earth and the moon.

Today, space agencies prefer to use radios in the S band (2 to 4 gigahertz) and Ka band (26.5 to 40 GHz) for communications between spacecraft and mission control, with onboard radios transmitting course information, environmental conditions, and data from dozens of spaceflight systems back to mission control. The Ka band is particularly prized—Don Cornwell, who oversees radio and optical technology development at NASA, calls it “the Cadillac of radio frequencies”—because it can transmit up to gigabits per second and propagates well in space.

Any spacecraft’s ability to transmit data is constrained by some unavoidable physical truths of the electromagnetic spectrum. For one, radio spectrum is finite, and the prized bands for space communications are equally prized by commercial applications. Bluetooth and Wi-Fi use the S band, and 5G cellular networks use the Ka band.

The second big problem is that radio signals disperse in the vacuum of space. By the time a Ka-band signal from the moon reaches Earth, it will have spread out to cover an area about 2,000 kilometers in diameter—roughly the size of India. By then, the signal is a lot weaker, so you’ll need either a sensitive receiver on Earth or a powerful transmitter on the moon.

Laser communications systems also have dispersion issues, and beams that intersect can muddle up the data. But a laser beam sent from the moon would cover an area only 6 km across by the time it arrives on Earth. That means it’s much less likely for any two beams to intersect. Plus, they won’t have to contend with an already crowded chunk of spectrum. You can transmit a virtually limitless quantity of data using lasers, says Cornwell. “The spectrum for optical is unconstrained. Laser beams are so narrow, it’s almost impossible [for them] to interfere with one another.”
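
A back-of-the-envelope check (my own arithmetic, using the footprint figures above and the roughly 384,400-km mean Earth-moon distance) shows the beam divergence each footprint implies:

```python
# Back-of-envelope divergence implied by the footprints quoted above.
# Assumes simple linear beam spreading over the mean Earth-moon distance.
EARTH_MOON_KM = 384_400

def divergence_rad(footprint_km):
    """Full divergence angle (small-angle approximation)."""
    return footprint_km / EARTH_MOON_KM

for name, footprint in [("Ka-band radio", 2000), ("1550-nm laser", 6)]:
    angle = divergence_rad(footprint)
    print(f"{name}: {angle * 1e6:,.0f} microradians")
# Ka-band radio: ~5,200 microradians; laser: ~16 microradians -- which
# is why pointing a laser is so much harder, and intercepting one so
# unlikely.
```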

Higher frequencies also mean shorter wavelengths, which bring more benefits. Ka-band signals have wavelengths from 7.5 millimeters to 1 centimeter, but NASA plans to use lasers that have a 1,550-nanometer wavelength, the same wavelength used for terrestrial optical-fiber networks. Indeed, much of the development of laser communications for space builds on existing optical-fiber engineering. Shorter wavelengths (and higher frequencies) mean that more data can be packed into every second.

The advantages of laser communications have been known for many years, but it’s only recently that engineers have been able to build systems that outperform radio. In 2013, for example, NASA’s Lunar Laser Communications Demonstration proved that optical signals can reliably send information from lunar orbit back to Earth. The month-long experiment used a transmitter on the Lunar Atmosphere and Dust Environment Explorer to beam data back to Earth at speeds of 622 Mb/s, more than 10 times as fast as Orion’s S-band radio will.

“I was shocked to learn [Orion was] going back to the moon with an S-band radio,” says Bryan Robinson, an optical communications expert at MIT Lincoln Laboratory in Lexington, Mass. Lincoln Lab has played an important role in developing many of the laser communications systems on NASA missions, starting with the early optical demonstrations of the classified GeoLITE satellite in 2001. “Humans have gotten used to so much more, here on Earth and in low Earth orbit. I was glad they came around and put laser comm back on the mission.”

As a complement to its S-band radio, during the Artemis 2 mission Orion will carry a laser system called Optical to Orion, or O2O. NASA doesn’t plan to use O2O for any mission-critical communications. Its main task will be to stream 4K ultrahigh-definition video from the moon to a curious public back home. O2O will receive data at 80 Mb/s and transmit at 20 Mb/s while in lunar orbit. If you’re wondering why O2O will transmit at 20 Mb/s when a demonstration project six years ago was able to transmit at 622 Mb/s, it’s simply because the Orion developers “never asked us to do 622,” says Farzana Khatri, a senior staff member in Lincoln Lab’s optical communications group. Cornwell confirms that O2O’s downlink will deliver a minimum of 80 Mb/s from Earth, though the system is capable of higher data rates.

If successful, O2O will open the door for data-heavy communications on future crewed missions, allowing for video chats with family, private consultations with doctors, or even just watching a live sports event during downtime. The more time people spend on the moon, the more important all of these connections will be to their mental well-being. And eventually, video will become mission critical for crews on board deep-space missions.

Before O2O can even be tested in space, it first has to survive the journey. Laser systems mounted on spacecraft use telescopes to send and receive signals. Those telescopes rely on a fiddly arrangement of mirrors and other moving parts. O2O’s telescope will use an off-axis Cassegrain design, a type of telescope with two mirrors to focus the captured light, mounted on a rotating gimbal. Lincoln Lab researchers selected the design because it will allow them to separate the telescope from the optical transceiver, making the entire system more modular. The engineers must ensure that the Space Launch System rocket carrying Orion won’t shake the whole delicate arrangement apart. The researchers at Lincoln Lab have developed clasps and mounts that they hope will reduce vibrations and keep everything intact during the tumultuous launch.

Once O2O is in space, it will have to be precisely aimed. It’s hard to miss a receiver when your radio signal has a cross section the size of a large country. A 6-km-diameter signal, on the other hand, could miss Earth entirely with just a slight bump from the spacecraft. “If you [use] a laser pointer when you’re nervous and your hand is shaking, it’s going to go all over the place,” says Cornwell.

Orion’s onboard equipment will also generate constant minuscule vibrations, any one of which would be enough to throw off an optical signal. So engineers at NASA and Lincoln Lab will place the optical system on an antijitter platform. The platform measures the jitters from the spacecraft and produces an opposite pattern of vibrations to cancel them out—“like noise-canceling headphones,” Cornwell says.

One final hurdle for O2O will be dealing with any cloud cover back on Earth. Infrared wavelengths, like the O2O’s 1,550 nm, are easily absorbed by clouds. A laser beam might travel the nearly 400,000 km from the moon without incident, only to be blocked just above Earth’s surface. Today, the best defense against losing a signal to a passing stratocumulus is to beam transmissions to multiple receivers. O2O, for example, will use ground stations at Table Mountain, Calif., and White Sands, N.M.

The Gateway, scheduled to be built in the 2020s, will present a far bigger opportunity for high-speed laser communications in space. NASA, with help from its Canadian, European, Japanese, and Russian counterparts, will place this space station in orbit around the moon; the station will serve as a staging area and communications relay for lunar research.

NASA’s Schier suspects that research and technology demonstrations on the Gateway could generate 5 to 8 Gb/s of data that will need to be sent back to Earth. That data rate would dwarf the transmission speed of anything in space right now—the International Space Station (ISS) sends data to Earth at 25 Mb/s. “[Five to 8 Gb/s is] the kind of thing that if you turned everything on in the [ISS], you’d be able to run it for 2 seconds before you overran the buffers,” Schier says.

The Gateway offers an opportunity to build a permanent optical trunk line between Earth and the moon. One thing NASA would like to use the Gateway for is transmitting positioning, navigation, and timing information to vehicles on the lunar surface. “A cellphone in your pocket needs to see four GPS satellites,” says Schier. “We’re not going to have that around the moon.” Instead, a single beam from the Gateway could provide a lunar rover with accurate distance, azimuth, and timing to find its exact position on the surface.

What’s more, using optical communications could free up radio spectrum for scientific research. Robinson points out that the far side of the moon is an optimal spot to build a radio telescope, because it would be shielded from the chatter coming from Earth. (In fact, radio astronomers are already planning such an observatory: Our article “Rovers Will Unroll a Telescope on the Moon’s Far Side” explains their scheme.) If all the communication systems around the moon were optical, he says, there’d be nothing to corrupt the observations.

Beyond that, scientists and engineers still aren’t sure what else they’ll do with the Gateway’s potential data speeds. “A lot of this, we’re still studying,” says Cornwell.

In the coming years, other missions will test whether laser communications work well in deep space. NASA’s mission to the asteroid Psyche, for instance, will help determine how precisely an optical communications system can be pointed and how powerful the lasers can be before they start damaging the telescopes used to transmit the signals. But closer to home, the communications needed to work and live on the moon can be provided only by lasers. Fortunately, the future of those lasers looks bright.

This article appears in the July 2019 print issue as “Phoning Home, With Lasers.”

Is Ham Radio a Hobby, a Utility…or Both? A Battle Over Spectrum Heats Up

Post Syndicated from Julianne Pepitone original https://spectrum.ieee.org/tech-talk/telecom/wireless/is-ham-radio-a-hobby-a-utilityor-both-a-battle-over-spectrum-heats-up

Some think automated radio emails are mucking up the spectrum reserved for amateur radio, while others say these new offerings provide a useful service

Like many amateur radio fans his age, Ron Kolarik, 71, still recalls the “pure magic” of his first ham experience nearly 60 years ago. Lately, though, encrypted messages have begun to infiltrate the amateur bands in ways that he says are antithetical to the spirit of this beloved hobby.

So Kolarik filed a petition, RM-11831 [PDF], to the U.S. Federal Communications Commission (FCC) proposing a rule change to “Reduce Interference and Add Transparency to Digital Data Communications.” And as the proposal makes its way through the FCC’s process, it has stirred up heated debate that goes straight to the heart of what ham radio is, and ought to be.

The core questions: Should amateur radio—and its precious spectrum—be protected purely as a hobby, or is it a utility that delivers data traffic? Or is it both? And who gets to decide?