To be fair, the fact that we can connect to the Internet at all while we’re screaming across the sky at hundreds of kilometers an hour in a metal tube is pretty incredible. But still, the quality of that Internet connection often leaves something to be desired.
Jack Mandala, the CEO of the industry group Seamless Air Alliance, attributes the generally poor inflight Wi-Fi experience in part to the fact that industry-wide standards for the technology were never developed. “With regard to inflight connectivity, it’s largely just been proprietary equipment,” he says.
The hottest feature of 5G isn’t discussed very much. When people talk about 5G they tend to discuss the gigabit speeds or the lower latencies. But it’s network slicing, the ability to partition off segments of the 5G network with specific latency, bandwidth, and quality-of-service guarantees, that could change the underlying economics of cellular service. Network slicing could lead to new companies that provide connectivity and help offset the capital costs associated with deploying 5G networks.
How? Instead of selling data on a per-gigabyte basis, these companies could sell wireless connectivity with specific parameters. A manufacturing facility, for example, may prioritize low latency so that its robots operate as precisely as possible. A hospital may want not only low latency but also dedicated bandwidth for telemedicine, to ensure that signals aren’t lost at an inopportune moment.
Today, if a hospital or factory wants a dedicated wireless network with specific requirements, a telco has to custom-engineer it. But with network slicing, the telco can instead use software to allocate slices without human involvement. This would reduce the operating costs of a 5G network. That ease and flexibility, combined with the ability to price the network for different capabilities, will be what helps carriers justify the capital costs of deploying 5G, says Paul Challoner, the vice president of network product solutions for Ericsson North America.
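To make the idea concrete, here is a minimal Python sketch of what ordering a slice programmatically might look like. The `SliceRequest` class and the toy pricing formula are invented for illustration; they don't reflect any carrier's actual API or rates.

```python
from dataclasses import dataclass

@dataclass
class SliceRequest:
    """A hypothetical network-slice order: instead of buying data per
    gigabyte, the customer specifies the guarantees they need."""
    name: str
    max_latency_ms: float      # end-to-end latency guarantee
    min_bandwidth_mbps: float  # dedicated bandwidth floor
    priority: int              # QoS class, 1 = highest

# A factory prioritizes low latency for precise robot control; a
# hospital wants both low latency and dedicated telemedicine bandwidth.
factory = SliceRequest("robot-control", max_latency_ms=5,
                       min_bandwidth_mbps=50, priority=1)
hospital = SliceRequest("telemedicine", max_latency_ms=20,
                        min_bandwidth_mbps=200, priority=1)

def monthly_price(req: SliceRequest) -> float:
    """Toy pricing: tighter latency and more bandwidth cost more."""
    return 1000 / req.max_latency_ms + 2 * req.min_bandwidth_mbps

print(f"{factory.name}: ${monthly_price(factory):.0f}/mo")
print(f"{hospital.name}: ${monthly_price(hospital):.0f}/mo")
```

The point of the sketch is the shape of the transaction: once a slice is just a parameterized software request, the telco can fulfill it without custom engineering.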
Challoner envisions that soon customers will be able to go to a telco’s website and define what they want, get the pricing for it, and then use the network slice for however long they need. He sees 2020 as being the year that equipment companies like Ericsson “race to the slice,” trying to show wireless carriers what they can do.
Mobile-tech consultant Chetan Sharma thinks network slicing will likely take a year or two longer to hit the mainstream. But he also sees it as a catalyst for new companies entering the market to resell connectivity for dedicated use cases. For example, companies like Twilio or Particle, which already resell network connectivity to clients, could bring together slices from different carriers to offer a global service with specific characteristics. A company like BMW could then use that service when it wants to roll out a software update to all of its vehicles at a specific time, with assurance that the update made it through.
Or maybe Amazon or Microsoft Azure could offer an industrial IoT product to factories that have specific latency requirements, by bundling together wireless connectivity from multiple carriers. A few years back, the telecom industry was debating whether carriers were becoming a dumb pipe. Sharma thinks the ability to customize speed, latency, and quality of service means 5G will put an end to that particular debate.
That said, charging customers based on the capabilities they need does raise concerns about network neutrality, and about how to ensure that customers aren’t charged an arm and a leg for decent best-effort service.
“It’s uncharted territory,” says Sharma. “When the FCC was looking at [network neutrality] they didn’t consider network slicing as part of the equation. So my view is that they will have to update what operators are allowed to do with network slicing. We’ll need more clarity on the ruling.”
This article appears in the March 2020 print issue as “What 5G Hype Gets Wrong.”
5G systems will deploy large-scale antenna arrays and massive MIMO techniques to boost system capacity. Among the MIMO techniques considered for 5G systems, hybrid beamforming has emerged as a scalable and economical choice. In this webinar, we cover elements of an end-to-end 5G hybrid beamforming design, including:
5G waveform generation
Channel modeling, spatial multiplexing, and precoding
In the ongoing race to make and break digital codes, the idea of perfect secrecy has long hovered on the horizon like a mirage. A recent research paper has attracted both interest and skepticism for describing how to achieve perfect secrecy in communications by using specially patterned silicon chips to generate one-time keys that are impossible to recreate.
To keep us connected, our smartphones constantly switch between networks. They jump from cellular networks to public Wi-Fi networks in cafes, to corporate or university Wi-Fi networks at work, and to our own Wi-Fi networks at home. But we rarely have any input into the security and privacy settings of the networks to which we connect. In many cases, it would be tough to even figure out what those settings are.
A team at Northeastern University is developing personal virtual networks to give everyone more control over their online activities and how their information is shared. Those networks would allow a person’s devices to connect to cellular and Wi-Fi networks—but only on their terms.
Adding EDR capabilities into your AWS (Amazon Web Services) environment can inform investigations and provide actionable details for remediation. Attend this webinar to discover how to unpack and leverage the telemetry provided by endpoint security solutions, using MITRE Cloud examples such as Exploit Public-Facing Application (T1190) and Data Transfer to Cloud Account (T1537) and examining process trees. You will also find out how these solutions can help identify who has vulnerable software or configurations on their systems, by leveraging indicators of compromise (IOCs) such as MD5 hashes to pinpoint the depth and breadth of a malware infection.
The FCC measures exposure to RF energy as the amount of wireless power a person absorbs for each kilogram of their body. The agency calls this the specific absorption rate, or SAR. For a cellphone, the FCC’s threshold of safe exposure is 1.6 watts per kilogram. Penumbra’s test found that an iPhone 11 Pro emitted 3.8 W/kg.
Ryan McCaughey, Penumbra’s chief technology officer, said the test was a follow-up to an investigation conducted by the Chicago Tribune last year. The Tribune tested several generations of Apple, Samsung, and Motorola phones, and found that many exceeded the FCC’s limit.
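The arithmetic behind the comparison is simple; a couple of lines make the gap between the reported figure and the FCC threshold explicit:

```python
# Compare the reported SAR measurement against the FCC's threshold.
FCC_SAR_LIMIT_W_PER_KG = 1.6   # FCC safe-exposure threshold for phones
measured_sar = 3.8             # Penumbra's reported iPhone 11 Pro result

exceedance = measured_sar / FCC_SAR_LIMIT_W_PER_KG
print(f"Measured SAR is {exceedance:.2f}x the FCC limit")
```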
Replacing radio waves with laser light could boost the speed and reach of communications far beyond that promised by 5G. It could allow autonomous cars to talk to each other, let drones send high-resolution photos to the ground, and move large volumes of data around smart factories and smart homes. At least that’s the vision of SLD Laser, a Santa Barbara, Calif., company that demonstrated its latest version of laser LiFi at the recent Consumer Electronics Show in Las Vegas.
A clock in South Africa is counting down the seconds until a pair of broken cables is expected to go back into service. Every few hours, the service provider TENET, which keeps South Africa’s university and research facilities connected to the global Internet, tweets updates.
The eight-year-old West Africa Cable System (WACS) submarine cable, which runs parallel to Africa’s west coast, broke at two points early on 16 January. That same day, an 18-year-old cable called SAT-3 that runs along the same route also broke.
“With under $3,000 of compute time in Azure, we were able to break 435,000 certificates,” says JD Kilgallin, Keyfactor’s senior integration engineer and researcher. “We showed that this attack is very easy to execute now.”
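The quote doesn’t spell out the method, but a classic way to break many RSA certificates cheaply at scale is to look for moduli that accidentally share a prime factor, which a single GCD computation exposes. Here is a toy sketch with deliberately tiny primes (real moduli are thousands of bits long, and large-scale scans use batched GCD rather than pairwise checks):

```python
from math import gcd

# Toy RSA moduli: n1 and n2 accidentally share the prime p, as can
# happen when devices generate keys with poor randomness.
p, q1, q2 = 101, 103, 107          # tiny primes, for illustration only
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)                # one gcd reveals the shared prime
print(f"n1 = {n1}, n2 = {n2}, shared factor = {shared}")
print(f"n1 factors as {shared} x {n1 // shared}")
```

Once the shared prime is known, both moduli factor immediately and both private keys are recoverable; with enough certificates, this becomes a bulk compute job rather than a cryptographic breakthrough.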
Business districts may be bustling in the daytime, but they can often be near-deserted in the evenings. These fluctuations in population density pose a challenge to the emergence of 5G networks, which will require more hardware than ever before to relay massive amounts of data. Here’s the rub: To ensure reliable service, mobile operators must either invest in and deploy many more hardware units, or find ways to let the hardware move with the crowds.
One group of researchers is proposing a creative solution: installing small radio units on cars and crowdsourcing the task of data transmission when the vehicles are not in use. That approach relies on the fact that more cars tend to be parked in highly populated areas.
The most common network model that service providers are considering for 5G networks involves C-RAN architecture. Central units coordinate the transmission of data; the data is disseminated through distribution units and is further processed and transmitted by fleets of radio units. Those units convert the information to usable formats for mobile users.
Some researchers have explored deploying radio units on moving vehicles, such as city buses that run along defined routes. But it has proven difficult to transmit data reliably via moving targets.
In a study published 20 January in IEEE Access, a Japanese research team showed that harnessing radio units on parked cars results in efficient data transmission, all while keeping radio units close to where people are. The team proposed a crowdsourcing approach, combined with monetary or non-monetary incentives, to get drivers to participate.
With their approach, radio units are charged via the car battery and can be activated when the car is parked. When a crowd-sourced radio unit is available, it establishes a wireless mobile front-haul link with a neighboring distribution unit and starts working to transmit data to nearby phones.
In a series of simulations, the researchers compared the effectiveness of their approach to that of a traditional fleet of stationary radio units. The results show that 100 radio units installed on nearby parked cars, complementing 200 stationary units (300 radio units total), can deliver data better than 400 uniformly dispersed stationary units. Thus, it’s possible to get better throughput with fewer radio units, which have the added benefit of almost always being where network users happen to be.
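As a rough illustration of why placement can matter more than raw unit count, here is a toy Monte Carlo sketch (not the researchers' model): users cluster in hotspots, parked cars cluster there too, and a handful of units placed on those "parked cars" end up much closer to users than a larger uniform grid of stationary units.

```python
import random

random.seed(42)

# Toy model: users cluster in a few hotspots; parked cars do as well.
hotspots = [(0.2, 0.2), (0.7, 0.8), (0.5, 0.4)]
users = [(hx + random.gauss(0, 0.05), hy + random.gauss(0, 0.05))
         for hx, hy in hotspots for _ in range(50)]

def avg_nearest_distance(units):
    """Mean distance from each user to the closest radio unit --
    a crude proxy for achievable throughput (closer is better)."""
    return sum(min(((ux - x) ** 2 + (uy - y) ** 2) ** 0.5 for x, y in units)
               for ux, uy in users) / len(users)

# Strategy A: 16 units on a uniform grid, ignoring where users are.
grid = [(i / 4 + 0.125, j / 4 + 0.125) for i in range(4) for j in range(4)]

# Strategy B: only 9 units, but placed on "parked cars" near the hotspots.
parked = [(hx + random.gauss(0, 0.03), hy + random.gauss(0, 0.03))
          for hx, hy in hotspots for _ in range(3)]

print(f"16 uniform units: avg distance {avg_nearest_distance(grid):.3f}")
print(f" 9 parked-car units: avg distance {avg_nearest_distance(parked):.3f}")
```

With fewer units, the crowd-following placement wins, which mirrors the qualitative finding of the study: adaptively locating units where users are beats uniformly scattering more of them.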
The researchers also confirmed the efficacy of this approach through experiments. “The improvement of throughput by locating and activating a radio unit near users was far higher than expected,” says Yu Nakayama, a member of the research team and a professor at Tokyo University of Agriculture and Technology. “This result implies the effectiveness of adaptively locating and activating radio units based on the distribution of mobile users,” Nakayama adds.
The group is interested in exploring the commercialization of this technique. “We believe that it is a promising solution for the future mobile networks beyond 5G,” Nakayama says.
When government officials in India decided to shut down the Internet, software engineers working for an IT and data analytics firm lost half a day of work and fell behind in delivering a project for clients based in London. A hotel was unable to pay its employees or manage online bookings for tourists. A major hospital delayed staff salary payments and restricted its medical services to the outpatient and emergency departments.
The end of Wi-Fi is nigh. At least, that’s what Azhar Hussain, the CEO of IoT company Hanhaa, told me on a phone call late last year. He believes that allocating spectrum in smaller chunks will let municipalities, universities, and companies create private 5G cellular networks, and that the convenience of those networks will impel companies to choose cellular connections over Wi-Fi for their IoT devices.
There’s reason to think Hussain is right, at least for higher-value devices, such as medical devices, home appliances, and outdoor gear like pool-cleaning robots. Zach Supalla, the CEO of Particle, a company that supplies IoT components to businesses with little experience building connected products, says more than half of the IoT devices in Particle’s cloud that use cellular connections are also within range of a Wi-Fi network. Supalla says that companies choose cellular modules over Wi-Fi because the modules are easier to set up and businesses can better control the consumer experience.
Wi-Fi devices are notoriously difficult to connect to one another, or pair. To get a connected product on their home Wi-Fi network, consumers must often pair with a software-based access point before switching the device over to their own network.
This process can be fraught with errors. Even I, a reporter who has tested hundreds of connected devices, fail to get a device on my network on the first try roughly a third of the time. To make it easier, Amazon and Google have both created proprietary onboarding processes that handle the setup on behalf of the user, so that when consumers power their devices on, they automatically try to join their network.
However, device manufacturers still have to implement both Amazon’s and Google’s programs separately, and that requires know-how that some companies don’t possess. Thankfully, Amazon, Apple, and Google are now working on a smart-home standard that may simplify things. But the details are scant, and any solution they develop won’t be available until 2021 at the earliest.
When you’re faced with multiple Wi-Fi ecosystems, cellular is just easier, Hussain says. Cellular networks cost more now because you have to install radios on the devices and pay a subscription to use the cellular network. Hussain sees those costs coming down, potentially even disappearing, given time.
That’s because he’s anticipating a future where universities, businesses, and municipalities set up their own cellular networks using spectrum obtained through new spectrum auctions, such as the Citizens Broadband Radio Service (CBRS) auctions occurring in the United States in June. Cellular equipment makers are already building gear and testing these private networks in factories and offices. If new roaming plans are developed to allow devices to come onto these local networks easily, similar to joining a Wi-Fi hotspot, cellular connectivity will become practically free.
Even if Hussain’s vision doesn’t come to pass in the next 10 years, the costs of low-data-rate cellular contracts will continue to drop, and that could still eventually be the nail in Wi-Fi’s coffin. I mostly agree: there are plenty of reasons to believe that Wi-Fi will never disappear entirely, but I do think small cellular networks will take over much of its role in our lives.
This article appears in the February 2020 print issue as “Wi-Fi’s Long Goodbye.”
Amidst rising tensions after the United States killed Qassem Soleimani, the chief of Iran’s Quds Force, in a drone strike in Baghdad last week, security experts and U.S. government officials warn that Iran may retaliate with cyberattacks.
Iran-based attack groups have expanded their digital offensive capabilities significantly since 2012, when they launched crippling distributed denial-of-service attacks against financial services companies. Since then, the cybersecurity arm of Iran’s Islamic Revolutionary Guard Corps, and private sector contractors acting on behalf of the government, have added tools to their arsenals.
Iran has also “demonstrated a willingness” to use wiper malware, CISA said in its 6 January alert. Wipers are a category of malware that erases the contents of an infected machine’s hard drive and then destroys the computer’s master boot record, making it impossible for the machine to boot up again. Like any other type of malware, wipers rely on various methods for the initial infection, and once in, they can steal information or execute unauthorized code. The difference is that wipers don’t bother being stealthy, because their primary purpose is to render the machine unusable.
It’s a problem as old as radio: Radios cannot send and receive signals at the same time on the same frequency. Or to be more accurate, whenever they do, any signals they receive are drowned out by the strength of their transmissions.
Being able to send and receive signals simultaneously—a technique called full duplex—would make for far more efficient use of our wireless spectrum, and make radio interference less of a headache. As it stands, wireless communications generally rely on frequency- and time-division duplexing techniques, which separate the send and receive signals based on either the frequency used or when they occur, respectively, to avoid interference.
Kumu Networks, based in Sunnyvale, Calif., is now selling an analog self-interference canceller that the company says can be easily installed in almost any wireless system. The device is a plug-and-play component that cancels out the noise of a transmitter so that a radio can hear much quieter incoming signals. It’s not true full duplex, but it tackles one of radio’s biggest problems: Transmitted signals are much more powerful than received signals.
“A transmitter signal is almost a trillion times more powerful than a receiver signal,” says Harish Krishnaswamy, an associate professor of electrical engineering at Columbia University, in New York City. That makes it extra hard to filter out the noise, he adds.
Krishnaswamy says that in order to cancel signals with precision, you have to do it in steps. One step might involve performing some cancellation within the antenna itself. More cancellation techniques can be developed in chips and in digital layers.
While fully canceling a radio’s own transmissions may be theoretically possible, Krishnaswamy notes that reliably doing so has proven difficult, even for engineers in the lab. Out in the world, a radio’s environment is constantly changing. How a radio hears reflections and echoes of its own transmissions changes as well, and so cancellers must adapt to precisely filter out extraneous signals.
Kumu’s K6 Canceller Co-Site Interference Mitigation Module (the new canceller’s official name) is strictly an analog approach. Joel Brand, the vice president of product management at Kumu, says the module can achieve 50 decibels of cancellation. Put in terms of a power ratio, that means it cancels the transmitted signal by a factor of 100,000. That’s still a far cry from what would be required to fully cancel the transmitted signal, but it’s enough to help a radio hear signals more easily while transmitting.
Kumu’s module cancels a radio’s own transmissions by using analog components tuned to emit signals that are the inverse of the transmitted signals. The signal inversion allows the module to cancel out the transmitted signal and ensure that other signals the radio is listening for make it through.
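The numbers are easy to play with. The sketch below is a toy baseband simulation, not Kumu's actual design: subtracting a slightly mis-scaled inverted copy of the transmitted tone leaves a residual consistent with roughly 50 dB of cancellation, and the weak desired signal survives.

```python
import math

# Toy baseband samples over one capture window: a strong transmitted
# tone leaks into the receiver on top of a much weaker desired signal.
N = 1000
tx = [math.sin(2 * math.pi * 5 * n / N) for n in range(N)]     # self-interference
desired = [0.001 * math.sin(2 * math.pi * 13 * n / N) for n in range(N)]
rx = [t + d for t, d in zip(tx, desired)]

# Analog-style cancellation: subtract an inverted copy of the transmitted
# signal. The copy is slightly mis-scaled, since a perfect inverse isn't
# achievable in practice; this residual corresponds to about 50 dB.
residual = 10 ** (-50 / 20)
cancelled = [r - (1 - residual) * t for r, t in zip(rx, tx)]

def power(sig):
    """Average signal power."""
    return sum(s * s for s in sig) / len(sig)

suppression_db = 10 * math.log10(power(rx) / power(cancelled))
print(f"self-interference suppressed by roughly {suppression_db:.0f} dB")
```

Even imperfect cancellation shrinks the transmitter's leakage from dominating the receiver to being comparable with the signals the radio is actually listening for.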
Despite its limitations, Brand says there’s plenty of interest in a canceller of this caliber. One example he gives is an application in defense. Radio jammers are common tools, but with self-interference cancellation, they can ignore their own jamming signal to continue listening for other radios that are trying to broadcast. Another area where Brand says Kumu’s technology has received some interest is in aerospace, with one customer launching modules into space on satellites.
Kumu also develops digital cancellation techniques that can work in tandem with analog gear like its K6 canceller. However, according to Brand, digital cancellations tend to be highly bespoke. Performing them often means “cleaning” the signal after the radio has received it, which requires a deep knowledge of the radio systems involved.
Analog cancellation simply requires tuning the components to filter out the transmitted signal. And because of that simplicity, Kumu’s module may well find its place in noisy wireless environments such as the home—where digital assistants like Alexa and smart home devices have already moved in.
This article appears in the January 2020 print issue as “Helping Radios Hear Themselves.”
Installing optical fibers with fat cores once seemed like a good idea for local-area or campus data networks. It was easier to couple light into such “multimode” fibers than into the tiny cores of high-capacity “singlemode” fibers used for long-haul networks.
The fatter the core, the slower data flows through the fiber, but fiber with 50-micrometer (µm) cores can carry data at rates of 100 megabits per second up to a few hundred meters—good enough for local transmission.
Now Cailabs, in Rennes, France, has developed special optics that it says can send signals at rates of 10 gigabits per second (Gbps) over distances of up to 10 kilometers through the same fiber, avoiding the need to replace legacy multimode fiber. The company hopes to reach rates of 100 Gbps, which are now widely required for large data centers.
In February, a group of researchers from Google, Microsoft, Qualcomm, Samsung, and half a dozen universities will gather in San Jose, Calif., to discuss the challenge of bringing machine learning to the farthest edge of the network, specifically microprocessors running on sensors or other battery-powered devices.
The event is called the Tiny ML Summit (ML for “machine learning”), and its goal is to figure out how to run machine learning algorithms on the tiniest microprocessors out there. Machine learning at the edge will drive better privacy practices, lower energy consumption, and enable novel applications in future generations of devices.
As a refresher, at its core machine learning is the training of a neural network. Such training requires a ton of data manipulation. The end result is a model that is designed to complete a task, whether that’s playing Go or responding to a spoken command.
Many companies are currently focused on building specialized silicon for machine learning in order to train networks inside data centers. They also want silicon for conducting inference (running new data through a trained model to generate predictions) at the edge. But the goal of the Tiny ML community is to take inference to the smallest processors out there, like an 8-bit microcontroller that powers a remote sensor.
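What does inference look like on an 8-bit microcontroller? At its simplest, it means storing weights and activations as int8 and accumulating in integer arithmetic, converting back to a real-valued output only at the end. The sketch below shows the idea for a single neuron; the weights, inputs, and scale factors are made-up values, not from any real model.

```python
# A minimal sketch of 8-bit quantized inference, the kind of trick that
# makes neural-network layers feasible on tiny microcontrollers.
# Weights and activations are stored as int8; accumulation is in int32.

def quantize(values, scale):
    """Map floats to int8 with a simple symmetric scheme."""
    return [max(-128, min(127, round(v / scale))) for v in values]

# Hypothetical trained weights for one tiny fully connected neuron.
weights_f = [0.9, -0.4, 0.25, 0.1]
inputs_f = [0.5, 1.0, -0.3, 0.8]

w_scale, x_scale = 0.01, 0.01
w_q = quantize(weights_f, w_scale)   # int8 weights
x_q = quantize(inputs_f, x_scale)    # int8 activations

# Integer-only multiply-accumulate, then rescale back to float once.
acc = sum(w * x for w, x in zip(w_q, x_q))   # fits comfortably in int32
output = acc * (w_scale * x_scale)

reference = sum(w * x for w, x in zip(weights_f, inputs_f))
print(f"quantized: {output:.4f}  float reference: {reference:.4f}")
```

The payoff is that the inner loop needs only integer multiplies and adds, which even an 8-bit MCU without a floating-point unit can execute cheaply.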
To be clear, there’s already been a lot of progress in bringing inference to the edge if we’re talking about something like a smartphone. In November 2019, Google open-sourced two versions of its machine learning algorithms, one of which required 50 percent less power to run, and the other of which performed twice as fast as previous versions of the algorithm. There are also several startups such as Flex Logix, Greenwaves, and Syntiant tackling similar challenges using dedicated silicon.
But the Tiny ML community has different goals. Imagine including a machine learning model that can separate a conversation from background noise on a hearing aid. If you can’t fit that model on the device itself, then you need to maintain a wireless connection to the cloud where the model is running. It’s more efficient, and more secure, to run the model directly on the hearing aid—if you can fit it.
Tiny ML researchers are also experimenting with better data classification by using ML on battery-powered edge devices. Jags Kandasamy, CEO of Latent AI, which is developing software to compress neural networks for tiny processors, says his company is in talks with companies that are building augmented-reality and virtual-reality headsets. These companies want to take the massive amounts of image data their headsets gather and classify the images seen on the device so that they send only useful data up to the cloud for later training. For example, “If you’ve already seen 10 Toyota Corollas, do they all need to get transferred to the cloud?” Kandasamy asks.
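The classify-then-filter idea Kandasamy describes can be sketched in a few lines. The `classify` stub below stands in for a real on-device model, and the frame data is invented; the point is the filtering logic, which uploads only classes the device hasn't seen before.

```python
# Sketch of the filter-before-upload idea: classify frames on the
# device and send to the cloud only frames of classes not seen before.

def classify(frame):
    """Stand-in for an on-device ML model."""
    return frame["label"]

seen_classes = set()
uploaded = []

# Ten sightings of the same car model, then one novel object.
frames = [{"label": "toyota_corolla"}] * 10 + [{"label": "delivery_truck"}]
for frame in frames:
    label = classify(frame)
    if label not in seen_classes:   # only novel classes go to the cloud
        seen_classes.add(label)
        uploaded.append(frame)

print(f"{len(frames)} frames captured, {len(uploaded)} uploaded")
```

Of the eleven frames captured, only two leave the device, which is exactly the bandwidth and energy saving the next paragraph describes.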
On-device classification could be a game changer in reducing the amount of data gathered and input into the cloud, which saves on bandwidth and electricity. Which is good, as machine learning typically requires a lot of electricity.
There’s plenty of focus on the “bigger is better” approach when it comes to machine learning, but I’m excited about the opportunities to bring machine learning to the farthest edge. And while Tiny ML is still focused on the inference challenge, maybe someday we can even think about training the networks themselves on the edge.
This article appears in the January 2020 print issue as “Machine Learning on the Edge.”
With the rapid rate of enterprises moving toward cloud providers, security organizations are seeking out methods to implement security controls. Attendees of this webinar will discover how to take advantage of the convenience of cloud access security brokers (CASBs) to integrate modern technologies and secure their AWS footprint with a suite of capabilities.
Attendees will learn:
How a CASB can help make sense of auditing data
How a CASB can provide data protection and storage security
What common features of CASBs can be leveraged to secure AWS deployments