All posts by Michael Koziol

A Radio Frequency Exposure Test Finds an iPhone 11 Pro Exceeds the FCC’s Limit

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/radio-frequency-exposure-test-iphone-11-pro-double-fcc-limits

A test by Penumbra Brands to measure how much radiofrequency energy an iPhone 11 Pro gives off found that the phone emits more than twice the amount allowable by the U.S. Federal Communications Commission.

The FCC measures exposure to RF energy as the amount of wireless power a person absorbs for each kilogram of their body. The agency calls this the specific absorption rate, or SAR. For a cellphone, the FCC’s threshold of safe exposure is 1.6 watts per kilogram. Penumbra’s test found that an iPhone 11 Pro emitted 3.8 W/kg.
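
As a quick sanity check on those figures (a back-of-the-envelope calculation, not part of Penumbra's methodology):

```python
# Back-of-the-envelope check of the numbers above; the values are the
# reported ones, and the comparison is illustrative only.
fcc_sar_limit = 1.6    # FCC specific absorption rate limit, W/kg
penumbra_result = 3.8  # Penumbra's reported iPhone 11 Pro measurement, W/kg

print(f"{penumbra_result / fcc_sar_limit:.2f}x the FCC limit")  # -> 2.38x
```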

Ryan McCaughey, Penumbra’s chief technology officer, said the test was a follow-up to an investigation conducted by the Chicago Tribune last year. The Tribune tested several generations of Apple, Samsung, and Motorola phones, and found that many exceeded the FCC’s limit.

No Job Is Too Small for Compact Geostationary Satellites

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/aerospace/satellites/no-job-is-too-small-for-compact-geostationary-satellites

A typical geostationary satellite is as big as a van, as heavy as a hippopotamus, and as long-lived as a horse. These parameters shorten the list of companies and countries that can afford to build and operate the things.

Even so, such communications satellites have been critically important because their signals reach places that would otherwise be tricky or impossible to connect. For example, they can cover rural areas that are too expensive to hook up with optical fiber, and they provide Wi-Fi services to airliners and wireless communications to ocean-going ships.

The problem with these giant satellites is the cost of designing, building, and launching them. To become competitive, some companies are therefore scaling down: They’re building geostationary “smallsats.”

Two companies, Astranis and GapSat, plan to launch GEO smallsats in 2020 and 2021, respectively. Both companies acknowledge that there will always be a place for those hulking, minibus-size satellites. But they say there are plenty of business opportunities for a smaller, lighter, more flexible version—opportunities that the newly popular low Earth orbit (LEO) satellite constellations aren’t well suited for.

Market forces have made GEO smallsats desirable; technological advances have made them feasible. The most important innovations are in software-defined radio and in rocket launching and propulsion.

One reason geostationary satellites have to be so large is the sheer bulk of their analog radio components. “The biggest things are the filters,” says John Gedmark, the CEO and cofounder of Astranis, referring to the electronic component that keeps out unwanted radio frequencies. “This is hardware that’s bolted onto waveguides, and it’s very large and made of metal.”

Waveguides are essentially hollow metal tubes that carry signals between the transmitters and receivers and the actual antennas. For C-band frequencies, for instance, waveguides can be 3.5 centimeters wide. While that may not seem so big, analog satellites need a waveguide for every frequency they send and receive over, and the number of waveguides can add up quickly. Combined, the waveguides, filters, and other signal equipment are a massive, expensive volume of hardware that, until recently, had no work-around.

Software-defined radio provides a much more compact digital alternative. Although SDR as a concept has been around for decades, in recent years improved processors have made it increasingly practical to replace analog parts with much smaller digital equivalents.
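
To make the idea concrete, here's a minimal sketch of the kind of filtering an SDR does in software: a digital band-pass filter defined by a handful of coefficients and applied with plain arithmetic, standing in for a bolted-on metal filter. The sample rate and band edges are arbitrary values chosen for illustration.

```python
import numpy as np
from scipy import signal

fs = 10e6  # sample rate in Hz; arbitrary illustrative value

# A 129-tap FIR band-pass filter passing 1-2 MHz: the digital stand-in
# for an analog filter and its waveguide plumbing
taps = signal.firwin(129, [1.0e6, 2.0e6], pass_zero=False, fs=fs)

# Test signal: one tone inside the pass band, one outside it
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1.5e6 * t) + np.sin(2 * np.pi * 3.5e6 * t)

y = signal.lfilter(taps, 1.0, x)  # filtering is now just multiply-and-add

# The in-band tone survives; the out-of-band tone is strongly attenuated
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(y), 1 / fs)
for f in (1.5e6, 3.5e6):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f / 1e6:.1f} MHz tone magnitude: {spectrum[idx]:.1f}")
```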

Another technological improvement with an even longer pedigree is electric propulsion, which generates thrust by accelerating ionized propellant through an electric field. This method generates less thrust than chemical systems, which means it takes longer to move a satellite into a new orbit. However, electric propulsion gets far more mileage out of a given quantity of propellant, and that saves a lot of space and weight.
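
The rocket equation shows why that matters. In this rough sketch, the delta-v and specific-impulse (Isp) values are assumed ballpark figures, not numbers from any particular satellite:

```python
import math

def propellant_fraction(delta_v_m_s, isp_s, g0=9.81):
    """Tsiolkovsky rocket equation: fraction of total mass that must be propellant."""
    return 1 - math.exp(-delta_v_m_s / (isp_s * g0))

dv = 1500.0  # m/s; rough order of magnitude for orbit-raising maneuvers
print(f"Chemical thruster (Isp ~300 s):  {propellant_fraction(dv, 300):.0%} propellant")
print(f"Electric thruster (Isp ~1500 s): {propellant_fraction(dv, 1500):.0%} propellant")
# Roughly 40% vs. 10% of launch mass -- the electric system's propellant
# savings is what frees up space and weight on a smallsat.
```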

It’s worth mentioning that a smallsat may not be as minuscule as its name suggests. Just how small it can be depends on whom you’re talking to. For example, GapSat’s satellite, GapSat-2, weighs 20 kilograms and takes up about as much space as a mailbox, according to David Gilmore, the company’s chief operating officer. At the small end, the tiniest smallsats—sometimes called microsats or nanosats—are similar in size to CubeSats, the briefcase-size satellites being developed by SpaceX, OneWeb, and other companies for use in massive LEO constellations.

The miniaturization of geostationary satellites owes as much to market forces as to technology. Back in the 1970s, demand for broad spectrum access (for example, to broadcast television) favored large satellites bearing tons of transponders that could broadcast across many frequencies. But most of the fruits of the wireless spectrum have been harvested. Business opportunities now center on the scraps of spectrum remaining.

“There’s a lot of snippets of spectrum left over that the big guys have left behind,” says Rob Schwarz, the chief technology officer of space infrastructure at Maxar Technologies in Palo Alto, Calif. “That doesn’t fit into that big-satellite business model.”

Instead, companies like Astranis and GapSat are building business models around narrower spectrum access, transmitting over a smaller range of frequencies or covering a more localized geographic area.

Meanwhile, private rocket firms are cutting the cost of putting things up in orbit. SpaceX and Blue Origin have both developed reusable rockets that reduce costs per launch. And it’s easy enough to design rockets specifically to carry a lot of small packages rather than one huge one, which means the cost of a single launch can be spread over many more satellites.

That’s not to say that van-size GEO satellites are going extinct. “So there’s a trend to building bigger, more complicated, more sophisticated, much more expensive, and much higher-bandwidth satellites that make all the satellites ever built in the history of the world pale in comparison,” says Gregg Daffner, the chief executive officer of GapSat. These are the satellites that will replace today’s GEO communications satellites, the ones that can give wide-spectrum coverage to an entire continent.

But a second trend, the one GapSat is betting on, is that the new GEO smallsats won’t directly compete against those hemisphere-covering behemoths.

“For a customer that has the spectrum and the market demand, bigger is still better,” says Maxar’s Schwarz. “The challenge is whether or not their business case can consume a terabit per second. It’s sort of like going to Costco—you get the cheapest possible package of ground beef, but if you’re alone, you’ve got to wonder, am I going to eat 25 pounds of ground beef?” Companies like Astranis and GapSat are looking for the consumers that need just a bit of spectrum for only a very narrow application.

Astranis, for example, is targeting service providers that have no need for a terabit per second. “The smaller size means we can find customers who are interested in most, if not all, of the satellite’s capacity right off the bat,” says CEO Gedmark. Bigger satellites, for comparison, can take years to sell all their available spectrum; Astranis instead intends to launch each satellite already knowing who its one customer is. Astranis plans to launch its first satellite on a SpaceX Falcon 9 rocket in the last quarter of 2020. The satellite, which is for Alaska-based service provider Pacific Dataport, will have 7.5 gigabits per second of capacity in the Ka-band (26.5 to 40 gigahertz).

According to Gedmark, preventing overheating was one of the biggest technical challenges Astranis faced, because of the great power the company is packing into a small volume. There’s no air up there to carry away excess heat, so the satellite is entirely reliant on thermal radiation.
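
The Stefan-Boltzmann law governs how much heat a radiating surface can shed, and it makes the squeeze clear. The power, temperature, and emissivity values below are assumptions for illustration, not Astranis figures:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

waste_heat = 1000.0    # W of electronics dissipation; assumed value
radiator_temp = 320.0  # K; assumed radiator operating temperature
emissivity = 0.9       # typical for spacecraft radiator coatings

# Radiating is the only way out: area needed to reject the heat
area = waste_heat / (emissivity * SIGMA * radiator_temp**4)
print(f"Radiator area required: {area:.1f} m^2")  # ~1.9 m^2
```

Nearly 2 square meters of radiator is easy to find on a van-size satellite bus, but it is a real constraint on a spacecraft the size of a mailbox.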

Although the Astranis satellite in many ways functions like a larger communications satellite, just for customers that need less capacity, GapSat—as its name implies—is looking to plug holes in the coverage of those larger satellites. Often, satellite service providers find that for a few months they need to supply a bit more bandwidth than a satellite in orbit can currently handle. That typically means they’ll need to borrow bandwidth from another satellite, which can involve tricky negotiations. Instead, GapSat believes, GEO smallsats can meet these temporary demands.

Historically, GapSat has been a bandwidth broker, connecting customers that needed temporary coverage with satellite operators that were in a position to offer it. Now the company plans to launch GapSat-2 to provide its own temporary service. The satellite would sit in place for just a few months before moving to another orbital slot for the next customer.

However, GapSat’s plan created a bit of a design conundrum. On the one hand, GapSat-2 needed to be small, to keep costs manageable and to be able to quickly shift into a new orbital slot. On the other, the satellite also had to work in whatever frequencies each customer required. Daffner calls the specifics of the company’s solution its “secret sauce,” but the upshot is that GapSat has developed wideband transponders to offer coverage in the Ku-band (12 to 18 GHz), Ka-band (26.5 to 40 GHz), and V/Q-band (33 to 75 GHz), depending on what each customer needs.

Don’t expect there to be much of a clash between the smallsat companies and the companies deploying LEO constellations. Gedmark says there’s little to no overlap between the two emerging forms of space-based communications.

He notes that because LEO constellations are closer to Earth, they have an advantage for customers that require low latency. LEO constellations may be a better choice for real-time voice communications, for example. If you’ve ever been on a phone call with a delay, you know how jarring it can be to have to wait for the other person to respond to what you said. However, by their nature, LEO constellations are all-or-nothing affairs because you need most if not all of the constellation in space to reliably provide coverage, whereas a small GEO satellite can start operating all by itself.
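
The latency gap comes straight from geometry. A minimal sketch, using typical altitudes and ignoring processing and ground-network delays:

```python
C = 299_792_458  # speed of light in vacuum, m/s

def round_trip_ms(altitude_km):
    # Best case: straight up and straight back at light speed
    return 2 * altitude_km * 1000 / C * 1000

print(f"LEO (~550 km altitude):   {round_trip_ms(550):.1f} ms")
print(f"GEO (35,786 km altitude): {round_trip_ms(35_786):.0f} ms")
# ~4 ms vs. ~240 ms -- the GEO round trip alone approaches the delay
# people start to notice on a phone call.
```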

Astranis and GapSat will test that proposition soon after they launch in late 2020 and early 2021, respectively. They’ll be joined by Ovzon and Saturn Satellite Networks, which are also building GEO smallsats for 2021 launches.

There’s one final area in which the Astranis and GapSat satellites will differ from the larger communications satellites: their life span. GapSat-2 will have to be raised into a graveyard orbit after roughly 6 years, compared with 20 to 25 years for today’s huge GEO satellites. Astranis is also intentionally shooting for shorter life spans than those common for GEO satellites.

“And that is a good thing!” Gedmark says. “You just get that much faster of an injection of new technology up there, rather than having these incredibly long, 25-year technology cycles. That’s just not the world we live in anymore.”

This article appears in the January 2020 print issue as “Geostationary Satellites for Small Jobs.”

The article has been updated from its print version to reflect the most recent information on GapSat’s plans.

Kumu Networks Launches an Analog Radio Module That Cancels Its Own Interference

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/wireless/kumu-networks-launches-an-analog-radio-module-that-cancels-its-own-interference

It’s a problem as old as radio: Radios cannot send and receive signals at the same time on the same frequency. Or to be more accurate, whenever they do, any signals they receive are drowned out by the strength of their transmissions.

Being able to send and receive signals simultaneously—a technique called full duplex—would make for far more efficient use of our wireless spectrum, and make radio interference less of a headache. As it stands, wireless communications generally rely on frequency- and time-division duplexing techniques, which separate the send and receive signals based on either the frequency used or when they occur, respectively, to avoid interference.

Kumu Networks, based in Sunnyvale, Calif., is now selling an analog self-interference canceller that the company says can be easily installed in most any wireless system. The device is a plug-and-play component that cancels out the noise of a transmitter so that a radio can hear much quieter incoming signals. It’s not true full duplex, but it tackles one of radio’s biggest problems: Transmitted signals are much more powerful than received signals.

“A transmitter signal is almost a trillion times more powerful than a receiver signal,” says Harish Krishnaswamy, an associate professor of electrical engineering at Columbia University, in New York City. That makes it extra hard to filter out the noise, he adds.

Krishnaswamy says that in order to cancel signals with precision, you have to do it in steps. One step might involve performing some cancellation within the antenna itself. More cancellation techniques can be developed in chips and in digital layers.

While complete cancellation may be theoretically possible, Krishnaswamy notes that reliably reaching that mark has proven difficult, even for engineers in the lab. Out in the world, a radio’s environment is constantly changing. How a radio hears reflections and echoes of its own transmissions changes as well, and so cancellers must adapt to precisely filter out extraneous signals.

Kumu’s K6 Canceller Co-Site Interference Mitigation Module (the new canceller’s official name) is strictly an analog approach. Joel Brand, the vice president of product management at Kumu, says the module can achieve 50 decibels of cancellation. Put in terms of a power ratio, that means it cancels the transmitted signal by a factor of 100,000. That’s still a far cry from what would be required to fully cancel the transmitted signal, but it’s enough to help a radio hear signals more easily while transmitting.

Kumu’s module cancels a radio’s own transmissions by using analog components tuned to emit signals that are the inverse of the transmitted signals. The signal inversion allows the module to cancel out the transmitted signal and ensure that other signals the radio is listening for make it through.
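
A decibel figure converts to a power ratio as 10^(dB/10), which is how 50 dB becomes a factor of 100,000. The toy model below mimics the idea numerically: subtract a slightly imperfect inverted copy of your own transmission and see how much of it remains. The 0.3 percent amplitude error is an arbitrary choice that happens to land near Kumu's quoted figure; it is not a model of the K6's actual circuitry.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

tx = rng.normal(size=n)             # our own, very loud transmission
wanted = 1e-4 * rng.normal(size=n)  # the much weaker signal we want to hear
rx = tx + wanted                    # what the antenna actually picks up

# Cancel by adding the inverse of the transmission. A real canceller's
# inverted copy is imperfect; model that as a 0.3% amplitude error.
error = 0.003
after = rx - (1 - error) * tx       # residual self-interference + wanted signal

leftover = after - wanted           # the self-interference that survived
cancellation_db = 10 * np.log10(np.mean(tx**2) / np.mean(leftover**2))
print(f"Cancellation achieved: {cancellation_db:.0f} dB")  # ~50 dB
```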

Despite its limitations, Brand says there’s plenty of interest in a canceller of this caliber. One example he gives is an application in defense. Radio jammers are common tools, but with self-interference cancellation, they can ignore their own jamming signal to continue listening for other radios that are trying to broadcast. Another area where Brand says Kumu’s technology has received some interest is in aerospace, with one customer launching modules into space on satellites.

Kumu also develops digital cancellation techniques that can work in tandem with analog gear like its K6 canceller. However, according to Brand, digital cancellations tend to be highly bespoke. Performing them often means “cleaning” the signal after the radio has received it, which requires a deep knowledge of the radio systems involved.

Analog cancellation simply requires tuning the components to filter out the transmitted signal. And because of that simplicity, Kumu’s module may well find its place in noisy wireless environments such as the home—where digital assistants like Alexa and smart home devices have already moved in.

This article appears in the January 2020 print issue as “Helping Radios Hear Themselves.”

The Ultimate Generation

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/the-ultimate-generation

One of 5G’s biggest selling points is faster data rates than any previous generation of wireless—by a lot. Millimeter waves are part of what’s making that possible. 5G’s use of millimeter waves, a higher-frequency band than 2G, 3G, or 4G ever used, has forced service providers like AT&T and T-Mobile to rethink how 5G networks should be deployed, with higher frequencies requiring small cell sites located more closely together.

6G, while still just a hazy idea in the minds of wireless researchers, could very well follow in 5G’s footsteps by utilizing higher frequencies and pushing for faster data rates. So let’s have some fun with that—assuming such qualities remain important for future generations of wireless, where will that take us? What will 8G look like? 10G? And at what point does this extrapolation to future generations of wireless no longer make physical sense?

Expert Discusses Key Challenges for the Next Generation of Wearables

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/the-human-os/biomedical/devices/wireless-wearables-for-biochemistry-still-need-a-handful-of-improvements

For decades, pedometers have counted our steps and offered insights into mobility. More recently, smartwatches have begun to track a suite of vital signs in real time. There is, however, a third category of information—in addition to mobility and vitals—that could be monitored by wearable devices: biochemistry.

To be clear, there are real-time biochemistry monitoring devices available, like the FreeStyle Libre, which can monitor blood glucose in people with diabetes for 14 days at a time, relatively unobtrusively. But Joseph Wang, a professor of nanoengineering at the University of California, San Diego, thinks there’s still room for improvement. During a talk about biochemistry wearables at ApplySci’s 12th Wearable Tech + Digital Health + Neurotech event at Harvard on 14 November, Wang outlined some of the key challenges to making such wearables as ubiquitous and unobtrusive as the Apple Watch.

Wang identified three engineering problems that must be tackled: flexibility, power, and treatment delivery. He also discussed potential solutions that his research team has identified for each of these problems.

What’s Next for Spectrum Sharing?

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/with-darpas-spectrum-collaboration-challenge-completed-whats-the-next-step-for-spectrum-sharing-technologies

“You’ve graduated from the school of spectral hard knocks,” Paul Tilghman, a U.S. Defense Advanced Research Projects Agency (DARPA) program manager, told the teams competing in the agency’s Spectrum Collaboration Challenge (SC2) finale on 23 October. The three-year competition had just concluded, and the top three teams were being called on stage as a song that sounded vaguely like “Pomp and Circumstance” played overhead.

“Hard knocks” wasn’t an exaggeration—the 10 teams that made it to the finale, as well as others eliminated in earlier rounds of the competition, had been tasked with proving something that hadn’t been demonstrated before. Their challenge was to see if AI-managed radio systems could work together to share wireless spectrum more effectively than static, pre-allocated bands. They had spent years battling it out in match-ups inside Colosseum, a massive RF emulator DARPA built specially for the competition.

By the end of the finale, the top teams had demonstrated their systems could transmit more data over less spectrum than existing standards like LTE, and shown an impressive ability to reuse spectrum over multiple radios. In some finale match-ups, the radio systems of five teams were transmitting over 200 or 300 percent more data than is currently possible with today’s rigid spectrum band allocations. And that’s important, given that we’re facing a looming wireless spectrum crunch.

DARPA Grand Challenge Finale Reveals Which AI-Managed Radio System Shares Spectrum Best

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/aimanaged-radio-systems-duked-it-out-to-see-which-one-could-share-spectrum-the-best-in-the-spectrum-collaboration-challenge-finale

It all started when Grant Imahara, electrical engineer and former host of MythBusters, walked onto the stage. A celebrity guest appearance is quite a way to kick off a U.S. Defense Advanced Research Projects Agency (DARPA) event, right?

The event in question was the finale of DARPA’s Spectrum Collaboration Challenge, or SC2. SC2 is one in a long list of DARPA grand challenges, starting with 2004’s eponymous Grand Challenge for self-driving cars. Over the duration of the three-year SC2 competition, teams from around the world have worked to prove that it is possible to create AI-managed radio systems that can manage wireless spectrum better than traditional allocation strategies.

DARPA’s motivation in running the challenge—and the hope of Paul Tilghman, the program manager who’s overseen it—is to develop AI-managed radio systems to help avoid a looming spectrum crunch. Wireless spectrum is a finite and exceedingly valuable resource. And an ever-growing number of applications and industries that want a slice of spectrum (with 5G being just the latest newcomer in a long line of technologies) means there soon won’t be any available spectrum left. That is, if the wireless ecosystem sticks to its century-old method of pre-allocating spectrum for different uses in rigid bands.

At stake in the finale (beyond bragging rights, of course): nearly US $4 million in prize money. The winner: the team whose AI-managed radio system collaborated best when matched up with the diverse lineup of systems other teams had built and brought to the finale. The winning team walked away with a $2 million first place prize, while second and third left with $1 million and $750,000, respectively.

The finale was presented live by Imahara, Tilghman, and Ben Hilburn, president of the GNU Radio Foundation. There to witness it were attendees of MWC Los Angeles, a mobile industry convention. Though it was the culmination of three years of competition, the finale allowed the teams’ radio systems to duke it out in real time for the first time. The head-to-head matchup was made possible by DARPA’s efforts in relocating Colosseum, the massive radio frequency emulator at the heart of the competition, to the Los Angeles Convention Center floor just before the event.

In the end, GatorWings, the team representing the University of Florida’s Electrical and Computer Engineering Department, walked away with the first-place prize. MarmotE came in second, and rounding out the podium was Zylinium.

The format of the finale hewed closely to that of last December’s Preliminary Event 2, but with a few twists. All ten finalists competed in the first set of round-robin match-ups in a scenario called “Alleys of Austin.” Each of the radio systems played the role of a squad of soldiers in a concentrated effort to sweep an urban neighborhood; by the end of the round, the lowest-performing team would be eliminated.

The first five rounds of match-ups, run and analyzed while Colosseum was still in its home at Johns Hopkins University Applied Physics Lab, weeded out teams as DARPA tested them on different qualities necessary for AI-managed radio systems to succeed in the real world. In each match-up, teams were awarded points based on how many radio connections their systems could make during the scenario. The catch, however, was that if a competitor stifled one of the other teams and hogged too much of the spectrum, everyone would be penalized and no one would earn points. Awarding points based on collaborative performance avoided what Tilghman calls “spectrum anarchy,” in which teams would prioritize their own spectrum usage at the expense of everyone else. A hypothetical sketch of that incentive structure follows below.
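
As a purely hypothetical illustration of how collaborative scoring changes the incentives (DARPA's actual scoring formula was more involved than this), consider a rule in which points flow only when no competitor is starved:

```python
# Hypothetical collaborative-scoring rule, for illustration only:
# each team scores its completed connections, but if any team is
# starved below its required minimum, nobody scores that round.
def score_round(connections, minimums):
    if any(c < m for c, m in zip(connections, minimums)):
        return [0] * len(connections)  # hogging spectrum penalizes everyone
    return list(connections)

minimums = [50] * 5
print(score_round([120, 95, 110, 80, 130], minimums))  # everyone scores
print(score_round([200, 30, 110, 80, 130], minimums))  # one team starved: all zeros
```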

The second-round event, called “Off the Cuff,” saw the systems learning to negotiate ad hoc communications as though the teams were disaster relief agencies setting up operations just after a natural disaster.

“Wildfire,” the third-round challenge, measured the teams’ ability to prioritize traffic, by placing them in a wildfire response scenario where radio communications with the important air tankers had to take precedence over everything else.

“Slice of Life,” the fourth-round scenario, imagined the remaining teams in a shopping mall setting, where each team’s radio acted as a Wi-Fi hotspot for a different shop or restaurant—the idea being that they’d have to collaborate as different stores experienced a surge in traffic at different points during the day. (A coffee shop, for example, would require more spectrum in the morning, and a bar would need more at night.)

The last round for the live finals was called “Trash Compactor.” That setup forced the radio systems to share an increasingly narrow slice of spectrum. Over the course of the match, the band narrowed from 20 megahertz to 10 MHz and again to only 5 MHz by the end.

Then, with Colosseum in place outside the partitioned room in which MWC Los Angeles attendees had gathered to see the spectacle, the competitors did it again, and results were churned out in real time. The five remaining teams—MarmotE, Erebus, Zylinium, Andersons, and GatorWings—found themselves duking it out again in versions of the previous scenarios that came with added twists that Tilghman described as “fun surprises.”

The amped-up “Slice of Life” scenario, for example, dropped the teams back into the shopping mall. But this time, there was a nearby legacy incumbent system, like a satellite base station. The incumbent had priority over a portion of the spectrum band, and would trip an alert if the collaborating teams drowned out its own communications. Suddenly, the teams were having to manage random surges in demand for spectrum via their hotspots while also not taking up too much and disrupting the legacy operator.

With a few exceptions, the radio systems showed they could roll with the punches SC2 threw at them. The fifth round of match-ups, “Trash Compactor,” proved to be perhaps the most challenging, as teams seemed to struggle to share the spectrum effectively as the available amount got squeezed. And yet, situations like working alongside an incumbent operator—something that stymied many of the teams last December—seemed much easier this time around.

The teams’ systems also established radio connections more quickly, and made more effective use of the available spectrum to complete more connections than they had less than a year ago. In one of the “Slice of Life” match-ups, teams collaborated to achieve almost 340 percent spectrum occupancy. In essence, their radio systems crammed almost 70 MHz of traditional spectrum applications into 20 MHz of bandwidth.

It’s hard not to be impressed by that level of spectral efficiency. But after the awards were handed out and the photographs taken, Tilghman stressed the fact that the technology has a long way to go to be fully developed. Still, he thinks it will be developed. “I think the paradigm of spectrum collaboration is here to stay,” Tilghman said to conclude the event, “and it will propel us from spectrum scarcity to spectrum abundance.”

The Forklift Ballet: How DARPA Trucked Its Massive Radio-Frequency Testbed Across the United States

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/the-forklift-ballet-how-darpa-trucked-its-massive-radiofrequency-testbed-across-the-united-states

When it comes to relocating a data center, Joel Gabel is an expert. But when the U.S. Defense Advanced Research Projects Agency (DARPA) selected the company he works for, Pivot Technology Services, to help it with a project, he says the job more or less went against all the best practices.

That’s because the project was relocating Colosseum. At first glance, Colosseum may look like a data center, but in reality, it’s a massive radio-frequency emulation testbed that DARPA built for its Spectrum Collaboration Challenge (SC2). SC2 has been a three-year competition to demonstrate that AI-managed radio systems, working together, can use wireless spectrum more efficiently than rigid, pre-allocated bands allow.

A Machine Learning Classifier Can Spot Serial Hijackers Before They Strike

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/mit-and-caida-researchers-want-to-use-machine-learning-to-plug-one-of-the-internets-biggest-holes

How would you feel if, every time you had to send sensitive information somewhere, you relied on a chain of people playing the telephone game to get that information to where it needs to go? Sounds like a terrible idea, right? Well, too bad, because that’s how the Internet works.

Data is routed through the Internet’s various metaphorical tubes using what’s called the Border Gateway Protocol (BGP). Any data moving over the Internet needs a physical path of networks and routers to make it from A to B. BGP is the protocol that moves information through those paths—though the downside, as in a game of telephone, is that each junction in the path knows only what it’s been told by its immediate neighbors.
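
A toy model makes the telephone-game dynamic concrete. In this sketch (hypothetical networks, and a drastically simplified shortest-path stand-in for BGP's real decision process), each network learns routes only from its neighbors and has no way to verify what it's told:

```python
# Hypothetical topology: which networks ("ASes") neighbor which
neighbors = {
    "AS1": ["AS2"],
    "AS2": ["AS1", "AS3"],
    "AS3": ["AS2", "AS4"],
    "AS4": ["AS3"],
}

routes = {"AS1": ["AS1"]}  # AS1 originates the destination prefix
changed = True
while changed:
    changed = False
    for net, nbrs in neighbors.items():
        for nb in nbrs:
            if nb in routes:
                claim = routes[nb] + [net]  # the neighbor's (unverified) path
                if net not in routes or len(claim) < len(routes[net]):
                    routes[net] = claim     # believed, no questions asked
                    changed = True

print(routes["AS4"])  # ['AS1', 'AS2', 'AS3', 'AS4']
# A hijacker that falsely announces a short path to the prefix would be
# believed just as readily -- that blind trust is the hole at issue here.
```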

5G’s Waveform Is a Battery Vampire

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/wireless/5gs-waveform-is-a-battery-vampire

As carriers roll out 5G, industry group 3GPP is considering other ways to modulate radio signals

In 2017, members of the mobile telephony industry group 3GPP were bickering over whether to speed the development of 5G standards. One proposal, originally put forward by Vodafone and ultimately agreed to by the rest of the group, promised to deliver 5G networks sooner by developing more aspects of 5G technology simultaneously.

Adopting that proposal may have also meant pushing some decisions down the road. One such decision concerned how 5G networks should encode wireless signals. 3GPP’s Release 15, which laid the foundation for 5G, ultimately selected orthogonal frequency-division multiplexing (OFDM), a holdover from 4G, as the encoding option.

But Release 16, expected by year’s end, will include the findings of a study group assigned to explore alternatives. Wireless standards are frequently updated, and in the next 5G release, the industry could address concerns that OFDM may draw too much power in 5G devices and base stations. That’s a problem, because 5G is expected to require far more base stations to deliver service and connect billions of mobile and IoT devices.

“I don’t think the carriers really understood the impact on the mobile phone, and what it’s going to do to battery life,” says James Kimery, the director of marketing for RF and software-defined radio research at National Instruments Corp. “5G is going to come with a price, and that price is battery consumption.”

And Kimery notes that these concerns apply beyond 5G handsets. China Mobile has “been vocal about the power consumption of their base stations,” he says. A 5G base station is generally expected to consume roughly three times as much power as a 4G base station. And more 5G base stations are needed to cover the same area.

So how did 5G get into a potentially power-guzzling mess? OFDM plays a large part. OFDM transmits data by chopping it into portions and sending those portions simultaneously at different frequencies, so that the portions are “orthogonal” (meaning they do not interfere with each other).

The trade-off is that OFDM has a high peak-to-average power ratio (PAPR). Generally speaking, the orthogonal portions of an OFDM signal deliver energy constructively—the very quality that prevents the portions from canceling each other out also means their energy can briefly pile up in tall peaks. So any receiver needs to be able to take in a lot of energy at once, and any transmitter needs to be able to put out a lot of energy at once. Those high-energy instances cause OFDM’s high PAPR and make the method less energy efficient than other encoding schemes.
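
A few lines of code show where those peaks come from. An OFDM symbol is, in effect, an inverse FFT that sums many independently modulated subcarriers, and every so often they line up in phase. A sketch, with 1,024 subcarriers of random QPSK data and otherwise arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1024  # number of subcarriers; illustrative

# One random QPSK symbol on each subcarrier...
qpsk = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)

# ...and an inverse FFT sums them all into one time-domain OFDM symbol
x = np.fft.ifft(qpsk)
power = np.abs(x) ** 2

papr_db = 10 * np.log10(power.max() / power.mean())
print(f"PAPR of this OFDM symbol: {papr_db:.1f} dB")
# Typically ~10 dB or more; a constant-envelope single-carrier signal
# would be 0 dB. Amplifiers must leave headroom for those rare peaks,
# which is exactly what drags down energy efficiency.
```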

Yifei Yuan, ZTE Corp.’s chief engineer of wireless standards, says there are a few emerging applications for 5G that make a high PAPR undesirable. In particular, Yuan, who is also the rapporteur for 3GPP’s study group on nonorthogonal multiple-access possibilities for 5G, points to massive machine-type communications, such as large-scale IoT deployments.

Typically, when multiple users, such as a cluster of IoT devices, communicate using OFDM, their communications are organized using orthogonal frequency-division multiple access (OFDMA), which allocates a chunk of spectrum to each user. (To avoid confusion, remember that OFDM is how each device’s signals are encoded, and OFDMA is the method that ensures, overall, one device’s signals don’t interfere with any others.) The logistics of using distinct spectrum for each device could quickly spiral out of control for large IoT networks, but Release 15 established OFDMA for 5G-connected machines, largely because it’s what was used on 4G.

One promising alternative that Yuan’s group is considering, non-orthogonal multiple access (NOMA), could deliver the advantages of OFDM while also overlapping users on the same spectrum.

For now, Yuan believes OFDM and OFDMA will suit 5G’s early needs. He sees 5G first being used by smartphones, with applications like massive machine-type communications not arriving for at least another year or two, after the completion of Release 16, currently scheduled for December 2019.

But if network providers want to update their equipment to provide NOMA down the line, there could very well be a cost. “This would not come for free,” says Yuan. “Especially for the base station sites.” At the very least, base stations would need software updates to handle NOMA, but they might also require more advanced receivers, more processing power, or other hardware upgrades.

Kimery, for one, isn’t optimistic that the industry will adopt any non-OFDMA options. “It is possible there will be an alternative,” he says. “The probability isn’t great. Once something gets implemented, it’s hard to shift.”

Google’s Equiano Cable Will Extend to the Remote Island of Saint Helena, Flooding It With Data

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/googles-equiano-cable-will-extend-to-the-remote-island-of-saint-helena-flooding-it-with-data

The tiny island will need to turn itself into a data hub to make use of the expected bandwidth

If you know anything about the South Atlantic island of Saint Helena, that’s probably because it was the island where the British government exiled Napoleon until he died in 1821. It was actually the second time Britain attempted to exile Napoleon, and the island was chosen for a very specific reason: It’s incredibly remote.

Napoleon is long gone, but the island’s remoteness continues to pose challenges for its 4,500-odd residents. They used to only be able to reach St. Helena by boat once every 3 weeks, though it’s now possible to catch the occasional flight from Johannesburg, South Africa to what’s been called “the world’s most useless airport.” Residents’ Internet prospects are even worse—the island’s entire population shares a single 50 megabits per second satellite link.

That’s about to change, however, as the St. Helena government has shared a letter of intent describing a plan to connect the island to Google’s recently announced Equiano cable. The cable will be capable of delivering orders of magnitude more data than anything the island has experienced. It will create so much capacity, in fact, that St. Helena could use the opportunity to transform itself from an almost unconnected island to a South Atlantic data hub.

Lunar Pioneers Will Use Lasers to Phone Home

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/wireless/lunar-pioneers-will-use-lasers-to-phone-home

NASA’s Orion and Gateway will try out optical communications gear for a high-speed connection to Earth

With NASA making serious moves toward a permanent return to the moon, it’s natural to wonder whether human settlers—accustomed to high-speed, ubiquitous Internet access—will have to deal with mind-numbingly slow connections once they arrive on the lunar surface. The vast majority of today’s satellites and spacecraft have data rates measured in kilobits per second. But long-term lunar residents might not be as satisfied with the skinny bandwidth that, say, the Apollo astronauts contended with.

To meet the demands of high-definition video and data-intensive scientific research, NASA and other space agencies are pushing the radio bands traditionally allocated for space research to their limits. For example, the Orion spacecraft, which will carry astronauts around the moon during NASA’s Artemis 2 mission in 2022, will transmit mission-critical information to Earth via an S-band radio at 50 megabits per second. “It’s the most complex flight-management system ever flown on a spacecraft,” says Jim Schier, the chief architect for NASA’s Space Communications and Navigation program. Still, barely 1 Mb/s will be allocated for streaming video from the mission. That’s about one-fifth the speed needed to stream a high-definition movie from Netflix.

To boost data rates even higher means moving beyond radio and developing optical communications systems that use lasers to beam data across space. In addition to its S-band radio, Orion will carry a laser communications system for sending ultrahigh-definition 4K video back to Earth. And further out, NASA’s Gateway will create a long-term laser communications hub linking our planet and its satellite.

Laser communications are a tricky proposition. The slightest jolt to a spacecraft could send a laser beam wildly off course, while a passing cloud could interrupt it. But if they work, robust optical communications will allow future missions to receive software updates in minutes, not days. Astronauts will be sheltered from the loneliness of working in space. And the scientific community will have access to an unprecedented flow of data between Earth and the moon.

Today, space agencies prefer to use radios in the S band (2 to 4 gigahertz) and Ka band (26.5 to 40 GHz) for communications between spacecraft and mission control, with onboard radios transmitting course information, environmental conditions, and data from dozens of spaceflight systems back to mission control. The Ka band is particularly prized—Don Cornwell, who oversees radio and optical technology development at NASA, calls it “the Cadillac of radio frequencies”—because it can transmit up to gigabits per second and propagates well in space.

Any spacecraft’s ability to transmit data is constrained by some unavoidable physical truths of the electromagnetic spectrum. For one, radio spectrum is finite, and the prized bands for space communications are equally prized by commercial applications. Bluetooth and Wi-Fi use the S band, and 5G cellular networks use the Ka band.

The second big problem is that radio signals disperse in the vacuum of space. By the time a Ka-band signal from the moon reaches Earth, it will have spread out to cover an area about 2,000 kilometers in diameter—roughly the size of India. By then, the signal is a lot weaker, so you’ll need either a sensitive receiver on Earth or a powerful transmitter on the moon.

Laser communications systems also have dispersion issues, and beams that intersect can muddle up the data. But a laser beam sent from the moon would cover an area only 6 km across by the time it arrives on Earth. That means it’s much less likely for any two beams to intersect. Plus, they won’t have to contend with an already crowded chunk of spectrum. You can transmit a virtually limitless quantity of data using lasers, says Cornwell. “The spectrum for optical is unconstrained. Laser beams are so narrow, it’s almost impossible [for them] to interfere with one another.”
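
The numbers in the two preceding paragraphs follow from simple diffraction: a beam's divergence is roughly its wavelength divided by the transmitting aperture. The aperture sizes below are assumptions chosen for illustration, but they reproduce the article's figures:

```python
EARTH_MOON_M = 384_000_000  # approximate Earth-moon distance, meters

def spot_diameter_km(wavelength_m, aperture_m):
    # Divergence ~ wavelength / aperture; the spot grows linearly with distance
    return wavelength_m / aperture_m * EARTH_MOON_M / 1000

print(f"Ka-band, 1 cm wave, 2 m dish:     ~{spot_diameter_km(0.01, 2.0):,.0f} km")
print(f"Laser, 1,550 nm, 10 cm telescope: ~{spot_diameter_km(1.55e-6, 0.10):.0f} km")
# ~1,900 km for the radio beam vs. ~6 km for the laser
```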

Higher frequencies also mean shorter wavelengths, which bring more benefits. Ka-band signals have wavelengths from 7.5 millimeters to 1 centimeter, but NASA plans to use lasers that have a 1,550-nanometer wavelength, the same wavelength used for terrestrial optical-fiber networks. Indeed, much of the development of laser communications for space builds on existing optical-fiber engineering. Shorter wavelengths (and higher frequencies) mean that more data can be packed into every second.

The advantages of laser communications have been known for many years, but it’s only recently that engineers have been able to build systems that outperform radio. In 2013, for example, NASA’s Lunar Laser Communications Demonstration proved that optical signals can reliably send information from lunar orbit back to Earth. The month-long experiment used a transmitter on the Lunar Atmosphere and Dust Environment Explorer to beam data back to Earth at speeds of 622 Mb/s, more than 10 times as fast as Orion’s S-band radio will.

“I was shocked to learn [Orion was] going back to the moon with an S-band radio,” says Bryan Robinson, an optical communications expert at MIT Lincoln Laboratory in Lexington, Mass. Lincoln Lab has played an important role in developing many of the laser communications systems on NASA missions, starting with the early optical demonstrations of the classified GeoLITE satellite in 2001. “Humans have gotten used to so much more, here on Earth and in low Earth orbit. I was glad they came around and put laser comm back on the mission.”

As a complement to its S-band radio, during the Artemis 2 mission Orion will carry a laser system called Optical to Orion, or O2O. NASA doesn’t plan to use O2O for any mission-critical communications. Its main task will be to stream 4K ultrahigh-definition video from the moon to a curious public back home. O2O will receive data at 80 Mb/s and transmit at 20 Mb/s while in lunar orbit. If you’re wondering why O2O will transmit at 20 Mb/s when a demonstration project six years ago was able to transmit at 622 Mb/s, it’s simply because the Orion developers “never asked us to do 622,” says Farzana Khatri, a senior staff member in Lincoln Lab’s optical communications group. Cornwell confirms that O2O will receive a minimum of 80 Mb/s from Earth, though the system is capable of higher data rates.

If successful, O2O will open the door for data-heavy communications on future crewed missions, allowing for video chats with family, private consultations with doctors, or even just watching a live sports event during downtime. The more time people spend on the moon, the more important all of these connections will be to their mental well-being. And eventually, video will become mission critical for crews on board deep-space missions.

Before O2O can even be tested in space, it first has to survive the journey. Laser systems mounted on spacecraft use telescopes to send and receive signals. Those telescopes rely on a fiddly arrangement of mirrors and other moving parts. O2O’s telescope will use an off-axis Cassegrain design, a type of telescope with two mirrors to focus the captured light, mounted on a rotating gimbal. Lincoln Lab researchers selected the design because it will allow them to separate the telescope from the optical transceiver, making the entire system more modular. The engineers must ensure that the Space Launch System rocket carrying Orion won’t shake the whole delicate arrangement apart. The researchers at Lincoln Lab have developed clasps and mounts that they hope will reduce vibrations and keep everything intact during the tumultuous launch.

Once O2O is in space, it will have to be precisely aimed. It’s hard to miss a receiver when your radio signal has a cross section the size of a large country. A 6-km-diameter signal, on the other hand, could miss Earth entirely with just a slight bump from the spacecraft. “If you [use] a laser pointer when you’re nervous and your hand is shaking, it’s going to go all over the place,” says Cornwell.

Orion’s onboard equipment will also generate constant minuscule vibrations, any one of which would be enough to throw off an optical signal. So engineers at NASA and Lincoln Lab will place the optical system on an antijitter platform. The platform measures the jitters from the spacecraft and produces an opposite pattern of vibrations to cancel them out—“like noise-canceling headphones,” Cornwell says.

One final hurdle for O2O will be dealing with any cloud cover back on Earth. Infrared wavelengths, like the O2O’s 1,550 nm, are easily absorbed by clouds. A laser beam might travel the nearly 400,000 km from the moon without incident, only to be blocked just above Earth’s surface. Today, the best defense against losing a signal to a passing stratocumulus is to beam transmissions to multiple receivers. O2O, for example, will use ground stations at Table Mountain, Calif., and White Sands, N.M.

The Gateway, scheduled to be built in the 2020s, will present a far bigger opportunity for high-speed laser communications in space. NASA, with help from its Canadian, European, Japanese, and Russian counterparts, will place this space station in orbit around the moon; the station will serve as a staging area and communications relay for lunar research.

NASA’s Schier suspects that research and technology demonstrations on the Gateway could generate 5 to 8 Gb/s of data that will need to be sent back to Earth. That data rate would dwarf the transmission speed of anything in space right now—the International Space Station (ISS) sends data to Earth at 25 Mb/s. “[Five to 8 Gb/s is] the kind of thing that if you turned everything on in the [ISS], you’d be able to run it for 2 seconds before you overran the buffers,” Schier says.

The Gateway offers an opportunity to build a permanent optical trunk line between Earth and the moon. One thing NASA would like to use the Gateway for is transmitting positioning, navigation, and timing information to vehicles on the lunar surface. “A cellphone in your pocket needs to see four GPS satellites,” says Schier. “We’re not going to have that around the moon.” Instead, a single beam from the Gateway could provide a lunar rover with accurate distance, azimuth, and timing to find its exact position on the surface.

What’s more, using optical communications could free up radio spectrum for scientific research. Robinson points out that the far side of the moon is an optimal spot to build a radio telescope, because it would be shielded from the chatter coming from Earth. (In fact, radio astronomers are already planning such an observatory: Our article “Rovers Will Unroll a Telescope on the Moon’s Far Side” explains their scheme.) If all the communication systems around the moon were optical, he says, there’d be nothing to corrupt the observations.

Beyond that, scientists and engineers still aren’t sure what else they’ll do with the Gateway’s potential data speeds. “A lot of this, we’re still studying,” says Cornwell.

In the coming years, other missions will test whether laser communications work well in deep space. NASA’s mission to the asteroid Psyche, for instance, will help determine how precisely an optical communications system can be pointed and how powerful the lasers can be before they start damaging the telescopes used to transmit the signals. But closer to home, the communications needed to work and live on the moon can be provided only by lasers. Fortunately, the future of those lasers looks bright.

This article appears in the July 2019 print issue as “Phoning Home, With Lasers.”

Ossia’s Wireless Charging Tech Could Be Available By Next Year

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/energywise/consumer-electronics/gadgets/ossias-wireless-charging-tech-may-be-available-by-2020

The Cota wireless power system delivers 1 watt up to 1 meter away

Wireless power company Ossia has received authorization from the U.S. Federal Communications Commission (FCC) for its Cota wireless power system. The FCC authorization is a crucial step toward Ossia’s goal of seeing devices that incorporate Cota on the market in 2020.

While supplying power wirelessly seems like a promising idea in theory, the technology has been explored for years and its reality has never quite lived up to the hype. In all likelihood, the most sophisticated example you’ve seen of wireless charging is something like the wireless charging pads on coffee shop tables. Such pads require you to leave your device on them, making it difficult or impossible to use your gadget at the same time. That’s a far cry from being able to set down your device anywhere in the room and have it charge, or charge while you’re using it.

The physics of making such a thing possible are, it turns out, tricky. There’s been no shortage of ideas for wireless charging methods; startup uBeam, for example, has promised ultrasonic power transmitters but struggled to deliver for years. Energous, another startup, sells a charger that emits a pocket of 5.8 GHz radio waves to provide power to nearby devices.

Melting Arctic Ice Opens a New Fiber Optic Cable Route

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/melting-sea-ice-opens-the-floodgate-for-a-new-fiber-optic-cable-route

Cinia and MegaFon’s proposed Arctic cable would bring lower latencies and geographical diversity

The most reliable way to reduce latency is to make sure your signal travels over the shortest physical distance possible. Nowhere is that more evident than in the fiber optic cables that cross oceans to connect continents. There will always be latency between Europe and North America, for example, but where you lay the cable connecting the two continents affects that latency a great deal.
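
The arithmetic is straightforward: light in fiber travels at roughly c divided by the glass's refractive index, so every extra 1,000 km of route adds about 5 ms each way. A quick sketch with illustrative route lengths (not surveyed figures):

```python
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s
FIBER_INDEX = 1.47    # typical refractive index of silica fiber

def one_way_ms(route_km):
    return route_km / (C_KM_PER_S / FIBER_INDEX) * 1000

for km in (14_000, 10_000):  # e.g., a long route vs. one ~30% shorter
    print(f"{km:,} km of fiber: {one_way_ms(km):.0f} ms one way")
# 14,000 km -> ~69 ms; 10,000 km -> ~49 ms
```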

To that end, Helsinki-based Cinia, which owns and operates about 15,000 kilometers of fiber optic cable, and MegaFon, a Russian telecommunications operator, signed a memorandum of understanding to lay a fiber optic cable across the Arctic Ocean. The cable, if built, would not only reduce latency between users in Europe, Asia, and North America, but provide some much-needed geographical diversity to the world’s undersea cable infrastructure.

The vast majority of undersea cable encircles the world along a relatively similar path: From the east coast of the United States, cables stretch across the northern Atlantic to Europe, through the Mediterranean and Red Seas, across the Indian Ocean before swinging up through the South China Sea and finally spanning the Pacific to arrive at the west coast of the U.S. Other cable routes exist, but none have anywhere near the data throughput that this world-girding trunk line has.

Ari-Jussi Knaapila, the CEO of Cinia, estimates that the planned Arctic cable, which would stretch from London to Alaska, would shorten the physical cable distance between Europe and the western coast of North America by 20 to 30 percent. Additional cable will extend the route down to China and Japan, for a planned total of 10,000 kilometers of new cable.

Knaapila also says that the cable is an important effort to increase geographic diversity of the undersea infrastructure. Because many cables run along the same route, various events—earthquakes, tsunamis, seabed landslides, or even an emergency anchoring by a ship—can damage several cables at once. On December 19, 2008, 14 countries lost their connections to the Internet after ship anchors cut five major undersea cables in the Mediterranean Sea and Red Sea.

“Submarine cables are much more reliable than terrestrial cables with the same length,” Knaapila says. “But when a fault occurs in the submarine cable, the repair time is much longer.” The best way to avoid data crunches when a cable is damaged is to already have cables elsewhere that were unaffected by whatever event broke the original cable in the first place.

Stringing a cable across the Arctic Ocean is not a new idea, though other proposed projects, including the semi-built Arctic Fibre project, have never been completed. In the past, the navigational season in the Arctic was too short to easily build undersea cables. Now, melting sea ice due to climate change is expanding that window and making it more feasible. (The shipping industry is coming to similar realizations as new routes open up.)

The first step for the firms under the memorandum of understanding is to establish a company by the end of the year tasked with the development of the cable. Then come route surveys of the seabed, construction permits (for both on-shore and off-shore components), and finally, the laying of the cable. According to Knaapila, 700 million euros have been budgeted for investment in the route.

The technical specifics of the cable itself have yet to be determined. Most fiber optic cables use wavelengths of light around 850, 1300, or 1550 nanometers, but for now, the goal is to remain as flexible as possible. For the same reason, the data throughput of the proposed cable remains undecided.

Of course, not all cable projects succeed. The South Atlantic Express (SAex) would be one of the first direct links between Africa and South America, and would connect remote islands like St. Helena along the way. But SAex has struggled with funding and currently sits in limbo. Cinia and MegaFon hope to avoid a similar fate.

Loon’s Balloons Deliver Emergency Internet Service to Peru Following 8.0 Earthquake

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/loons-balloons-deliver-emergency-service-to-peru-following-80-earthquake

The company was able to respond quickly because it had already begun tests in the country

When a magnitude 8.0 earthquake struck Peru on Sunday, it wreaked havoc on the country’s communications infrastructure. Within 48 hours, though, people in affected regions could use their mobile phones again. Loon, the Alphabet company, was there delivering wireless service by balloon.

Such a rapid response was possible because Loon happened to be in the country, testing its equipment while working out a deal with provider Telefonica. Both terrestrial infrastructure and the balloons themselves were already in place, and Loon simply had to reorganize its balloons to deliver service on short notice.

Creating a Better Online Experience for Billions Is Just a Fix Away

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/at-work/innovation/creating-a-better-online-experience-for-billions-is-just-a-fix-away

Code that makes network applications compliant with universal-acceptance standards is an easy way to build a more inclusive Internet

Because much of the earliest work in computing and networking occurred in the United States and Europe, the Latin alphabet and its conventions—such as a left-to-right ordering of characters—got baked into software and hardware. But after spending years as the general manager of a domain registry for the Asia Pacific region, Don Hollander became convinced that Internet applications should support as many languages and writing systems as possible.

Which is why Hollander is now the secretary general of the Universal Acceptance Steering Group (UASG), a group that champions the idea that all valid top-level domains (TLDs), such as .com, .tech, or .信息, should function in any Web or email application. In the process, not only would the Web become more globally accessible, but companies would also be able to make sales or capture customer information that they currently lose, with the UASG estimating that the economic benefits could be US $9.8 billion per annum.

“The domain name space has changed a lot in the last few years,” Hollander says. Originally, TLDs were either three letters long, such as .edu, or two letters long, for country codes like .de. But around 2010, things started changing. People were clamoring for more diversity in what could be used for a TLD.

That led to two big changes. First was the creation of extended gTLDs—generic TLDs that can be three letters or longer—which is why .law and .info are now valid options (the UASG website itself uses .tech). Second, TLDs could be set up in languages that don’t use the Latin alphabet, allowing general Unicode characters in email addresses and TLDs. By 2013, over 2,000 new TLDs had been established.

By 2015, Hollander says, the ability to handle these new and various TLDs had been largely sorted out at the Domain Name System (DNS) level—that is, at the level of the directories that manage TLDs and associate them with specific numeric Internet addresses. (There are still some problems, however. Emojis are fickle because from a code perspective, the same emoji can be composed in multiple ways. That’s why emoji-based URLs, while they do exist, are difficult to work with.)
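
The emoji problem is a special case of a general Unicode headache: distinct code-point sequences can render as the same glyph. A short standard-library Python illustration (using an accented letter for simplicity; emoji pile skin-tone modifiers and zero-width joiners on top of the same issue):

    import unicodedata

    composed = "\u00e9"     # 'é' as a single precomposed code point
    decomposed = "e\u0301"  # 'e' followed by a combining acute accent

    print(composed == decomposed)  # False: the underlying code points differ
    print(unicodedata.normalize("NFC", composed)
          == unicodedata.normalize("NFC", decomposed))  # True after normalizing both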

The remaining challenge, according to Hollander, is spreading the word: it doesn’t matter that everything works at the network level if the code driving specific applications still supports only two- or three-letter TLDs and Latin-character email addresses. And unfortunately, many application developers have not kept up with the times.

Creating a software routine to check whether an email address or TLD is valid used to be pretty straightforward. Ten years ago, if an application asked a user for an email address, the developer could check whether the response was valid with a few simple tests: It should contain the symbol @, and it should end in a period followed by two or three letters. If it didn’t pass those tests, it was garbage.

When longer domain names and Unicode came along, those developers’ tests got more convoluted. “Now, I need to look for 2, 3, 4, 6, or 7 characters,” Hollander says. Nevertheless, it’s a largely solved problem: “It’s not a hard fix,” he says, adding that there is plenty of code available on GitHub and Stack Overflow for developers looking to make sure their applications are universal-acceptance compliant. For those looking to dig deeper into the issue, the UASG’s website offers documentation and links to relevant standards. UASG also has information about various languages and code libraries and which ones are up to date. (Hollander says, for example, that Python is currently not up to date.)
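
To make that concrete, here is a minimal before-and-after sketch, assuming the third-party idna package (which implements IDNA 2008) and a hypothetical internationalized address; it is an illustration, not the UASG’s reference code:

    import re

    import idna  # third-party package implementing IDNA 2008 (pip install idna)

    # The old-style test Hollander describes: an @ sign, then a dot,
    # then two or three Latin letters.
    NAIVE = re.compile(r"^[^@]+@[^@]+\.[A-Za-z]{2,3}$")

    def naive_valid(address: str) -> bool:
        return NAIVE.match(address) is not None

    def ua_valid(address: str) -> bool:
        """A structural check that tolerates Unicode labels and long TLDs."""
        local, sep, domain = address.rpartition("@")
        if not (sep and local and domain):
            return False
        try:
            idna.encode(domain)  # raises IDNAError if any label is invalid
        except idna.IDNAError:
            return False
        return True

    print(naive_valid("用户@例え.テスト"))  # False: rejected by the Latin-only test
    print(ua_valid("用户@例え.テスト"))    # True: structurally plausible

The point is not that this snippet is production ready (a real validator must also handle the local part’s own rules) but that supporting the post-2010 namespace takes a few lines of code, not a rewrite.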

Ultimately, universal acceptance is an easy way to make the Internet more accessible for the billions of people whose first language is not written in Latin characters. Hollander wants developers to be mindful of that. “The world changed, and they should bring their systems up to date,” he says.

This article appears in the June 2019 print issue as “The Universal Internet.”

For Specialized Optimizing Machines, It’s All in the Connections

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/computing/hardware/for-specialized-optimizing-machines-its-all-in-the-connections

Whether it’s an Ising machine or a quantum annealer, the machine seems to matter less than how the parts connect

Suppose you want to build a special machine, one that can plot a traveling salesperson’s route or schedule all the flights at an international airport. These are the sorts of problems that are incredibly complex, with a staggering number of variables. For now, the best way to crunch the numbers for such optimization problems remains a powerful conventional computer.

But research into analog optimizers—machines that manipulate physical components to find an optimal solution—is providing insight into what it will take to make them competitive with traditional computers.

To that end, a paper published today in Science Advances provides the first experimental evidence that high connectivity, the ability of each physical component to interact directly with the others, is vital for these novel optimization machines. “Connectivity is very, very important; it’s not something one should ignore,” says Peter McMahon, a postdoctoral researcher at Stanford who participated in the research.
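
To see what connectivity means in practice, it helps to write the problem down. These machines search for low-energy states of an Ising model, in which a coupling matrix J records which pairs of components interact directly; an all-to-all machine can realize any J, while a sparsely connected one must embed dense problems into its limited graph at the cost of extra spins. Below is a toy Python sketch of the math being optimized, not of the hardware in the paper:

    import itertools

    import numpy as np

    def ising_energy(spins, J):
        """Energy of a configuration: E = -sum over pairs i<j of J[i,j]*s[i]*s[j]."""
        s = np.asarray(spins)
        return -0.5 * s @ J @ s  # the 1/2 compensates for J being symmetric

    def brute_force_ground_state(J):
        """Exhaustively check all 2^n spin configurations (fine for tiny n)."""
        n = J.shape[0]
        best = min(itertools.product([-1, 1], repeat=n),
                   key=lambda s: ising_energy(s, J))
        return best, ising_energy(best, J)

    # A small all-to-all instance: every spin couples to every other spin.
    rng = np.random.default_rng(0)
    upper = np.triu(rng.choice([-1.0, 1.0], size=(6, 6)), k=1)
    J = upper + upper.T  # symmetric couplings, zero diagonal
    print(brute_force_ground_state(J))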

Terahertz Waves Could Push 5G to 6G

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/at-the-6th-annual-brooklyn-5g-summit-some-eyes-are-on-6g

At the Brooklyn 5G summit, experts said terahertz waves could fix some of the problems that may arise with millimeter-wave networks

It may be the sixth year for the Brooklyn 5G Summit, but in the minds of several speakers, 2019 is also Year Zero for 6G. The annual summit, hosted by Nokia and NYU Wireless, is a four-day event that covers all things 5G, including deployments, lessons learned, and what comes next.

This year, that meant preliminary research into terahertz waves, the frequencies that some researchers believe will make up a key component of the next next generation of wireless. In back-to-back talks, Gerhard Fettweis, a professor at TU Dresden, and Ted Rappaport, the founder and director of NYU Wireless, talked up the potential of terahertz waves.

Australia’s Troubled National Broadband Network Delivers a Fraction of What Was Promised

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/internet/australias-troubled-national-broadband-network-delivers-a-fraction-of-what-was-promised

The newly elected government will inherit a floundering AU $51 billion broadband network that’s providing slower service to fewer properties than planned

On 18 May, voters in Australia’s federal election will determine whether the Liberal-National Coalition will remain in control or the Australian Labor Party will win the government. Either way, the new leaders will have to contend with the National Broadband Network (NBN), a lumbering disaster that began as an ambitious effort by the Australian government to construct a countrywide broadband network.

When the NBN was first proposed in April 2009, the government aimed to build a fiber-optic network that would deliver connections of up to 100 megabits per second to 90 percent of Australian homes, schools, and workplaces within eight years. A decade later, however, the NBN has failed to deliver on that promise. NBN Co., the government-owned company created to construct and manage the network, now expects to deliver 50 Mb/s connections to 90 percent of the country by the end of 2020.

“None of the promises have ever been met,” says Rod Tucker, a telecommunications engineer and professor emeritus at the University of Melbourne. The watershed moment for the network was the 2013 federal election. That year, the Labor government that had championed the network was replaced by a conservative coalition government. The new government promptly reevaluated the NBN plan, taking issue with its projected cost.

The original plan, estimated to cost AU $43 billion, was a fiber-to-the-premises (or FTTP) plan. FTTP, as the name implies, requires threading fiber-optic cable to each and every building. The coalition government balked at the price tag, fired NBN Co.’s CEO, restructured the company, and proposed an alternative fiber-to-the-node (or FTTN) strategy, which was estimated to cost no more than AU $29.5 billion. The cost has now ballooned to more than AU $51 billion.

The reason an FTTN network theoretically costs less is that there’s less fiber to install. Rather than run fiber to every building, FTTN runs fiber to centralized “nodes,” from which any number of technologies can then connect to individual premises. Fiber could be used, but more typically these last-mile connections are made with copper cabling.

The rationale behind the FTTN pivot was that there’s no need to lay fiber to homes and offices because copper landlines already connect those buildings. Unfortunately, copper doesn’t last forever. “A lot of the copper is very old,” says Mark Gregory, an associate professor of engineering at RMIT University, in Melbourne. “Some of it is just not suitable for fiber to the node.” Gregory says that NBN Co. has run into delays more than once as it encounters copper cabling near the end of its usable life and must replace it.

NBN Co. has purchased roughly 26,000 kilometers of copper so far to construct the network, according to Gregory—enough to wrap more than halfway around the Earth. Buying that copper added roughly AU $10 billion to the FTTN price tag, says Tucker. On top of that, NBN Co. is paying AU $1 billion a year to Telstra, the Australian communications company, for the right to use the last-mile copper that Telstra owns.

But perhaps the worst part is that even after the cost blowouts, the lackluster connections, and the outdated copper technology, there doesn’t seem to be a good path forward for the network, which will be obsolete upon arrival. “In terms of infrastructure,” says Gregory, “it’s pretty well the only place I know of that’s spent [AU] $50 billion and built an obsolete network.”

Upgrading the network to deliver the originally promised connection speeds will require yet another huge investment. That’s because copper cables can’t compete with fiber-optic cables. To reach 100 Mb/s, NBN Co. will eventually have to lay fiber from the nodes all the way to each building anyway. Gregory, for one, estimates that could cost NBN Co. an additional AU $16 billion, a hard number to swallow for a project that’s already massively over budget. “Fiber to the node is a dead-end technology in that it’s expensive to upgrade,” says Tucker, who also wrote about the challenges the NBN would face in the December 2013 issue of IEEE Spectrum.

After the federal election, the incoming government will have to figure out what to do with this bloated project. NBN Co. is far from profitable, and even if it were, it still owes billions of dollars to the Australian government. If the government does decide to bite the bullet and upgrade to FTTP, it will have to contend with other commercial networks now delivering equivalent speeds.

Had Australia delivered the NBN as originally promised, it would have been one of the fastest, most robust national networks in the world at the time. Instead, the country has watched its place in rankings of broadband speeds around the world continue to drop, says Tucker, while places like New Zealand, Australia’s neighbor “across the ditch,” have invested in and largely completed robust FTTP networks.

“It was an opportunity lost,” says Gregory.

This article appears in the May 2019 print issue as “Australia’s Fiber-Optic Mess.”