Tag Archives: Telecom

The Internet Is Coming to the Rest of the Animal Kingdom

Post Syndicated from Elie Dolgin original https://spectrum.ieee.org/tech-talk/telecom/internet/internet-of-living-things-can-communication-tools-break-down-the-interspecies-divide

A new Doolittlesque initiative aims to promote Internet communication among smart animals

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists wants to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Lunar Pioneers Will Use Lasers to Phone Home

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/wireless/lunar-pioneers-will-use-lasers-to-phone-home

NASA’s Orion and Gateway will try out optical communications gear for a high-speed connection to Earth


With NASA making serious moves toward a permanent return to the moon, it’s natural to wonder whether human settlers—accustomed to high-speed, ubiquitous Internet access—will have to deal with mind-numbingly slow connections once they arrive on the lunar surface. The vast majority of today’s satellites and spacecraft have data rates measured in kilobits per second. But long-term lunar residents won’t be satisfied with the skinny bandwidth that, say, the Apollo astronauts contended with.

To meet the demands of high-definition video and data-intensive scientific research, NASA and other space agencies are pushing the radio bands traditionally allocated for space research to their limits. For example, the Orion spacecraft, which will carry astronauts around the moon during NASA’s Artemis 2 mission in 2022, will transmit mission-critical information to Earth via an S-band radio at 50 megabits per second. “It’s the most complex flight-management system ever flown on a spacecraft,” says Jim Schier, the chief architect for NASA’s Space Communications and Navigation program. Still, barely 1 Mb/s will be allocated for streaming video from the mission. That’s about one-fifth the speed needed to stream a high-definition movie from Netflix.

To boost data rates even higher means moving beyond radio and developing optical communications systems that use lasers to beam data across space. In addition to its S-band radio, Orion will carry a laser communications system for sending ultrahigh-definition 4K video back to Earth. And further out, NASA’s Gateway will create a long-term laser communications hub linking our planet and its satellite.

Laser communications are a tricky proposition. The slightest jolt to a spacecraft could send a laser beam wildly off course, while a passing cloud could interrupt it. But if they work, robust optical communications will allow future missions to receive software updates in minutes, not days. Astronauts will be sheltered from the loneliness of working in space. And the scientific community will have access to an unprecedented flow of data between Earth and the moon.

Today, space agencies prefer to use radios in the S band (2 to 4 gigahertz) and Ka band (26.5 to 40 GHz) for communications between spacecraft and mission control, with onboard radios transmitting course information, environmental conditions, and data from dozens of spaceflight systems back to mission control. The Ka band is particularly prized—Don Cornwell, who oversees radio and optical technology development at NASA, calls it “the Cadillac of radio frequencies”—because it can transmit up to gigabits per second and propagates well in space.

Any spacecraft’s ability to transmit data is constrained by some unavoidable physical truths of the electromagnetic spectrum. For one, radio spectrum is finite, and the prized bands for space communications are equally prized by commercial applications. Bluetooth and Wi-Fi use the S band, and 5G cellular networks use the Ka band.

The second big problem is that radio signals disperse in the vacuum of space. By the time a Ka-band signal from the moon reaches Earth, it will have spread out to cover an area about 2,000 kilometers in diameter—roughly the size of India. By then, the signal is a lot weaker, so you’ll need either a sensitive receiver on Earth or a powerful transmitter on the moon.

Laser communications systems also have dispersion issues, and beams that intersect can muddle up the data. But a laser beam sent from the moon would cover an area only 6 km across by the time it arrives on Earth. That means it’s much less likely for any two beams to intersect. Plus, they won’t have to contend with an already crowded chunk of spectrum. You can transmit a virtually limitless quantity of data using lasers, says Cornwell. “The spectrum for optical is unconstrained. Laser beams are so narrow, it’s almost impossible [for them] to interfere with one another.”
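
The difference in footprint comes down to diffraction: a beam’s divergence scales roughly with its wavelength divided by the diameter of the transmitting aperture, and the footprint is that divergence multiplied by the Earth-moon distance. The back-of-the-envelope Python sketch below reproduces the figures above; the aperture sizes are illustrative assumptions, not NASA’s numbers.

# Rough diffraction-limited footprints for a moon-to-Earth link.
# Divergence (radians) ~ wavelength / aperture diameter; footprint ~ divergence x distance.
EARTH_MOON_KM = 384_400

def footprint_km(wavelength_m, aperture_m, distance_km=EARTH_MOON_KM):
    divergence_rad = wavelength_m / aperture_m      # small-angle approximation
    return divergence_rad * distance_km             # footprint diameter, in km

print(round(footprint_km(1.0e-2, 2.0)))             # Ka band (~1 cm) from an assumed 2 m dish: ~1,900 km
print(round(footprint_km(1.55e-6, 0.10), 1))        # 1,550 nm laser from an assumed 10 cm telescope: ~6 km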

Higher frequencies also mean shorter wavelengths, which bring more benefits. Ka-band signals have wavelengths from 7.5 millimeters to 1 centimeter, but NASA plans to use lasers that have a 1,550-nanometer wavelength, the same wavelength used for terrestrial optical-fiber networks. Indeed, much of the development of laser communications for space builds on existing optical-fiber engineering. Shorter wavelengths (and higher frequencies) mean that more data can be packed into every second.

The advantages of laser communications have been known for many years, but it’s only recently that engineers have been able to build systems that outperform radio. In 2013, for example, NASA’s Lunar Laser Communications Demonstration proved that optical signals can reliably send information from lunar orbit back to Earth. The month-long experiment used a transmitter on the Lunar Atmosphere and Dust Environment Explorer to beam data back to Earth at speeds of 622 Mb/s, more than 10 times as fast as Orion’s S-band radio will.

“I was shocked to learn [Orion was] going back to the moon with an S-band radio,” says Bryan Robinson, an optical communications expert at MIT Lincoln Laboratory in Lexington, Mass. Lincoln Lab has played an important role in developing many of the laser communications systems on NASA missions, starting with the early optical demonstrations of the classified GeoLITE satellite in 2001. “Humans have gotten used to so much more, here on Earth and in low Earth orbit. I was glad they came around and put laser comm back on the mission.”

As a complement to its S-band radio, Orion will carry a laser system called Optical to Orion, or O2O, during the Artemis 2 mission. NASA doesn’t plan to use O2O for any mission-critical communications. Its main task will be to stream 4K ultrahigh-definition video from the moon to a curious public back home. O2O will receive data at 80 Mb/s and transmit at 20 Mb/s while in lunar orbit. If you’re wondering why O2O will transmit at 20 Mb/s when a demonstration project six years ago was able to transmit at 622 Mb/s, it’s simply because the Orion developers “never asked us to do 622,” says Farzana Khatri, a senior staff member in Lincoln Lab’s optical communications group. Cornwell confirms that the link from Earth to O2O will deliver a minimum of 80 Mb/s, though the system is capable of higher data rates.

If successful, O2O will open the door for data-heavy communications on future crewed missions, allowing for video chats with family, private consultations with doctors, or even just watching a live sports event during downtime. The more time people spend on the moon, the more important all of these connections will be to their mental well-being. And eventually, video will become mission critical for crews on board deep-space missions.

Before O2O can even be tested in space, it first has to survive the journey. Laser systems mounted on spacecraft use telescopes to send and receive signals. Those telescopes rely on a fiddly arrangement of mirrors and other moving parts. O2O’s telescope will use an off-axis Cassegrain design, a type of telescope with two mirrors to focus the captured light, mounted on a rotating gimbal. Lincoln Lab researchers selected the design because it will allow them to separate the telescope from the optical transceiver, making the entire system more modular. The engineers must ensure that the Space Launch System rocket carrying Orion won’t shake the whole delicate arrangement apart. The researchers at Lincoln Lab have developed clasps and mounts that they hope will reduce vibrations and keep everything intact during the tumultuous launch.

Once O2O is in space, it will have to be precisely aimed. It’s hard to miss a receiver when your radio signal has a cross section the size of a large country. A 6-km-diameter signal, on the other hand, could miss Earth entirely with just a slight bump from the spacecraft. “If you [use] a laser pointer when you’re nervous and your hand is shaking, it’s going to go all over the place,” says Cornwell.

Orion’s onboard equipment will also generate constant minuscule vibrations, any one of which would be enough to throw off an optical signal. So engineers at NASA and Lincoln Lab will place the optical system on an antijitter platform. The platform measures the jitters from the spacecraft and produces an opposite pattern of vibrations to cancel them out—“like noise-canceling headphones,” Cornwell says.

One final hurdle for O2O will be dealing with any cloud cover back on Earth. Infrared wavelengths, like the O2O’s 1,550 nm, are easily absorbed by clouds. A laser beam might travel the nearly 400,000 km from the moon without incident, only to be blocked just above Earth’s surface. Today, the best defense against losing a signal to a passing stratocumulus is to beam transmissions to multiple receivers. O2O, for example, will use ground stations at Table Mountain, Calif., and White Sands, N.M.

The Gateway, scheduled to be built in the 2020s, will present a far bigger opportunity for high-speed laser communications in space. NASA, with help from its Canadian, European, Japanese, and Russian counterparts, will place this space station in orbit around the moon; the station will serve as a staging area and communications relay for lunar research.

NASA’s Schier suspects that research and technology demonstrations on the Gateway could generate 5 to 8 Gb/s of data that will need to be sent back to Earth. That data rate would dwarf the transmission speed of anything in space right now—the International Space Station (ISS) sends data to Earth at 25 Mb/s. “[Five to 8 Gb/s is] the kind of thing that if you turned everything on in the [ISS], you’d be able to run it for 2 seconds before you overran the buffers,” Schier says.

The Gateway offers an opportunity to build a permanent optical trunk line between Earth and the moon. One thing NASA would like to use the Gateway for is transmitting positioning, navigation, and timing information to vehicles on the lunar surface. “A cellphone in your pocket needs to see four GPS satellites,” says Schier. “We’re not going to have that around the moon.” Instead, a single beam from the Gateway could provide a lunar rover with accurate distance, azimuth, and timing to find its exact position on the surface.

What’s more, using optical communications could free up radio spectrum for scientific research. Robinson points out that the far side of the moon is an optimal spot to build a radio telescope, because it would be shielded from the chatter coming from Earth. (In fact, radio astronomers are already planning such an observatory: Our article “Rovers Will Unroll a Telescope on the Moon’s Far Side” explains their scheme.) If all the communication systems around the moon were optical, he says, there’d be nothing to corrupt the observations.

Beyond that, scientists and engineers still aren’t sure what else they’ll do with the Gateway’s potential data speeds. “A lot of this, we’re still studying,” says Cornwell.

In the coming years, other missions will test whether laser communications work well in deep space. NASA’s mission to the asteroid Psyche, for instance, will help determine how precisely an optical communications system can be pointed and how powerful the lasers can be before they start damaging the telescopes used to transmit the signals. But closer to home, the communications needed to work and live on the moon can be provided only by lasers. Fortunately, the future of those lasers looks bright.

This article appears in the July 2019 print issue as “Phoning Home, With Lasers.”

Is Ham Radio a Hobby, a Utility…or Both? A Battle Over Spectrum Heats Up

Post Syndicated from Julianne Pepitone original https://spectrum.ieee.org/tech-talk/telecom/wireless/is-ham-radio-a-hobby-a-utilityor-both-a-battle-over-spectrum-heats-up

Some think automated radio emails are mucking up the spectrum reserved for amateur radio, while others say these new offerings provide a useful service

Like many amateur radio fans his age, Ron Kolarik, 71, still recalls the “pure magic” of his first ham experience nearly 60 years ago. Lately, though, encrypted messages have begun to infiltrate the amateur bands in ways that he says are antithetical to the spirit of this beloved hobby.

So Kolarik filed a petition, RM-11831 [PDF], to the U.S. Federal Communications Commission (FCC) proposing a rule change to “Reduce Interference and Add Transparency to Digital Data Communications.” And as the proposal makes its way through the FCC’s process, it has stirred up heated debate that goes straight to the heart of what ham radio is, and ought to be.

The core questions: Should amateur radio—and its precious spectrum—be protected purely as a hobby, or is it a utility that delivers data traffic? Or is it both? And who gets to decide?

Shipping Industry Bets Big on IoT in Bid to Save Billions

Post Syndicated from Manon Verchot original https://spectrum.ieee.org/tech-talk/telecom/internet/shipping-industry-bets-big-on-iot-in-bid-to-save-billions

Across the shipping industry, IoT technology is finally graduating from pilots to real-world commercial products

In a bid to save billions of dollars annually, the shipping industry is graduating from pilot projects and finally starting to adopt a smattering of Internet of Things (IoT) technologies for real-world, commercial use. Lately, several large and small shipping companies have turned to Traxens, a French technology firm, to help them deploy IoT devices across their fleets.

Traxens develops technology that tracks and monitors cargo. Since it launched in 2012, the company has earned investments from leading shipping companies. Shipping is responsible for carrying 90 percent of the world’s traded goods, according to the International Chamber of Shipping. This year, A.P. Møller—Mærsk A/S, which is the world’s largest container ship and supply vessel operator, became a Traxens shareholder and customer. 

Then, earlier this month, Traxens equipped the Indonesian shipping company PT TKSolusindo with a set of devices, each slightly longer and thinner than a brick and fitted with sensors including GPS. The devices can track geolocation, detect shock and motion, and check the temperature, humidity, and alarms on refrigerated containers, often called reefers.

Cyberespionage Collective Platinum Targets South Asian Governments

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/security/cyberespionage-collective-platinum-returns-with-a-steganographybased-attack

Kaspersky says the group used an HTML-based exploit that’s almost impossible to detect

Following a trail of suspicious digital crumbs left in cloud-based systems across South Asia, Kaspersky Lab’s security researchers have uncovered a steganography-based attack carried out by a cyberespionage group called Platinum. The attack targeted government, military, and diplomatic entities in the region.

Platinum was active years ago but was believed to have since been disarmed. Kaspersky’s cyber-sleuths, however, now suspect that Platinum has been operating covertly since 2012, running an “elaborate and thoroughly crafted” campaign that allowed it to go undetected for a long time.

The group’s latest campaign harnessed a classic hacking technique known as steganography. “Steganography is the art of concealing a file of any format or communication in another file in order to deceive unwanted people from discovering the existence of [the hidden] initial file or message,” says Somdip Dey, a U.K.-based computer scientist at the University of Essex and the Samsung R&D Institute who has a special interest in steganography.
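
For readers unfamiliar with the technique, the textbook example hides message bits in the least significant bits of a carrier file, where the changes are imperceptible. The minimal Python sketch below illustrates only that general idea; it is not Platinum’s method, which reportedly hid its data in HTML.

# Minimal least-significant-bit (LSB) steganography sketch, for illustration only.
# One message bit is written into the lowest bit of each carrier byte, so the
# carrier looks essentially unchanged.

def embed(carrier: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    out = bytearray(carrier)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit            # overwrite only the lowest bit
    return out

def extract(carrier: bytes, n_bytes: int) -> bytes:
    bits = [carrier[i] & 1 for i in range(n_bytes * 8)]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes))

cover = bytearray(range(256)) * 4                 # stand-in for grayscale pixel data
stego = embed(cover, b"hello")
print(extract(stego, 5))                          # b'hello'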

The Factories of the Future Will Be Fast, Flexible, and Free of Wires

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/the-factories-of-the-future-will-be-fast-flexible-and-free-of-wires

AI, 5G, and the IoT will allow factories to produce new goods on the fly

The future of manufacturing is software defined. You don’t have to look further than ABB to understand why companies are turning to 5G networks, artificial intelligence, and computer vision. The Swiss company is using these new tools to boost reliability and agility in its nearly 300 factories around the world, which produce a host of goods, from simple plastic zip ties to complex robotic arms.

For ABB and other companies pushing software-defined networking, it’s all about being safer while adapting to a growing clamor for personalized products.

When it comes to safety, adding more sensors to machines and deploying AI can make the end product more consistently reliable. At its Heidelberg factory, for example, ABB makes circuit breakers. But even with 99.999 percent reliability at ABB’s factory, faulty circuit breakers would still kill 3,000 people a year, according to Guido Jouret, the company’s chief digital officer.

People can’t achieve 100 percent reliability when making and inspecting the completed circuit breakers—but a camera with machine learning can. When that camera detects any sort of variation, factory managers can go back to the machine to figure out what’s causing that defect.

Boosting safety and reliability isn’t exactly news to anyone who has followed the automotive industry’s adoption of Japanese Kaizen or Six Sigma manufacturing, both of which aim to improve reliability and reduce waste. But adding automation and robots is becoming more important today as our culture increasingly demands customized products. These so-called lean-manufacturing methods allow factory managers to reconfigure their production lines on the fly.

“It’s not always about being more efficient,” Jouret says. “It’s about being more agile.”

Kiva Allgood, the head of Internet of Things and automotive at Ericsson, calls the shift from efficiency to agility a move away from economies of scale toward an economy of one. In other words, the inefficiencies traditionally associated with making low quantities of goods will no longer apply. She saw this change coming as an executive at General Electric. Now she’s working on the wireless technology that will help make this possible.

But before we can reprogram the factory floor, we have to understand it. That starts with individual machines. We’ll need manufacturing equipment with sensors measuring both the machine’s work and the machine’s health. This is the stage where many manufacturers are today.

The factory should also have sensors that provide context to the overall environment, including temperature, workers’ movements, and more. Armed with that understanding as well as computer-vision algorithms designed to detect flaws in the manufactured product, it will become possible to quickly repurpose robots to make something new.

Perhaps more interestingly, future agile factories will remove the wires littering factory floors. Historically, factory automation has meant building a rigidly defined manufacturing line dictated by the robots making the product. But with developing tech, factories will free those robots from their data and power wires, and replace the wires with low-latency wireless 5G networks. Then, factories can turn days-long reconfiguration efforts into an overnight project.

By emphasizing agility over efficiency, the factories of the future will be able to turn on a dime to meet the demands of our fast-paced society.

This article appears in the July 2019 print issue as “One Factory Fits All.”

Melting Arctic Ice Opens a New Fiber Optic Cable Route

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/melting-sea-ice-opens-the-floodgate-for-a-new-fiber-optic-cable-route

Cinia and MegaFon’s proposed Arctic cable would bring lower latencies and geographical diversity

The most reliable way to reduce latency is to make sure your signal travels over the shortest physical distance possible. Nowhere is that more evident than the fiber optic cables that cross oceans to connect continents. There will always be latency between Europe and North America, for example, but where you lay the cable connecting the two continents affects that latency a great deal.

To that end, Helsinki-based Cinia, which owns and operates about 15,000 kilometers of fiber optic cable, and MegaFon, a Russian telecommunications operator, signed a memorandum of understanding to lay a fiber optic cable across the Arctic Ocean. The cable, if built, would not only reduce latency between users in Europe, Asia, and North America, but provide some much-needed geographical diversity to the world’s undersea cable infrastructure.

The vast majority of undersea cable encircles the world along a relatively similar path: From the east coast of the United States, cables stretch across the northern Atlantic to Europe, through the Mediterranean and Red Seas, across the Indian Ocean before swinging up through the South China Sea and finally spanning the Pacific to arrive at the west coast of the U.S. Other cable routes exist, but none have anywhere near the data throughput that this world-girding trunk line has.

Ari-Jussi Knaapila, the CEO of Cinia, estimates that the planned Arctic cable, which would stretch from London to Alaska, would shorten the physical cable distance between Europe and the western coast of North America by 20 to 30 percent. Additional cable will extend the route down to China and Japan, for a planned total of 10,000 kilometers of new cable.

Knaapila also says that the cable is an important effort to increase geographic diversity of the undersea infrastructure. Because many cables run along the same route, various events—earthquakes, tsunamis, seabed landslides, or even an emergency anchoring by a ship—can damage several cables at once. On December 19, 2008, 14 countries lost their connections to the Internet after ship anchors cut five major undersea cables in the Mediterranean Sea and Red Sea.

“Submarine cables are much more reliable than terrestrial cables with the same length,” Knaapila says. “But when a fault occurs in the submarine cable, the repair time is much longer.” The best way to avoid data crunches when a cable is damaged is to already have cables elsewhere that were unaffected by whatever event broke the original cable in the first place.

Stringing a cable across the Arctic Ocean is not a new idea, though other proposed projects, including the semi-built Arctic Fibre project, have never been completed. In the past, the navigational season in the Arctic was too short to easily build undersea cables. Now, melting sea ice due to climate change is expanding that window and making construction more feasible. (The shipping industry is coming to similar realizations as new routes open up.)

The first step for the firms under the memorandum of understanding is to establish a company by the end of the year tasked with the development of the cable. Then come route surveys of the seabed, construction permits (for both on-shore and off-shore components), and finally, the laying of the cable. According to Knaapila, 700 million euros have been budgeted for investment in the route.

The technical specifics of the cable itself have yet to be determined. Most fiber optic cables use wavelengths of light around 850, 1300, or 1550 nanometers, but for now, the goal is to remain as flexible as possible. For the same reason, the data throughput of the proposed cable remains undecided.

Of course, not all cable projects succeed. The South Atlantic Express (SAex) would be one of the first direct links between Africa and South America, and it would connect remote islands like St. Helena along the way. But SAex has struggled with funding and currently sits in limbo. Cinia and MegaFon hope to avoid a similar fate.

Loon’s Balloons Deliver Emergency Internet Service to Peru Following 8.0 Earthquake

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/loons-balloons-deliver-emergency-service-to-peru-following-80-earthquake

The company was able to respond quickly because it had already begun tests in the country

When a magnitude 8.0 earthquake struck Peru on Sunday, it wreaked havoc on the country’s communications infrastructure. Within 48 hours, though, people in affected regions could use their mobile phones again. Loon, the Alphabet company, was there delivering wireless service by balloon.

Such a rapid response was possible because Loon happened to be in the country, testing its equipment while working out a deal with provider Telefonica. Both terrestrial infrastructure and the balloons themselves were already in place, and Loon simply had to reorganize its balloons to deliver service on short notice.

Household Radar Can See Through Walls and Knows How You’re Feeling

Post Syndicated from Fadel Adib original https://spectrum.ieee.org/telecom/wireless/household-radar-can-see-through-walls-and-knows-how-youre-feeling

Modern wireless tech isn’t just for communications. It can also sense a person’s breathing and heart rate, even gauge emotions

When I was a boy, I secretly hoped I would discover latent within me some amazing superpower—say, X-ray vision or the ability to read people’s minds. Lots of kids have such dreams. But even my fertile young brain couldn’t imagine that one day I would help transform such superpowers into reality. Nor could I conceive of the possibility that I would demonstrate these abilities to the president of the United States. And yet two decades later, that’s exactly what happened.

There was, of course, no magic or science fiction involved, just new algorithms and clever engineering, using wireless technology that senses the reflections of radio waves emanating from a nearby transmitter. The approach my MIT colleagues and I are pursuing relies on inexpensive equipment that is easy to install—no more difficult than setting up a Wi-Fi router.

These results are now being applied in the real world, helping medical researchers and clinicians to better gauge the progression of diseases that affect gait and mobility. And devices based on this technology will soon become commercially available. In the future, they could be used for monitoring elderly people at home, sending an alert if someone has fallen. Unlike users of today’s medical alert systems, the people being monitored won’t have to wear a radio-equipped bracelet or pendant. The technology could also be used to monitor the breathing and heartbeat of newborns, without having to put sensors in contact with the infants’ fragile skin.

You’re probably wondering how this radio-based sensing technology works. If so, read on, and I will explain by telling the story of how we managed to push our system to increasing levels of sensitivity and sophistication.

It all started in 2012, shortly after I became a graduate student at MIT. My faculty advisor, Dina Katabi, and I were working on a way to allow Wi-Fi networks to carry data faster. We mounted a Wi-Fi device on a mobile robot and let the robot navigate itself to the spot in the room where data throughput was highest. Every once in a while, our throughput numbers would mysteriously plummet. Eventually we realized that when someone was walking in the adjacent hallway, the person’s presence disrupted the wireless signals in our room.

We should have seen this coming. Wireless communication systems are notoriously vulnerable to electromagnetic noise and interference, which engineers work hard to combat. But seeing these effects firsthand got us thinking along completely different lines about our research. We wondered whether this “noise,” caused by passersby, could serve as a new source of information about the nearby environment. Could we take a Wi-Fi device, point it at a wall, and see on a computer screen how someone behind the wall was moving?

That should be possible, we figured. After all, walls don’t block wireless signals. You can get a Wi-Fi connection even when the router is in another room. And if there’s a person on the other side of a wall, the wireless signal you send out on this side will reflect off his or her body. Naturally, after the signal traverses the wall, gets reflected back, and crosses through the wall again, it will be very attenuated. But if we could somehow register these minute reflections, we would, in some sense, be able to see through the wall.

Using radio waves to detect what’s on the other side of a wall has been done before, but with sophisticated radar equipment and expensive antenna arrays. We wanted to use equipment not much different from the kind you’d use to create a Wi-Fi local area network in your home.

As we started experimenting with this idea, we discovered a host of practical complications. The first came from the wall itself, whose reflections were 10,000 to 100,000 times as strong as any reflection coming from beyond it. Another challenge was that wireless signals bounce off not only the wall and the human body but also other objects, including tables, chairs, ceilings, and floors. So we had to come up with a way to cancel out all these other reflections and keep only those from someone behind the wall.

To do this, we initially used two transmitters and one receiver. First, we sent off a signal from one transmitter and measured the reflections that came back to our receiver. The received signal was dominated by a large reflection coming off the wall.

We did the same with the second transmitter. The signal it received was also dominated by the strong reflection from the wall, but the magnitude of the reflection and the delay between transmitted and reflected signals were slightly different.

Then we adjusted the signal given off by the first transmitter so that its reflections would cancel the reflections created by the second transmitter. Once we did that, the receiver didn’t register the giant reflection from the wall. Only the reflections that didn’t get canceled, such as those from someone moving behind the wall, would register. We could then boost the signal sent by both transmitters without overloading the receiver with the wall’s reflections. Indeed, we now had a system that canceled reflections from all stationary objects.
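
In complex-baseband terms, the calibration amounts to weighting the second transmitter so that its copy of the wall reflection arrives at the receiver as the exact negative of the first transmitter’s. Here is a toy numerical sketch of that idea, with invented channel values:

import numpy as np

rng = np.random.default_rng(0)
h1_wall, h2_wall = 1.0 + 0.3j, 0.8 - 0.5j                 # strong static reflections (invented)
h_person = 0.001 * np.exp(2j * np.pi * rng.random(100))   # weak, time-varying reflection

w = -h1_wall / h2_wall                     # weight the second transmitter to cancel the wall

tx = np.ones(100)                          # unit-amplitude probe signal
received = h1_wall * tx + w * h2_wall * tx + h_person * tx

print(round(abs(h1_wall), 3))              # ~1.044: what the wall alone would contribute
print(round(np.abs(received).max(), 4))    # 0.001: only the person's reflection remains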

Next, we concentrated on detecting a person as he or she moved around the adjacent room. For that, we used a technique called inverse synthetic aperture radar, which is sometimes used for maritime surveillance and radar astronomy. With our simple equipment, that strategy was reasonably good at detecting whether there was a person moving behind a wall and even gauging the direction in which the person was moving. But it didn’t show the person’s location.

As the research evolved, and our team grew to include fellow graduate student Zachary Kabelac and Professor Robert Miller, we modified our system so that it included a larger number of antennas and operated less like a Wi-Fi router and more like conventional radar.

People often think that radar equipment emits a brief radio pulse and then measures the delay in the reflections that come back. It can, but that’s technically hard to do. We used an easier method, called frequency-modulated continuous-wave radar, which measures distance by comparing the frequency of the transmitted and reflected waves. Our system operated between about 5 and 7 gigahertz, transmitted signals that were just 0.1 percent the strength of Wi-Fi, and could determine the distance to an object to within a few centimeters.
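
In an FMCW radar, the transmitter sweeps its frequency linearly; the echo arrives delayed, and mixing it with the outgoing chirp produces a beat tone whose frequency is proportional to range, R = c * f_beat * T / (2 * B), where B is the swept bandwidth and T the sweep duration. A minimal sketch follows; the 5-to-7-gigahertz span matches the system described above, but the sweep duration is an assumed value.

C = 3e8          # speed of light, m/s
B = 2e9          # swept bandwidth: the 5-to-7-GHz band described above
T = 1e-3         # sweep duration in seconds, assumed for illustration

def beat_frequency(distance_m):
    delay = 2 * distance_m / C        # round-trip time to the reflector
    return (B / T) * delay            # chirp slope times delay

def range_from_beat(f_beat_hz):
    return C * f_beat_hz * T / (2 * B)

f_beat = beat_frequency(4.0)          # a person 4 meters away
print(round(f_beat))                  # ~53,333 Hz
print(round(range_from_beat(f_beat), 3))   # recovers 4.0 m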

Using one transmitting antenna and multiple receiving antennas mounted at different positions allowed us to measure radio reflections for each transmit-receive antenna pair. Those measurements showed the amount of time it took for a radio signal to leave the transmitter, reach the person in the next room, and reflect back to the receiver—typically several tens of nanoseconds. Multiplying that very short time interval by the speed of light gives the distance the signal traveled from the transmitter to the person and back to the receiver. As you might recall from middle-school geometry class, that distance defines an ellipse, with the two antennas located at the ellipse’s two foci. The person generating the reflection in the next room must be located somewhere on that ellipse.

With two receiving antennas, we could map out two such ellipses, which intersected at the person’s location. With more than two receiving antennas, it was possible to locate the person in 3D—you could tell whether a person was standing up or lying on the floor, for example. Things get trickier if you want to locate multiple people this way, but as we later showed, with enough antennas, it’s doable.
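
Numerically, locating the person means finding the point whose transmitter-to-point-to-receiver distances match all the measured bistatic ranges, which is the intersection of those ellipses. A small least-squares sketch, with an invented antenna layout:

import numpy as np
from scipy.optimize import least_squares

tx = np.array([0.0, 0.0])                                  # transmitter position, meters
rxs = [np.array([0.5, 0.0]), np.array([0.0, 0.5])]         # two receivers (invented layout)

person = np.array([3.0, 2.0])                              # ground truth, used to generate data
ranges = [np.linalg.norm(person - tx) + np.linalg.norm(person - rx) for rx in rxs]

def residuals(p):
    # Mismatch between hypothesized and measured bistatic range for each receiver.
    return [np.linalg.norm(p - tx) + np.linalg.norm(p - rx) - r
            for rx, r in zip(rxs, ranges)]

estimate = least_squares(residuals, x0=np.array([1.0, 1.0])).x
print(np.round(estimate, 3))                               # ~[3. 2.]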

It’s easy to think of applications for such a system. Smart homes could track the location of their occupants and adjust the heating and cooling of different rooms. You could monitor elderly people, to be sure they hadn’t fallen or otherwise become immobilized, without requiring these seniors to wear a radio transmitter. We even developed a way for our system to track someone’s arm gestures, enabling the user to control lights or appliances by pointing at them.

The natural next step for our research team—which by this time also included graduate students Chen-Yu Hsu and Hongzi Mao, and Professor Frédo Durand—was to capture a human silhouette through the wall.

The fundamental challenge here was that at Wi-Fi frequencies, the reflections from some body parts would bounce back toward the receiving antenna, while other reflections would go off in other directions. So our wireless imaging device would capture some body parts but not others, and we didn’t know which body parts they were.

Our solution was quite simple: We aggregated the measurements over time. That works because as a person moves, different body parts as well as different perspectives of the same body part get exposed to the radio waves. We designed an algorithm that uses a model of the human body to stitch together a series of reflection snapshots. Our device was then able to reconstruct a coarse human silhouette, showing the location of a person’s head, chest, arms, and feet.

While this isn’t the X-ray vision of Superman fame, the low resolution might be considered a feature rather than a bug, given the concerns people have about privacy. And we later showed that the ghostly images were of sufficient resolution to identify different people with the help of a machine-learning classifier. We also showed that the system could be used to track the palm of a user to within a couple of centimeters, which means we might someday be able to detect hand gestures.

Initially, we assumed that our system could track only a person who was moving. Then one day, I asked the test subject to remain still during the initialization stage of our device. Our system accurately registered his location despite the fact that we had designed it to ignore static objects. We were surprised.

Looking more closely at the output of the device, we realized that a crude radio image of our subject was appearing and disappearing—with a periodicity that matched his breathing. We had not realized that our system could capture human breathing in a typical indoor setting using such low-power wireless signals. The reason this works, of course, is that the slight movements associated with chest expansion and contraction affect the wireless signals. Realizing this, we refined our system and algorithms to accurately monitor breathing.
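
The signal processing behind this is simple in principle: because chest motion modulates the reflections periodically, the breathing rate appears as the dominant low-frequency peak in the spectrum of the measured signal. A minimal sketch on synthetic data, with an assumed sample rate and amplitudes:

import numpy as np

fs = 20.0                                    # radar measurements per second (assumed)
t = np.arange(0, 60, 1 / fs)                 # one minute of data
breaths_per_min = 14
chest = 0.05 * np.sin(2 * np.pi * (breaths_per_min / 60) * t)          # periodic chest motion
chest += 0.01 * np.random.default_rng(1).standard_normal(t.size)       # measurement noise

spectrum = np.abs(np.fft.rfft(chest - chest.mean()))
freqs = np.fft.rfftfreq(chest.size, d=1 / fs)
band = (freqs > 0.1) & (freqs < 0.7)         # plausible breathing rates: 6 to 42 per minute
estimate = freqs[band][np.argmax(spectrum[band])] * 60

print(round(estimate, 1))                    # ~14.0 breaths per minute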

Earlier research in the literature had shown the possibility of using radar to detect breathing and heartbeats, and after scrutinizing the received signals more closely, we discovered that we could also measure someone’s pulse. That’s because, as the heart pumps blood, the resulting forces cause different body parts to oscillate in subtle ways. Although the motions are tiny, our algorithms could zoom in on them and track them with high accuracy, even in environments with lots of radio noise and with multiple subjects moving around, something earlier researchers hadn’t achieved.

By this point it was clear that there were important real-world applications, so we were excited to spin off a company, which we called Emerald, to commercialize our research. Wearing our Emerald hats, we participated in the MIT $100K Entrepreneurship Competition and made it to the finals. While we didn’t win, we did get invited to showcase our device in August 2015 at the White House’s “Demo Day,” an event that President Barack Obama organized to show off innovations from all over the United States.

It was thrilling to demonstrate our work to the president, who watched as our system detected my falling down and monitored my breathing and heartbeat. He noted that the system would make for a good baby monitor. Indeed, one of the most interesting tests we had done was with a sleeping baby.

Video on a typical baby monitor doesn’t show much. But outfitted with our device, such a monitor could measure the baby’s breathing and heart rate without difficulty. This approach could also be used in hospitals to monitor the vital signs of neonatal and premature babies. These infants have very sensitive skin, making it problematic to attach traditional sensors to them.

About three years ago, we decided to try sensing human emotions with wireless signals. And why not? When a person is excited, his or her heart rate increases; when blissful, the heart rate declines. But we quickly realized that breathing and heart rate alone would not be sufficient. After all, our heart rates are also high when we’re angry and low when we’re sad.

Looking at past research in affective computing—the field of study that tries to recognize human emotions from such things as video feeds, images, voice, electroencephalography (EEG), and electrocardiography (ECG)—we learned that the most important vital sign for recognizing human emotions is the millisecond variation in the intervals between heartbeats. That’s a lot harder to measure than average heart rate. And in contrast to ECG signals, which have very sharp peaks, the shape of a heartbeat signal on our wireless device isn’t known ahead of time, and the signal is quite noisy. To overcome these challenges, we designed a system that learns the shape of the heartbeat signal from the pattern of wireless reflections and then uses that shape to recover the length of each individual beat.
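
As a rough illustration of that last step (a simplification for clarity, not the actual algorithm): once a beat-shape template has been learned, sliding it along the reflection signal and keeping well-separated correlation peaks gives the timing of individual beats, from which the beat-to-beat intervals and their variability follow.

import numpy as np

fs = 100.0                                             # samples per second (assumed)
rng = np.random.default_rng(2)

template = np.hanning(int(0.3 * fs))                   # stand-in for the learned beat shape
beat_times = np.cumsum(0.8 + 0.05 * rng.standard_normal(60))   # ~75 bpm with jitter

signal = np.zeros(int(beat_times[-1] * fs) + template.size)
for bt in beat_times:
    start = int(bt * fs)
    signal[start:start + template.size] += template
signal += 0.1 * rng.standard_normal(signal.size)       # measurement noise

# Correlation peaks, at least half a second apart, mark individual beats.
corr = np.correlate(signal, template, mode="valid")
threshold = 0.6 * corr.max()
peaks, last = [], -1000
for i in range(1, corr.size - 1):
    if corr[i] > threshold and corr[i - 1] < corr[i] >= corr[i + 1] and i - last > int(0.5 * fs):
        peaks.append(i)
        last = i

intervals = np.diff(peaks) / fs
print(round(60 / intervals.mean(), 1))                 # average heart rate, ~75 bpm
print(round(1000 * intervals.std(), 1))                # beat-to-beat variation, tens of ms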

Using features from these heartbeat signals as well as from the person’s breathing patterns, we trained a machine-learning system to classify them into one of four fundamental emotional states: sadness, anger, pleasure, and joy. Sadness and anger are both negative emotions, but sadness is a calm emotion, whereas anger is associated with excitement. Pleasure and joy, both positive emotions, are similarly associated with calm and excited states.
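
The classification step can be pictured with a toy sketch in which two synthetic features stand in for arousal and valence, and the four labels follow the calm/excited, negative/positive structure just described. The real system uses many features extracted from individual heartbeats and breathing; this shows only the shape of the pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 800
arousal = rng.standard_normal(n)                    # stand-in for excitement-related features
valence = rng.standard_normal(n)                    # stand-in for positivity-related features

# 0: sadness (calm, negative)   1: anger (excited, negative)
# 2: pleasure (calm, positive)  3: joy (excited, positive)
labels = (arousal > 0).astype(int) + 2 * (valence > 0).astype(int)
features = np.column_stack([arousal, valence]) + 0.3 * rng.standard_normal((n, 2))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, features, labels, cv=5).mean())    # well above the 25 percent chance level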

Testing our system on different people, we demonstrated that it could accurately sense emotions 87 percent of the time when the testing and training were done on the same subject. When it hadn’t been trained on the subject’s data, it still recognized the person’s emotions with more than 73 percent accuracy.

In October 2016, fellow graduate student Mingmin Zhao, Katabi, and I published a scholarly article [PDF] about these results, which got picked up in the popular press. A few months later, our research inspired an episode of the U.S. sitcom “The Big Bang Theory.” In the episode, the characters supposedly borrow the device we’d developed to try to improve Sheldon’s emotional intelligence.

While it’s unlikely that a wireless device could ever help somebody in this way, using wireless signals to recognize basic human mental states could have other practical applications. For example, it might help a virtual assistant like Amazon’s Alexa recognize a user’s emotional state.

There are many other possible applications that we’ve only just started to explore. Today, Emerald’s prototypes are in more than 200 homes, where they are monitoring test subjects’ sleep and gait. Doctors at Boston Medical Center, Brigham and Women’s Hospital, Massachusetts General Hospital, and elsewhere will use the data to study disease progression in patients with Alzheimer’s, Parkinson’s, and multiple sclerosis. We hope that in the not-too-distant future, anyone will be able to purchase an Emerald device.

When someone asks me what’s next for wireless sensing, I like to answer by asking them what their favorite superpower is. Very possibly, that’s where this technology is going.

This article appears in the June 2019 print issue as “Seeing With Radio.”

About the Author

Fadel Adib is an assistant professor at MIT and founding director of the Signal Kinetics research group at the MIT Media Lab.

If DARPA Has Its Way, AI Will Rule the Wireless Spectrum

Post Syndicated from Paul Tilghman original https://spectrum.ieee.org/telecom/wireless/if-darpa-has-its-way-ai-will-rule-the-wireless-spectrum

DARPA’s Spectrum Collaboration Challenge demonstrates that autonomous radios can manage spectrum better than humans can


In the early 2000s, Bluetooth almost met an untimely end. The first Bluetooth devices struggled to avoid interfering with Wi-Fi routers, a higher-powered, more-established cohort on the radio spectrum, with which Bluetooth devices shared frequencies. Bluetooth engineers eventually modified their standard—and saved their wireless tech from early extinction—by developing frequency-hopping techniques for Bluetooth devices, which shifted operation to unoccupied bands upon detecting Wi-Fi signals.

Frequency hopping is just one way to avoid interference, a problem that has plagued radio since its beginning. Long ago, regulators learned to manage spectrum so that in the emerging wireless ecosystem, different radio users were allocated different frequencies for their exclusive use. While this practice avoids the challenges of detecting transmissions and shifting frequencies on the fly, it makes very inefficient use of spectrum, as portions lie fallow.

Today, demand is soaring for the finite resource of radio spectrum. Over the last several years, wireless data transmission has grown by roughly 50 percent per year, driven largely by people streaming videos and scrolling through social media on their smartphones. To meet this demand, we must allocate spectrum as efficiently as possible. Increasingly, that means that wireless technologies cannot have exclusive frequencies, but rather must share available spectrum. Frequency hopping, which Bluetooth uses, will be part of the solution, but to cope with the surging demand we are going to have to go far beyond it.

To tackle spectrum scarcity, I created the Spectrum Collaboration Challenge (SC2) at the U.S. Defense Advanced Research Projects Agency (DARPA), where I am a program manager. SC2 is a three-year open competition in which teams from around the world are rethinking the spectrum-management problem with a clean slate. Teams are designing new radios that use artificial intelligence (AI) to learn how to share spectrum with their competitors, with the ultimate goal of increasing overall data throughput. These teams are vying for nearly US $4 million in prizes to be awarded at the SC2 championship this coming October in Los Angeles. Thanks to two years of competition, we have witnessed, for the first time, autonomous radios collectively sharing wireless spectrum to transmit far more data than would be possible by assigning exclusive frequencies to each radio.

Before SC2, various DARPA projects had demonstrated that a handful of radios could autonomously manage spectrum by frequency hopping, as Bluetooth does, in order to avoid one another. So why can’t we just extend the use of the frequency-hopping technique to a wider array of radios, and solve the problem of limited spectrum that way?

Unfortunately, frequency hopping works only up to a point. It depends on the availability of unused spectrum, and if there are too many radios trying to send signals, there won’t be much, if any, unused spectrum available. To make SC2 work, we realized, we would need to test competing teams on scenarios with dozens of radios trying to share a spectrum band simultaneously. That way, we could ensure that each radio couldn’t have its own dedicated channel, because there wouldn’t be enough spectrum to go around.

With that in mind, we developed scenarios that would be played out in a series of round-robin matches, in which three, four, or five independent radio networks all broadcast together in a roughly one-square-kilometer area. The radio networks would be permitted access to the same frequencies, and each network would use an AI system to figure out how to share those frequencies with the other networks. We would determine how successful a given match was based on how many tasks, such as phone calls and video streams, were completed. A group of radio networks completing more tasks than another group would be crowned the winner for that match. However, our main goal was to see teams develop AI-managed radio networks that would be capable of completing more tasks collectively than would be possible if each radio was using an exclusive frequency band.

We realized quickly that placing these radios in the real world would have been impractical. We would never be able to guarantee that the wireless conditions would be the same for each team that competed. Also, moving individual radios around to set up each scenario and each match would have been far too complicated and time consuming.

So we built Colosseum, the world’s largest radio-frequency emulation test-bed. Currently housed in Laurel, Md., at the Johns Hopkins University Applied Physics Laboratory, Colosseum occupies 21 server racks, consumes 65 kilowatts, and requires roughly the same amount of cooling as 10 large homes. It can emulate more than 65,000 unique interactions, such as text messages or video streams, between 128 radios at once. Sixty-four field-programmable gate arrays handle the emulation, together performing more than 150 trillion floating-point operations per second (150 teraflops).

For each match, we plug in radios so that they can “broadcast” radio-frequency signals straight into Colosseum. This test-bed has enough computing power to calculate how those signals will behave, according to a detailed mathematical model of a given environment. For example, within Colosseum are emulated walls, off which signals “bounce.” There are emulated rainstorms and ponds, within which signals are partly “absorbed.”

The emulation provides all the information necessary for the teams’ AIs to make appropriate decisions based on their observations during each emulated scenario. Faced with a cellphone jammer that is flooding a frequency with meaningless noise, for example, an AI might choose to change its frequency to one not affected by the jammer.

It’s one thing to build an environment for AIs to collaboratively manage spectrum, but it’s another thing entirely to create those AIs. To understand how the teams competing in SC2 are building these AI systems, you need a bit of background on how AI has developed in the past several decades.

Broadly speaking, researchers have advanced AI in a couple of “waves” that have redefined how these systems learn. The first wave of AI was expert systems. These AIs are created by interviewing experts in a particular area and deriving a set of rules from them that an autonomous system can use to make decisions while trying to accomplish something. These AIs excel at problems, such as chess, where the rules can be written down in a straightforward fashion. In fact, one of the best-known examples of first-wave AI is IBM’s Deep Blue, which beat world chess champion Garry Kasparov in 1997.

There’s a newer, second wave of AI that relies on huge amounts of data, rather than human expertise, to learn the rules of a given task. Second-wave AI is particularly good at problems where humans have trouble writing down all the nuances of a problem and where there often seem to be more exceptions than rules. Recognizing speech is an example of such a problem. These systems ingest complex raw data, such as audio signals, and then make decisions about the data, such as what words were spoken. This wave of AI is the type we find in the speech recognition used by digital assistants like Siri and Alexa.

Today, neither first- nor second-wave AI is used for managing wireless spectrum. That meant that we could consider both waves of AI and the ways in which researchers teach those AIs how to solve problems, to find the best solution to our problem. Ultimately, it is easiest to treat spectrum management as a reinforcement-learning problem, in which we reward the AI when it succeeds and penalize it when it fails. For example, the AI may receive one point for successfully transmitting data, or lose one point for a transmission that was dropped. By accumulating points during a training period, the AI remembers successes and tries to repeat them, while also moving away from unsuccessful tactics.
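
A stripped-down version of that learning loop, for a single radio choosing among a few channels, might look like the following sketch. The per-channel success probabilities are invented, and the real competitors learn from far richer observations.

import random

# A radio picks a channel, earns +1 if the transmission succeeds and -1 if it is
# dropped, and maintains a running value estimate for each channel.
random.seed(0)
success_prob = [0.2, 0.9, 0.5, 0.4]          # hidden quality of 4 channels (invented)
values = [0.0] * 4                           # learned estimate per channel
epsilon, alpha = 0.1, 0.05                   # exploration rate, learning rate

for step in range(5000):
    if random.random() < epsilon:
        channel = random.randrange(4)                        # explore occasionally
    else:
        channel = max(range(4), key=lambda c: values[c])     # otherwise exploit
    reward = 1 if random.random() < success_prob[channel] else -1
    values[channel] += alpha * (reward - values[channel])    # running value update

print([round(v, 2) for v in values])   # the best channel's value approaches 2*0.9 - 1 = 0.8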

In our competition, a dropped transmission often happens because of interference from another radio’s transmission. So we also have to think of wireless management as a collaborative challenge, because there are multiple radios broadcasting at the same time. The key to AI-managed radios performing better than traditional, static allocation is developing AIs that can maximize their own points while leaving room for the other AIs to do the same. Teams are rewarded when they make as many successful transmissions as possible without constantly bumping into one another in the pursuit of available spectrum, which would prevent them all from maximizing use of that spectrum.

As if that’s not difficult enough, there’s one additional wrinkle that makes spectrum collaboration harder than many similar problems. Imagine playing a game of pickup basketball with people you’ve never met before: Your team’s ability to play together is not going to be anywhere near as good as that of teammates who have trained together for years. To date, the most successful challenges involving multiple agents have been ones where AIs have been trained together. A recent example was a project in 2018, in which the nonprofit AI research company OpenAI demonstrated that a group of five AIs could beat a team of human players in the video game Dota 2.

It’s 9 December 2018, and my DARPA colleagues and I are finally getting our chance to learn if a group of AIs can succeed at such a complex multiagent problem. We’re huddled around a set of computers in a hotel conference room, just a block away from where Colosseum is installed. The hotel has been our command center for a week now, and we’ve analyzed more than 300 matches to determine the top-scoring teams. In three days, we expect to present up to eight $750,000 prizes, one for each of the top teams. But for the moment, we don’t actually know how many prizes we’re going to be handing out.

In the first qualifying event a year earlier, teams were judged solely on their relative rankings. This time, however, to win an award, the top teams must also demonstrate that their radios can manage spectrum better than by using traditional dedicated channels.

To compare autonomous radios against exclusive-frequency management, we designed one last set of matches. First, we took a baseline, in which each team was assigned exclusive frequencies, to measure how much data they could transmit. Then we removed the restrictions to see whether a team’s network could transmit more data without hampering the four other radio networks sharing the spectrum.

In the hotel room, we’re waiting anxiously for the last set of matches to complete. If no one is able to clear the bar that we’ve set for them, two years of hard work could be dashed. It occurs to us that in our zeal, we had no backup plan should everyone fail. And it didn’t necessarily soothe our nerves that by this point in SC2’s existence, we’ve started to see the limitations of some approaches.

Fortunately, we’ve also begun to spot some of the keys to success. When the competition began, nearly all of the teams started with first-wave AI approaches. This made sense as a starting point—remember that there are no AI systems being used to manage spectrum. In this first-wave approach, the teams are trying to write the general rules for collaboratively using spectrum.

Of course, each team has written slightly different rules, but every system they developed had some general principles in common. First, the systems should listen for what frequencies each network has asked to use. Second, from the remaining frequency bands, only one radio should be assigned to each band—and teams should be good neighbors by not claiming more than their fair share. And third, if there are no empty frequency bands, radios should select the ones with the least interference.
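
Written as code, a first-wave system following those three rules might look roughly like the sketch below, an illustration of the rule-based style rather than any team’s actual design.

# Rule 1 shows up in the inputs: claimed_by_others comes from listening to the
# other networks' channel requests before deciding.

def assign_channels(my_radios, claimed_by_others, interference, n_channels, n_networks):
    fair_share = max(1, n_channels // n_networks)
    free = [c for c in range(n_channels) if c not in claimed_by_others]
    assignment = {}
    for radio in my_radios:
        if free and len(assignment) < fair_share:
            assignment[radio] = free.pop(0)      # rule 2: one radio per empty band, within our fair share
        else:
            remaining = [c for c in range(n_channels) if c not in assignment.values()]
            assignment[radio] = min(remaining, key=lambda c: interference[c])   # rule 3: least interference
    return assignment

print(assign_channels(["a", "b", "c"],
                      claimed_by_others={0, 1, 4},
                      interference=[0.9, 0.8, 0.1, 0.3, 0.7],
                      n_channels=5, n_networks=3))
# {'a': 2, 'b': 3, 'c': 4}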

Unfortunately, these rules fail to catch all the idiosyncrasies of wireless management, which result in unintended consequences that hamper the radios’ ability to work together. During SC2, we’ve seen plenty of examples where these seemingly straightforward rules fail.

For example, remember the second rule, to be a good neighbor and not hog frequencies? In principle, this cooperative approach should provide opportunities for other radios to use more spectrum if they need it. In practice, we saw how this strategy goes awry: In one instance, three teams left a large amount of the spectrum completely unused.

Looking at the results, we realized that one team insisted on using no more than one-third of the spectrum. While this strategy was very altruistic, it also limited the connections they could make to complete their own tasks—and therefore limited their score as well. It got worse when another system noticed that the first system was not scoring enough points, so it limited its own spectrum use to let the first system use more, which the first system would never do. Basically, the systems were being too deferential, and the result was wasted spectrum.

To fix that problem with a first-wave AI, the teams have to write another rule. And when that new rule results in another unexpected outcome, they deal with that by writing another rule. And so on. These constant surprises and consequent new rules are the main shortcoming of first-wave AIs. What can seem like a straightforward problem might end up being more difficult than it appeared to be.

Rather than rely on a few hard-and-fast rules, it seems that a better approach is for each radio to adapt its strategy based on the other radios with which it is sharing spectrum. In effect, the radio should develop an ever-growing series of rules by mining them from a large volume of data—the kind of data that Colosseum is good at generating. That’s why now, during this trial on 9 December 2018, we’re seeing teams shift to a second-wave AI approach. Several teams have built fledgling second-wave AI networks that can quickly characterize how the other networks are playing a match, and use this information to change their own radios’ rules on the fly.

When SC2 started, we suspected that many teams would take the simple approach of employing a “sense and avoid” strategy. This is what a Bluetooth device does when it discovers that the spectrum it wants is being used by a Wi-Fi router: It jumps to a new frequency. But Bluetooth’s frequency hopping works, in part, because Wi-Fi acts in a predictable way (that is, it broadcasts on a specific frequency and won’t change that behavior). However, in our competition, each team’s radios behave very differently and not at all predictably, making a sense-and-avoid strategy, well, senseless.
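To see why, consider a small thought experiment (our own illustration, not part of SC2’s scoring): a radio that always avoids the channel its neighbor used last keeps colliding anyway once that neighbor stops behaving like a predictable Wi-Fi router.

import random

# Hypothetical illustration: sense-and-avoid against two kinds of neighbors.
def collisions(neighbor_moves, n_channels=10, steps=10000, seed=1):
    rng = random.Random(seed)
    neighbor = 0
    hits = 0
    for _ in range(steps):
        last_seen = neighbor                      # sense the channel the neighbor used last
        if neighbor_moves:
            neighbor = rng.randrange(n_channels)  # unpredictable neighbor hops at random
        mine = rng.choice([c for c in range(n_channels) if c != last_seen])  # avoid it
        hits += (mine == neighbor)
    return hits / steps

print("fixed-channel neighbor:", collisions(neighbor_moves=False))  # ~0.0
print("unpredictable neighbor:", collisions(neighbor_moves=True))   # ~0.1

Against the fixed-channel neighbor the strategy works perfectly; against the unpredictable one, collisions persist at roughly one step in ten, no matter how diligently the radio senses.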

Instead, we’re seeing that a better approach is to predict what the spectrum will look like in the future. Then, a radio could use those predictions to decide which frequencies might open up—even if only for a moment or two, just enough to push through even a small amount of data. More precise predictions will allow collaborating radios to capitalize on every opportunity to transmit more data, without interfering by grabbing for the same frequency at the same time. Now our hope is that second-wave AIs can learn to predict the spectrum environment with enough precision to not let a single hertz go to waste.
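As a rough illustration of what “predicting the spectrum” can mean, the sketch below tracks how often each band has been busy over a sliding window and nominates the bands least likely to be occupied next. It is a deliberately naive, hypothetical baseline; the competitors’ second-wave networks use far richer models trained on Colosseum data.

from collections import deque

# Minimal sketch: per-band occupancy rate over a sliding window,
# used as a naive predictor of which bands will be free next.
class OccupancyPredictor:
    def __init__(self, n_bands, window=200):
        self.history = [deque(maxlen=window) for _ in range(n_bands)]

    def observe(self, busy_bands):
        """busy_bands: set of band indices sensed as occupied this frame."""
        for b, h in enumerate(self.history):
            h.append(1 if b in busy_bands else 0)

    def predicted_free(self, k):
        """Return the k bands with the lowest recent occupancy rate."""
        rates = [(sum(h) / max(len(h), 1), b) for b, h in enumerate(self.history)]
        return [b for _, b in sorted(rates)[:k]]

# Example: band 3 is almost always busy, band 7 only occasionally.
p = OccupancyPredictor(n_bands=10)
for t in range(100):
    p.observe({3} | ({7} if t % 20 == 0 else set()))
print(p.predicted_free(3))

A second-wave system replaces this simple frequency count with a learned model, but the basic move is the same: mine past observations for patterns and bet on where the gaps will open.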

Of course, all of this theory is useless if the AI-managed systems can’t outperform traditional allocation. That’s why we’re delighted to see, that night in the hotel room with the results rolling out of Colosseum, that six of the top eight teams had succeeded! The teams demonstrated that their radios, when they collaborated to share spectrum, could collectively deliver more data than if they had used exclusive frequencies. Three weeks later, four additional teams would do the same, bringing the total to 10.

We were, of course, ecstatic. But encouraging though the results are, it’s too soon to say when we might see radios using AI actively to manage their use of radio spectrum. The important thing to understand about the DARPA grand challenges is that they’re not about the state of technology at the end of the competition. Rather, the challenges are designed to determine whether a fundamental shift is possible. Look at DARPA’s autonomous driving Grand Challenge in 2004: It took another decade for autonomous technology to start being used in a very limited way in commercial cars.

That said, the results from our initial tournaments are promising. So far, we’ve found that when three radio networks share the spectrum, their predictions are much better than when four or five teams try to share the same amount. But we’re not done yet, and our teams are currently building even better systems. Perhaps, on 23 October 2019 at SC2’s live championship event at Mobile World Congress Americas, in Los Angeles, those systems will demonstrate, more successfully than ever before, that AI-operated radios can work together to create a new era of wireless communications.

This article appears in the June 2019 print issue as “AI Will Rule the Airwaves.”

About the Author

Paul Tilghman is a program manager in the Microsystems Technology Office of the Defense Advanced Research Projects Agency, where he’s been overseeing DARPA’s Spectrum Collaboration Challenge.

Rural Cooperatives Deliver High-Speed Internet for Less

Post Syndicated from Lucas Laursen original https://spectrum.ieee.org/telecom/internet/rural-cooperatives-deliver-highspeed-internet-for-less

Tired of waiting, some rural residents are funding their own Wi-Fi networks to bring broadband to their homes

The U.S. Federal Communications Commission proposed a rule update in April that would make it easier for people to install wireless repeater antennas—also known as signal boosters. In some rural regions, Wi-Fi can be the best way to extend the reach of fiber-optic networks at broadband speeds and at affordable prices.

The Wireless Internet Service Providers Association (WISPA), the lobbying group that proposed the change, argues that it’s necessary to keep up with market forces that are driving networks toward smaller hubs operating over shorter ranges. The update would also make it easier for communities to build more of their own Internet infrastructure. Already, despite rules in 26 U.S. states that create obstacles for community networks, some 3 million Americans, mostly rural, get their Internet that way.

The federal government spends around US $1.5 billion a year to expand broadband Internet service and subsidize plans for people with low incomes. Still, the so-called digital divide remains.

Some communities in the United States have already taken Internet service into their own hands by self-funding high-speed fiber and Wi-Fi networks. A handful of rural electricity cooperatives have even launched broadband cooperatives to build fiber-optic infrastructure. One Missouri electrical cooperative, Co-Mo, applied for U.S. stimulus funding to offer broadband to its members. Despite not winning that grant, the cooperative went ahead with its plans and now offers members broadband for US $50 a month.

The phenomenon reaches across the border to Canada, too. Bell Canada’s CAN $100-a-month DSL service didn’t provide enough bandwidth or speed for entrepreneur Brian Reid’s software and business-service companies, so he paid to extend fiber-optic cables from a nearby road to his business in the village of Lawrencetown, Nova Scotia. Soon, others heard about his fast fiber and asked if he would share. He suggested the municipality set up a cooperative to cover the costs of connecting his fiber-optic Internet to members via high-throughput Wi-Fi towers that can transmit over distances of 10 to 32 kilometers.

“In rural areas, there is just this sense…that they can’t wait for the federal government or the states to do anything,” says Christopher Mitchell, the director of community broadband networks at the Institute for Local Self-Reliance, a nonprofit advocacy organization.

Once a fiber-optic line is in place, the hardware to transmit Wi-Fi costs very little. When Reid began researching options for transmitting Wi-Fi outdoors over long distances, he found that the transmitter was the cheap part, costing maybe CAN $1,500. It would cost 10 times as much to lay the concrete and raise a tower, even with volunteer labor.

The Lawrencetown cooperative has since installed four of these towers—including one on a farmer’s silo—and is building a fifth. The high-throughput Wi-Fi Reid tested early on “kind of blew us away,” he recalls. “I could see 600 megabits per second,” or about 7 times as fast as Nova Scotia’s average fixed broadband speed in 2018.

If government agencies such as the FCC are willing to open up unused licensed spectrum in rural regions, community networks could offer even better service, says telecom engineer Carlos Rey-Moreno of the Association for Progressive Communications. “The breakthrough will be at the regulatory level in the U.S. and other countries when they open up 6 gigahertz [and other bands],” he says, to take advantage of technology that can transmit more data than existing Wi-Fi.

Cooperatives have already shown that they can be popular and profitable with today’s technology. When the Lawrencetown cooperative began offering services in 2017, it charged its 150 or so members CAN $60 per month, based on what a neighboring private ISP charged. The cooperative soon found that it was generating a profit, which it had to repay to members, minus taxes. It has since lowered its price to CAN $40 a month and grown its membership to 350, and Reid says it could go lower still.

RS Fiber Cooperative, which built fiber-optic service for 6,000 households, farms, and businesses in rural Minnesota, has a similar story. Both RS Fiber and the Lawrencetown co-op relied on municipal loans or backing to build the initial infrastructure. They’ve both since become self-sufficient.

And being first with fiber is always an advantage, Mitchell says: “It’s a hard business plan to make work, but if you can make it work…[you’ll] probably have an effective monopoly for many years.” The only difference, Reid says, is that the community owns this monopoly.

This article appears in the June 2019 print issue as “Rural Co-ops Deliver High-Speed Internet.”

IoT Can Make Construction Less of a Headache

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/iot-can-make-construction-less-of-a-headache

With more data, constructing buildings can look more like factory manufacturing

The construction industry can be a mess. When constructing any building, there are several steps you must take in a specific order, so a snag in one step tends to snowball into more problems down the line. You can’t start on drywall until the plumbers and electricians complete their work, for example, and if the drywall folks are behind, the crew working on the interior finish gets delayed even more.

Developers and general contractors hope that Internet of Things (IoT) solutions will help them cut costs, build faster, and use a limited labor pool more efficiently, turning the messy, fragmented world of building construction into something more closely resembling what it actually is: a manufacturing process.

“We’re looking at the most fragmented and nonstructured process ever, but it is still a manufacturing process,” says Meirav Oren, the CEO of Versatile Natures, an Israeli company that provides on-site data-collection technology to construction sites. The more you understand the process, says Oren, the better you are at automating it. In other conversations I’ve had with people in the construction sector, the focus isn’t on prefabricated housing or cool bricklaying robots. The focus is on turning construction into a regimented process that can be better understood and optimized.

Like agriculture—which has undergone its own revolution, thanks to connected tech—construction is labor intensive, dependent on environmental factors, and highly regulated. In farming today, lidar identifies insects while robots pick weeds with the aid of computer vision. The goal in agricultural tech is to make workers more efficient, rather than eliminating them. Construction technology startups, using artificial intelligence and the IoT, have a similar goal.

Oren’s goal, for example, is to make construction work more like a typical manufacturing process by using a sensor-packed device that’s mounted to a crane to track the flow of materials on a site. Versatile Natures’ devices also monitor environmental factors, such as wind speed, to make sure the crane isn’t pushed beyond its capabilities.

Another construction-tech startup, Pillar Technologies of New York City, aims to reduce the impact of on-site environments on workers’ safety and construction schedules. Pillar makes sensors that measure eight environmental metrics, including temperature, humidity, carbon monoxide, and particulates. The company then uses the gathered data to evaluate what is happening at the site and to make predictions about delays, such as whether the air is too humid to properly drywall a house.
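As a toy illustration of the kind of rule such a prediction might rest on (the thresholds and field names below are hypothetical placeholders, not Pillar’s actual model), flagging a likely drywall delay can start with comparing sensor readings against a trade’s working limits.

# Hypothetical sketch: flag schedule risks from environmental sensor readings.
# The limits are illustrative placeholders, not Pillar Technologies' real thresholds.
DRYWALL_LIMITS = {"humidity_pct": (20, 60), "temperature_c": (10, 35)}

def drywall_risk(reading):
    """Return a list of warnings if conditions fall outside working limits."""
    warnings = []
    for metric, (low, high) in DRYWALL_LIMITS.items():
        value = reading.get(metric)
        if value is not None and not (low <= value <= high):
            warnings.append(f"{metric}={value} outside {low}-{high}; drywall work may be delayed")
    return warnings

print(drywall_risk({"humidity_pct": 78, "temperature_c": 22}))

A production system would combine many such signals over time and learn the thresholds from data; the point here is only how environmental readings map onto schedule risk.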

Because many work crews sign up for multiple job sites, a delay at one site often means a delay at others. Alex Schwarzkopf, cofounder and CEO of Pillar, hopes that one day Pillar’s devices will use data to monitor construction progress and then inform general contractors in advance that the plumbers are behind, for example. That way, the contractor can reschedule the drywall group or help the plumbers work faster.

Construction is full of fragmented processes, and understanding each fragment can lead to an improvement of the whole. As Oren says, there is a lot of low-hanging fruit in the construction industry, which means that startups can attack a small individual problem and still make a big impact.

This article appears in the June 2019 print issue as “Deconstructing the Construction Industry.”

Digital Doppelgängers Fool Advanced Anti-Fraud Tech

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/security/digital-doppelgngers-fool-advanced-antifraud-tech

With traces of a user’s browsing history and online behavior, hackers can build a fake virtual “twin” and use it to log in to a victim’s accounts

As new security technologies shield us from cybercrime, a slew of adversarial technologies match them, step for step. The latest such advance is the rise of digital doppelgängers—virtual entities that mimic real user behavior convincingly enough to fool advanced anti-fraud algorithms.

In February, Kaspersky Lab’s fraud-detection teams busted a darknet marketplace called Genesis that was selling digital identities starting from US $5 and going up to US $200. The price depended on the value of the purchased profile—for example, a digital mask that included a full user profile with bank login information would cost more than just a browser fingerprint.

The masks purchased at Genesis could be used through a browser and proxy connection to mimic a real user’s activity. Coupled with stolen (but legitimate) account credentials, a mask left the attacker free to make new, trusted transactions in the victim’s name—including with credit cards.
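For a sense of what goes into such a mask, the sketch below lists the kinds of attributes a browser fingerprint typically bundles together, along with a naive similarity check loosely resembling what an anti-fraud score considers. The field names and the scoring rule are our own illustration, not details from Genesis or from Kaspersky’s report.

# Illustrative only: the sort of attributes a browser fingerprint aggregates.
# Field names are hypothetical, not drawn from the Genesis marketplace.
fingerprint = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen_resolution": "1920x1080",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
    "installed_fonts_hash": "9f2c...",   # hash of the detected font list
    "canvas_hash": "b41d...",            # result of a canvas-rendering test
    "webgl_renderer": "ANGLE (Intel HD Graphics 620)",
}

def matches_known_profile(candidate, known, required=0.9):
    """Naive similarity check, loosely like what an anti-fraud score considers."""
    keys = list(known)
    same = sum(candidate.get(k) == known[k] for k in keys)
    return same / len(keys) >= required

print(matches_known_profile(fingerprint, fingerprint))  # True: a perfect replay

The closer the replayed attributes track the victim’s real setup, the fewer red flags the fraud engine sees.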

Build Smart Prototypes for Autonomous and Connected Cars

Post Syndicated from National Instruments original https://spectrum.ieee.org/telecom/wireless/build-smart-prototypes-for-autonomous-and-connected-cars

Validation of embedded connectivity in V2X designs and self-driving cars marks an inflection point in automotive wireless design, opening the door to modular solutions based on off-the-shelf hardware and flexible software

Automotive OEMs and Tier-1 suppliers are now in the thick of the technology world’s two gigantic engineering challenges: connected cars and autonomous vehicles. That inevitably calls for more flexible development, test, validation, and verification programs that can quickly adapt to the changing technologies and standards.

For a start, vehicle-to-everything (V2X) technology, which embodies the connected car movement, is also the point where the wireless industry most intersects with autonomous vehicles (AVs). Collision detection and avoidance is a classic example of this crossover between AV and connected car technologies.

It shows how vehicles and infrastructure work in tandem to create a smart motoring network. At this crossroads, where V2X technology converges with AV connectivity, reliability and low latency become even more critical requirements.

It is worth mentioning that extreme reliability in harsh operating environments is already a precondition of automotive design. In the connected car and self-driving realms, well-tested connectivity becomes a major stepping stone as well.

The immensely complex hardware and software in a highly automated vehicle connected to the outside world also open the door to malicious hacking and spoofing attacks. That calls for future-proof design solutions with demonstrable safeguards against such attacks.

Not surprisingly, the convergence of connected cars and self-driving vehicles significantly expands development, test, validation, and verification requirements. That makes it imperative for engineers to employ highly integrated development frameworks that cover multiple aspects of the system, such as RF quality and protocol conformance.

Modular Test Solutions

Take the example of a V2X system that requires certification for radio frequency identification tags and readers in electronic toll collection systems. Here, design engineers must also ensure that this connected car application protects data privacy and prevents unauthorized access.

Because the technology is new, however, validation at each development stage could be costly. Testing communication equipment that supports different regional V2X standards could also mean buying separate measurement instruments for each standard and design layer.

That is why test and prototype solutions based on modular hardware and software building blocks can prove more efficient and cost-effective: they let engineers explore new technologies, standards, and architectural options. Test solutions based on software-defined radio (SDR), in particular, are flexible and cheaper in the long run.

The following sections will show how modular hardware and flexible software can help create RF calibration and validation solutions for autonomous and connected vehicles. They will explain how these highly customized systems can validate the embedded wireless technology in V2X communications designed to save lives on the road.

URLLC Experimental Testbed

The immense amount of computing power involved in autonomous and connected car designs may lead to expanded use of flexible SDR platforms to tackle the increasing complexity of embedded software and the rising number of usage scenarios, especially when automotive engineers must carefully balance extreme-reliability demands against low-latency requirements.

It is a vital design consideration amid the massive breadth of inputs and outputs for multiple RF streams serving a diverse array of cameras and sensors in autonomous vehicles. Moreover, SDR platforms can efficiently validate connected vehicles’ RF links to each other and to roadside units for information regarding traffic and construction work.

That is why ultra-reliable low-latency communications (URLLC) is becoming so critical in both V2X systems and autonomous vehicles. URLLC targets 99.999% reliability with a latency of 1 ms, while also boosting system capacity and network coverage.
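Putting those two numbers together is a useful back-of-the-envelope exercise (the slot duration below is an assumption for illustration, not a figure from the reference design): 99.999 percent reliability allows at most one failed delivery in 100,000, and the 1-millisecond budget caps how many retransmission attempts can be used to reach it.

# Back-of-the-envelope URLLC arithmetic. The 0.125 ms slot duration is an
# assumption for illustration, not a value taken from the reference design.
target_failure = 1e-5        # 99.999% reliability
latency_budget_ms = 1.0
slot_ms = 0.125              # assumed duration of one transmission attempt

attempts = int(latency_budget_ms // slot_ms)   # attempts that fit in the budget
# If each attempt fails independently with probability p, we need p**attempts <= 1e-5.
max_per_attempt_error = target_failure ** (1 / attempts)

print(f"{attempts} attempts fit in the budget")
print(f"each attempt may fail with probability at most {max_per_attempt_error:.3f}")

In other words, even a fairly lossy link can hit five-nines reliability if enough attempts fit inside the latency budget, which is why the latency-reliability trade-off dominates the physical-layer choices described next.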

The URLLC reference design, for instance, enables engineers to create physical-layer mechanisms for different driving environments and then compare them via simulation to analyze the trade-offs between latency and reliability. A real-time experimental testbed like this one can substitute for expensive and cumbersome on-road testing to prove that a vehicle is safe for autonomous driving and V2X communications.

Shanghai University has joined with National Instruments to create a URLLC experimental testbed for advanced V2X services like vehicle platooning. The URLLC reference design and vehicle-to-vehicle communication are built around National Instruments’ SDR-based hardware for rapid prototyping of mobile communication channels.

Vector Signal Transceiver

Another design platform worth mentioning in the context of AV connectivity and connected car technology is the Vector Signal Transceiver (VST). It is a customizable platform that combines an RF and baseband vector signal analyzer and generator with a user-programmable field-programmable gate array (FPGA) and high-speed interfaces for real-time signal processing and control.

That enables comprehensive RF characteristic measurements and features like dynamic obstacle generation in a variety of road conditions. A VST system, for example, can simulate the Doppler shifts produced by objects moving at different velocities and angles, or simulate scenarios such as a pedestrian walking across the street or a vehicle changing lanes.
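The Doppler simulation mentioned above comes down to a simple relation: the frequency shift grows with the object’s speed along the line of sight and with the carrier frequency. Here is a minimal sketch with assumed example values (the 5.9-gigahertz carrier corresponds to the DSRC band used for V2X).

import math

# Doppler shift seen at a V2X carrier frequency for a moving object.
# f_d = (v * cos(theta) / c) * f_c, where theta is the angle between the
# object's velocity and the line of sight. Values below are assumptions.
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(speed_mps, angle_deg, carrier_hz):
    return speed_mps * math.cos(math.radians(angle_deg)) * carrier_hz / C

# A vehicle approaching head-on at 30 m/s (108 km/h) on a 5.9 GHz DSRC channel:
print(f"{doppler_shift_hz(30, 0, 5.9e9):.0f} Hz")    # ~590 Hz
# A pedestrian crossing at 1.5 m/s, 60 degrees off the line of sight:
print(f"{doppler_shift_hz(1.5, 60, 5.9e9):.1f} Hz")  # ~14.8 Hz

In a VST-based setup, shifts like these are applied to the generated RF signals in real time, which is how moving-obstacle scenarios such as those described above can be built up.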

It is another customized system that combines flexible, off-the-shelf hardware with a software development environment like LabVIEW to create user-defined prototyping and test solutions. That allows design engineers to transform VST into what they need it to be at the firmware level and address the most demanding development, test and validation challenges.

National Instruments introduced the first VST in 2012, with a LabVIEW-programmable FPGA to accelerate design time and lower validation cost. Fast-forward to 2019, and the second-generation VST is ready to serve autonomous and connected car designs in which bandwidth and latency are crucial factors.

Automotive Testing’s Inflection Point

Industry observers call 2019 the year of V2X communications, even as self-driving cars remain a work in progress. In the connected car realm, engineers are busy testing and validating vehicle-to-vehicle and vehicle-to-infrastructure devices based on dedicated short-range communications (DSRC) to ensure that V2X communication will work all the time and in all possible scenarios.

It is clear that both connected cars and self-driving vehicles share similar imperatives: they must be trustworthy and they must be credible. What is also evident by now is that these high-tech vehicles require high-tech prototyping and validation tools.

That marks an inflection point in automotive design and validation where two manifestations of smart mobility are striving to make traffic safer and more efficient. And the industry demands efficient and cost-effective test and verification solutions for these rapidly expanding automotive markets.

If these systems are based on modular hardware and flexible software, they can be efficiently customized for the autonomous and connected car designs. More importantly, these verification arrangements can significantly lower the design cost at different development stages.

For more on V2X testing, go to National Instruments.

Operation ShadowHammer Exploited Weaknesses in the Software Pipeline

Post Syndicated from Fahmida Rashid original https://spectrum.ieee.org/tech-talk/telecom/security/operation-shadowhammer-exploited-weaknesses-in-the-software-pipeline

Kaspersky security researchers described how hackers used software updates to push malware onto victims’ computers

When security researchers at Kaspersky Lab disclosed Operation ShadowHammer in March, they described how attackers tampered with software updates from PC-maker ASUSTeK Computer to install malware on victims’ computers. Now, new details revealed last week indicate the operation was even more insidious—it sabotaged developer tools, an approach that could spread malware much faster and more discreetly than conventional methods.

In ShadowHammer, a sophisticated group of attackers modified an old version of the ASUS Live Update Utility software and pushed out the tampered copy to ASUS computers around the world, said Kaspersky Lab. The Live Update Utility, which comes preinstalled in most new ASUS computers, automatically updates the set of firmware instructions that control the computer’s input and output operations, hardware drivers, and applications. The modified tool, signed with legitimate ASUSTeK certificates and stored on official servers, looked like the real thing. But once it was planted, it gave the attackers the ability to control the computer through a remote server and install additional malware.

ShadowHammer is an “example of how sophisticated and dangerous a smart supply chain attack can be,” said Vitaly Kamluk, Kaspersky’s director of the global research and analysis team.

Floating Cell Towers Are the Next Step for 5G

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/internet-balloons-and-drones-look-to-rise-in-the-5g-era

Terrestrial 5G networks will support high-altitude balloons and drones, and could someday merge with them


As the world races to deploy speedy 5G mobile networks on the ground, some companies remain focused on floating cell towers in the sky. During the final session of the sixth annual Brooklyn 5G Summit on Thursday, Silicon Valley and telecom leaders discussed whether aerial drones and balloons could finally begin providing commercial mobile phone and Internet service from the air.

Terahertz Waves Could Push 5G to 6G

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/at-the-6th-annual-brooklyn-5g-summit-some-eyes-are-on-6g

At the Brooklyn 5G summit, experts said terahertz waves could fix some of the problems that may arise with millimeter-wave networks


It may be the sixth year for the Brooklyn 5G Summit, but in the minds of several speakers, 2019 is also Year Zero for 6G. The annual summit, hosted by Nokia and NYU Wireless, is a four-day event that covers all things 5G, including deployments, lessons learned, and what comes next.

This year, that meant preliminary research into terahertz waves, the frequencies that some researchers believe will make up a key component of the next next generation of wireless. In back-to-back talks, Gerhard Fettweis, a professor at TU Dresden, and Ted Rappaport, the founder and director of NYU Wireless, talked up the potential of terahertz waves.

Boingo and Mettis Aerospace Test Wi-Fi 6 for Airports and Factories

Post Syndicated from Lucas Laursen original https://spectrum.ieee.org/tech-talk/telecom/wireless/wifi6trial

A single Wi-Fi 6 access point can deliver high-speed, low-latency service to hundreds of users at once

An airport operator in California and a British aerospace manufacturer are among the first organizations to put Wi-Fi 6 networks to the test. Boingo Wireless has begun testing Wi-Fi 6 access points and compatible smartphones at John Wayne Airport in Santa Ana, California, it announced earlier this month, while Mettis Aerospace says it will begin its own trial of the new protocol at its manufacturing facility in Redditch, England, in the second half of this year. 

“We’re still defining what the success criteria will be,” says Mettis IT head Dave Green, but the firm plans to test the next generation of Wi-Fi for a few applications: collecting sensor data from machines on the factory floor, allowing staff to use augmented-reality-enabled tablets to troubleshoot problems, and transmitting live video feeds from harder-to-reach areas of the factory floor. Green says they know these applications will require low latency—one of Wi-Fi 6’s improvements over its predecessor—and the ability to handle hundreds of simultaneous connections, another of the standard’s selling points.

Australia’s Troubled National Broadband Network Delivers a Fraction of What Was Promised

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/internet/australias-troubled-national-broadband-network-delivers-a-fraction-of-what-was-promised

The newly elected government will inherit a floundering AU $51 billion broadband network that’s providing slower service to fewer properties than planned

On 18 May, voters in Australia’s federal election will determine whether the Liberal-National Coalition will remain in control or the Australian Labor Party will win the government. Either way, the new leaders will have to contend with the National Broadband Network (NBN), a lumbering disaster that began as an ambitious effort by the Australian government to construct a countrywide broadband network.

When the NBN was first proposed in April 2009, the government aimed to build a fiber-optic network that would deliver connections of up to 100 megabits per second to 90 percent of Australian homes, schools, and workplaces within eight years. A decade later, however, the NBN has failed to deliver on that promise. NBN Co., the government-owned company created to construct and manage the network, now expects to deliver 50 Mb/s connections to 90 percent of the country by the end of 2020.

“None of the promises have ever been met,” says Rod Tucker, a telecommunications engineer and professor emeritus at the University of Melbourne. The watershed moment for the network was the 2013 federal election. That year, the Labor government that had championed the network was replaced by a conservative coalition government. The new government promptly reevaluated the NBN plan, taking issue with its projected cost.

The original plan, estimated to cost AU $43 billion, was a fiber-to-the-premises (or FTTP) plan. FTTP, as the name implies, requires threading fiber-optic cable to each and every building. The coalition government balked at the price tag, fired NBN Co.’s CEO, restructured the company, and proposed an alternative fiber-to-the-node (or FTTN) strategy, which was estimated to cost no more than AU $29.5 billion. The cost has now ballooned to more than AU $51 billion.

The reason an FTTN network theoretically costs less is that there’s less fiber to install. Rather than run fiber to every premises, FTTN runs fiber to centralized “nodes,” from which any number of technologies can then connect to individual premises. Fiber could be used, but more typically these last-mile connections are made with copper cabling.

The rationale behind the FTTN pivot was that there’s no need to lay fiber to homes and offices because copper landlines already connect those buildings. Unfortunately, copper doesn’t last forever. “A lot of the copper is very old,” says Mark Gregory, an associate professor of engineering at RMIT University, in Melbourne. “Some of it is just not suitable for fiber to the node.” Gregory says that NBN Co. has run into delays more than once as it encounters copper cabling near the end of its usable life and must replace it.

NBN Co. has purchased roughly 26,000 kilometers of copper so far to construct the network, according to Gregory—enough to wrap more than halfway around the Earth. Buying that copper added roughly AU $10 billion to the FTTN price tag, says Tucker. On top of that, NBN Co. is paying AU $1 billion a year to Telstra, the Australian communications company, for the right to use the last-mile copper that Telstra owns.

But perhaps the worst part is that even after the cost blowouts, the lackluster connections, and the outdated copper technology, there doesn’t seem to be a good path forward for the network, which will be obsolete upon arrival. “In terms of infrastructure,” says Gregory, “it’s pretty well the only place I know of that’s spent [AU] $50 billion and built an obsolete network.”

Upgrading the network to deliver the originally promised connection speeds will require yet another huge investment. That’s because copper cables can’t compete with fiber-optic cables. To realize 100 Mb/s, NBN Co. will eventually have to lay fiber from all the nodes to every premises anyway. Gregory, for one, estimates that could cost NBN Co. an additional AU $16 billion, a hard number to swallow for a project that’s already massively over budget. “Fiber to the node is a dead-end technology in that it’s expensive to upgrade,” says Tucker, who also wrote about the challenges the NBN would face in the December 2013 issue of IEEE Spectrum.

After the federal election, the incoming government will have to figure out what to do with this bloated project. NBN Co. is far from profitable, and even if it were, it still owes billions of dollars to the Australian government. If the government does decide to bite the bullet and upgrade to FTTP, it will have to contend with other commercial networks now delivering equivalent speeds.

Had Australia delivered the NBN as originally promised, it would have been one of the fastest, most robust national networks in the world at the time. Instead, the country has watched its place in rankings of broadband speeds around the world continue to drop, says Tucker, while places like New Zealand, Australia’s neighbor “across the ditch,” have invested in and largely completed robust FTTP networks.

“It was an opportunity lost,” says Gregory.

This article appears in the May 2019 print issue as “Australia’s Fiber-Optic Mess.”

How the U.S. Can Prepare to Live in China’s 5G World

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/standards/how-america-can-prepare-to-live-in-chinas-5g-world

China’s first-mover advantage in deploying 5G networks capable of transforming national economies has major implications for the United States


If you believe the triumphalist messaging from U.S. president Donald Trump’s White House and from the U.S. telecommunications industry, the United States is racing neck and neck with China in a global competition to roll out speedy 5G mobile networks. But the U.S. military’s premier advisory board of academic researchers and private sector technologists has warned that China’s front-runner position means it will likely win much of the world’s business in deploying 5G infrastructure and services. With that in mind, it has advised that the United States would be wise to adopt a strategy akin to “if you can’t beat ’em, join ’em.”