Tag Archives: Telecom

Quantum Memory Milestone Boosts Quantum Internet Future

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/milestone-for-quantum-memory-efficiency-makes-quantum-internet-possible

An array of cesium atoms just 2.5 centimeters long has demonstrated a record level of storage-and-retrieval efficiency for quantum memory. It’s a pivotal step on the road to eventually building large-scale quantum communication networks spanning entire continents.

NYU Wireless Picks Up Its Own Baton to Lead the Development of 6G

Post Syndicated from NYU Wireless original https://spectrum.ieee.org/telecom/wireless/nyu_wireless_picks_up_its_own_baton_to_lead_the_development_of_6g

The fundamental technologies that have made 5G possible are, unequivocally, massive MIMO (multiple-input multiple-output) and millimeter wave (mmWave). Without these two technologies there would be no 5G network as we now know it.

The two men who were the key architects behind these fundamental 5G technologies have been leading one of the premier research institutes in mobile telephony: NYU Wireless, founded in 2012 and part of NYU’s Tandon School of Engineering.

Ted Rappaport is the founding director of NYU Wireless and one of the key researchers in the development of mmWave technology. Rappaport also served as the key thought leader for 5G, planting a flag in the ground nearly a decade ago by arguing that mmWave would be a key enabling technology for the next generation of wireless. His earlier work at two other wireless centers he founded, at Virginia Tech and The University of Texas at Austin, laid the early groundwork that helped NYU Wireless catapult into one of the premier wireless institutions in the world.

Thomas Marzetta, who now serves as the director of NYU Wireless, led the development of massive MIMO while he was at Bell Labs and has championed its use in 5G, where it has become a key enabling technology.

These two researchers, who were so instrumental in developing the technologies that enabled 5G, are now turning their attention to the next generation of mobile communications, and, according to both of them, we face some pretty steep technical challenges in realizing it.

“Ten years ago, Ted was already pushing mobile mmWave, and I at Bell Labs was pushing massive MIMO,” said Marzetta.  “So we had two very promising concepts ready for 5G. The research concepts that the wireless community is working on for 6G are not as mature at this time, making our focus on 6G even more important.”

This sense of urgency is shared by both men, who are pushing against any complacency about starting the development of 6G technologies as soon as possible. With this aim in mind, Rappaport, just as he did 10 years ago, has planted a new flag in the world of mobile communications with the publication last year of an IEEE article entitled “Wireless Communications and Applications Above 100 GHz: Opportunities and Challenges for 6G and Beyond.”

“In this paper, we said for the first time that 6G is going to be in the sub-terahertz frequencies,” said Rappaport. “We also suggested the idea of wireless cognition, where human thought and brain computation could be sent over wireless in real time. It’s a very visionary look at something. Our phones, which right now are flashlights, emails, TV browsers, calendars, are going to become much more.”

While Rappaport feels confident that they have the right vision for 6G, he is worried about a lack of awareness of how critical it is for US government funding agencies and companies to develop the enabling technologies needed to realize it. In particular, both Rappaport and Marzetta are concerned about the economic competitiveness of the US and the funding challenges that will persist if 6G is not properly recognized as a priority.

“These issues of funding and awareness are critical for research centers, like NYU Wireless,” said Rappaport. “The US needs to get behind NYU Wireless to foster these ideas and create these cutting-edge technologies.”

With this funding support, Rappaport argues, teaching research institutes like NYU Wireless can create the engineers who end up going to companies and making technologies like 6G a reality. “There are very few schools in the world that are even thinking this far ahead in wireless; we have the foundations to make it happen,” he added.

Both Rappaport and Marzetta also believe that making national centers of excellence in wireless could help to create an environment in which students could be exposed constantly to a culture and knowledge base for realizing the visionary ideas for the next generation of wireless.

“The Federal government in the US needs to pick a few winners for university centers of excellence to be melting pots, to be places where things are brought together,” said Rappaport. “The Federal government has to get together and put money into these centers to allow them to hire talent, attract more faculty, and become comparable to what we see in other countries, where huge amounts of funding are going in to pick winners.”

While research centers like NYU Wireless get support from industry to conduct their research, Rappaport and Marzetta argue that a bump in Federal funding could both amplify and leverage the contributions of industrial affiliates. NYU Wireless currently has 15 industrial affiliates, a large number of them from outside the US, according to Rappaport.

“Government funding could get more companies involved by incentivizing them through a financial multiplier,” added Rappaport.

Of course, 6G is not simply about setting out a vision and attracting funding, but also tackling some pretty big technical challenges.

Both men believe that we will need to see the development of new forms of MIMO, such as holographic MIMO, to enable more efficient use of the sub-6 GHz spectrum. Solutions will also need to be developed to overcome the blockage problems that occur at mmWave and higher frequencies.

Fundamental to these technology challenges is accessing new swaths of frequency spectrum so that a 6G network can operate at sub-terahertz frequencies. Both Rappaport and Marzetta are confident that technology will enable us to access even more challenging frequencies.

“There’s nothing technologically stopping us right now from 30, and 40, and 50 gigahertz millimeter wave, even up to 700 gigahertz,” said Rappaport. “I see the fundamentals of physics and devices taking us easily over the next 20 years up to 700 or 800 gigahertz.”

Marzetta added that there is much more that can be done in the scarce and valuable sub-6 GHz spectrum. While massive MIMO is the most spectrally efficient wireless scheme ever devised, it is based on extremely simplified models of how antennas create electromagnetic signals that propagate to another location, according to Marzetta. “No existing wireless system or scheme is operating close at all to limits imposed by nature,” he added.

While expanding the spectrum of frequencies and making even better use of the sub-6 GHz spectrum are the foundation for the realization of future networks, Rappaport and Marzetta also expect that we will see increased leveraging of AI and machine learning. This will enable the creation of intelligent networks that can manage themselves with much greater efficiency than today’s mobile networks.

“Future wireless networks are going to evolve with greater intelligence,” said Rappaport. An example of this intelligence, according to Rappaport, is the new way in which the Citizens Broadband Radio Service (CBRS) spectrum will, for the first time, be managed by a spectrum access system (SAS).

“It’s going to be a nationwide mobile system that uses these spectrum access servers that mobile devices talk to in the 3.6 gigahertz band,” said Rappaport. “This is going to allow enterprise networks to be a cross of old licensed cellular and old unlicensed Wi-Fi. It’s going to be kind of somewhere in the middle. This serves as an early indication of how mobile communications will evolve over the next decade.”

These intelligent networks will become increasingly important when 6G moves towards so-called cell-less (“cell-free”) networks.

Currently, mobile network coverage is provided through hundreds of roughly circular cells spread out across an area. Now with 5G networks, each of these cells will be equipped with a massive MIMO array to serve the users within the cell. But with a cell-less 6G network the aim would be to have hundreds of thousands, or even millions, of access points, spread out more or less randomly, but with all the networks operating cooperatively together.

“With this system, there are no cell boundaries, so as a user moves across the city, there’s no handover or handoff from cell to cell because the whole city essentially constitutes one cell,” explained Marzetta. “All of the people receiving mobile services in a city get it through these access points, and in principle, every user is served by every access point all at once.”

One of the obvious challenges of this cell-less architecture is simply the economics of installing so many access points all over a city. Another is that all of the signals to and from each access point have to be carried to and from a central point that does all the computing and number crunching.

While this all sounds daunting when thought of in terms of traditional mobile networks, it sounds far more approachable conceptually when you consider that the Internet of Things (IoT) will create this cell-less network.

“We’re going to go from 10 or 20 devices today to hundreds of devices around us that we’re communicating with, and that local connectivity is what will drive this cell-less world to evolve,” said Rappaport. “This is how I think a lot of 5G and 6G use cases in this wide swath of spectrum are going to allow these low-power local devices to live and breathe.”

To realize all of these technologies, including intelligent networks, cell-less networks, expanded radio frequencies, and wireless cognition, the key factor will be training future engineers. 

On this issue, Marzetta noted: “Wireless communications is a growing and dynamic field that is a real opportunity for the next generation of young engineers.”

4G on the Moon: One Small Leap, One Giant Step

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/wireless/4g-moon

Standing up a 4G/LTE network might seem well below the pay grade of a legendary innovation hub like Nokia Bell Labs. However, standing up that same network on the moon is another matter. 

Nokia and 13 other companies — including SpaceX and Lockheed Martin — have won five-year contracts totaling over US $370 million from NASA to demonstrate key infrastructure technologies on the lunar surface. It is all part of NASA’s Artemis program to return humans to the moon.

NASA also wants the moon to be a stepping stone for further explorations in the solar system, beginning with Mars.

For that, says L.K. Kubendran, NASA’s lead for commercial space technology partnerships, a lunar outpost would need, at the very least, power, shelter, and a way to communicate.

And the moon’s 4G network will not only be carrying voice and data signals like those found in terrestrial applications, it’ll also be handling remote operations like controlling lunar rovers from a distance. The antennas and base stations will need to be ruggedized for the harsh, radiation-filled lunar environment. Plus, over the course of a typical lunar day-night cycle (28 Earth days), the network infrastructure will face temperature swings of more than 250 °C between sunlight and shadow. Of course, every piece of hardware in the network must first be transported to the moon, too.

“The equipment needs to be hardened for environmental stresses such as vibration, shock and acceleration, especially during launch and landing procedures, and the harsh conditions experienced in space and on the lunar surface,” says Thierry Klein, head of the Enterprise & Industrial Automation Lab, Nokia Bell Labs. 

Oddly enough, it’ll probably all work better on the moon, too. 

“The propagation model will be different,” explains Dola Saha, assistant professor in Electrical and Computer Engineering at the University at Albany, SUNY. She adds that the lack of atmosphere as well as the absence of typical terrestrial obstructions like trees and buildings will likely mean better signal propagation. 

To ferry the equipment for its lunar 4G network, Nokia will be collaborating with Intuitive Machines, which is also developing a lunar ice drill for the moon’s south pole. “Intuitive Machines is building a rover that they want to land on the moon in late 2022,” says Kubendran. That rover mission now seems likely to be the one that begins hauling 4G infrastructure to the lunar surface.

For all the technological innovation a lunar 4G network might bring about, its signals could unfortunately also mean bad news for radio astronomy. Radio telescopes are notoriously vulnerable to interference (a.k.a. radio frequency interference, or RFI). A stray phone signal from Mars, for instance, carries enough power to interfere with the Jodrell Bank radio observatory near Manchester, U.K. So, asks Emma Alexander, an astrophysicist writing for The Conversation, “how would it fare with an entire 4G network on the moon?”

That depends, says Saha. 

“Depends on which frequencies you’re using, and…the side lobes and…filters [at] the front end of these networks,” Saha says. On Earth, she continues, there are so many applications that the FCC in the US, and corresponding bodies in other countries, allocate frequencies to each of them.

“RFI can be mitigated at the source with appropriate shielding and precision in the emission of signals,” Alexander writes. “Astronomers are constantly developing strategies to cut RFI from their data. But this increasingly relies on the goodwill of private companies,” as well as on regulation by governments, she adds, to shield earthly radio telescopes from human-generated interference from above.

IoT Network Companies Have Cracked Their Chicken-and-Egg Problem

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/wireless/iot-network-companies-have-cracked-their-chicken-and-egg-problem

Along with everything else going on, we may look back at 2020 as the year that companies finally hit upon a better business model for Internet of Things (IoT) networks. Established network companies such as the Things Network and Helium, and new players such as Amazon, have seemingly given up on the idea of making money from selling network connectivity. Instead, they’re focused on getting the network out there for developers to use, assuming in the process that they’ll benefit from the effort in the long run.

IoT networks have a chicken-and-egg problem. Until device makers see widely available networks, they don’t want to build products that run on the network. And customers don’t want to pay for a network, and thus, fund its development, if there aren’t devices available to use. So it’s hard to raise capital to build a new wide-area network (WAN) that provides significant coverage and supports a plethora of devices that are enticing to use.

It certainly didn’t help such network companies as Ingenu, MachineQ, Senet, and SigFox that they’re all marketing half a dozen similar proprietary networks. Even the cellular carriers, which are promoting both LTE-M for machine-to-machine networks and NB-IoT for low-data-rate networks, have historically struggled to justify their investments in IoT network infrastructure. After COVID-19 started spreading in Japan, NTT DoCoMo called it quits on its NB-IoT network, citing a lack of demand.

“Personally, I don’t believe in business models for [low-power WANs],” says Wienke Giezeman, the CEO and cofounder of the Things Network. His company does deploy long-range low-power WAN gateways for customers that use the LoRa Alliance’s LoRaWAN specifications. (“LoRa” is short for “long-range.”) But Giezeman sees that as the necessary element for later providing the sensors and software that deliver the real IoT applications customers want to buy. Trying to sell both the network and the devices is like running a restaurant that makes diners buy and set up the stove before it cooks their meals.

The Things Network sets up the “stove” and includes the cost of operating it in the “meal.” But, because Giezeman is a big believer in the value of open source software and creating a sense of abundance around last-mile connectivity, he’s also asking customers to opt in to turning their networks into public networks.

Senet does something similar, letting customers share their networks. Helium is using cryptocurrencies to entice people to set up LoRaWAN hotspots on their networks and rewarding them with tokens for keeping the networks operational. When someone uses data from an individual’s Helium node, that individual also gets tokens that might be worth something one day. I actually run a Helium hotspot in my home, although I’m more interested in the LoRa coverage than the potential for wealth.

And there’s Amazon, which plans to embed its own version of a low-power WAN into its Echo and its Ring security devices. Whenever someone buys one of these devices they’ll have the option of adding it as a node on the Amazon Sidewalk network. Amazon’s plan is to build out a decentralized network for IoT devices, starting with a deal to let manufacturer Tile use the network for its Bluetooth tracking devices.

After almost a decade of following various low-power IoT networks, I’m excited to see them abandon the idea that the network is the big value, and instead recognize that it’s the things on the network that entice people. Let’s hope this year marks a turning point for low-power WANs.

This article appears in the November 2020 print issue as “Network Included.”

Breaking the Latency Barrier

Post Syndicated from Shivendra Panwar original https://spectrum.ieee.org/telecom/wireless/breaking-the-latency-barrier

For communications networks, bandwidth has long been king. With every generation of fiber optic, cellular, or Wi-Fi technology has come a jump in throughput that has enriched our online lives. Twenty years ago we were merely exchanging texts on our phones, but we now think nothing of streaming videos from YouTube and Netflix. No wonder, then, that video now consumes up to 60 percent of Internet bandwidth. If this trend continues, we might yet see full-motion holography delivered to our mobiles—a techie dream since Princess Leia’s plea for help in Star Wars.

Recently, though, high bandwidth has begun to share the spotlight with a different metric of merit: low latency. The amount of latency varies drastically depending on how far in a network a signal travels, how many routers it passes through, whether it uses a wired or wireless connection, and so on. The typical latency in a 4G network, for example, is 50 milliseconds. Reducing latency to 10 milliseconds, as 5G and Wi-Fi are currently doing, opens the door to a whole slew of applications that high bandwidth alone cannot. With virtual-reality headsets, for example, a delay of more than about 10 milliseconds in rendering and displaying images in response to head movement is very perceptible, and it leads to a disorienting experience that is for some akin to seasickness.

Multiplayer games, autonomous vehicles, and factory robots also need extremely low latencies. Even as 5G and Wi-Fi make 10 milliseconds the new standard for latency, researchers, like my group at New York University’s NYU Wireless research center, are already working hard on another order-of-magnitude reduction, to about 1 millisecond or less.

Pushing latencies down to 1 millisecond will require reengineering every step of the communications process. In the past, engineers have ignored sources of minuscule delay because they were inconsequential to the overall latency. Now, researchers will have to develop new methods for encoding, transmitting, and routing data to shave off even the smallest sources of delay. And immutable laws of physics—specifically the speed of light—will dictate firm restrictions on what networks with 1-millisecond latencies will look like. There’s no one-size-fits-all technique that will enable these extremely low-latency networks. Only by combining solutions to all these sources of latency will it be possible to build networks where time is never wasted.

Until the 1980s, latency-sensitive technologies used dedicated end-to-end circuits. Phone calls, for example, were carried on circuit-switched networks that created a dedicated link between callers to guarantee minimal delays. Even today, phone calls need to have an end-to-end delay of less than 150 milliseconds, or it’s difficult to converse comfortably.

At the same time, the Internet was carrying delay-tolerant traffic, such as emails, using technologies like packet switching. Packet switching is the Internet equivalent of a postal service, where mail is routed through post offices to reach the correct mailbox. Packets, or bundles of data between 40 and 1,500 bytes, are sent from point A to point B, where they are reassembled in the correct order. Using the technology available in the 1980s, delays routinely exceeded 100 milliseconds, with the worst delays well over 1 second.

Eventually, Voice over Internet Protocol (VoIP) technology supplanted circuit-switched networks, and now the last circuit switches are being phased out by providers. Since VoIP’s triumph, there have been further reductions in latency to get us into the range of tens of milliseconds.

Latencies below 1 millisecond would open up new categories of applications that have long been sought. One of them is haptic communications, or communicating a sense of touch. Imagine balancing a pencil on your fingertip. The reaction time between when you see the pencil beginning to tip over and then moving your finger to keep it balanced is measured in milliseconds. A human-controlled teleoperated robot with haptic feedback would need a similar latency level.

Robots that aren’t human controlled would also benefit from 1-millisecond latencies. Just like a person, a robot can avoid falling over or dropping something only if it reacts within a millisecond. But the powerful computers that process real-time reactions and the batteries that run them are heavy. Robots could be lighter and operate longer if their “brains” were kept elsewhere on a low-latency wireless network.

Before I get into all the ways engineers might build ultralow-latency networks, it would help to understand how data travels from one device to another. Of course, the signals have to physically travel along a transmission link between the devices. That journey isn’t limited by just the speed of light, because delays are caused by switches and routers along the way. And even before that, data must be converted and prepped on the device itself for the journey. All of these parts of the process will need to be redesigned to consistently achieve submillisecond latencies.

I should mention here that my research is focused on what’s called the transport layer in the multiple layers of protocols that govern the exchange of data on the Internet. That means I’m concerned with the portion of a communications network that controls transmission rates and checks to make sure packets reach their destination in the correct order and without errors. While there are certainly small sources of delay on devices themselves, I’m going to focus on network delays.

The first issue is frame duration. A wireless access link—the link that connects a device to the larger, wired network—schedules transmissions within periodic intervals called frames. For a 4G link, the typical frame duration is 1 millisecond, so you could potentially lose that much time just waiting for your turn to transmit. But 5G has shrunk frame durations, lessening their contribution to delay.

Wi-Fi functions differently. Rather than using frames, Wi-Fi networks use random access, where a device transmits immediately and reschedules the transmission if it collides with another device’s transmission in the link. The upside is that this method has shorter delays if there is no congestion, but the delays build up quickly as more device transmissions compete for the same channel. The latest version of Wi-Fi, Wi-Fi Certified 6 (based on the draft standard IEEE P802.11ax), addresses the congestion problem by introducing scheduled transmissions, just as 4G and 5G networks do.
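
The contrast between framed scheduling and random access can be made concrete with a toy simulation. The sketch below is purely illustrative; the slot length, transmit probability, and device counts are assumptions rather than numbers from this article. It shows the two behaviors just described: a framed link costs about half a frame of waiting no matter the load, while a random-access link is fast when the channel is idle but slows down as more devices contend for it.

```python
import random

def scheduled_delay(frame_ms=1.0, trials=100_000):
    """Average wait for the next transmission opportunity on a framed link:
    a packet arriving at a random moment waits, on average, half a frame."""
    return sum(random.uniform(0, frame_ms) for _ in range(trials)) / trials

def random_access_delay(n_devices, slot_ms=0.1, p_tx=0.1, trials=5_000):
    """Average delay for one device on a slotted random-access link.
    Each device transmits in a slot with probability p_tx; our device
    succeeds only in a slot where it transmits and nobody else does."""
    total = 0.0
    for _ in range(trials):
        slots = 0
        while True:
            slots += 1
            we_transmit = random.random() < p_tx
            others = sum(random.random() < p_tx for _ in range(n_devices - 1))
            if we_transmit and others == 0:   # no collision
                break
        total += slots * slot_ms
    return total / trials

print(f"scheduled, 1 ms frames  : {scheduled_delay():.2f} ms average wait")
for n in (2, 5, 10, 20):
    print(f"random access, {n:2d} nodes: {random_access_delay(n):.2f} ms average wait")
```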

When a data packet is traveling through the series of network links connecting its origin to its destination, it is also subject to congestion delays. Packets are often forced to queue up at links as both the amount of data traveling through a link and the amount of available bandwidth fluctuate naturally over time. In the evening, for example, more data-heavy video streaming could cause congestion delays through a link. Sending data packets through a series of wireless links is like drinking through a straw that is constantly changing in length and diameter. As delays and bandwidth change, one second you may be sucking up dribbles of data, while the next you have more packets than you can handle.

Congestion delays are unpredictable, so they cannot be avoided entirely. The responsibility for mitigating congestion delays falls to the Transmission Control Protocol (TCP), one part of the collection of Internet protocols that governs how computers communicate with one another. The most common implementations of TCP, such as TCP Cubic, measure congestion by sensing when the buffers in network routers are at capacity. At that point, packets are being lost because there is no room to store them. Think of it like a bucket with a hole in it, placed underneath a faucet. If the faucet is putting more water into the bucket than is draining through the hole, it fills up until eventually the bucket overflows. The bucket in this example is the router buffer, and if it “overflows,” packets are lost and need to be sent again, adding to the delay. The sender then adjusts its transmission rate to try and avoid flooding the buffer again.
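
The bucket analogy translates almost directly into a few lines of code. The loop below is only a sketch of the loss-based idea, with made-up rates and buffer sizes; it is not TCP Cubic itself. It shows why loss-based senders are inherently late to react: the queue, and therefore the delay, has to build all the way to overflow before the sender ever backs off.

```python
def loss_based_sender(bottleneck_rate=100.0, buffer_capacity=50.0, steps=200):
    """Toy loss-based congestion control: raise the send rate until the router
    buffer overflows (packet loss), then cut the rate in half and climb again."""
    send_rate = 10.0      # packets per tick (arbitrary units)
    queue = 0.0           # packets waiting in the router buffer
    for t in range(steps):
        queue = max(queue + send_rate - bottleneck_rate, 0.0)
        if queue > buffer_capacity:   # the bucket overflows: packets are dropped
            queue = buffer_capacity
            send_rate /= 2            # back off sharply after the loss
        else:
            send_rate += 2            # no loss observed: keep pushing harder
        if t % 20 == 0:
            print(f"t={t:3d}  rate={send_rate:6.1f}  queue={queue:5.1f}")

loss_based_sender()
```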

The problem is that even if the buffer doesn’t overflow, data can still be stuck there queuing for its turn through the bucket “hole.” What we want to do is allow packets to flow through the network without queuing up in buffers. YouTube uses a variation of TCP developed at Google called TCP BBR, short for Bottleneck Bandwidth and Round-Trip propagation time, with this exact goal. It works by adjusting the transmission rate until it matches the rate at which data is passing through routers. Going back to our bucket analogy, it constantly adjusts the flow of water out of the faucet to keep it the same as the flow out of the hole in the bucket.
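
For contrast, here is a rough sketch of the rate-matching principle BBR is built on. It is not Google's algorithm, just the idea in miniature with invented numbers: the sender keeps a running estimate of the bottleneck's delivery rate, paces itself to that estimate, and periodically probes a little above it and then drains, so the queue stays near empty instead of filling until packets are lost.

```python
def rate_matching_sender(bottleneck_rate=100.0, steps=120):
    """Toy sketch of BBR-style pacing: estimate the bottleneck delivery rate,
    pace to it, and cycle through probe / drain / cruise phases so the
    router queue stays close to empty."""
    bottleneck_estimate = 10.0
    send_rate = 10.0
    queue = 0.0
    for t in range(steps):
        queue = max(queue + send_rate - bottleneck_rate, 0.0)
        delivered = min(send_rate + queue, bottleneck_rate)   # what gets through
        bottleneck_estimate = max(bottleneck_estimate, delivered)
        if t % 8 == 0:
            gain = 1.25    # probe: is there spare capacity?
        elif t % 8 == 1:
            gain = 0.75    # drain any queue the probe just created
        else:
            gain = 1.0     # cruise at the estimated bottleneck rate
        send_rate = bottleneck_estimate * gain
        if t % 12 == 0:
            print(f"t={t:3d}  rate={send_rate:6.1f}  queue={queue:5.1f}  "
                  f"estimate={bottleneck_estimate:6.1f}")

rate_matching_sender()
```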

For further reductions in congestion, engineers have to deal with previously ignorable tiny delays. One example of such a delay is the minuscule variation in the amount of time it takes each specific packet to transmit during its turn to access the wireless link. Another is the slight differences in computation times of different software protocols. These can both interfere with TCP BBR’s ability to determine the exact rate to inject packets into the connection without leaving capacity unused or causing a traffic jam.

A focus of my research is how to redesign TCP to deliver low delays combined with sharing bandwidth fairly with other connections on the network. To do so, my team is looking at existing TCP versions, including BBR and others like the Internet Engineering Task Force’s L4S (short for Low Latency Low Loss Scalable throughput). We’ve found that these existing versions tend to focus on particular applications or situations. BBR, for example, is optimized for YouTube videos, where video data typically arrives faster than it is being viewed. That means that BBR ignores bandwidth variations because a buffer of excess data has already made it to the end user. L4S, on the other hand, allows routers to prioritize time-sensitive data like real-time video packets. But not all routers are capable of doing this, and L4S is less useful in those situations.

My group is figuring out how to take the most effective parts of these TCP versions and combine them into a more versatile whole. Our most promising approach so far has devices continually monitoring for data-packet delays and reducing transmission rates when delays are building up. With that information, each device on a network can independently figure out just how much data it can inject into the network. It’s somewhat like shoppers at a busy grocery store observing the checkout lines to see which ones are moving faster, or have shorter queues, and then choosing which lines to join so that no one line becomes too long.
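
To make the checkout-line idea concrete, here is a minimal sketch of a delay-based sender. It illustrates the general principle only; it is not the NYU group's actual design, and the rates and thresholds are assumptions. The sender compares the delay it measures now against the smallest delay it has ever seen; as soon as the difference, which is queuing delay, exceeds a small target, it eases off, so queues never get long in the first place.

```python
def delay_based_sender(bottleneck_rate=100.0, base_delay_ms=1.0,
                       target_extra_ms=0.2, steps=150):
    """Toy delay-based congestion control: back off as soon as the measured
    delay rises above the minimum (uncongested) delay by more than a small
    target, instead of waiting for packet loss."""
    send_rate = 10.0
    queue = 0.0
    min_delay = float("inf")
    for t in range(steps):
        queue = max(queue + send_rate - bottleneck_rate, 0.0)
        delay = base_delay_ms + queue / bottleneck_rate   # propagation + queuing
        min_delay = min(min_delay, delay)
        if delay - min_delay > target_extra_ms:
            send_rate *= 0.9    # queuing delay detected: ease off
        else:
            send_rate += 2.0    # no queue building: speed up
        if t % 15 == 0:
            print(f"t={t:3d}  rate={send_rate:6.1f}  queue={queue:5.1f}  delay={delay:5.2f} ms")

delay_based_sender()
```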

For wireless networks specifically, reliably delivering latencies below 1 millisecond is also hampered by connection handoffs. When a cellphone or other device moves from the coverage area of one base station (commonly referred to as a cell tower) to the coverage of a neighboring base station, it has to switch its connection. The network initiates the switch when it detects a drop in signal strength from the first station and a corresponding rise in the second station’s signal strength. The handoff occurs during a window of several seconds or more as the signal strengths change, so it rarely interrupts a connection.
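
The handoff logic described here can be sketched in a few lines. This is a simplified illustration with invented thresholds, not any standard's or carrier's actual procedure: the neighbor has to look stronger than the serving station by a margin, and stay that way for several measurements, before the switch is triggered, which is what keeps handoffs from flip-flopping.

```python
def handoff_trigger(rss_serving, rss_neighbor, margin_db=3.0, dwell_samples=3):
    """Return the index of the measurement that triggers a handoff, or None.
    The neighbor must exceed the serving station by margin_db decibels for
    dwell_samples consecutive measurements (hysteresis plus time-to-trigger)."""
    stronger_count = 0
    for t, (serving, neighbor) in enumerate(zip(rss_serving, rss_neighbor)):
        stronger_count = stronger_count + 1 if neighbor - serving > margin_db else 0
        if stronger_count >= dwell_samples:
            return t
    return None

# Hypothetical received-signal strengths (dBm) as a user walks between stations.
serving  = [-70, -72, -75, -78, -82, -85, -88, -90]
neighbor = [-95, -90, -86, -82, -79, -76, -73, -70]
print("handoff triggered at sample", handoff_trigger(serving, neighbor))
```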

But 5G is now tapping into millimeter waves—the frequencies above 20 gigahertz—which bring unprecedented hurdles. These frequency bands have not been used before because they typically don’t propagate as far as lower frequencies, a shortcoming that has only recently been addressed with technologies like beamforming. Beamforming works by using an array of antennas to point a narrow, focused transmission directly at the intended receiver. Unfortunately, obstacles like pedestrians and vehicles can entirely block these beams, causing frequent connection interruptions.
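
As a small numerical sketch of why beamforming recovers the lost range (a textbook uniform-linear-array calculation with an assumed element count, not a description of any particular 5G product): phasing the elements so their signals add coherently toward the receiver concentrates power in that direction and suppresses it elsewhere.

```python
import cmath, math

def array_factor(n_elements, steer_deg, look_deg, spacing_wavelengths=0.5):
    """Magnitude of the array factor of a uniform linear array of n_elements,
    phased to steer toward steer_deg, observed from direction look_deg."""
    steer, look = math.radians(steer_deg), math.radians(look_deg)
    k = 2 * math.pi * spacing_wavelengths
    total = sum(cmath.exp(1j * m * k * (math.sin(look) - math.sin(steer)))
                for m in range(n_elements))
    return abs(total)

n = 64   # a 64-element array, assumed here purely for illustration
peak = array_factor(n, steer_deg=0, look_deg=0)
off_axis = array_factor(n, steer_deg=0, look_deg=20)
print(f"toward the receiver : {20 * math.log10(peak):5.1f} dB relative to one element")
print(f"20 degrees off-beam : {20 * math.log10(off_axis):5.1f} dB relative to one element")
```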

One option to avoid interruptions like these is to have multiple millimeter-wave base stations with overlapping coverage, so that if one base station is blocked, another base station can take over. However, unlike regular cell-tower handoffs, these handoffs are much more frequent and unpredictable because they are caused by the movement of people and traffic.

My team at NYU is developing a solution in which neighboring base stations are also connected by a fiber-optic ring network. All of the traffic to this group of base stations would be injected into, and then circulate around, the ring. Any base station connected to a cellphone or other wireless device can copy the data as it passes by on the ring. If one base station becomes blocked, no problem: The next one can try to transmit to the device. If it, too, is blocked, the one after that can have a try. Think of a baggage carousel at an airport, with a traveler standing near it waiting for her luggage. She may be “blocked” by others crowding around the carousel, so she might have to move to a less crowded spot. Similarly, the fiber-ring system would allow reception as long as the mobile device is anywhere in the range of at least one unblocked base station.
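
A minimal sketch of the ring's failover behavior (hypothetical station names, and a deliberately oversimplified model that ignores timing, signaling, and the shared fiber itself): the packet circulates past every station, and the first station that is not blocked toward the device transmits it.

```python
def transmit_over_ring(stations_in_ring_order, blocked):
    """Toy model of the fiber-ring idea: the packet passes every base station
    on the ring; the first one with an unblocked path to the device sends it."""
    for station in stations_in_ring_order:
        if station not in blocked:
            return station      # this station serves the device
    return None                 # every station is blocked

ring = ["BS-1", "BS-2", "BS-3", "BS-4"]
print(transmit_over_ring(ring, blocked={"BS-1"}))           # BS-2 takes over
print(transmit_over_ring(ring, blocked={"BS-1", "BS-2"}))   # BS-3 takes over
```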

There’s one final source of latency that can no longer be ignored, and it’s immutable: the speed of light. Because light travels so quickly, it used to be possible to ignore it in the presence of other, larger sources of delay. It cannot be disregarded any longer.

Take for example the robot we discussed earlier, with its computational “brain” in a server in the cloud. Servers and large data centers are often located hundreds of kilometers away from end devices. But because light travels at a finite speed, the robot’s brain, in this case, cannot be too far away. Moving at the speed of light, wireless communications travel about 300 kilometers per millisecond. Signals in optical fiber are even slower, at about 200 km/ms. Finally, considering that the robot would need round-trip communications below 1 millisecond, the maximum possible distance between the robot and its brain is about 100 km. And that’s ignoring every other source of possible delay.
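
The distance arithmetic in this paragraph is worth spelling out. The numbers below are the same ones the article uses; the only thing added is the division by two for the round trip, which is where the roughly 100-kilometer limit for a fiber-connected "brain" comes from.

```python
# Propagation-only latency budget, ignoring every other source of delay.
SPEED_WIRELESS_KM_PER_MS = 300   # roughly the speed of light in air
SPEED_FIBER_KM_PER_MS = 200      # light is noticeably slower in optical fiber

def max_one_way_distance_km(round_trip_budget_ms, speed_km_per_ms):
    """Farthest the far end can be if the signal must go there and back
    within the budget."""
    return speed_km_per_ms * round_trip_budget_ms / 2

for label, speed in (("wireless", SPEED_WIRELESS_KM_PER_MS),
                     ("fiber", SPEED_FIBER_KM_PER_MS)):
    print(f"{label:8s}: at most {max_one_way_distance_km(1.0, speed):.0f} km away "
          f"for a 1 ms round trip")
```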

The only solution we see to this problem is to simply move away from traditional cloud computing toward what is known as edge computing. Indeed, the push for submillisecond latencies is driving the development of edge computing. This method places the actual computation as close to the devices as possible, rather than in server farms hundreds of kilometers away. However, finding the real estate for these servers near users will be a costly challenge to service providers.

There’s no doubt that as researchers and engineers work toward 1-millisecond delays, they will find more sources of latency I haven’t mentioned. When every microsecond matters, they’ll have to get creative to whittle away the last few slivers on the way to 1 millisecond. Ultimately, the new techniques and technologies that bring us to these ultralow latencies will be driven by what people want to use them for.

When the Internet was young, no one knew what the killer app might be. Universities were excited about remotely accessing computing power or facilitating large file transfers. The U.S. Department of Defense saw value in the Internet’s decentralized nature in the event of an attack on communications infrastructure. But what really drove usage turned out to be email, which was more relevant to the average person. Similarly, while factory robots and remote surgery will certainly benefit from submillisecond latencies, it is entirely possible that neither emerges as the killer app for these technologies. In fact, we probably can’t confidently predict what will turn out to be the driving force behind vanishingly small delays. But my money is on multiplayer gaming.

This article appears in the November 2020 print issue as “Breaking the Millisecond Barrier.”

About the Author

Shivendra Panwar is director of the New York State Center for Advanced Technology in Telecommunications and a professor in the electrical and computer engineering department at New York University.

Fake News Is a Huge Problem, Unless It’s Not

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/fake-news-is-a-huge-problem-unless-its-not

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Jonathan Swift in 1710 definitely said, “Falsehood flies, and the truth comes limping after it.” Mark Twain, on the other hand, may or may not have said, “A lie can travel halfway around the world while the truth is putting on its shoes.”

Especially in the context of politics, we lately use the term “fake news” instead of “political lies” and the problem of fake news—especially when it originates abroad—seems to be much with us these days. It’s believed by some to have had a decisive effect upon the 2016 U.S. Presidential election and fears are widespread that the same foreign adversaries are at work attempting to influence the vote in the current contest.

A report in 2018 commissioned by the U.S. Senate Intelligence Committee centered its attention on the Internet Research Agency, a shadowy arm of Russia’s intelligence services. The report offers, to quote an account of it in Wired magazine, “the most extensive look at the IRA’s attempts to divide Americans, suppress the vote, and boost then-candidate Donald Trump before and after the 2016 presidential election.”

Countless hours of research have gone into identifying and combating fake news. A recent study found more than 2000 articles about fake news published between 2017 and 2020.

Nonetheless, there’s a dearth of actual data when it comes to the magnitude, extent, and impact, of fake news.

For one thing, we get news that might be fake in various ways—from the Web, from our phones, from television—yet it’s hard to aggregate these disparate sources. Nor do we know what portion of all our news is fake news. Finally, the impact of fake news may or may not exceed its prevalence—we just don’t know.

A new study looks into these very questions. Its authors include two researchers at Microsoft who listeners of the earlier incarnation of this podcast will recognize: David Rothschild and Duncan Watts were both interviewed here back in 2012. The lead author, Jennifer Allen, was a software engineer at Facebook before becoming a researcher at Microsoft in its Computational Social Science Group and she is also a Ph.D. student at the MIT Sloan School of Management and the MIT Initiative on the Digital Economy. She’s my guest today via Skype.

Jenny, welcome to the podcast.

Jennifer Allen Thank you, Steven. Happy to be here.

Steven Cherry Jenny, Wikipedia defines “fake news” as “a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media.” The term made its way into the online version of the Random House dictionary in 2017 as “false news stories, often of a sensational nature, created to be widely shared or distributed for the purpose of generating revenue, or promoting or discrediting a public figure, political movement, company, etc.” Jenny, are you okay with either of these definitions? More simply, what is fake news?

Jennifer Allen Yeah. Starting off with a tough question. I think the way that we define fake news really changes whether or not we consider it to be a problem or the magnitude of the problem. So the way that we define fake news in our research—and how the academic community has defined fake news—is that it is false or misleading information masquerading as legitimate news. I think the way that, you know, we’re using fake news is really sort of this hoax news that’s masquerading as true news. And that is the definition that I’m going to be working with today.

Steven Cherry The first question you tackled in this study and here I’m quoting it: “Americans consume news online via desktop computers and increasingly mobile devices as well as on television. Yet no single source of data covers all three modes.” Was it hard to aggregate these disparate sources of data?

Jennifer Allen Yes. So that was one thing that was really cool about this paper, is that usually when people study fake news or misinformation, they do so in the context of a single platform. So there’s a lot of work that happens on Twitter, for example. And Twitter is interesting and it’s important for a lot of reasons, but it certainly does not give a representative picture of the way that people consume information today or consume news today. It might be popular among academics and journalists. But the average person is not necessarily on Twitter. And so one thing that was really cool and important, although, as you mentioned, it was difficult as well, was to combine different forms of data.

And so we looked at a panel of Nielsen TV data, as well as a desktop panel of individual Web traffic, also provided by Nielsen. And then finally, we also looked at mobile traffic with an aggregate data set provided to us by ComScore. And so we have these three different datasets that really allow us to triangulate the way that people are consuming information and give sort of a high-level view.

Steven Cherry You found a couple of interesting things. One was that most media consumption is not news-related—maybe it isn’t surprising—and there’s a big difference across age lines.

Jennifer Allen Yes, we did find that. And so—as perhaps it might not be surprising—older people consume a lot more television than younger people do. And younger people spend more time on mobile and online than older people do. However, what might be surprising to you is that no matter whether it’s old people or younger people, the vast majority of people are consuming more news on TV. And so that is a stat that surprises a lot of people, even as we look across age groups—that television is the dominant news source, even among people age 18 to 24.

Steven Cherry When somebody looked at a television-originating news piece on the web instead of actually on television, you characterized it as online. That is to say, you characterized by the consumption of the news, not its source. How did you distinguish news from non-news, especially on social media?

Jennifer Allen Yes. So there are a lot of different definitions of news here that you could use. We tried to take the widest definition possible across all of our platforms. So on television, we categorized as news anything that Nielsen categorizes as news as part of their dataset. And they are the gold standard dataset for TV consumption. And so, think Fox News, think the Today show. But then we also added things that maybe they wouldn’t consider news. So Saturday Night Live often contains news clips and touches on the topical events of the day. And so we also included that show as news. And so, again, we tried to take a really wide definition. And the same online.

And so online, we also aggregated a list of, I think, several thousand Web sites that were both mainstream news and hyper-partisan news, as well as fake news. And we find hyper-partisan news and fake news using these news lists that have emerged in the large body of research that has come out of the 2016 elections / fake news phenomenon. And so there again, we tried to take the widest definition of fake news. And so not only are things like your crappy single-article sites included, but also things like Breitbart and the Daily Wire we categorize as hyper-partisan sites.

Steven Cherry Even though we associate online consumption with young people and television with older people, you found that fake news stories were more likely to be encountered on social media and that older viewers were heavier consumers than younger ones.

Jennifer Allen Yes, we did find that. This is a typical finding within the fake news literature, which is that older people tend to be more drawn to fake news for whatever reason. And there’s been work looking at why that might be. Maybe it’s digital literacy. Maybe it’s just more interested in news generally. And it’s true that on social media, there’s more fake and hyper-partisan news than, you know, on the open web.

That being said, I would just emphasize that the dominant … that the majority of news that is consumed even on social media—and even among older Americans—is still mainstream. And so, think your New York Times or your Washington Post instead of your Daily Wire or Breitbart.

Steven Cherry You didn’t find much in the way of fake news on television at all.

Jennifer Allen Yes. And so this is sort of a function, as I was saying before, of the way that we defined fake news. We, by definition, did not find any fake news on television, because the way that fake news has really been studied in the literature and also talked about sort of in the mainstream media is as this phenomenon of Web sites masquerading as legitimate news outlets. That being said, I definitely believe that there is misinformation that occurs on television. You know, a recent study came out looking at who the biggest spreader of misinformation around the coronavirus was and found it to be Donald Trump. And just because we aren’t defining that content as fake news—because it’s not deceptive in the way that it is presenting itself—doesn’t mean that it is necessarily legitimate information. It could still be misinformation, even though we do not define it as fake news.

Steven Cherry I think the same thing would end up being true about radio. I mean, there certainly seems to be a large group of voters—including, it’s believed, the core supporters of one of the presidential candidates—who are thought to get a lot of their information, including fake information from talk radio.

Jennifer Allen Yeah, talk radio is unfortunately a hole in our research; we were not able to get a good dataset looking at talk radio. And indeed, talk radio, and Rush Limbaugh’s talk show in particular, can really be seen as the source of a lot of the polarization in the news environment.

And there’s been work done by Yochai Benkler at the Harvard Berkman Klein Center that looks at the origins of talk radio in creating a polarized and swampy news environment.

Steven Cherry Your third finding, and maybe the most interesting or important one, is and I’m going to quote again, “fake news consumption is a negligible fraction of Americans’ daily information diet.”

Jennifer Allen Yes. So it might be a stat that surprises people. We find that fake news comprises only 0.15 percent of Americans’ daily media diet. Despite the outsized attention that fake news gets in the mainstream media and especially within the academic community: more than half of the journal articles that contain the word news are about fake news in recent years. It is actually just a small fraction of the news that people consume. And also a small fraction of the information that people consume. The vast majority of the content that people are engaging with online is not news at all. It’s YouTube music videos. It’s entertainment. It’s Netflix.

And so I think that it’s an important reminder that when we consider conversations around fake news and its potential impact, for example, on the 2016 election, that we look at this information in the context of the information ecosystem and we look at it not just in terms of the numerator and the raw amount of fake news that people are consuming, but with the denominator as well. So how much of the news that people consume is actually fake news?

Steven Cherry So fake news ends up being only one percent or even less of our overall media diet. What percentage is it of news consumption?

Jennifer Allen It occupied less than one percent of overall news consumption. So that is, including TV. Of course, when you zoom in, for example, to fake news on social media, the scale of the problem gets larger. And so maybe seven to 10 percent—and perhaps more, depending on your definition of fake news—of news that is consumed on social media could be considered what we say is hyper-partisan or fake news. But still, again, to emphasize, the majority of people on Facebook are not seeing any news at all. So, you know, over 50 percent of people on Facebook and in our data don’t click on any news articles that we can see.

Steven Cherry You found that our diet is pretty deficient in news in general. The one question that you weren’t able to answer in your study is whether fake news, albeit just a fraction of our news consumption—and certainly a tiny fraction of our media consumption—still might have an outsized impact compared with regular news.

Jennifer Allen Yeah, that’s very true. And I think here it’s important to distinguish between the primary and secondary impact of fake news. And so in terms of, you know, the primary exposure of people consuming fake news online and seeing a news article about Hillary Clinton running a pedophile ring out of a pizzeria and then changing their vote, I think we see very little data to show that that could be the case. 

That being said, I think there’s a lot we don’t know about the secondary sort of impact of fake news. So what does it mean for our information diets that we now have this concept of fake news that is known to the public and can be used and weaponized?

And so, the extent to which fake news is covered and pointed to by the mainstream media as a problem also gives ammunition to people who oppose journalists, you know, mainstream media and want to erode trust in journalism and give them ammunition to attack information that they don’t agree with. And I think that is a far more dangerous and potentially negatively impactful effect of fake news and perhaps its long-lasting legacy.

The impetus behind this paper was that there’s all this conversation around fake news out of the 2016 election. There is a strong sense that was perpetuated by the mainstream media that fake news on Facebook was responsible for the election of Trump. And that people were somehow tricked into voting for him because of a fake story that they saw online. And I think the reason that we wanted to write this paper is to contradict that narrative because you might read those stories and think people are just living in an alternate fake news reality. I think that this paper really shows that that just isn’t the case.

To the extent that people are misinformed or they make voting decisions that we think are bad for democracy, it is more likely due to the mainstream media or the fact that people don’t read news at all than it is to a proliferation of fake news on social media. And one thing from a piece of research that David [Rothschild] and Duncan [Watts] did prior to this study that I thought was really resonant was to say: let’s look at the New York Times. In the lead-up to the 2016 election, there were more stories about Hillary Clinton’s email scandal in the seven days before the election than there were about policy at all over the whole scope of the election process. And so instead of zeroing in on fake news, we should really push our attention to take a hard look at the way the mainstream media operates, and also at what happens in this news vacuum where people aren’t consuming any news at all.

Steven Cherry So people complain about people living inside information bubbles. What your study shows is fake news, if it’s a problem at all, is really the smallest part of the problem. A bigger part of the problem would be false news—false information that doesn’t rise to the level of fake news. And then finally, the question that you raise here of balance when it comes to the mainstream media. “Balance”—I should even say “emphasis.”

Jennifer Allen Yes. So I think, again, the extent to which people are misinformed, I think that we can look to the mainstream news. And, you know, for example, it’s overwhelming coverage of Trump and the lies that often he spreads. And I think some of the new work that we’re doing is trying to look at the mainstream media and its potential role and not reporting false news that is masquerading as true. But, you know, reporting on people who say false things without appropriately taking the steps to discredit those things and really strongly punch back against them. And so I think that is an area that is really understudied. And I would hope that researchers look at this research and sort of look at the conversation that is happening around Covid and, you know, mail in voting and the 2020 election and really take a hard look at mainstream media, you know, so-called experts or politicians making wild claims in a way that we would not consider them to be fake news, but are still very dangerous.

Steven Cherry Well, Jenny, it’s all too often true that the things we all know to be true aren’t so true. And as usual, the devil is in the details. Thanks for taking a detailed look at fake news, maybe with a better sense of it quantitatively, people can go on and get a better sense of its qualitative impact. So thank you for your work here and thanks for joining us today.

Jennifer Allen Thank you so much. Happy to be here.

We’ve been speaking with Jennifer Allen, lead author of an important new study, “Evaluating the Fake News Problem at the Scale of the Information Ecosystem.” This interview was recorded October 7, 2020. Our thanks to Raul at Gotham Podcast Studio for our engineering today and to Chad Crouch for our music.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Problem of Filter Bubbles Hasn’t Gone Away

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/the-problem-of-filter-bubbles-hasnt-gone-away

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

In 2011, the former executive director of MoveOn gave a widely viewed TED talk, “Beware Online Filter Bubbles,” that became a 2012 book and a startup. In all the talk of fake news these days, many of us have forgotten the unseen power of filter bubbles in determining the ways in which we think about politics, culture, and society. That startup tried to get people to read news they might otherwise not see by repackaging it with new headlines.

A recent app, called Ground News, has a different approach. It lets you look up a topic and see how it’s covered by media outlets with identifiably left-leaning or right-leaning slants. You can read the coverage itself, right–left–moderate or internationally; look at its distribution; or track a story’s coverage over time. Most fundamentally, it’s a way of seeing stories that you wouldn’t ordinarily come across.

My guest today is Sukh Singh, the chief technology officer of Ground News and one of its co-founders.

Sukh, welcome to the podcast.

Sukh Singh Thanks for having me on, Steven.

Steven Cherry Back in the pre-Internet era, newspapers flourished, but overall news sources were limited. Besides a couple of newspapers in one’s area, there would be two or three television stations you could get, and a bunch of radio stations that were mainly devoted to music. Magazines were on newsstands and delivered by subscription, but only a few concentrated on weekly news. That world is gone forever in favor of a million news sources, many of them suspect. And it seems the Ground News strategy is to embrace that diversity instead of lamenting it, to put stories in the context of who’s delivering them and what their agendas might be, and, most importantly, to break us out of the bubbles we’re in.

Sukh Singh That’s true that we are embracing the diversity as you mentioned, moving from the print era and the TV era to the Internet era. The costs of having a news outlet or any kind of a media distribution outlet have dropped dramatically to the point of a single-person operation becoming viable. That has had the positive benefit of allowing people to cater to very niche interests that were previously glossed over. But on the negative side, the explosion of a number of outlets out there certainly has—it has a lot of drawbacks. Our approach at Ground News is to take as wide a swath as we meaningfully can and put it in one destination for our subscribers.

Steven Cherry So has the problem of filter bubbles gotten worse since that term was coined a decade ago?

Sukh Singh It has. It has certainly gotten much worse. In fact, I would say that by the time the term was even coined, the development of filter bubbles was well underway, before the phenomenon was observed.

And that’s largely because it’s a natural outcome of the algorithms and, later, machine learning models used to determine what content is being served to people. In the age of the Internet, personalization of a news feed became possible. And that meant more and more individual personalization and machines picking—virtually handpicking—what anybody gets served. By and large, we saw the upstream effect of that being that news publications found out what type of content appealed to a large enough number of people to make them viable. And that has resulted in a shift from a sort of aspiration of every news outlet to be the universal record of truth to more of an erosion of that. And now many outlets, certainly not all of them, embrace the fact that they’re not going to cater to everyone; they are going to cater to a certain set of people who agree with their worldview. And their mission then becomes reinforcing that worldview, that agenda, that specific set of beliefs through reiterated and repeated content for everything that’s happening in the world.

Steven Cherry People complain about the filtering of social media. But that original TED talk was about Google and its search results, which seems even more insidious. Where is the biggest problem today?

Sukh Singh I would say that social media has shocked us all in terms of how bad the problem can get. If you think back 10 years, 15 years … social media as we have it today would not have been the prediction of most people. If we think back to the origin of, say, Facebook, it was very much in the social networking era, where most of the content you were receiving was from friends and family. It’s a relatively recent phenomenon, certainly in this decade and not the last one, where we saw this chimera of social network plus digital media becoming social media, and the news feed there—going back to the personalization—catered to a single success metric: how much time-on-platform could they get the user to spend. Which, comparing against Google, isn’t as much about the success metrics; it’s more about getting you to the right page or to the most relevant page as quickly as possible. But with social media, when that becomes a multi-hour-per-day activity, it certainly had more wide-reaching and deeper consequences.

Steven Cherry So I gave just a little sketch of how Ground News works. Do you want to say a little bit more about that?

Sukh Singh Absolutely. So I think you alluded to this in your earlier questions … going from the past age of print and TV media to the Internet age. What we’ve seen is that news delivery today has really bifurcated into two types, into two categories. We have the more traditional, legacy news outlets coming to the digital age, to the mobile age, with websites and mobile apps organized along the conventional sections: here’s the sports section, here’s the entertainment section, here’s the politics section. That was roughly how the world of information was divided up by these publications, and that has carried over. And on the other side, we see the social media feed, which has in many ways blown the legacy model out of the water.

It’s a single-drip feed where the user has to do no work and can just scroll and keep finding more and more and more … an infinite supply of engaging content. That divide doesn’t map exactly to education versus entertainment. Entertainment and sensationalism have been a part of media as far back as media goes. But there certainly is more affinity toward entertainment in a social media feed, which caters to engagement.

So we at Ground News serve both those needs, through both those models, with two clearly labeled and divided feeds. One is Top Stories and the other one is My Feed. Top Stories follows the legacy model: here are universally important news events that you should know about, no matter which walk of life you come from, no matter where you are located, no matter where your interests lie. And the second, My Feed, is the recognition that ultimately people will care more about certain interests, certain topics, certain issues than others. So it is a nod to that personalization, within limits, without delving into the same spiral of filter bubbles.

Steven Cherry There’s only so many reporters at a newspaper. There’s only so many minutes we have in a day to read the news. So in all of the coverage, for example, of the protests this past month—coverage we should be grateful for, of an issue that deserves all the prominence it can get—a lot of other stories got lost. For example, there was a dramatic and sudden announcement of a reduction in the U.S. troop count in Germany. [Note: This episode was recorded June 18, 2020 — Ed.] I happened to catch that story in the New York Times myself. But it was a pretty small headline, buried pretty far down. It was covered in your weekly newsletter, though. I take it you see yourself as having a second mission besides countering one-sided news: the problem of under-covered news.

Sukh Singh Yes, we do, and that’s been a realization as we’ve made the journey of Ground News. It wasn’t something that we recognized from the outset, but something that we discovered as our throughput of news increased. We spoke about the problem of filter bubbles. And we initially thought the problem was bias. The problem was that a news event happens, some real concrete event happens in the real world, and then it is passed on as information through various news outlets, each one spinning it or at least wording it in a way that aligns to either their core agenda or to the likings of their audience. More and more, we found that the problem isn’t just bias and spin; it’s also omission.

So if we look at the wide swath of left-leaning and right-leaning news publications in America today … If we were to go to the home pages of two publications fairly wide apart on the political spectrum, you would not just find the same news stories with different headlines or different lenses, but an entirely different set of news stories. So much so that—you mentioned our newsletter, the Blindspot report—in the Blindspot report we pick each week five to six stories that were covered massively on one side of the political spectrum but entirely omitted on the other.

So in this case—the event that you mentioned about the troop withdrawal from Germany—it did go very unnoticed by certain parts of the political spectrum. So as a consumer, as a consumer who wants to be informed, going to one or two news sources, no matter how valuable, no matter how rigorous they are, will inevitably result in very large parts of the news out there being omitted from your field of view. Whether you’re going to the right set of publications or not is a secondary conversation. The more primary and more concerning conversation is: how do you communicate with your neighbor when they’re coming from a completely different set of news stories and a different worldview informed by them?

Steven Cherry The name Ground News seems to be a reference to the idea that there’s a ground truth. There are ground truths in science and engineering; it would be wonderful, for example, if we could do some truly random testing for coronavirus and get the ground truth on rates of infection. But are there ground truths in the news business anymore? Or are there only counterbalancings of partial truths?

Sukh Singh That’s a good question. I wouldn’t be so cynical as to say that there are no news publications out there reporting what they truly believe to be the ground truth. But we do find ourselves in a world of counterbalances. We do turn on the TV news networks and we do see a set of three talking heads with a moderator in the middle and differing opinions on either side. So what we do at Ground News—as you said, the reference to the name—is try to have that flat, even playing field where different perspectives can come and make their case.

So our aspiration is always to take the real event—whether you call that the ground truth, in the world of science or of philosophy, or an atomic fact—and then have dozens of news perspectives covering it; typically, on average, we have about 20 different news perspectives, local, national, international, left, right, all across the board, covering the same news event. And our central thesis is that the ultimate solution is reader empowerment, that no publication or technology can truly come to the conclusions for a person. And there’s perhaps a “shouldn’t” in there as well. So our mission really is to take the different news perspectives, present them on an even playing field to the user, to our subscribers, and then allow them to come to their own conclusions.

Steven Cherry So without getting entirely philosophical about this, it seems to me that—let’s say in the language of Plato’s Republic and the allegory of the cave—you’re able to help us look at more than just the shadows that are projected on the wall of the cave. We get to see all of the different people projecting the shadows, but we’re still not going to get to the Platonic forms of the actual truth of what’s happening in the world. Is that fair to say?

Sukh Singh Yes. Keeping with that allegory, I would say that our assertion is not that every single perspective is equally valid. That’s not a value judgment that we ever make. We don’t label the left-right-moderate biases on any news publication or platform ourselves. We actually source them from three arm’s-length nonprofit agencies whose mission is labeling news publications by their demonstrated bias. So we aggregate and use those as labels in our platform. So we never pass a value judgment on any perspective. But my hope personally, and ours as a company, really is that some perspectives are getting you closer to a glimpse of the outside rather than just being another shadow on the wall. The onus really is on the reader to be able to say which perspective or which coverage they think most closely resembles the ground truth.

Steven Cherry I think that’s fair enough. And I think it would also be fair to add that even for issues for which there really isn’t a pairing of two opposing sides—for example, climate change, where responsible journalists pretty much ignore the idea of there being no climate change—it’s still important for people politically to understand that there are people out there who have not accepted climate change and that they’re still writing about it and still sharing views and so forth. And so it seems to me that what you’re doing is shining a light on that aspect of it.

Sukh Singh Absolutely, and one of our key aspirations, part of our mission, is to enable people to have those conversations. So even if you are 100 percent convinced that you are going to credible news publications and you’re getting the most vetted and journalistically rigorous news coverage that is available on the free market, it may still be that you might not be able to reach across the aisle or just go next door and talk to your neighbor or your friend, who is living with a very different worldview. For better or worse, again, we won’t pass judgment, but just having a more expanded scope of news stories come into your field of view, onto your radar, does enable you to have those conversations, even if you feel some of your peers may be misguided.

Steven Cherry The fundamental problem in news is that there are financial incentives for aggregators like Google and Facebook and for the news sources themselves to keep us in the bubbles that we’re in, feeding us only stories that fit our world view and giving us extreme versions of the news instead of more moderate ones. You yourself noted that with Facebook and other social networks, the user does no work in those cases. Using Ground News is something you have to do actively. Do you ever fear that it’s just a sort of Band-Aid that we can place on this gaping social wound?

Sukh Singh So let me deal with that in two parts. The first part is the financial sustainability of journalism. There certainly is a crisis there, and I think we could have several more of these conversations about financial sustainability in journalism and solutions to that crisis.

But one very easily identifiable problem is the reliance on advertising. I think a lot of news publications all too willingly started publicizing their content on the Internet to increase their reach, and any advertising revenue that they could get off of that from Google and later Facebook was incremental revenue on top of their print subscriptions. And they were, on the whole, very chipper to get incremental revenue by using the Internet. As we’ve seen, that problem has become more and more of a stranglehold on news publications and media publications in general, as they fight for these ad dollars. And the natural end of that competition is sensationalism and clickbait. That’s speaking to the financial sustainability of journalism.

I mean, the path we’ve chosen to go down—exactly for that reason—is to charge subscriptions directly to our users. So we have thousands of paying subscribers now, paying a dollar a month or ten dollars a year to access the features on Ground News. And that’s a nominal price point. But there’s also an ulterior motive to it. It really is about habit-building and getting people to pay for news again. Many of us have forgotten over the last couple of decades that sense of having to pay for news, which almost used to be the same as paying for electricity or water; it has disappeared. We’re trying to revive that, which again will hopefully pay dividends down the line for financial sustainability in journalism.

In terms of being a Band-Aid solution, we do think there is more of a movement of people accepting the responsibility to do the work to inform themselves, which stands in direct contrast to the social media feed, which I think most of us have come to distrust, especially in recent years. There was, I believe, a Reuters study two years ago showing that 2018 was the first year in which fewer people went to Facebook for their news than in the year before. And that was the first time in a decade. So I do think there’s a recognition that a social media feed is no longer a viable news delivery mechanism. So we do see people doing that little bit of work, and on our part, we make it as accessible as possible. Your question reminds me of the adage that as a consumer, if you’re not the customer, you’re the product. And that really is the divide between using a free social media feed and paying for a news delivery mechanism.

Steven Cherry Ground News is actually a service of your earlier startup, Snapwise. Do you want to say a little bit about it and what it does?

Sukh Singh My co-founder is a former NASA satellite engineer who worked on Earth-observation satellites. She was working on a constellation of satellites that passed over the planet every 24 hours and mapped every square foot of it—literally the ground truth of what was happening everywhere on the planet. Once she left her space career and she and I started talking about the impact of technology on journalism, we realized that if we can map the entire planet every 24 hours and have an undeniable record of what’s happening in the world, why can’t we have the same in the news industry? So our earliest iteration of what is now Ground News was much more focused on citizen journalism and getting folks to use their phones to communicate what was happening in the world around them, and getting that firsthand data into the information stream that we consume as news consumers.

If this is starting to sound like Twitter, we ran into several of the same drawbacks, especially when it came to news integrity and verifying the facts and making sure that what people were contributing as information really was up to the same grade as professional journalism. And more and more, we realized we couldn’t diminish the role of professional journalists in delivering what the news is. So we started to aggregate more and more vetted, credible news publications from across the world. And before we knew it, we had fifty thousand different unique sources of news, local, national, international, left to right, all the way from your town newspaper up to a giant multinational press wire service like Thomson Reuters. We were taking all those different news sources and putting them in the same platform. So that’s really been our evolution, as people trying to solve some of these problems in the journalism industry.

Steven Cherry How do you identify publications as being on the left or on the right?

Sukh Singh As we started aggregating more and more news sources, we got past the 10,000 mark, and before we knew it we were up to 50,000 news sources that we were covering. It’s humanly impossible for our small team, or even a much, much larger team, to carefully go and label each of them. So we’ve taken that from a number of news monitoring agencies whose mission and entire purpose as organizations is to review and rate news publications.

So we use three different ones today: Media Bias Fact Check, AllSides, and Ad Fontes Media. All three of these, I would call them rating agencies if you want to use the stock market analogy, rate the political leanings and demonstrated factuality of these news organizations. We take those as inputs and aggregate them, but we do make their original labels available on our platform. To use an analogy from the movie world, we’re sort of like Metacritic, aggregating ratings from IMDb and Rotten Tomatoes and different platforms and making that all transparently available for consumers.

Steven Cherry You’re based in Canada, in Kitchener, which is a small city about an hour from Toronto. I think Americans think of Canada as having avoided some of the extremisms of the U.S. I mean, other than maybe burning down the White House a couple of centuries ago, it’s been a pretty easy-going get-along kind of place. Do you think being Canadian and looking at the U.S. from a bit of a distance contributed to what you’re doing?

Sukh Singh I don’t think we’ve had a belligerent reputation since the War of 1812. As Canadians, we do enjoy a generally nice-person kind of stereotype. We are, as you said, at arm’s length, sitting not quite a safe distance away but just across the border from everything that happens in the U.S. With frequent trips down, and being deeply integrated with the United States as a country, we do get a very, very close view of what’s happening.

North of the border, we do have our own political system to deal with, with all of its workings and all of its ins and outs. But in terms of where we’ve really seen Ground News deliver value, it certainly has been in the United States. That is both our biggest market and our largest set of subscribers by far.

Steven Cherry Thank you so much for giving us this time today and explaining a service that’s really providing an essential function in this chaotic news political world.

Sukh Singh Thanks, Steven.

Steven Cherry We’ve been speaking with Sukh Singh, CTO and co-founder of Ground News, an app that helps break us out of our filter bubbles and tries to provide a 360-degree view of the news.

Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

This interview was recorded June 18, 2020.

Resources

Spotify, Machine Learning, and the Business of Recommendation Engines

https://mediabiasfactcheck.com/

AllSides

Ad Fontes Media

“Beware Online Filter Bubbles” (TED talk)

The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, by Eli Pariser (Penguin, 2012)

Indoor Air-Quality Monitoring Can Allow Anxious Office Workers to Breathe Easier

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/indoor-airquality-monitoring-can-allow-anxious-office-workers-to-breathe-easier

Although IEEE Spectrum’s offices reopened several weeks ago, I have not been back and continue to work remotely—as I suspect many, many other people are still doing. Reopening offices, schools, gyms, restaurants, or really any indoor space seems like a dicey proposition at the moment.

Generally speaking, masks are better than no masks, and outdoors is better than indoors, but short of remaining holed up somewhere with absolutely no contact with other humans, there’s always that chance of contracting COVID-19. One of the reasons I personally haven’t gone back to the office is that the uncertainty over whether the virus, exhaled by a coworker, is present in the air doesn’t seem worth the novelty of actually being able to work at my desk again for the first time in months.

Unfortunately, we can’t detect the virus directly, short of administering tests to people. There’s no sensor you can plug in and place on your desk that will detect virus nuclei hanging out in the air around you. What is possible, and might bring people some peace of mind, is installing IoT sensors in office buildings and schools to monitor air quality and ascertain how favorable the indoor environment is to COVID-19 transmission.

One such monitoring solution has been developed by wireless tech company eleven-x and environmental engineering consultant Pinchin. The companies have created a real-time monitoring system that measures carbon dioxide levels, relative humidity, and particulate levels to assess the quality of indoor air.

“COVID is the first time we’ve dealt with a source of environmental hazard that is the people themselves,” says David Shearer, the director of Pinchin’s indoor environmental quality group. But that also gives Pinchin and eleven-x some ideas on how to approach monitoring indoor spaces, in order to determine how likely transmission might be.

Again, because there’s no such thing as a sensor that can directly detect COVID-19 virus nuclei in an environment, the companies have resorted to measuring secondhand effects. The first measurement is carbon dioxide levels. The biggest contributor to CO2 in an indoor space is people exhaling. Higher levels in a room or building suggest more people, and past a certain level, they mean either that there are too many people in the space or that the air is not being vented properly. Indirectly, that carbon dioxide suggests that if someone were a carrier of the coronavirus, there’s a good chance there are more virus nuclei in the air.

The IoT system also measures relative humidity. Dan Mathers, the CEO of eleven-x, says that his company and Pinchin designed their monitoring system based on recommendations by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). ASHRAE’s guidelines [PDF] suggest that a space’s relative humidity should hover between 40 and 60 percent. More or less than that, Shearer explains, and it’s easier for the virus to be picked up by the tissue of the lungs.

A joint sensor measures carbon dioxide levels and relative humidity. A separate sensor measures the level of 2.5-micrometer particles in the air. 2.5 microns (or PM 2.5) is an established particulate size threshold in air quality measurement. And as circumstances would have it, it’s also slightly smaller than the size of a virus nucleus, which is about 3 µm. They’re still close enough in size that a high level of 2.5-µm particles in the air means there’s a greater chance that there’s potentially some COVID-19 nuclei in the space.

According to Mathers, the eleven-x sensors are battery-operated and communicate using LoRaWAN, a wireless standard that is most notable for being extremely low power. Mathers says the batteries in eleven-x’s sensors can last for ten years or more without needing to be replaced.

LoRaWAN’s principal drawback is that it isn’t a good fit for transmitting high-data-rate messages. But that’s not a problem for eleven-x’s sensors. Mathers says the average message they send is just 10 bytes. Where LoRa excels is in situations where simple updates or measurements need to be sent—measurements such as carbon dioxide, relative humidity, and particulate levels. The sensors communicate on what’s called an interrupt basis, meaning once a measurement passes a particular threshold, the sensor wakes up and sends an update. Otherwise, it whiles away the hours in a very low-power sleep mode.
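To make that reporting pattern concrete, here is a minimal sketch of threshold-triggered, low-rate reporting of the kind described above. It is illustrative only, not eleven-x's firmware; the thresholds, payload layout, and the read_sensors and send_lorawan_uplink functions are assumptions.

```python
# Minimal sketch of interrupt-style sensor reporting (illustrative only;
# thresholds, payload format, and the radio call are assumptions, not eleven-x's firmware).
import struct
import time

CO2_PPM_LIMIT = 1000      # example ventilation guideline, not an official threshold
RH_RANGE = (40.0, 60.0)   # ASHRAE-suggested relative-humidity band, per the article
PM25_LIMIT = 35.0         # example fine-particle threshold in µg/m³

def out_of_bounds(co2_ppm, rh_percent, pm25_ugm3):
    """Return True if any reading crosses its alert threshold."""
    return (co2_ppm > CO2_PPM_LIMIT
            or not (RH_RANGE[0] <= rh_percent <= RH_RANGE[1])
            or pm25_ugm3 > PM25_LIMIT)

def build_uplink(co2_ppm, rh_percent, pm25_ugm3):
    """Pack readings into a compact 6-byte payload, well within LoRaWAN's small frames."""
    return struct.pack("<HHH", int(co2_ppm), int(rh_percent * 10), int(pm25_ugm3 * 10))

def monitor_loop(read_sensors, send_lorawan_uplink, sleep_s=60):
    """Wake periodically, transmit only when a threshold is crossed, otherwise stay quiet."""
    while True:
        co2, rh, pm25 = read_sensors()
        if out_of_bounds(co2, rh, pm25):
            send_lorawan_uplink(build_uplink(co2, rh, pm25))
        time.sleep(sleep_s)  # stand-in for the sensor's low-power sleep mode
```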

Mathers explains that one particulate sensor is placed near a vent to get a good measurement of 2.5-micron-particle levels. Then, several carbon dioxide/relative humidity sensors are scattered around the indoor space to get multiple measurements. All of the sensors communicate their measurements to LoRaWAN gateways that relay the data to eleven-x and Pinchin for real-time monitoring.

Typically, indoor air quality tests are done in-person, maybe once or twice a year, with measuring equipment brought into the building. Shearer believes that the desire to reopen buildings, despite the specter of COVID-19 outbreaks, will trigger a sea change for real-time air monitoring.

It has to be stressed again that the monitoring system developed by eleven-x and Pinchin is not a “yes or no” style system. “We’re in discussions about creating an index,” says Shearer, “but it probably won’t ever be that simple.” Shifting regional guidelines, and the evolving understanding about how COVID-19 is transmitted will make it essentially impossible to create a hard-and-fast index for what qualities of indoor air are explicitly dangerous with respect to the virus.

Instead, like social distancing and mask wearing recommendations, the measurements are intended to give general guidelines to help people avoid unhealthy situations. What’s more, the sensors will notify building occupants should indoor conditions change from likely safe to likely problematic. It’s another way to make people feel safe if they have to return to work or school.

How the U.S. Can Apply Basic Engineering Principles To Avoid an Election Catastrophe

Post Syndicated from Jonathan Coopersmith original https://spectrum.ieee.org/tech-talk/telecom/security/engineering-principles-us-election

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

The 2020 primary elections and caucuses in the United States earlier this year provide a textbook case of how technology and institutions can fail in a crisis, when we need them most. To recap: Americans looked on, many of them with incredulity, as people hoping to vote were forced to wait hours at the height of the initial wave of the COVID-19 pandemic. Elsewhere, there were delayed elections, tens of thousands of undelivered or uncounted mail-in ballots, and long delays in counting and releasing results. This in what is arguably the world’s most technologically advanced industrialized nation.

Given the persistence of COVID in the United States and domestic and foreign efforts to delegitimize its November presidential election, a repeat of the primaries could easily produce a massive visible disaster that will haunt the United States for decades. Indeed, doomsday scenarios and war-gaming “what ifs” have become almost a cottage industry. Fortunately, those earlier failures in the primaries provide a road map to a fair, secure, and accessible election—if U.S. officials are willing to learn and act quickly.

What happens in the 2020 U.S. election will reverberate all over the world, and not just for the usual reasons. Every democracy will face the challenge of organizing and ensuring safe and secure elections. People in most countries still vote in person by paper ballot, but like the United States, many countries are aggressively expanding options for how their citizens can vote.

Compared with the rest of the world, though, the United States stands apart in one fundamental aspect:  No single federal law governs elections. The 50 states and the District of Columbia each conduct their own elections under their own laws and regulations. The elections themselves are actually administered by 3,143 counties and equivalents within those states, which differ in resources, training, ballots, and interpretation of regulations. In Montana, for example, counties can automatically mail ballots to registered voters but are not required to. 

A similar diversity applies to the actual voting technology. In 2016, half of registered voters in the United States lived in areas that only optically scan paper ballots; a quarter lived in areas with direct-recording electronic (DRE) equipment, which only creates an electronic vote; and the remainder lived in areas that use both types of systems or where residents vote entirely by mail, using paper ballots that are optically scanned. Over 1,800 small counties still collectively counted a million paper ballots by hand. 

The failures during the primaries grew from a familiar litany: poor organization, untried technology, inadequate public information, and an inability to scale quickly. Counties that had the worst problems were usually ones that had introduced new voting software and hardware without adequately testing it, or without training operators and properly educating users.

There were some early warnings of trouble ahead. In February, with COVID not yet an issue in the United States, the Iowa Democratic caucus was thrown into chaos when the inadequately vetted IowaReporter smartphone app by Shadow, which was used to tabulate votes, broke down. The failure was compounded by malicious jamming of party telephone lines by American trolls. It took weeks to report final results. Georgia introduced new DRE equipment with inadequate training and in some cases, the wrong proportion of voting equipment to processing equipment, which delayed tabulation of the results.  

As fear of COVID spread, many states scaled up their absentee-voting options and reduced the number of polling places. In practice, “absentee voting” refers to the use of a paper ballot that is mailed to a voter, who fills it out and then returns it.  A few states like Kentucky, with good coordination between elected leaders and election administrators, executed smooth mail-in primaries earlier this year. More common, however, were failures of the U.S. Post Office and many state- and local-election offices to handle a surge of ballots.  In New York, some races remained undecided three weeks after its primary because of slow receipt and counting of ballots. Election officials in 23 states rejected over 534,000 primary mail-in ballots, compared with 319,000 mail-in ballots for the 2016 general election. 

The post office, along with procrastinating voters, has emerged as a critical failure node for absentee voting. Virginia rejected an astonishing 6 percent of ballots for lateness, compared with half of a percent for Michigan. Incidentally, these cases of citizens losing their vote due to known difficulties with absentee ballots far, far outweigh the incidences of voter fraud connected to absentee voting, contrary to the claims of certain politicians. 

At this point, a technologically savvy person could be forgiven for wondering, why can’t we vote over the Internet? The Internet has been in widespread public use in developed countries for more than a quarter century. It’s been more than 40 years since the introduction of the personal computer, and about 20 since the first smartphones came out. And yet we still have no easy way to use these nearly ubiquitous tools to vote.

A subset of technologists has long dreamed of Internet voting, via an app. But in most of the world, it remains just that: a dream. The main exception is Estonia, where Internet voting experiments began 20 years ago. The practice is now mainstream there—in the country’s 2019 parliamentary elections, nearly 44 percent of voters voted over the Internet without any problems. In the United States, over the past few years, some 55 elections have used app-based Internet voting as an option for absentee voting. However, that’s a very tiny percentage of the thousands of elections conducted by municipalities during that period. Despite Estonia’s favorable experiences with what it calls “i-voting,” in much of the rest of the world concerns about security, privacy, and transparency have kept voting over the Internet in the realm of science fiction. 

The rise of blockchain, a software-based system for guaranteeing the validity of a chain of transactions, sparked new hopes for Internet voting. West Virginia experimented with blockchain absentee voting in 2018, but election technology experts worried about possible vulnerabilities in recording, counting, and storing an auditable vote without violating the voter’s privacy.  The lack of transparency by the system provider, Voatz, did not dispel these worries. After a report from MIT’s Internet Policy Research Initiative reinforced those concerns, West Virginia canceled plans to use blockchain voting in this year’s primary.    

We can argue all we want about the promise and perils of Internet voting, but it won’t change the fact that this option won’t be available for this November’s general election in the United States. So officials will have to stick with tried-and-true absentee-voting techniques, improving them to avoid the fiascoes of the recent past. Fortunately, this shouldn’t be hard. Think of shoring up this election as an exercise involving flow management, human-factors engineering, and minimizing risk in a hostile (political) environment—one with a low signal-to-noise ratio. 

This coming November 3 will see a record voter turnout in the United States, an unprecedented proportion of which will be voting early and by mail, all during an ongoing pandemic in an intensely partisan political landscape with domestic and foreign actors trying to disrupt or discredit the election. To cope with such numbers, we’ll need to “flatten the curve.” A smoothly flowing election will require encouraging as many people as possible to vote in the days and weeks before Election Day and changing election procedures and rules to accommodate those early votes.  

That tactic will of course create a new challenge: handling the tens of millions of people voting by mail in a major acceleration of the U.S. trend of voting before Election Day. Historically, U.S. voters could cast an absentee ballot by mail only if they were out of state or had another state-approved excuse. But in 2000, Oregon pioneered the practice of voting by mail exclusively. There is no longer any in-person voting in Oregon—and, it is worth noting, Oregon never experienced any increases in fraud as it transitioned to voting by mail.

Overall, mail-in voting in U.S. presidential elections doubled from 12 percent (14 million) of all votes in 2004 to 24 percent (33 million) votes cast in 2016. Those numbers, however, hide great diversity: Ninety-seven percent of voters in the state of Washington but only 2 percent of West Virginians voted by mail in 2016. Early voting–in person at a polling station open before Election Day–also expanded from 8 percent to 17 percent of all votes during that same 12-year period, 2004 to 2016. 

Today, absentee voting and vote-by-mail are essentially equivalent as more states relax restrictions on mail-in voting. In 2020, five more states–Washington, Colorado, Utah, Hawaii, and California–will join Oregon to vote by mail exclusively. A COVID-induced relaxation of absentee-ballot rules means that over 190 million Americans, not quite two-thirds of the total population, will have a straightforward vote-by-mail option this fall. 

Whether they will be able to do so confidently and successfully is another question. The main concern with voting by mail is rejected ballots. The overall rejection rate for ballots at traditional, in-person voting places in the United States is 0.01 percent. Compare that with a 1 percent rejection rate for mail-in ballots in Florida in 2018 and a deeply dismaying 6 percent rate for Virginia in its 2020 primary.

Nearly all of the Virginia ballots were rejected because they arrived late, reflecting the lack of experience for many voting by mail for the first time. Forgetting to sign a ballot was another common reason for rejection. But votes were also refused because of regulations that some might deem overly strict—a tear in an envelope is enough to get a mail-in vote nixed in some districts. These verification procedures are less forgiving of errors, and first-time voters, especially ethnic and racial minorities, have their ballots rejected more frequently. Post office delivery failures also contributed.    

We already know how to deal with all this and thereby minimize ballot rejection. States could automatically send ballots to registered voters weeks before the actual election date. Voters could fill out their ballot, seal it in an envelope, and place that envelope inside a larger envelope, which they would sign. A bar code on that outside envelope would allow the voter and election administrators to track its location. It is vitally important for voters to have feedback that confirms their vote has been received and counted. 

This ballot could be mailed, deposited in a secure dropbox, or returned in person to the local election office for processing. In 2016, more than half the voters in vote-by-mail states returned their ballots, not by mail but by secure drop-off boxes or by visiting their local election offices. 

The signature on the outer envelope would be verified against a signature on file, either from a driver’s license or on a voting app, to guard against fraud. If the signature appeared odd or if some other problem threatened the ballot’s rejection, the election office would contact the voter by text, email, or phone to sort out the problem. The voter could text a new signature, for example.  Once verified, the ballots could be promptly counted, either before or on Election Day.
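A rough sketch of that verification flow, under the process described above (barcode tracking, a signature on file, and a chance for the voter to fix problems), might look like the following; the status names and helper functions are hypothetical, not any state's actual system.

```python
# Illustrative sketch of the mail-in ballot workflow described above.
# Statuses, data fields, and helper functions are hypothetical.
from dataclasses import dataclass

@dataclass
class Ballot:
    barcode: str
    signature_image: bytes
    status: str = "mailed"   # mailed -> received -> verified/needs_cure -> counted

def process_returned_ballot(ballot, signature_on_file, signatures_match, notify_voter):
    """Advance a returned ballot through verification, giving the voter a chance to fix problems."""
    ballot.status = "received"                      # visible to the voter via the barcode tracker
    if signatures_match(ballot.signature_image, signature_on_file):
        ballot.status = "verified"
    else:
        ballot.status = "needs_cure"
        notify_voter(ballot.barcode, "Signature mismatch; please confirm by text, email, or phone.")
    return ballot.status

def count_if_verified(ballot, tally):
    """Count only ballots that cleared verification; others wait for the cure process."""
    if ballot.status == "verified":
        tally += 1
        ballot.status = "counted"
    return tally
```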

The problem with these best practices is that they are not universal. A few states, including Arizona, already employ such procedures and enjoy very low rates of rejected mail-in ballots: Maricopa County, Ariz. had a mail-in ballot rejection rate of just 0.03 percent in 2018, roughly on a par with in-person voting. Most states, however, lack these procedures and infrastructure: Only 13 percent of mail-in ballots this primary season had bar codes.

The ballots themselves could stand some better human-factors engineering. Too often, it is too challenging to correctly fill out a ballot or even an application for a ballot. In 2000, a poorly designed ballot in Florida’s Palm Beach County may have deprived Al Gore of Florida’s 25 electoral votes, and therefore the presidency. And in Travis County, Texas, a complex, poorly designed application to vote by mail was incorrectly filled out by more than 4,500 voters earlier this year. Their applications rejected, they had to choose on Election Day between not voting or going to the polls and risking infection. And yet help is readily available: groups like the Center for Civic Design can provide best practices.

Training on signature verification also widely varies within and among states. Only 20 states now require that election officials give voters an opportunity to correct a disqualified mail-in ballot. 

Timely processing is the final mail-in challenge. Eleven states do not start processing absentee ballots until Election Day, three start the day after, and three start the day before. In a prepandemic election, mail-in ballots made up a smaller share of all votes, so the extra time needed for processing and counting was relatively minor. Now with mail-in ballots potentially making up over half of all votes in the United States, the time needed to process and count ballots may delay results for days or weeks. In the current political climate of suspicion and hyper-partisanship, that could be disastrous—unless people are informed about it and expecting it.

The COVID-19 pandemic is strongly accelerating a trend toward absentee voting that began a couple of decades ago. Contrary to what many people were anticipating five or 10 years ago, though, most of the world is not moving toward Internet voting but rather to a more advanced version of what they’re using already. That Estonia has done so well so far with i-voting offers a tantalizing glimpse of a possible future. For 99.998 percent of the world’s population, though, the paper ballot will reign for the foreseeable future. Fortunately, major technological improvements envisioned over the next decade will increase the security, reliability, and speedy processing of paper ballots, whether they’re cast in person or by mail. 

Jonathan Coopersmith is a Professor at Texas A&M University, where he teaches the history of technology.  He is the author of FAXED: The Rise and Fall of the Fax Machine (Johns Hopkins University Press, 2015).  His current interests focus on the importance of froth, fraud, and fear in emerging technologies.  For the last decade, he has voted early and in person. 

Apptricity Beams Bluetooth Signals Over 30 Kilometers

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/wireless/apptricity-beams-bluetooth-signals-over-20-miles

When you think about Bluetooth, you probably think about things like wireless headphones, computer mice, and other personal devices that utilize the short-range, low-power technology. That’s where Bluetooth has made its mark, after all—as an alternative to Wi-Fi, using unlicensed spectrum to make quick connections between devices.

But it turns out that Bluetooth can go much farther than the couple of meters for which most people rely on it. Apptricity, a company that provides asset and inventory tracking technologies, has developed a Bluetooth beacon that can transmit signals over 32 kilometers (20 miles). The company believes its beacon is a cheaper, secure alternative to established asset and inventory tracking technologies.

A quick primer, if you’re not entirely clear on asset tracking versus inventory tracking: There are some gray areas in the middle, but by and large, “asset tracking” refers to an IT department registering which employee has which laptop, or a construction company keeping tabs on where its backhoes are on a large construction site. Inventory tracking refers more to things like a retail store keeping correct product counts on the shelves, or a hospital noting how quickly it’s going through its store of gloves.

Asset and inventory tracking typically use labor-intensive techniques like barcode or passive RFID scanning, which are limited both by distance (a couple of meters at most, in both cases) and the fact that a person has to be directly involved in scanning. Alternatively, companies can use satellite or LTE tags to keep track of stuff. While such tags don’t require a person to actively track items, they are far more expensive, requiring a costly subscription to either a satellite or LTE network.

So, the burning question: How does one send a Bluetooth signal over 30-plus kilometers? Typically, Bluetooth’s distance is limited because large distances would require a prohibitive amount of power, and its use of unlicensed spectrum means that the greater the distance, the more likely it will interfere with other wireless signals.

The key new wrinkle, according to Apptricity’s CEO Tim Garcia, is precise tuning within the Bluetooth spectrum. Garcia says it’s the same principle as a tightly-focused laser beam. A laser beam will travel farther without its signal weakening beyond recovery if the photons making up the beam are all as close to a specific frequency as possible. Apptricity’s Bluetooth beacons use firmware developed by the company to achieve such precise tuning, but with Bluetooth signals instead of photons. Thus, data can be sent and received by the beacons without interfering and without requiring unwieldy amounts of power.

Garcia says RFID tags and barcode scanning don’t actively provide information about assets or inventory. Bluetooth, however, can not only pinpoint where something is, it can send updates about a piece of equipment that needs maintenance or just requires a routine check-up.

By its own estimation, Apptricity’s Bluetooth beacons are 90 percent cheaper than LTE or satellite tags, specifically because Bluetooth devices don’t require paying for a subscription to an established network.

The company’s current transmission distance record for its Bluetooth beacons is 38 kilometers (23.6 miles). The company has also demonstrated non-commercial versions of the beacons for the U.S. Department of Defense with broadcast ranges between 80 and 120 kilometers.

100 Million Zoom Sessions Over a Single Optical Fiber

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/single-optical-fibers-100-million-zoom


A team at the Optical Networks Group at University College London has sent 178 terabits per second through a commercial singlemode optical fiber that has been on the market since 2007. It’s a record for the standard singlemode fiber widely used in today’s networks, and twice the data rate of any system now in use. The key to their success was transmitting across a spectral range of 16.8 terahertz, more than double the broadest range in commercial use.

The goal is to expand the capacity of today’s installed fiber network to serve the relentless demand for more bandwidth for Zoom meetings, streaming video, and cloud computing. Digging holes in the ground to lay new fiber-optic cables can run over $500,000 a kilometer in metropolitan areas, so upgrading transmission of fibers already in the ground by installing new optical transmitters, amplifiers, and receivers could save serious money. But it will require a new generation of optoelectronic technology.

A new generation of fibers that promise higher capacity by carrying signals on multiple paths through a single fiber has been in development for the past few years. Called spatial division multiplexing, the idea has been demonstrated in fibers with multiple cores, in fibers with multiple modes through individual cores, and in fibers combining the two. The approach has set capacity records for single fibers, but the technology is immature and would require the expensive laying of new fibers. Boosting the capacity of fibers already in the ground would be faster and cheaper. Moreover, many installed fibers remain dark, carrying no traffic, or transmit on only a few of the roughly 100 available wavelengths, making them a hot commodity for data networks.

“The fundamental issue is how much bandwidth we can get” through installed fibers, says Lidia Galdino, a University College London lecturer who leads a team that includes engineers from equipment maker Xtera and Japanese telecom firm KDDI. For a baseline, they tested Corning Inc.’s SMF-28 ULL (ultra-low-loss) fiber, which has been on the market since 2007. With a pure silica core, its attenuation is specified at no more than 0.17 dB/km at the 1550-nanometer minimum-loss wavelength, close to the theoretical limit. It can carry 100-gigabit-per-second signals more than a thousand kilometers through a series of amplifiers spaced every 125 km.

Generally, such long-haul fiber systems operate in the C band of wavelengths from 1530 to 1565 nm. A few also operate in the L band from 1568 to 1605 nm, most notably the world’s highest-capacity submarine cable, the 13,000-km Pacific Light Cable, with nominal capacity at 24,000 gigabits per second on each of six fiber pairs. Both bands use well-developed erbium-doped fiber amplifiers, but that’s about the limit of their spectral range.

To cover a broader spectral range, UCL added the largely unused wavelengths of 1484 to 1520 nm in the shorter-wavelength S band. That required new amplifiers that used thulium to amplify those wavelengths. Because only two thulium amplifiers were available, they also added Raman-effect fiber amplifiers to balance gain across that band. They also used inexpensive semiconductor optical amplifiers to boost signals reaching the receiver after passing through 40 km of fiber. 

Another key to success is the modulation format. “We encoded the light in the best possible way,” using a geometrically shaped quadrature amplitude modulation (QAM) format to take advantage of differences in signal quality between bands. “Usually commercial systems use 64 points, but we went to 1024 [QAM levels]…an amazing achievement” for the best-quality signals, Galdino said.
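As a quick check on what those constellation sizes mean: the number of bits carried per symbol is the base-2 logarithm of the number of constellation points, so moving from 64 to 1024 points raises the raw payload from 6 to 10 bits per symbol, before any coding overhead. A minimal calculation:

```python
import math

for points in (64, 1024):
    bits_per_symbol = math.log2(points)
    print(f"{points}-QAM carries {bits_per_symbol:.0f} bits per symbol")
# 64-QAM carries 6 bits per symbol
# 1024-QAM carries 10 bits per symbol
```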

This experiment, reported in IEEE Photonics Technology Letters, is only the first in a planned series. Their results are close to the Shannon limit on communication rates imposed by noise in the channel. The next step, she says, will be buying more optical amplifiers so they can extend transmission beyond 40 km.

“This is fundamental research on the maximum capacity per channel,” Galdino says. The goal is to find limits, rather than to design new equipment. Their complex system used more than US$2.6 million of equipment, including multiple types of amplifiers and modulation schemes. It’s a testbed, not optimized for cost, performance, or reliability, but for experimental flexibility. Industry will face the challenge of developing detectors, receivers, amplifiers and high-quality lasers on new wavelengths, which they have already started. If they succeed, a single fiber pair will be able to carry enough video for all 50 million school-age children in the US to be on two Zoom video channels at once.
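That closing claim, and the experiment's overall spectral efficiency, can be checked with back-of-the-envelope arithmetic; the per-stream figure below assumes roughly what a single Zoom-quality video feed consumes, which is an assumption for illustration rather than a number from the paper.

```python
total_bps = 178e12          # 178 terabits per second, as reported
bandwidth_hz = 16.8e12      # 16.8 terahertz of optical spectrum used

spectral_efficiency = total_bps / bandwidth_hz
print(f"Spectral efficiency: {spectral_efficiency:.1f} bits/s/Hz")   # ~10.6 bits/s/Hz

streams = 50e6 * 2          # 50 million students, two video channels each
per_stream_bps = total_bps / streams
print(f"Per-stream budget: {per_stream_bps / 1e6:.2f} Mb/s")         # ~1.78 Mb/s, roughly one video feed
```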

Unlock Wireless Test Capabilities On Your RF Gear

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/whitepaper/unlock_wireless_test_capabilities_on_your_rf_gear

Discover how easy it is to update your instrument to test the latest wireless standards.

We’re offering 30-day software trials that evolve test capabilities on your signal analyzers and signal generators. Automatically generate or analyze signals for many wireless applications.

Choose from our more popular applications:

  • Bluetooth®
  • WLAN 802.11
  • Vector Modulation Analysis
  • And more

Predicting the Lifespan of an App

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/internet/predicting-the-lifespan-of-an-app

The number of apps smartphone users have to choose from is daunting, with roughly 2 million available through Apple’s App Store alone. But survival of the fittest applies to the digital world too, and not all of these apps will go on to become the next TikTok. In a study published 29 July in IEEE Transactions on Mobile Computing, researchers describe a new model for predicting the long-term survival of apps, which outperforms seven existing designs.

“For app developers, understanding and tracking the popularity of an app is helpful for them to act in advance to prevent or alleviate the potential risks caused by the dying apps,” says Bin Guo, a professor at Northwestern Polytechnical University who helped develop the new model.

“Furthermore, the prediction of app life cycle is crucial for the decision-making of investors. It helps evaluate and assess whether the app is promising for the investors with remarkable rewards, and provides in advance warning to avoid investment failures.”

In developing their new model, AppLife, Guo’s team took a Multi-Task Learning (MTL) approach. This involves dividing data on apps into segments based on time, and analyzing factors – such as download history, ratings, and reviews – at each time interval. AppLife then predicts the likelihood of an app being removed within the next one or two years.
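The paper's exact architecture isn't reproduced here, but the general shape of the approach (slice an app's history into time windows, build features per window, and train one predictor per horizon) can be sketched as follows; the feature choices, window count, and the use of two independent logistic-regression models in place of true multi-task learning are simplifying assumptions, not AppLife itself.

```python
# Simplified sketch of time-windowed app-survival prediction; not the AppLife model itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

def windowed_features(downloads, ratings, n_windows=4):
    """Split an app's history into equal time windows and summarize each one."""
    feats = []
    for d, r in zip(np.array_split(downloads, n_windows), np.array_split(ratings, n_windows)):
        feats.extend([
            d.sum(),
            d.mean() if len(d) else 0.0,
            r.mean() if len(r) else 0.0,
        ])
    return np.array(feats)

def train_horizon_models(histories, removed_within_1yr, removed_within_2yr):
    """One classifier per prediction horizon, a crude stand-in for multi-task learning."""
    X = np.vstack([windowed_features(d, r) for d, r in histories])
    return {
        "1yr": LogisticRegression(max_iter=1000).fit(X, removed_within_1yr),
        "2yr": LogisticRegression(max_iter=1000).fit(X, removed_within_2yr),
    }

def predict_removal(models, downloads, ratings):
    """Return the estimated probability of removal within each horizon."""
    x = windowed_features(downloads, ratings).reshape(1, -1)
    return {h: float(m.predict_proba(x)[0, 1]) for h, m in models.items()}
```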

The researchers evaluated AppLife using a real-world dataset with more than 35,000 apps from the Apple Store that were available in 2016, but had been released the previous year. “Experiments show that our approach outperforms seven state-of-the-art methods in app survival prediction. Moreover, the precision and the recall reach up to 84.7% and 95.1%, respectively,” says Guo.

Intriguingly, AppLife was particularly good at predicting the survival of apps for tools—even more so than apps for news and video. Guo says this could be because more apps for tools exist in the dataset, feeding the model with more data to improve its performance in this respect. Or, he says, it could be caused by greater competition among tool apps, which in turn leads to more detailed and consistent user feedback.

Moving forward, Guo says he plans on building upon this work. While AppLife currently looks at factors related to individual apps, Guo is interested in exploring interactions among apps, for example which ones complement each other. Analyzing the usage logs of apps is another area of interest, he says.

For the IoT, User Anonymity Shouldn’t Be an Afterthought. It Should Be Baked In From the Start

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/security/for-the-iot-user-anonymity-shouldnt-be-an-afterthought-it-should-be-baked-in-from-the-start

The Internet of Things has the potential to usher in many possibilities—including a surveillance state. In the July issue, I wrote about how user consent is an important prerequisite for companies building connected devices. But there are other ways companies are trying to ensure that connected devices don’t invade people’s privacy.

Some IoT businesses are designing their products from the start to discard any personally identifiable information. Andrew Farah, the CEO of Density, which developed a people-counting sensor for commercial buildings, calls this “anonymity by design.” He says that rather than anonymizing a person’s data after the fact, the goal is to design products that make it impossible for the device maker to identify people in the first place.

“When you rely on anonymizing your data, then you’re only as good as your data governance,” Farah says. With anonymity by design, you can’t give up personally identifiable information, because you don’t have it. Density, located in Macon, Ga., settled on a design that uses four depth-perceiving sensors to count people by using height differentials.
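A toy version of that "anonymity by design" idea, counting people from an overhead depth map alone so that nothing resembling an image is ever stored, might look like the sketch below. The mounting height, height threshold, and blob-size cutoff are assumptions for illustration, not Density's algorithm.

```python
# Toy people-counter from an overhead depth frame; illustrative only, not Density's algorithm.
import numpy as np
from scipy import ndimage

SENSOR_HEIGHT_M = 2.6      # assumed mounting height above the floor
MIN_PERSON_HEIGHT_M = 1.2  # anything shorter is ignored (carts, chairs, pets)
MIN_BLOB_PIXELS = 30       # reject specks of noise

def count_people(depth_frame_m):
    """Count head-and-shoulders blobs in a single depth frame, then discard the frame."""
    heights = SENSOR_HEIGHT_M - depth_frame_m           # convert depth to height above floor
    mask = heights > MIN_PERSON_HEIGHT_M                # keep only person-sized elevations
    labels, n = ndimage.label(mask)                     # group connected pixels into blobs
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # blob areas in pixels
    return int(np.sum(sizes >= MIN_BLOB_PIXELS))        # count only plausible people

# Only the count leaves the device; the depth frame itself is never retained.
```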

Density could have chosen to use a camera to easily track the number of people in a building, but Farah balked at the idea of creating a surveillance network. Taj Manku, the CEO of Cognitive Systems, was similarly concerned about the possibilities of his company’s technology. Cognitive, in Waterloo, Ont., Canada, developed software that interprets Wi-Fi signal disruptions in a room to understand people’s movements.

With the right algorithm, the company’s software could tell when someone is sleeping or going to the bathroom or getting a midnight snack. I think it’s natural to worry about what happens if a company could pull granular data about people’s behavior patterns.

Manku is worried about information gathered after the fact, like if police issued a subpoena for Wi-Fi disruption data that could reveal a person’s actions in their home. Cognitive does data processing on the device and then dumps that data. Nothing identifiable is sent to the cloud. Likewise, customers who buy Cognitive’s software can’t access the data on their devices, just the insight. In other words, the software would register a fall, without including a person’s earlier actions.
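The pattern Manku describes, processing the raw signal on the device and letting only a coarse label leave it, can be sketched roughly as follows; the event names and the classify_motion and publish_event functions are placeholders, not Cognitive's software.

```python
# Sketch of on-device "insight only" reporting; event names and classifier are placeholders.
def handle_wifi_window(csi_window, classify_motion, publish_event):
    """Turn a window of raw Wi-Fi channel measurements into a coarse event, then discard the raw data."""
    event = classify_motion(csi_window)   # e.g. "fall_detected", "motion", "no_motion"
    if event == "fall_detected":
        publish_event({"event": event})   # only the label is sent; no raw signal, no history
    del csi_window                        # raw measurements never leave the device
    return event
```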

“You have to start thinking about it from day one when you’re architecting the product, because it’s very hard to think about it after,” Manku says. It’s difficult to shut things down retroactively to protect privacy. It’s best if sensitive information stays local and gets purged.

Companies that promote anonymity will lose helpful troves of data. These could be used to train future machine-learning models in order to optimize their devices’ performance. Cognitive gets around this limitation by having a set of employees and friends volunteer their data for training. Other companies decide they don’t want to get into the analytics market or take a more arduous route to acquire training data for improving their devices.

If nothing else, companies should embrace anonymity by design in light of the growing amount of comprehensive privacy legislation around the world, like the General Data Protection Regulation in Europe and the California Consumer Privacy Act. Not only will it save them from lapses in their data-governance policies, it will guarantee that when governments come knocking for surveillance data, these businesses can turn them away easily. After all, you can’t give away something you never had.

This article appears in the September 2020 print issue as “Anonymous by Design.”

Building Your Own Cellphone Network Can Be Empowering, and Also Problematic

Post Syndicated from Roberto J. González original https://spectrum.ieee.org/telecom/wireless/building-your-own-cellphone-network-can-be-empowering-and-problematic

During the summer of 2013, news reports recounted how a Mexican pueblo launched its own do-it-yourself cellphone network. The people of Talea de Castro, population 2,400, created a mini-telecom company, the first of its kind in the world, without help from the government or private companies. They built it after Mexico’s telecommunications giants failed to provide mobile service to the people in the region. It was a fascinating David and Goliath story that pitted the country’s largest corporation against indigenous villagers, many of whom were Zapotec-speaking subsistence farmers with little formal education.

The reports made a deep impression on me. In the 1990s, I’d spent more than two years in Talea, located in the mountains of Oaxaca in southern Mexico, working as a cultural anthropologist.

Reading about Talea’s cellular network inspired me to learn about the changes that had swept the pueblo since my last visit. How was it that, for a brief time, the villagers became international celebrities, profiled by USA Today, BBC News, Wired, and many other media outlets? I wanted to find out how the people of this community managed to wire themselves into the 21st century in such an audacious and dramatic way—how, despite their geographic remoteness, they became connected to the rest of the world through the wondrous but unpredictably powerful magic of mobile technology.

I naively thought it would be the Mexican version of a familiar plot line that has driven many Hollywood films: Small-town men and women fearlessly take on big, bad company. Townspeople undergo trials, tribulations, and then…triumph! And they all lived happily ever after.

I discovered, though, that Talea’s cellphone adventure was much more complicated—neither fairy tale nor cautionary tale. And this latest phase in the pueblo’s centuries-long effort to expand its connectedness to the outside world is still very much a work in progress.

Sometimes it’s easy to forget that 2.5 billion people—one-third of the planet’s population—don’t have cellphones. Many of those living in remote regions want mobile service, but they can’t have it for reasons that have to do with geography, politics, or economics.

In the early 2000s, a number of Taleans began purchasing cellphones because they traveled frequently to Mexico City or other urban areas to visit relatives or to conduct business. Villagers began petitioning Telcel and Movistar, Mexico’s largest and second-largest wireless operators, for cellular service, and they became increasingly frustrated by the companies’ refusal to consider their requests. Representatives from the firms claimed that it was too expensive to build a mobile network in remote locations—this, in spite of the fact that Telcel’s parent company, América Móvil, is Mexico’s wealthiest corporation. The board of directors is dominated by the Slim family, whose patriarch, Carlos Slim, is among the world’s richest men.

But in November 2011, an opportunity arose when Talea hosted a three-day conference on indigenous media. Kendra Rodríguez and Abrám Fernández, a married couple who worked at Talea’s community radio station, were among those who attended. Rodríguez is a vivacious and articulate young woman who exudes self-confidence. Fernández, a powerfully built man in his mid-thirties, chooses his words carefully. Rodríguez and Fernández are tech savvy and active on social media. (Note: These names are pseudonyms.)

Also present were Peter Bloom, a U.S.-born rural-development expert, and Erick Huerta, a Mexican lawyer specializing in telecommunications policy. Bloom had done human rights work in Nigeria in rural areas where “it wasn’t possible to use existing networks because of security concerns and lack of finances,” he explains. “So we began to experiment with software that would enable communication between phones without relying upon any commercial companies.” Huerta had worked with the United Nations’ International Telecommunication Union.

The two men founded Rhizomatica, a nonprofit focused on expanding cellphone service in indigenous areas around the world. At the November 2011 conference, they approached Rodríguez and Fernández about creating a community cell network in Talea, and the couple enthusiastically agreed to help.

In 2012, Rhizomatica assembled a ragtag team of engineers and hackers to work on the project. They decided to experiment with open-source software called OpenBTS—Open Base Transceiver Station—which allows cell phones to communicate with each other if they are within range of a base station. Developed by David Burgess and Harvind Samra, cofounders of the San Francisco Bay Area startup Range Networks, the software also allows networked phones to connect over the Internet using VoIP, or Voice over Internet Protocol. OpenBTS had been successfully deployed at Burning Man and on Niue, a tiny Polynesian island nation.

Bloom’s team began adapting OpenBTS for Talea. The system requires electrical power and an Internet connection to work. Fortunately, Talea had enjoyed reliable electricity for nearly 40 years and had gotten Internet service in the early 2000s.

In the meantime, Huerta pulled off a policy coup: He convinced federal regulators that indigenous communities had a legal right to build their own networks. Although the government had granted telecom companies access to nearly the entire radio-frequency spectrum, small portions remained unoccupied. Bloom and his team decided to insert Talea’s network into these unused bands. They would become digital squatters.

As these developments were under way, Rodríguez and Fernández were hard at work informing Taleans about the opportunity that lay before them. Rodríguez used community radio to pique interest among her fellow villagers. Fernández spoke at town hall meetings and coordinated public information sessions at the municipal palace. Soon they had support from hundreds of villagers eager to get connected. Eventually, the villagers voted to invest 400,000 pesos (approximately US $30,000) of municipal funding for equipment to build the cellphone network. By doing so, the village assumed a majority stake in the venture.

In March 2013, the cellphone network was finally ready for a trial run. Volunteers placed an antenna near the town center, in a location they hoped would provide adequate coverage of the mountainside pueblo. As they booted up the system, they looked for the familiar signal bars on their cellphone screens, hoping for a strong reception.

The bars lit up brightly—success!

The team then walked through the town’s cobblestone streets in order to detect areas with poor reception. As they strolled up and down the paths, they heard laughter and shouting from several houses. People emerged from their homes, stunned.

“We’ve got service!” exclaimed a woman in disbelief.

“It’s true, I called the state capital!” cried a neighbor.

“I just connected to my son in Guadalajara!” shouted another.

Although Rodríguez, Fernández, and Bloom knew that many Taleans had acquired cellphones over the years, they didn’t realize the extent to which the devices were being used every day—not for communications but for listening to music, taking photos and videos, and playing games. Back at the base station, the network computer registered more than 400 cellphones within the antenna’s reach. People were already making calls, and the number was increasing by the minute as word spread throughout town. The team shut down the system to avoid an overload.

As with any experimental technology, there were glitches. Outlying houses had weak reception. Inclement weather affected service, as did problems with Internet connectivity. Over the next six months, Bloom and his team returned to Talea every week, making the 4-hour trip from Oaxaca City in a red Volkswagen Beetle.

Their efforts paid off. The community network proved to be immensely popular, and more than 700 people quickly subscribed. The service was remarkably inexpensive: Local calls and text messages were free, and calls to the United States were only 1.5 cents per minute, approximately one-tenth of what they cost using landlines. For example, a 20-minute call to the United States from the town’s phone kiosk might cost a campesino a full day’s wages.

Initially, the network could handle only 11 phone calls at a time, so villagers voted to impose a 5-minute limit to avoid overloads. Even with these controls, many viewed the network as a resounding success. And patterns of everyday life began to change, practically overnight: Campesinos who walked 2 hours to get to their corn fields could now phone family members if they needed provisions. Elderly women collecting firewood in the forest had a way to call for help in case of injury. Youngsters could send messages to one another without being monitored by their parents, teachers, or peers. And the pueblo’s newest company—a privately owned fleet of three-wheeled mototaxis—increased its business dramatically as prospective passengers used their phones to summon drivers.

Soon other villages in the region followed Talea’s lead and built their own cellphone networks—first, in nearby Zapotec-speaking pueblos and later, in adjacent communities where Mixe and Mixtec languages are spoken. By late 2015, the villages had formed a cooperative, Telecomunicaciones Indígenas Comunitarias, to help organize the villages’ autonomous networks. Today TIC serves more than 4,000 people in 70 pueblos.

But some in Talea de Castro began asking questions about management and maintenance of the community network. Who could be trusted with operating this critical innovation upon which villagers had quickly come to depend? What forms of oversight would be needed to safeguard such a vital part of the pueblo’s digital infrastructure? And how would villagers respond if outsiders tried to interfere with the fledgling network?

On a chilly morning in May 2014, Talea’s Monday tianguis, or open-air market, began early and energetically. Some merchants descended upon the mountain village by foot alongside cargo-laden mules. But most arrived on the beds of clattering pickup trucks. By daybreak, dozens of merchants were displaying an astonishing variety of merchandise: produce, fresh meat, live farm animals, eyeglasses, handmade huaraches, electrical appliances, machetes, knockoff Nike sneakers, DVDs, electronic gadgets, and countless other items.

Hundreds of people from neighboring villages streamed into town on this bright spring morning, and the plaza came alive. By midmorning, the pueblo’s restaurants and cantinas were overflowing. The tianguis provided a welcome opportunity to socialize over coffee or a shot of fiery mezcal.

As marketgoers bustled through the streets, a voice blared over the speakers of the village’s public address system, inviting people to attend a special event—the introduction of a new, commercial cellphone network. Curious onlookers congregated around the central plaza, drawn by the sounds of a brass band.

As music filled the air, two young men in polo shirts set up inflatable “sky dancers,” then assembled tables in front of the stately municipal palace. The youths draped the tables with bright blue tablecloths, each imprinted with the logo of Movistar, Mexico’s second-largest wireless provider. Six men then filed out of the municipal palace and sat at the tables, facing the large audience.

What followed was a ceremony that was part political rally, part corporate event, part advertisement—and part public humiliation. Four of the men were politicians affiliated with the authoritarian Institutional Revolutionary Party, which has dominated Oaxaca state politics for a century. They were accompanied by two Movistar representatives. A pair of young, attractive local women stood nearby, wearing tight jeans and Movistar T-shirts.

The politicians and company men had journeyed to the pueblo to inaugurate a new commercial cellphone service, called Franquicias Rurales (literally, “Rural Franchises”). As the music faded, one of the politicians launched a verbal attack on Talea’s community network, declaring it to be illegal and fraudulent. Then the Movistar representatives presented their alternative plan.

A cellphone antenna would soon be installed to provide villagers with Movistar cellphone service—years after a group of Taleans had unsuccessfully petitioned the company to do precisely that. Movistar would rent the antenna to investors, and the franchisees would sell service plans to local customers in turn. Profits would be divided between Movistar and the franchise owners.

The grand finale was a fiesta, where the visitors gave away cheap cellphones, T-shirts, and other swag as they enrolled villagers in the company’s mobile-phone plans.

The Movistar event came at a tough time for the pueblo’s network. From the beginning, it had experienced technical problems. The system’s VoIP technology required a stable Internet connection, which was not always available in Talea. As demand grew, the problems continued, even after a more powerful system manufactured by the Canadian company Nutaq/NuRAN Wireless was installed. Sometimes, too many users saturated the network. Whenever the electricity went out, which is not uncommon during the stormy summer months, technicians would have to painstakingly reboot the system.

In early 2014, the network encountered a different threat: opposition from the community. Popular support for the network was waning amid a growing perception that those in charge of maintaining it were mismanaging the enterprise. Soon, village officials demanded that the network repay the money granted by the municipal treasury. When the municipal government opted to become Talea’s Movistar franchise owner, it used these repaid funds to pay for the company’s antenna. The homegrown network lost more than half of its subscribers in the months following the company’s arrival. In March 2019, it shut down for good.

No one forced Taleans to turn their backs on the autonomous cell network that had brought them international fame. There were many reasons behind the pueblo’s turn to Movistar. Although the politicians who condemned the community network were partly responsible, the cachet of a globally recognized brand also helped Movistar succeed. The most widely viewed television events in the pueblo (as in much of the rest of the world) are World Cup soccer matches. As Movistar employees were signing Taleans up for service during the summer of 2014, the Mexican national soccer team could be seen on TV wearing jerseys that prominently featured the telecom company’s logo.

But perhaps most importantly, young villagers were drawn to Movistar because of a burning desire for mobile data services. Many teenagers from Talea attended high school in the state capital, Oaxaca City, where they became accustomed to the Internet and all that it offers.

Was Movistar’s goal to smash an incipient system of community-controlled communication, destroy a bold vision for an alternative future, and prove that villagers are incapable of managing their own affairs? It’s hard to say. The company was probably motivated more by long-term financial objectives: to increase market share by extending its reach into remote regions.

And yet, despite the demise of Talea’s homegrown network, the idea of community-based telecoms had taken hold. The pueblo had paved the way for significant legal and regulatory changes that made it possible to create similar networks in nearby communities, which continue to thrive.

The people of Talea have long been open to outside ideas and technologies, and have maintained a pragmatic attitude and a willingness to innovate. The same outlook that led villagers to enthusiastically embrace the foreign hackers from Rhizomatica also led them to give Movistar a chance to redeem itself after having ignored the pueblo’s earlier appeals. In this town, people greatly value convenience and communication. As I tell students at my university, located in the heart of Silicon Valley, villagers for many years have longed to be reliably and conveniently connected to each other—and to the rest of the world. In other words, Taleans want cellphones because they want to be like you.

This article is based on excerpts from Connected: How a Mexican Village Built Its Own Cell Phone Network (University of California Press, 2020).

About the Author

Roberto J. González is chair of the anthropology department at San José State University.

How to Improve Threat Detection and Hunting in the AWS Cloud Using the MITRE ATT&CK Matrix

Post Syndicated from IEEE Spectrum Recent Content full text original https://spectrum.ieee.org/webinar/how_to_improve_threat_detection_and_hunting_in_the_aws_cloud

SANS and AWS Marketplace will discuss the exercise of applying MITRE’s ATT&CK Matrix to the AWS Cloud. They will also explore how to enhance threat detection and hunting in an AWS environment to maintain a strong security posture.

5G Just Got Weird

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/standards/5g-release-16

The only reason you’re able to read this right now is because of the Internet standards created by the Internet Engineering Task Force. So while standards may not always be the most exciting thing in the world, they make exciting things possible. And occasionally, even the standards themselves get weird.

That’s the case with the recent 5G standards codified by the 3rd Generation Partnership Project (3GPP), the industry group that establishes the standards for cellular networks. 3GPP finalized Release 16 on July 3.

Release 16 is where things are getting weird for 5G. While earlier releases focused on the core of 5G as a generation of cellular service, Release 16 lays the groundwork for new services that have never been addressed by cellular before. At least, not in such a rigorous, comprehensive way.

“Release 15 really focused on the situation we’re familiar with,” says Danny Tseng, a director of technical marketing at Qualcomm, referring to cellular service. Release 16, he adds, “really broke the barrier” for connected robots, cars, factories, and dozens of other applications and scenarios.

4G and other earlier generations of cellular focused on just that: cellular. But when 3GPP members started gathering to hammer out what 5G could be, there was interest in developing a wireless system that could do more than connect phones. “The 5G vision has always been this unifying platform,” says Tseng. When developing 5G standards, researchers and engineers saw no reason that wireless cellular couldn’t also be used to connect anything wireless.

With that in mind, here’s an overview of what’s new and weird in Release 16. If you’d rather pore over the standards yourself, here you go. But if you’d rather not drown in page after page of technical details, don’t worry—keep reading for the cheat sheet.

Vehicle-to-Everything

One of the flashiest things in Release 16 is V2X, short for “Vehicle to Everything.” In other words, using 5G for cars to communicate with each other and everything else around them. Hanbyul Seo, an engineer at LG Electronics, says V2X technologies have previously been standardized in IEEE 802.11p and 3GPP LTE V2X, but that the intention in these cases was to enable basic safety services. Seo is one of the rapporteurs for 3GPP’s item on V2X, meaning he was responsible for reporting on the item’s progress to 3GPP.

In defining V2X for 5G, Seo says the most challenging thing was to provide high data throughput, high reliability, and low latency, all of which are essential for anything beyond the most basic communications. Seo explains that earlier standards typically deal with messages of hundreds of bytes that are expected to reach 90 percent of receivers in a 300-meter radius within a few hundred milliseconds. The 3GPP standards bring those benchmarks into the realm of gigabits per second, 99.999 percent reliability, and just a few milliseconds.

Sidelinking

Matthew Webb, a 3GPP delegate for Huawei and the other rapporteur for the 3GPP item on V2X, adds that Release 16 also introduces a new technique called sidelinking. Sidelinks will allow 5G-connected vehicles to communicate directly with one another, rather than going through a cell-tower intermediary. As you might imagine, that can make a big difference for cars speeding past each other on a highway as they alert each other about their positions.

Tseng says that sidelinking started as a component of the V2X work, but it can theoretically apply to any two devices that might need to communicate directly rather than go through a base station first. Factory robots are one example, or large-scale Internet of Things installations.

Location Services

Release 16 also includes information on location services. In past generations of cellular, three cell towers were required to triangulate where a phone was by measuring the round-trip distance of a signal from each tower. But 5G networks will be able to use the round-trip time from a single tower to locate a device. That’s because massive MIMO and beamforming allow 5G towers to send precise signals directly to devices, so the network can combine a beam’s direction and angle with the device’s distance from the tower to pinpoint it.
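
To make the geometry concrete, here is a minimal, hypothetical sketch of single-tower positioning: range comes from the round-trip time, and direction comes from the beam’s azimuth and elevation angles. The function name and numbers are illustrative assumptions, not anything defined in the 3GPP specifications.

```python
import math

C = 299_792_458.0  # speed of light, in meters per second

def locate_device(tower_xyz, rtt_seconds, azimuth_deg, elevation_deg):
    """Estimate a device's position from one tower's range and beam angles."""
    distance = C * rtt_seconds / 2.0        # one-way range from the round-trip time
    az = math.radians(azimuth_deg)          # beam direction in the horizontal plane
    el = math.radians(elevation_deg)        # beam tilt above (or below) the horizon
    dx = distance * math.cos(el) * math.cos(az)
    dy = distance * math.cos(el) * math.sin(az)
    dz = distance * math.sin(el)
    x0, y0, z0 = tower_xyz
    return (x0 + dx, y0 + dy, z0 + dz)

# Example: a device roughly 150 meters northeast of a 30-meter-tall tower, slightly below it.
print(locate_device((0.0, 0.0, 30.0), rtt_seconds=1e-6, azimuth_deg=45, elevation_deg=-5))
```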

Private Networks

Then there are private networks. When we think of cellular networks, we tend to think of wide networks that cover lots of ground so that you can always be sure you have a signal. But 5G incorporates millimeter waves, which are higher-frequency radio waves (30 to 300 GHz) that don’t travel nearly as far as traditional cell signals. Millimeter waves mean it will be possible to build a network just for an office building, factory, or stadium. At those scales, 5G could function essentially like a Wi-Fi network.

Unlicensed Spectrum

The last area to touch on is Release 16’s details on unlicensed spectrum. Jing Sun, an engineer at Qualcomm and the 3GPP’s rapporteur on the subject, says Release 16 is the first time unlicensed spectrum has been included in 5G cellular service. According to Sun, it made sense to expand 5G into unlicensed spectrum in the 5- and 6-GHz bands because those bands are widely available around the world and ideal for use now that 5G is pushing cellular service into higher frequency bands. Unlicensed spectrum could be key for private networks: Just like Wi-Fi, such networks could use that spectrum without having to go through the rigorous process of licensing a frequency band that may or may not be available.

Release 17 Will “Extend Reality”

Release 16 has introduced a lot of new areas for 5G service, but very few of these areas are finished. “The Release 17 scope was decided last December,” says Tseng. “We’ve got a pretty good idea of what’s in there.” In general, that means building on a lot of the blocks established in Release 16. For example, Release 17 will include more mechanisms by which devices—not just cars—can sidelink.

And it will include entirely new things as well. Release 17 includes a work item on extended reality—the catch-all term for augmented-reality and virtual-reality technologies. Tseng says there is also an item about NR-Lite, which will attempt to address current gaps in IoT coverage using 20 megahertz of bandwidth. Sun says Release 17 also includes a study item to explore the possibility of using frequencies in the 52 to 71 GHz range, far higher than anything used in cellular today.

Finalizing Release 17 will almost certainly be pushed back by the ongoing Covid-19 pandemic, which makes it difficult if not impossible for groups working on work items and study items to meet face-to-face. But it’s not stopping the work entirely. And when Release 17 is published, it’s certain to make 5G even weirder still.

How To Use Wi-Fi Networks To Ensure a Safe Return to Campus

Post Syndicated from Jan Dethlefs original https://spectrum.ieee.org/view-from-the-valley/telecom/wireless/want-to-return-to-campus-safely-tap-wifi-network

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

Universities, together with other operators of large spaces like shopping malls or airports, are currently facing a dilemma: how to return to business as fast as possible, while at the same time providing a safe environment?

Given that the behavior of individuals is the driving factor for viral spread, key to answering this question will be understanding and monitoring the dynamics of how people move and interact while on campus, whether indoors or outdoors.

Fortunately, universities already have the perfect tool in place: the campus-wide Wi-Fi network. These networks typically cover every indoor space and most outdoor spaces on campus, and users are already registered. All that needs to be added is data analytics to monitor on-campus safety.

We, a team of researchers from the University of Melbourne and the startup Nexulogy, have developed the necessary algorithms that, when fed data already gathered by campus Wi-Fi networks, can help keep universities safe. We have already tested these algorithms successfully at several campuses and other large yet contained environments.

To date, little attention has been paid to using Wi-Fi networks to track social distancing. Countries like Australia have rolled out smartphone apps to support contact tracing, typically using Bluetooth to determine proximity. A recent Google/Apple collaboration, also using Bluetooth, led to a decentralized protocol for contact monitoring.

Yet the success of these apps relies mainly on people voluntarily downloading them. A study by the University of Oxford estimated that more than 70 percent of smartphone users in the United Kingdom would have to install the app for it to be effective. But adoption is not happening at anything near that scale; the Australian COVIDSafe app, for example, released in April 2020, had been downloaded by only 6 million people as of mid-June 2020, or about 24 percent of the population.

Furthermore, this kind of Bluetooth-based tracking does not relate the contacts to a physical location, such as a classroom. This makes it hard to satisfy the requirements of running a safe campus. And data collected by the Bluetooth tracking apps is generally not readily available to the campus owners, so it doesn’t help make their own spaces safer.

Our Wi-Fi-based algorithms provide the least privacy-intrusive monitoring mechanisms thus far, because they use only anonymous device addresses; no individual user names are necessary to understand crowd densities and proximity. In the case of a student or campus staff member reporting a positive coronavirus test, the device addresses determined to belong to someone at risk can be passed on to authorities with the appropriate privacy clearance. Only then would names be matched to devices, with people at risk informed individually and privately.

Wi-Fi presents the best solution for universities for a few reasons: Wireless coverage is already campus wide; it is a safe assumption that everyone on campus is carrying at least one Wi-Fi capable device; and virtually everyone living and working on campus registers their devices to have internet access. Such tracking is possible without onerous user-facing app downloads.

Often, university executives already have the right to use the information collected by the wireless system, granted as part of its terms and conditions. In the midst of this pandemic, they now also have a legal, or at least a moral, obligation to use such data to the best of their ability to improve the safety and well-being of everyone on campus.

The process starts by collecting the time and symbolic location (also known as the network access point) of Wi-Fi capable devices when they are first detected by the Wi-Fi infrastructure—for example, when a student enters the campus’ Wi-Fi environment—and then at regular intervals or when they change locations. Then, after consolidating any multiple devices belonging to a single user, our algorithms calculate the number of occupants in a given area. That provides a quick insight into crowd density in any building or outdoor plaza.
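
As an illustration only (this is not the authors’ code), a minimal sketch of that consolidation-and-counting step might look like the following; the record format, the device-to-user mapping, and the zone names are assumptions made for the example.

```python
from collections import defaultdict

def occupancy_by_zone(records, device_to_user, window_start, window_end):
    """records: iterable of (timestamp, device_id, access_point) tuples."""
    users_per_zone = defaultdict(set)
    for timestamp, device_id, access_point in records:
        if window_start <= timestamp <= window_end:
            # Collapse multiple devices of the same registered user into one person;
            # unregistered devices fall back to being counted individually.
            person = device_to_user.get(device_id, device_id)
            users_per_zone[access_point].add(person)
    return {zone: len(people) for zone, people in users_per_zone.items()}

records = [
    (100, "dev-a1", "library-ap-3"),
    (110, "dev-a2", "library-ap-3"),   # second device of the same user
    (120, "dev-b1", "library-ap-3"),
    (130, "dev-c1", "cafeteria-ap-1"),
]
device_to_user = {"dev-a1": "user-a", "dev-a2": "user-a", "dev-b1": "user-b"}
print(occupancy_by_zone(records, device_to_user, 0, 200))
# -> {'library-ap-3': 2, 'cafeteria-ap-1': 1}
```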

Our algorithms can also reconstruct the journey of any user within the Wi-Fi infrastructure, and from that can derive exposure time to other users, spaces visited, and transmission risk.
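
A hedged sketch of the exposure calculation, again under assumed data structures rather than the authors’ implementation: given per-user visit intervals at each access point, sum how long other users overlapped with an index case in the same zone.

```python
from collections import defaultdict

def exposure_minutes(visits, index_user):
    """visits: list of (user, access_point, start_minute, end_minute) tuples."""
    index_visits = [(ap, s, e) for u, ap, s, e in visits if u == index_user]
    exposure = defaultdict(int)
    for user, ap, start, end in visits:
        if user == index_user:
            continue
        for idx_ap, idx_start, idx_end in index_visits:
            if ap == idx_ap:
                # Time both people were associated with the same access point.
                overlap = min(end, idx_end) - max(start, idx_start)
                if overlap > 0:
                    exposure[user] += overlap
    return dict(exposure)

visits = [
    ("index-case", "conference-ap", 540, 600),
    ("visitor-1", "conference-ap", 550, 640),   # 50 minutes of overlap
    ("visitor-2", "gym-ap", 540, 600),          # different zone, no exposure
]
print(exposure_minutes(visits, "index-case"))   # -> {'visitor-1': 50}
```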

This information lets campus managers do several things. For one, our algorithms can easily identify high risk areas by time and location and flag areas where social distance limits are regularly exceeded. This helps campus managers focus resources on areas that may need more hand sanitizers or more frequent deep cleaning.

For another, imagine an infected staff member has been identified by public health authorities. Based on the health authorities’ request, the university can identify possible infection pathways by tracking the individual’s journeys through the campus. It is possible to backtrack the movement history since the start of the data collection, for days or even weeks if necessary. 

To illustrate the methodology, we assumed one of us had a COVID infection, and we reconstructed his journey and exposure to other university visitors during a recent visit to a busy university campus. In the heat map below, you can see that only the conference building showed a significant number of contacts (red bar) with an exposure time of, in this example, more than 30 minutes. While we detected other individuals only by their Wi-Fi devices, university executives could now notify people who had potentially been exposed to an infectious user.

This system isn’t without technical challenges. The biggest problem is noise in the system that needs to be removed before the data is useful.

For example, depending on the building layout and wireless network configurations, wireless counts inside specific zones can include passing outside traffic or static devices (like desktop computers). We have developed algorithms to eliminate both without requiring access to any user information.
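
One simple way to approximate such a filter—purely a hypothetical sketch, since the authors do not describe their method—is to use dwell time: drop devices seen only in passing and devices present around the clock. The thresholds below are invented for illustration.

```python
def filter_likely_occupants(dwell_minutes_per_day, min_dwell=5, max_dwell=20 * 60):
    """dwell_minutes_per_day: dict of device_id -> minutes observed in a zone per day."""
    return {
        device: minutes
        for device, minutes in dwell_minutes_per_day.items()
        # Too brief: probably a passer-by. Near 24 hours: probably a static device.
        if min_dwell <= minutes <= max_dwell
    }

observed = {"phone-1": 45, "car-driving-past": 1, "desktop-pc": 1440}
print(filter_likely_occupants(observed))   # -> {'phone-1': 45}
```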

Even though the identification and management of infection risks is limited to the area covered by the wireless infrastructure, the timely identification of a COVID event naturally benefits areas beyond the campus. Campuses, and even residential campuses in lockdown, have a daily influx of external visitors—staff, contractors, and family members. Those individuals could be identified via telecommunication providers if requested by health authorities.

In the case of someone on campus testing positive for the virus, the people they came in contact with, their contact times, and the places they visited can be identified within minutes. This allows the initiation of necessary measures (e.g., COVID testing or decontamination) in a targeted, timely, and cost-effective way.

Since the analytics can happen in the cloud, the software can easily be updated to reflect new or refined medical knowledge or health regulations, say a new exposure-time threshold or physical-distancing guidelines.

Privacy is paramount in this process. Understanding population densities and crowd management is done anonymously. Only in the case of someone on campus reporting a confirmed case of the coronavirus do authorities with the necessary privacy clearance need to connect the devices and the individuals. Our algorithm operates separately from the identification and notification process.

As universities around the world are eager to welcome students back to the campus, proactive plans need to be in place to ensure the safety and wellbeing of everyone. Wi-Fi is available and ready to help.

Jan Dethlefs is a data scientist and Simon Wei is a data engineer, both at Nexulogy in Melbourne, Australia.  Stephan Winter is a professor and Martin Tomko is a senior lecturer, both in the Department of Infrastructure Engineering at the University of Melbourne, where they work with geospatial data analytics.

Twitter Bots Are Spreading Massive Amounts of COVID-19 Misinformation

Post Syndicated from Thor Benson original https://spectrum.ieee.org/tech-talk/telecom/internet/twitter-bots-are-spreading-massive-amounts-of-covid-19-misinformation

Back in February, the World Health Organization called the flood of misinformation about the coronavirus flowing through the Internet a “massive infodemic.” Since then, the situation has not improved. While social media platforms have promised to detect and label posts that contain misleading information related to COVID-19, they haven’t stopped the surge.

But who is responsible for all those misleading posts? To help answer the question, researchers at Indiana University’s Observatory on Social Media used a tool of their own creation called BotometerLite that detects bots on Twitter. They first compiled a list of what they call “low-credibility domains” that have been spreading misinformation about COVID-19, then used their tool to determine how many bots were sharing links to this misinformation. 

Their findings, which they presented at this year’s meeting of the Association for the Advancement of Artificial Intelligence, revealed that bots overwhelmingly spread misinformation about COVID-19 as opposed to accurate content. They also found that some of the bots were acting in “a coordinated fashion” to amplify misleading messages.  

The scale of the misinformation problem on Twitter is alarming. The researchers found that overall, the number of tweets sharing misleading COVID-19 information was roughly equivalent to the number of tweets that linked to New York Times articles. 

We talked with Kai-Cheng Yang, a PhD student who worked on this research, about the bot-detection game.

This conversation has been condensed and edited for clarity.

IEEE Spectrum: How much of the overall misinformation is being spread by bots?

Kai-Cheng Yang: For the links to the low-credibility domains, we find about 20 to 30 percent are shared by bots. The rest are likely shared by humans.

Spectrum: How much of this activity is bots sharing links themselves, and how much is them amplifying tweets that contain misinformation?

Yang: It’s a combination. We see some of the bots sharing the links directly and other bots are retweeting tweets containing those links, so they’re trying to interact with each other.

Spectrum: How do your Botometer and BotometerLite tools identify bots? What are they looking for? 

Yang: Both Botometer and BotometerLite are implemented as supervised machine learning models. We first collect a group of Twitter accounts that are manually annotated as bots or humans. We extract characteristics from their profiles (number of friends, number of followers, whether they use a background image, etc.), and we collect data on content, sentiment, social network, and temporal behaviors. We then train our machine learning models to learn how bots differ from humans in terms of these characteristics. The difference between Botometer and BotometerLite is that Botometer considers all of these characteristics, whereas BotometerLite focuses only on the profiles, for efficiency.
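
To make that supervised-learning setup concrete, here is a toy sketch in the spirit of what Yang describes. The features, training data, and choice of a random-forest model are illustrative assumptions, not the actual Botometer implementation.

```python
from sklearn.ensemble import RandomForestClassifier

# One row per account: [followers, friends, tweets, has_background_image]
X_train = [
    [15000, 300, 8000, 1],    # human-like profile
    [12, 4800, 90000, 0],     # bot-like profile
    [900, 350, 2500, 1],
    [3, 5200, 120000, 0],
]
y_train = [0, 1, 0, 1]        # manual annotations: 0 = human, 1 = bot

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

new_account = [[25, 4900, 70000, 0]]
# Estimated probability that the new account is a bot.
print(model.predict_proba(new_account)[0][1])
```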

Spectrum: The links these bots are sharing: Where do they lead?

Yang: We have compiled a list of 500 or so low-credibility domains. They’re mostly news sites, but we would characterize many of them as ‘fake news.’ We also consider extremely hyper-partisan websites as low-credibility.

Spectrum: Can you give a few examples of the kinds of COVID-related misinformation that appear on these sites? 

Yang: Common themes include U.S. politics, status of the outbreak, and economic issues. A lot of the articles are not necessarily fake, but they can be hyper-partisan and misleading in some sense. We also see false information like: the virus is weaponized, or political leaders have already been vaccinated.

Spectrum: Did you look at whether the bots spreading misinformation have followers, and whether those followers are humans or other bots? 

Yang: Examining the followers of Twitter accounts is much harder due to the API rate limit, and we didn’t conduct such an analysis this time.

Spectrum: In your paper, you write that some of the bots seem to be acting in a coordinated fashion. What does that mean? 

Yang: We find that some of the accounts (not necessarily all bots) were sharing information from the same set of low-credibility websites. For two arbitrary accounts, this is very unlikely, yet we found some accounts doing so together. The most plausible explanation is that these accounts were coordinated to push the same information. 

Spectrum: How do you detect bot networks? 

Yang: I’m assuming you are referring to the network shown in the paper. For that, we simply extract the list of websites each account shares and then find the accounts that have very similar lists and consider them to be connected.
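
As a rough illustration of that idea (not the researchers’ actual pipeline), one could compare the sets of domains each account shares and connect accounts whose lists are nearly identical, for instance using Jaccard similarity. The threshold, account names, and domains below are invented.

```python
from itertools import combinations

def coordination_edges(domains_by_account, threshold=0.8):
    """Link accounts whose sets of shared domains have high Jaccard similarity."""
    edges = []
    for (acct_a, set_a), (acct_b, set_b) in combinations(domains_by_account.items(), 2):
        union = set_a | set_b
        if not union:
            continue
        jaccard = len(set_a & set_b) / len(union)
        if jaccard >= threshold:
            edges.append((acct_a, acct_b, round(jaccard, 2)))
    return edges

shared = {
    "acct-1": {"lowcred-a.example", "lowcred-b.example", "lowcred-c.example"},
    "acct-2": {"lowcred-a.example", "lowcred-b.example", "lowcred-c.example"},
    "acct-3": {"nytimes.com"},
}
print(coordination_edges(shared))   # -> [('acct-1', 'acct-2', 1.0)]
```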

Spectrum: What do you think can be done to reduce the amount of misinformation we’re seeing on social media?

Yang: I think it has to be done by the platforms. They can do flagging, or if they know a source is low-credibility, maybe they can do something to reduce the exposure. Another thing we can do is improve the average person’s journalism literacy: Try to teach people that there might be those kinds of low-credibility sources or fake news online and to be careful. We have seen some recent studies indicating that if you tell the user what they’re seeing might be from low-credibility sources, they become much more sensitive to such things. They’re actually less likely to share those articles or links. 

Spectrum: Why can’t Twitter prevent the creation and proliferation of bots? 

Yang: My understanding is that when you try to make your tool or platform easy to use for real users, it opens doors for the bot creators at the same time. So there is a trade-off.

In fact, according to my own experience, recently Twitter started to ask the users to put in their phone numbers and perform more frequent two-step authentications and recaptcha checks. It’s quite annoying for me as a normal Twitter user, but I’m sure it makes it harder, though still possible, to create or control bots. I’m happy to see that Twitter has stepped up.

With 5G Rollout Lagging, Research Looks Ahead to 6G

Post Syndicated from Dexter Johnson original https://spectrum.ieee.org/tech-talk/telecom/wireless/with-5g-rollout-lagging-research-looks-ahead-to-6g

Amid a 5G rollout that has faced its fair share of challenges, it might seem somewhat premature to start looking ahead at 6G, the next generation of mobile communications. But 6G development is happening now, and it’s being pursued in earnest by both industry and academia.

Much of the future landscape for 6G was mapped out in an article published in March of this year by IEEE Communications, titled “Toward 6G Networks: Use Cases and Technologies.” The article presents the requirements, the enabling technologies, and the use cases for adopting a systematic approach to overcoming the research challenges of 6G.

“6G research activities are envisioning radically new communication technologies, network architectures, and deployment models,” said Michele Zorzi, a professor at the University of Padua in Italy, and one of the authors of the IEEE Communications article. “Although some of these solutions have already been examined in the context of 5G, they were intentionally left out of initial 5G standards developments and will not be part of early 5G commercial rollout mainly because markets are not mature enough to support them.”

The foundational difference between 5G and 6G networks, according to Zorzi, will be the increased role that intelligence will play in 6G networks. It will go beyond the mere classification and prediction tasks found in legacy and 5G systems.

While machine-learning-driven networks are still in their infancy, they will likely represent a fundamental component of the 6G ecosystem, which will shift toward a fully user-centric architecture in which end terminals will be able to make autonomous network decisions without supervision from centralized controllers.

This decentralization of control will enable the sub-millisecond latency required by several 6G services, below the already challenging 1-millisecond target of emerging 5G systems, and is expected to yield more responsive network management.

To achieve this new kind of performance, the underlying technologies of 6G will be fundamentally different from 5G. For example, says Marco Giordani, a researcher at the University of Padua and co-author of the IEEE Communications article, even though 5G networks have been designed to operate at extremely high frequencies in the millimeter-wave bands, 6G will exploit even higher-spectrum technologies—terahertz and optical communications being two examples.

At the same time, Giordani explains that 6G will have a new cell-less network architecture that is a clear departure from current mobile network designs. The cell-less paradigm can promote seamless mobility support, targeting interruption-free communication during handovers, and can provide quality of service (QoS) guarantees that are in line with the most challenging mobility requirements envisioned for 6G, according to Giordani.

Giordani adds: “While 5G networks (and previous generations) have been designed to provide connectivity for an essentially bi-dimensional space, future 6G heterogeneous architectures will provide three-dimensional coverage by deploying non-terrestrial platforms (e.g., drones, HAPs, and satellites) to complement terrestrial infrastructures.”

Key Industry and Academic Initiatives in 6G Development: