Tag Archives: Telecom/Internet

Analysis of COVID-19 Tweets Reveals Who Uses Racially Charged Language

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/internet/analysis-tweets-related-covid19-racially-charged-language


As the COVID-19 pandemic began to spread around the globe, it was followed by an increase in media coverage of racist attacks. Some have argued that the use of racially charged language to describe the novel coronavirus, including terms such as the “Chinese flu” or “Chinese virus,” may have played a role in these attacks.

In a recent study, published 21 May in IEEE Transactions on Big Data, researchers analyzed Twitter data to better understand which users are more likely to use racially charged versus neutral terms during the pandemic. In a second study, the group analyzed the general language used by these two groups of Twitter users, shedding light on their priorities and emotional states.

COVID-19 Makes It Clear That Broadband Access Is a Human Right

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/covid19-makes-clear-broadband-access-is-human-right

Like clean water and electricity, broadband access has become a modern-day necessity. The spread of COVID-19 and the ensuing closure of schools and workplaces and even the need for remote diagnostics make this seem like a new imperative, but the idea is over a decade old. Broadband is a fundamental human right, essential in times like now, but just as essential when the world isn’t in chaos.

A decade ago, Finland declared broadband a legal right. In 2011, the United Nations issued a report [PDF] with a similar conclusion. At the time, the United States was also debating a series of policy efforts that would ensure everyone had access to broadband. But decisions made by the Federal Communications Commission between 2008 and 2012 pertaining to broadband mapping, network neutrality, data caps, and the very definition of broadband are now coming back to haunt the United States as cities lock themselves down to flatten the curve on COVID-19.

While some have voiced concerns about whether the strain of everyone working remotely might break the Internet, the bigger issue is that not everyone has Internet access in the first place. Most U.S. residential networks are built for peak demand, and even the 20 to 40 percent increase in network traffic seen in locations hard hit by the virus won’t be enough to make those networks buckle.

An estimated 21 to 42 million people in the United States don’t have physical access to broadband, and even more cannot afford it or are reliant on mobile plans with data limits. For a significant portion of our population, this makes remote schooling and work prohibitively expensive at best and simply not an option at worst. This number hasn’t budged significantly in the last decade, and it’s not just a problem for the United States. In Hungary, Spain, and New Zealand, a similar percentage of households also lack a broadband subscription according to data from the Organization for Economic Co-operation and Development.

Faced with the ongoing COVID-19 outbreak, Internet service providers in the United States have already taken several steps to expand broadband access. Comcast, for example, has made its public Wi-Fi network available to anyone. The company has also expanded its Internet Essentials program—which provides a US $9.95 monthly connection and a subsidized laptop—to a larger number of people on some form of government assistance.

For those who already have access but are now facing financial uncertainty, AT&T, Comcast, and more than 200 other U.S. ISPs have pledged not to cut off subscribers who can’t pay their bills and not to charge late fees, as part of an FCC plan called Keep Americans Connected. AT&T, Comcast, and Verizon have also promised to eliminate data caps for the near future, so customers don’t have to worry about blowing past a data limit while learning and working remotely.

It’s good to keep people connected during quarantines and social distancing, but going forward, some of these changes should become permanent. It’s not enough to say that broadband is a basic necessity; we have to push for policies that ensure companies treat it that way.

“If it wasn’t clear before this crisis, it is crystal clear now that broadband is a necessity for every aspect of modern civic and commercial life. U.S. policymakers need to treat it that way,” FCC Commissioner Jessica Rosenworcel says. “We should applaud public spirited efforts from our companies, but we shouldn’t stop there.” 

This article appears in the May 2020 print issue as “We All Deserve Broadband.”

How the Internet Can Cope With the Explosion of Demand for “Right Now” Data During the Coronavirus Outbreak

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/everyone-staying-home-because-of-covid19-is-targeting-the-internets-biggest-weak-spot

The continuing spread of COVID-19 has forced far more people to work and learn remotely than ever before. And more people in self-isolation and quarantine means more people are streaming videos and playing online games. The spike in Internet usage has some countries looking at ways to curb streaming data to avoid overwhelming the Internet.

But the amount of data we’re collectively using now is not actually the main cause of the problem. It’s the fact that we’re all suddenly using many more low-latency applications: teleconferencing, video streaming, and so on. The issue is not that the Internet is running out of throughput. It’s that there’s a lot more demand for data that needs to be delivered without any perceivable delay.

“The Internet is getting overwhelmed,” says Bayan Towfiq, the founder and CEO of Subspace, a startup focusing on improving the delivery of low-latency data. “The problem is going to get worse before it gets better.” Subspace wasn’t planning to come out of stealth mode until the end of this year, but the COVID-19 crisis has caused the company to rethink those plans.

“What’s [been a noticeable problem while everyone is at home streaming and videoconferencing] is less than one percent of data. [For these applications] it’s more important than the other 99 percent,” says Towfiq. While we all collectively use far more data loading webpages and browsing social media, we don’t notice if a photo takes half a second to load in the same way we notice a half-second delay on a video conference call.

So if we’re actually not running out of data throughput, why the concern over streaming services and teleconferencing overloading the Internet?

“The Internet doesn’t know about the applications running on it,” says Towfiq. Put another way, the Internet is agnostic about the type of data moving from point A to point B. What matters most, based on how the Internet has been built, is moving as much data as possible.

And normally that’s fine, if most of the data is in the form of emails or people browsing Amazon. If a certain junction is overwhelmed by data, load times may be a little slower. But again, we barely notice a delay in most of the things for which we use the Internet.

The growing use of low-latency applications, however, means those same bottlenecks are painfully apparent. When a staff Zoom meeting has to contend with someone trying to watch The Mandalorian, the Internet sees no difference between your company’s video chat and Baby Yoda.

For Towfiq, the solution to the Internet’s current stress is not to cut back on the amount of video-conferencing, streaming, and online gaming, as has been suggested. Instead, the solution is what Subspace has been focused on since its founding last year: changing how the Internet works by forcing it to prioritize that one percent of data that absolutely, positively has to get there right away.
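Subspace hasn’t published how its system works under the hood. But the Internet does have a longstanding, if unevenly honored, hook for separating that critical 1 percent from the rest: Differentiated Services code points (DSCP) in the IP header. Here is a minimal Python sketch of an application marking its own traffic as latency-sensitive; the address and payload are placeholders, and routers act on the marking only if they are configured to do so:

```python
import socket

# DSCP 46, "Expedited Forwarding" (EF), is the standard marking for
# latency-sensitive traffic such as voice and video. The IP TOS byte
# carries the DSCP value shifted left by two bits.
DSCP_EF = 46
TOS_EF = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Every datagram sent on this socket now carries the EF marking, which
# DSCP-aware routers can use to queue it ahead of bulk traffic. The
# address and payload below are placeholders.
sock.sendto(b"video frame payload", ("203.0.113.10", 5004))
```

The catch, and part of the opportunity a company like Subspace is chasing, is that nothing obliges the networks along a path to honor these markings; many simply ignore or strip them.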

Subspace has been installing both software and hardware for ISPs in cities around the world designed to do exactly that. Towfiq says ISPs already saw the value in Subspace’s tech after the company demonstrated that it could make online gaming far smoother for players by reducing the amount of lag they dealt with.

Initially Subspace was sending out engineers to personally install its equipment and software for ISPs and cities they were working with. But with the rising demand and the pandemic itself, the company is transitioning to “palletizing” its equipment: making it so that, after shipping it, the city or ISP can plug in just a few cables and change how their networks function.

Now, Towfiq says, the pandemic has made it clear that the startup needed to come out of stealth immediately. Even though Subspace was already connecting its tech to the network infrastructure of five new cities per week in February, coming out of stealth will allow the company to publicly share information about what it’s working on. The urgency, says Towfiq, outweighed the company’s original plans to conduct some proof-of-concept trials and build out a customer base.

“There’s a business need that’s been pulled out of us to move faster and unveil right now,” Towfiq says. He adds that Subspace didn’t make the decision to come out of stealth until last Tuesday. “There’s a macro thing happening with governments and Internet providers not knowing what to do.”

Subspace could offer the guidance these entities need to avoid overwhelming their infrastructure. And once we’re all back to something approximating normal after the COVID-19 outbreak, the Internet will still benefit from the types of changes Subspace is making. As Towfiq says, “We’re becoming a new kind of hub for the Internet.”

How to Detect a Government’s Hand Behind Internet Shutdowns

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/how-to-detect-a-governments-hand-behind-internet-shutdowns

Internet shutdowns that affect entire regions or countries and cost billions of dollars annually have become a widespread phenomenon, especially as various governments wield them like a blunt club to restrict citizens’ access to online information.

Some governments deploy Internet shutdowns in an attempt to suppress protests, while Iraq’s Ministry of Education even orders shutdowns to prevent cheating during national school exams. The trick for independent observers trying to keep track of it all involves figuring out the difference between government-ordered shutdowns versus other causes of Internet outages.

In early 2020, the five-person team behind the nongovernmental organization NetBlocks was watching dips in Internet connectivity happening in a particular region of China over several months. That could have sparked suspicion that China’s online censors—who restrict access to certain online content as part of China’s “Great Firewall”—were perhaps throttling some popular online services or social media networks. But the NetBlocks team’s analysis showed that such patterns likely had to do with businesses shutting down or limiting operations to comply with government efforts aimed at containing the coronavirus outbreak that has since become a pandemic.

“When you’re investigating an internet shutdown, you need to work from both ends to conclusively verify that incident has happened, and to understand why it’s happened,” says Alp Toker, executive director of NetBlocks. “This means ruling out different types of outages.”

NetBlocks is among the independent research groups trying to keep an eye on the growing prevalence of Internet shutdowns. Since it formed in 2016, the London-based NetBlocks has expanded its focus from Turkey and the Middle East to other parts of the world by using remote measurement techniques. These include analytics software that monitors how well millions of phones and other devices can access certain online websites and services, along with both hardware probes plugged into local routers and an Internet browser probe that anyone can use to check their local connectivity.

But NetBlocks also relies upon what Toker describes as a more hands-on investigation to manually check out various incidents. That could mean checking in with local engineers or Internet service providers who are in a position to help confirm or rule out certain lines of inquiry. This combined approach has helped NetBlocks investigate all sorts of causes of Internet shutdowns, including major hurricanes, nationwide power outages in Venezuela, and cuts in undersea Internet cables affecting Africa and the Middle East. Each of these types of outages provides data that NetBlocks is using to train machine learning algorithms in hopes of better automating detection and analysis of different events.

“Each of the groups that’s currently monitoring Internet censorship uses a different technical approach and can observe different aspects of what’s happening,” says Zachary Weinberg, a postdoctoral researcher at the University of Massachusetts Amherst and a member of the Information Controls Lab (ICLab) project. “We’re working with them on combining all of our data sets to get a more complete picture.”

ICLab relies heavily on a network of commercial virtual private networks (VPNs) to gain observation points that provide a window into Internet connectivity in each country, along with a handful of human volunteers based around the world. These VPN observation points can do bandwidth-intensive tests and collect lots of data on network traffic without endangering volunteers in certain countries. But one limitation of this approach is that VPN locations in commercial data centers are sometimes not subject to the same Internet censorship affecting residential networks and mobile networks.

If a check turns up possible evidence of a network shutdown, ICLab’s internal monitoring alerts the team. The researchers use manual confirmation checks to make sure it’s a government-ordered shutdown action and not something like a VPN service malfunction. “We have some ad-hoc rules in our code to try to distinguish these possibilities, and plans to dig into the data [collected] so far and come up with something more principled,” Weinberg says.

The Open Observatory of Network Interference (OONI) takes a more decentralized, human-reliant approach to measuring Internet censorship and outages. OONI’s six-person team has developed and refined a software tool called OONI probe that people can download and run to check local Internet connectivity with a number of websites, including a global test list of internationally relevant websites (such as Facebook) and a country-specific test list.
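OONI’s real methodology is considerably more careful than a simple fetch: probe results are compared against control measurements from unfiltered networks before anything is labeled as blocked. Still, a toy version of a test-list connectivity check conveys the basic idea. In this Python sketch the URLs stand in for a real test list, and any failure is only a censorship candidate, not a verdict:

```python
import requests

# Toy censorship test-list check: fetch each URL and record whether the
# request succeeds, fails, or lands somewhere unexpected. Real probes,
# OONI's included, also compare against control measurements from
# unfiltered networks before calling anything "blocked".
TEST_LIST = [
    "https://www.facebook.com",
    "https://www.bbc.com",
]

def check(url, timeout=10):
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return {"url": url, "status": resp.status_code,
                "final_url": resp.url, "ok": resp.ok}
    except requests.RequestException as exc:
        # Timeouts, resets, and DNS failures are censorship candidates,
        # but they also happen on healthy networks; corroboration needed.
        return {"url": url, "error": type(exc).__name__, "ok": False}

for result in (check(u) for u in TEST_LIST):
    print(result)
```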

The OONI project began when members of the Tor Project, the nonprofit organization that oversees the Tor network designed to enable people to use the Internet anonymously, began creating “ad hoc scripts” to investigate blocking of Tor software and other examples of Internet censorship, says Arturo Filasto, lead developer of OONI. Since 2012, that has evolved into the free and open-source OONI probe with an openly-documented methodology explaining how it measures Internet censorship, along with a frequently updated database that anyone can search.

“We eventually consolidated [that] into the software that now tens of thousands of people run all over the world to collect their own evidence of Internet censorship and contribute to this growing pool of open data that anybody can use to research and investigate various forms of information controls on the Internet,” Filasto says.

Beyond the tens of thousands of active monthly users, hundreds of millions of people have downloaded the OONI probe. That probe is currently available as a mobile app and for desktop Linux and macOS users who don’t mind using the command-line interface, but the team aims to launch a more user-friendly desktop program for Windows and macOS users in April 2020. 

Other groups have their own approaches. The CensoredPlanet lab at the University of Michigan uses echo servers that exist primarily to bounce messages back to senders as observation points. The Cooperative Association for Internet Data Analysis (CAIDA) at the University of California in San Diego monitors global online traffic involving the Border Gateway Protocol, which backbone routers use to communicate with each other. 

On the low-tech side, news articles and word-of-mouth reports from ordinary people can also provide valuable Internet outage data for websites such as the Internet Shutdown Tracker run by the Software Freedom Law Centre in New Delhi, India. But the Internet Shutdown Tracker website also invites mobile users to download and install the OONI probe tool as a way of helping gather more data on regional and city-level Internet shutdowns ordered by India’s government.

Whatever their approach, most of the groups tracking Internet shutdowns and online censorship still consist of small teams with budget constraints. For example, ICLab’s team would like to speed up and automate much of its process, but its budget relies largely on grants from the U.S. National Science Foundation. The team also has limited data storage, which restricts it to checking each country about two or three times a week on average to collect detailed cycles of measurements—amounting to about 500 megabytes of raw data per country.

Another challenge comes on the data collection side. People may face personal risk in downloading and using OONI probe or similar tools in some countries, especially if the government’s laws regard such actions as illegal or even akin to espionage. This is why the OONI team openly warns about the risk up front as part of what it considers its informed consent process, and even requires mobile users to complete a quiz before starting to use the OONI probe app.

“Thanks to the fact that many people are running OONI probe in China and Iran, we’ve been able to uncover a lot of really interesting and important cases of Internet censorship that we wouldn’t otherwise have known,” Filasto says. “So we are very grateful to the brave users of OONI probe that have gathered these important measurements.”

Recent trends in both government information control strategies and the broader Internet landscape may also complicate the work of such groups. Governments in countries such as China, Russia, and Iran have begun moving away from network-level censorship toward embedding censorship policies within large social media platforms and chat systems such as Tencent’s WeChat in China. Detecting more subtle censorship within these platforms represents an even bigger challenge than collecting evidence of a region-wide Internet shutdown.

“We have to create accounts on all these systems, which in some cases requires proof of physical-space identity, and then we have to automate access to them, which the platforms intentionally make as difficult as possible,” Weinberg says. “And then we have to figure out whether someone’s post isn’t showing up because of censorship, or because the ‘algorithm’ decided our test account wouldn’t be interested in it.”

In 2019, large-scale Internet shutdowns affecting entire countries occurred alongside the shift toward “more nuanced Internet disruptions that happen on different layers,” Toker says. The NetBlocks team is refining its analytical capability to home in on different types of outages by learning more about the daily pattern of Internet traffic that reflects each country’s normal economic activities. But Toker is also hoping that his group and others can continue forging international cooperation to study these issues together. For now, NetBlocks relies upon community contributions, the technical community, and volunteers.

“There are bubbles of expertise in different parts of the world, and those haven’t necessarily combined, so from where we’ve been coming I think those bridges are just starting to be built,” Toker says. “And that means really getting engineers together from different fields and different backgrounds, whether it’s electrical engineering or Internet engineering.”

Facebook Switches to New Timekeeping Service

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/tech-talk/telecom/internet/facebook-new-time-keeping-service

Facebook recently switched millions of its own servers and consumer products (including Portal and Oculus VR headsets) over to a new timekeeping service. The company says the new service, built in-house by its engineers using open-source tools, is more scalable than the one it used previously. What’s more, it will improve the accuracy of devices’ internal clocks from 10 milliseconds to 100 microseconds.

To figure out what time it is, Internet-connected devices look to timekeeping services maintained by companies or government agencies such as the U.S. National Institute of Standards and Technology (NIST). There are dozens of such services available. Devices constantly ping them for fresh timestamps formatted in the Network Time Protocol (NTP), and use the info to set or recalibrate their internal clocks.

With the announcement, Facebook joins other tech companies including Apple and Google that operate publicly available timekeeping services of their own. Facebook’s service is now available for free to the public at time.facebook.com.
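Querying such a service takes only a few lines. This sketch uses the third-party ntplib package to ask Facebook’s public server for the time; the server name comes from the announcement, while the surrounding code is simply one way to make an NTP request:

```python
import ntplib  # third-party package: pip install ntplib
from time import ctime

client = ntplib.NTPClient()
# Query Facebook's public NTP service.
response = client.request("time.facebook.com", version=3)

print("server time :", ctime(response.tx_time))  # timestamp sent by the server
print("clock offset:", response.offset, "s")     # estimated local-clock error
print("round trip  :", response.delay, "s")      # network delay of the query
```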

Snag With Linking Google’s Undersea Cable to Saint Helena Could Leave Telecom Monopoly Entrenched

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/googles-planned-equiano-undersea-cable-branch-to-saint-helena-hits-a-snag

Last June, Google announced an addition to the company’s planned Equiano undersea cable. In addition to stretching down the length of Africa’s western coastline, a branch would split off to connect to the remote island of Saint Helena, a British overseas territory. The cable would be an incredible gain for the island of about 4,500 people, who today rely on a shared 50-megabit-per-second satellite link for Internet connections.

The cable will deliver an expected minimum of several hundred gigabits per second. That’s far more data than the island’s residents can use, and would in fact be prohibitively expensive for Helenians to afford. To make the cable’s costs feasible, Christian von der Ropp—who started the Connect Saint Helena campaign in 2011—worked on the possibility of getting satellite ground stations installed on the island.

These ground stations would be crucial links between the growing number of satellites in orbit and our global network infrastructure. One of the biggest problems with satellites, especially lower-orbiting ones, is that they spend significant chunks of time without a good backhaul connection—usually because they’re over an ocean or a remote area on land. The southern Atlantic, as you can surely guess, is one such spot. Saint Helena happens to be right in the middle of it.

Von der Ropp found that satellite companies were interested in building ground stations on the island: OneWeb, Spire Global, and Laser Light have all expressed interest in building infrastructure there. The ground stations would be a perfect match for the cable’s throughput, taking up the bulk of the cable’s usage and effectively subsidizing the costs of high-speed access for Helenians.

But what seemed like smooth sailing for the cable has now hit another snag. The island government is currently at odds with the island’s telecom monopoly, Sure South Atlantic. If the dispute cannot be resolved by the time the cable lands in late 2021 or early 2022, Helenians could see incredibly fast Internet speeds come to their shores—only to go nowhere once they arrive.

“The arrival of unlimited broadband places [Sure’s] business at risk,” says von der Ropp. He points out that, in general, when broadband access becomes essentially unlimited, users move away from traditional phone and television services in favor of messaging and streaming services like Skype, WhatsApp, and Netflix. If fewer Helenians are paying for Sure’s service packages, the company may instead jack up the prices on Internet access—which Helenians would then be forced to pay.

Most pressing, however, is that the island’s infrastructure simply cannot handle the data rates the Equiano branch will deliver. Because Sure is a monopoly, the company has little incentive to upgrade or repair infrastructure in any but the direst circumstances (Sure did not respond to a request for comment for this story).

That could give satellite operators cold feet as well. Under Sure’s current contract with the island government, satellite operators would be forbidden from running their own fiber from their ground stations to the undersea cable’s terminus. They would be reliant on Sure’s existing infrastructure to make the connection.

Sure’s current monopoly contract is due to expire on December 31, 2022—about a year after the cable is due to land on the island—assuming that the Saint Helena government does not renew it. Given the dissatisfaction of many on the island with the quality of service, that appears to be a distinct possibility. Right now, for example, Helenians pay 82 pounds per month for 11 gigabytes of data under Sure’s Gold Package. The moment they exceed that cap, Sure charges them 5 pence per megabyte, roughly 6.7 times the package’s effective per-megabyte rate.

11 GB per month may seem hard to burn through, but remember that for Helenians, that data covers everything—streaming, browsing the Internet, phone calls, and texting. For a Helenian who has exceeded the data cap, a routine 1.5-GB iPhone update could cost an additional 75 pounds.
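The arithmetic behind those figures is easy to check. This quick sketch uses the package terms reported above, treating 11 GB as 11,000 MB:

```python
# Sure's Gold Package, per the article: 82 pounds for 11 GB, then 5p/MB.
base_price_gbp = 82.0
cap_mb = 11 * 1000            # treating 11 GB as 11,000 MB
overage_gbp_per_mb = 0.05     # 5 pence per megabyte

in_cap_rate = base_price_gbp / cap_mb  # ~0.0075 GBP/MB, i.e. ~0.75p/MB
print(f"in-cap rate : {in_cap_rate * 100:.2f}p per MB")
print(f"overage rate: {overage_gbp_per_mb * 100:.0f}p per MB "
      f"({overage_gbp_per_mb / in_cap_rate:.1f}x the in-cap rate)")

# A routine 1.5-GB iPhone update taken entirely as overage:
print(f"1.5 GB update: {1500 * overage_gbp_per_mb:.0f} pounds")
```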

But it could be hard to remove Sure as a monopoly. If the island government ends the contract, Sure has a right to compensation for all of its assets on the island. Von der Ropp estimates the government would be required to compensate Sure to the tune of 4 million to 5 million pounds. That’s an extremely hefty sum, considering the government’s total annual budget is between 10 and 20 million pounds.

“They will need cash to pay the monopoly’s ransom,” says von der Ropp, adding that it will likely be up to the United Kingdom to foot the bill. Meanwhile, the island will need to look for new providers to replace Sure, ones that will hopefully invest in upgrading the island’s deteriorating infrastructure.

There is interest in doing just that. As Councilor Cyril Leo put it in a recent speech to the island’s Legislative Council, “Corporate monopoly in St Helena cannot have the freedom to extract unregulated profits, from the fiber-optic cable enterprise, at the expense of the people of Saint Helena.” What remains to be seen is whether the island can actually find a way to remove that corporate monopoly.

Li-Fi Scrubs Into the Operating Room

Post Syndicated from Dan Garisto original https://spectrum.ieee.org/tech-talk/telecom/internet/lifi-scrubs-into-operating-room

Li-Fi, which is short for “light fidelity,” is a wireless technology that uses light to transmit information (as opposed to Wi-Fi, which relies on electromagnetic waves at much lower radio frequencies). Proponents claim that Li-Fi could deliver more reliable data transmission at faster rates than Wi-Fi.

Since Harald Haas, a professor at the University of Edinburgh, popularized the term Li-Fi in 2011, companies including the former Philips Lighting—now Signify—and Haas’s own pureLiFi have tried to commercialize the technology. It’s been tested in offices, schools, and even airplanes, but has so far struggled to gain widespread adoption. 

Now, Li-Fi has completed its first tests in a hospital—a place where its reliability and speed may prove particularly valuable. A team of researchers from the Fraunhofer Heinrich-Hertz Institute (HHI) in Berlin and the Czech Technical University (CTU) in Prague published results from a demonstration, which they announced at the recent Optical Networking and Communication Conference in San Diego. Their new study lays the groundwork for possibly someday using Li-Fi in a medical setting.

Submarine Cable Repairs Underway in South Africa

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/undersea-cable-repairs-south-africa

A clock in South Africa is counting down the seconds until a pair of broken cables are expected to go back in service. Every few hours, the service provider TENET, which keeps South Africa’s university and research facilities connected to the global Internet, tweets updates.

The eight-year-old West Africa Cable System (WACS) submarine cable, which runs parallel to Africa’s west coast, broke at two points early on 16 January. That same day, an 18-year-old cable called SAT-3 that runs along the same route also broke.

How India, the World’s Largest Democracy, Shuts Down the Internet

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/how-the-worlds-largest-democracy-shuts-down-the-internet

When government officials in India decided to shut down the Internet, software engineers working for an IT and data analytics firm lost half a day of work and fell behind in delivering a project for clients based in London. A hotel was unable to pay its employees or manage online bookings for tourists. A major hospital delayed staff salary payments and restricted its medical services to the outpatient and emergency departments.  

CES 2020 News: Tech Executives Answer Tough Questions About Privacy

Post Syndicated from Tekla S. Perry original https://spectrum.ieee.org/view-from-the-valley/telecom/internet/ces-2020-news-tech-executives-answer-tough-questions-about-privacy

A panel of tech executives discussed privacy, encryption, and digital advertising this week at CES 2020. Apple’s senior director of global privacy, Jane Horvath, came out strongly in favor of privacy protection; Rebecca Slaughter, a commissioner of the U.S. Federal Trade Commission, came out even more strongly. Facebook’s vice president of public policy Erin Egan maintained that Facebook does just fine in protecting consumer privacy, while Wing Venture Capital partner Rajeev Chand served as moderator and posed timely questions to the group.

(Procter & Gamble’s global privacy officer Susan Shook occasionally got a word in, but the spotlight focused on companies that provide technology far more than on those that use it.)

Bringing Legacy Fiber Optic Cables Up to Speed

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/legacy-fiber-optic-cables-speed-data-rates

Installing optical fibers with fat cores once seemed like a good idea for local-area or campus data networks. It was easier to couple light into such “multimode” fibers than into the tiny cores of high-capacity “singlemode” fibers used for long-haul networks.

The fatter the core, the slower data flows through the fiber, but fiber with 50-micrometer (µm) cores can carry data at rates of 100 megabits per second up to a few hundred meters—good enough for local transmission.

Now Cailabs, based in Rennes, France, has developed special optics that it says can send signals at rates of 10 gigabits per second (Gbps) through up to 10 kilometers of the same fiber, avoiding the need to replace legacy multimode cable. The company hopes to reach rates of 100 Gbps, which are now widely required for large data centers.

Fear of Internet Censorship Hangs Over Hong Kong Protests

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/fear-of-internet-censorship-hangs-over-hong-kong-protests

Fears about online censorship have grown since Hong Kong government officials raised the possibility of curbing Internet freedom to suppress a city-wide protest movement that has led to increasingly violent clashes between riot police and some protesters.

5 Challenges of Wideband 5G Device Test

Post Syndicated from National Instruments original https://spectrum.ieee.org/telecom/internet/5-challenges-of-wideband-5g-device-test

1. Waveforms Are Wider and More Complex

5G New Radio includes two different types of waveforms:

  • Cyclic-prefix OFDM (CP-OFDM) for downlink and uplink
  • Discrete Fourier transform spread OFDM (DFT-S-OFDM) for uplink only; this waveform resembles LTE’s single-carrier frequency division multiple access (SC-FDMA)

Researchers and engineers working on 5G device test face the new challenges of creating, generating, and distributing 5G waveforms among their design and test benches. Engineers need to work with highly complex, standard-compliant uplink and downlink signals that have larger bandwidths than ever before. These signals include a variety of resource allocations; modulation and coding sets; demodulation, sounding, and phase-tracking information; and single-carrier as well as contiguous and non-contiguous carrier-aggregated configurations.

Design tip: Select a 5G standard-compliant toolset that allows you to generate, analyze, and share these waveforms between test benches to fully characterize your DUTs.
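For a sense of what generating one of these waveforms involves, here is a minimal NumPy sketch of a single CP-OFDM symbol: QPSK data is mapped onto subcarriers, transformed to the time domain, and prepended with a cyclic prefix. The FFT size, subcarrier allocation, and prefix length are illustrative, not a standard-compliant 5G NR numerology:

```python
import numpy as np

# Minimal CP-OFDM symbol: map bits to QPSK, place them on subcarriers,
# IFFT to the time domain, prepend a cyclic prefix. Sizes are
# illustrative, not 5G NR numerology.
n_fft, n_used, cp_len = 256, 120, 18
rng = np.random.default_rng(0)

bits = rng.integers(0, 2, size=2 * n_used)
qpsk = ((1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])) / np.sqrt(2)

grid = np.zeros(n_fft, dtype=complex)
grid[1 : n_used + 1] = qpsk               # leave DC and band edges empty

symbol = np.fft.ifft(grid) * np.sqrt(n_fft)
cp_ofdm = np.concatenate([symbol[-cp_len:], symbol])  # cyclic prefix

print(f"symbol length with CP: {cp_ofdm.size} samples")
```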

2. Instruments Must Be Wideband and Linear, and They Must Cost-effectively Cover an Extensive Frequency Range

Although RF engineers have been working with specialized and expensive test systems for mmWave in industries such as aerospace and military, this represents unexplored territory for the mass-market semiconductor industry. Engineers need cost-effective test equipment to configure more test benches for a shorter time to market. These new benches must support high linearity; tight amplitude and phase accuracy over large bandwidths; low phase noise; extensive frequency coverage for multiband devices; and the ability to test for coexistence with other wireless standards. Along with powerful hardware, modular, software-based test and measurement benches will be able to adapt rapidly to new test requirements. 

Design tip: Invest in a wideband test platform that can evaluate performance in both existing and new frequency bands. Select instrumentation that not only supports coexistence with current standards but also adapts to the evolution of the standard over time.

3. Component Characterization and Validation Require More Testing

Working with wide signals below 6 GHz and at mmWave frequencies requires characterizing and validating greater performance out of RF communications components. Engineers must not only test innovative designs for multiband power amplifiers, low-noise amplifiers, duplexers, mixers, and filters, but also ensure that new and improved RF signal chains support simultaneous operation of 4G and 5G technologies. Additionally, to overcome significant propagation losses, mmWave 5G requires beamforming subsystems and antenna arrays, which demand fast and reliable multiport test solutions.

Design tip: Ensure your test systems can handle both multiband and multichannel 5G devices to address beamformers, FEMs, and transceivers.

4. Over-the-air Testing of Massive MIMO and Beamforming Systems Makes Traditional Measurements Spatially Dependent

Engineers working on 5G beamforming devices face the challenge of characterizing the transmit and receive paths and improving the reciprocity between TX and RX. For example, as the transmit power amplifier goes into compression, it introduces amplitude compression, phase shifts, and other thermal effects that the LNA in the receive path would not produce. Additionally, the tolerances of phase shifters, variable attenuators, gain-control amplifiers, and other devices could cause unequal phase shifts between channels, which affects the anticipated beam patterns. Measuring these effects requires over-the-air (OTA) test procedures that make traditional measurements like TxP, EVM, ACLR, and sensitivity spatially dependent.

Design tip: Use OTA test techniques that synchronize fast and precise motion control and RF measurements to more accurately characterize 5G beamforming systems without exceeding your test time budget.
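To see why beamforming makes measurements spatially dependent, consider a toy model of an eight-element uniform linear array: a phase error on a single element distorts the beam pattern, so any power or error-vector figure depends on the angle at which it is measured. The sketch below is an idealized textbook model, not a 5G test procedure:

```python
import numpy as np

# Array factor of an 8-element uniform linear array with half-wavelength
# spacing, steered to 20 degrees, then the same array with a 30-degree
# phase error on one element. (Idealized model, not a 5G NR procedure.)
n_el, d = 8, 0.5                     # elements, spacing in wavelengths
k = 2 * np.pi                        # wavenumber, in 1/wavelength units
angles = np.radians(np.linspace(-90, 90, 721))

steer = np.radians(20.0)
weights = np.exp(-1j * k * d * np.arange(n_el) * np.sin(steer))

def array_factor(w):
    n = np.arange(n_el)[:, None]
    steering = np.exp(1j * k * d * n * np.sin(angles)[None, :])
    return np.abs((w[:, None] * steering).sum(axis=0)) / n_el

ideal = array_factor(weights)
perturbed = weights.copy()
perturbed[3] *= np.exp(1j * np.radians(30))   # phase error on element 3
distorted = array_factor(perturbed)

peak = np.argmax(ideal)
print(f"peak at {np.degrees(angles[peak]):.1f} deg: "
      f"ideal gain {ideal[peak]:.2f}, with error {distorted[peak]:.2f}")
```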

5. High-volume Production Test Demands Fast and Efficient Scaling

New 5G applications and industry verticals will exponentially increase the number of 5G components and devices that manufacturers need to produce per year. Manufacturers are challenged by the need to provide quick ways to calibrate the multiple RF paths and antenna configurations of new devices and accelerate the OTA solutions for reliable and repeatable manufacturing test results. However, for volume production of RFICs, traditional RF chambers can take up much of the production floor space, disrupt material handling flows, and multiply capital expenses. To tackle these problems, OTA-capable IC sockets—small RF enclosures with an integrated antenna—are now commercially available. These provide semiconductor OTA test functionality in a reduced form factor. 

Design tip: Select an ATE platform that extends lab-grade 5G instrumentation to the production floor to simplify the correlation of characterization and production test data.

Technical White Paper

Engineer’s Guide to 5G Semiconductor Test

Wideband 5G IC test is complex. The Engineer’s Guide to 5G Semiconductor Test is here to help. A must-read for anyone navigating the time, cost, and quality trade-offs of sub-6 GHz and mmWave IC test, the guide features color diagrams, recommended test procedures, and tips for avoiding common mistakes.

Topics include:

  • Working with wide 5G downlink and uplink OFDM waveforms
  • Configuring wideband test benches for extensive frequency coverage
  • Avoiding common sources of error in 5G beamforming
  • Reducing test times of over-the-air TX and RX test procedures
  • Choosing alternatives to RF chambers for high-volume production of mmWave RFICs

Download the Engineer’s Guide to 5G Semiconductor Test

A Machine Learning Classifier Can Spot Serial Hijackers Before They Strike

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/mit-and-caida-researchers-want-to-use-machine-learning-to-plug-one-of-the-internets-biggest-holes

How would you feel if, every time you had to send sensitive information somewhere, you relied on a chain of people playing the telephone game to get that information to where it needs to go? Sounds like a terrible idea, right? Well, too bad, because that’s how the Internet works.

Data is routed through the Internet’s various metaphorical tubes using what’s called the Border Gateway Protocol (BGP). Any data moving over the Internet needs a physical path of networks and routers to make it from A to B. BGP is the protocol that moves information through those paths—though the downside, like a person in a game of telephone, is that each junction in the path knows only what it’s been told by its immediate neighbors.
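The researchers’ classifier learns from years of BGP routing history, but one simple red flag it can build on is easy to illustrate: the same IP prefix suddenly being originated by two different networks, known as a multiple-origin AS (MOAS) conflict. Here is a toy detector in Python; the announcements are made up, and real systems weigh far more features before accusing anyone of hijacking:

```python
from collections import defaultdict

# Toy detector for one classic hijack symptom: a prefix announced by two
# different origin ASes at once (a MOAS conflict). The announcements are
# invented; real systems track many more features over months of history.
announcements = [
    ("203.0.113.0/24", 64500),
    ("198.51.100.0/24", 64501),
    ("203.0.113.0/24", 64666),   # a second origin for the same prefix
]

origins = defaultdict(set)
for prefix, origin_as in announcements:
    origins[prefix].add(origin_as)

for prefix, ases in sorted(origins.items()):
    if len(ases) > 1:
        print(f"MOAS conflict: {prefix} originated by ASes {sorted(ases)}")
```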

Google’s Equiano Cable Will Extend to the Remote Island of Saint Helena, Flooding It With Data

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/googles-equiano-cable-will-extend-to-the-remote-island-of-saint-helena-flooding-it-with-data

The tiny island will need to turn itself into a data hub to make use of the expected bandwidth

If you know anything about the South Atlantic island of Saint Helena, that’s probably because it was the island where the British government exiled Napoleon until he died in 1821. It was actually the second time Britain attempted to exile Napoleon, and the island was chosen for a very specific reason: It’s incredibly remote.

Napoleon is long gone, but the island’s remoteness continues to pose challenges for its 4,500-odd residents. They used to be able to reach St. Helena only by boat, once every three weeks, though it’s now possible to catch the occasional flight from Johannesburg, South Africa, to what’s been called “the world’s most useless airport.” Residents’ Internet prospects are even worse—the island’s entire population shares a single 50-megabit-per-second satellite link.

That’s about to change, however, as the St. Helena government has shared a letter of intent describing a plan to connect the island to Google’s recently announced Equiano cable. The cable will be capable of delivering orders of magnitude more data than anything the island has experienced. It will create so much capacity, in fact, that St. Helena could use the opportunity to transform itself from an almost unconnected island to a South Atlantic data hub.

How YouTube Paved the Way for Google’s Stadia Cloud Gaming Service

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/how-the-youtube-era-made-cloud-gaming-possible

Google’s vision is that any device that can play YouTube videos will also have access to cloud gaming through Stadia

When Google’s executives floated a vision for the Stadia cloud gaming service that could make graphically intensive gaming available on any device, they knew the company wouldn’t have to build all the necessary technology from scratch. Instead, the tech giant planned to leverage its expertise in shaping Internet standards and installing infrastructure to support YouTube video streaming for more than a billion people worldwide.

The Internet Is Coming to the Rest of the Animal Kingdom

Post Syndicated from Elie Dolgin original https://spectrum.ieee.org/tech-talk/telecom/internet/internet-of-living-things-can-communication-tools-break-down-the-interspecies-divide

A new Doolittlesque initiative aims to promote Internet communication among smart animals

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with each other and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Shipping Industry Bets Big on IoT in Bid to Save Billions

Post Syndicated from Manon Verchot original https://spectrum.ieee.org/tech-talk/telecom/internet/shipping-industry-bets-big-on-iot-in-bid-to-save-billions

Across the shipping industry, IoT technology is finally graduating from pilots to real-world commercial products

In a bid to save billions of dollars annually, the shipping industry is graduating from pilot projects and finally starting to adopt a smattering of Internet of Things (IoT) technologies for real-world, commercial use. Lately, several large and small shipping companies have turned to Traxens, a French technology firm, to help them deploy IoT devices across their fleets.

Traxens develops technology that tracks and monitors cargo. Since it launched in 2012, the company has earned investments from leading shipping companies. Shipping is responsible for carrying 90 percent of the world’s traded goods, according to the International Chamber of Shipping. This year, A.P. Møller—Mærsk A/S, which is the world’s largest container ship and supply vessel operator, became a Traxens shareholder and customer. 

Then, earlier this month, Traxens equipped the Indonesian shipping company PT TKSolusindo with a set of devices, each slightly longer and thinner than a brick, with sensors including GPS. These devices can track geolocation, detect shock and motion, and check the temperature, humidity, and alarms on refrigerated containers, often called reefers.

Melting Arctic Ice Opens a New Fiber Optic Cable Route

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/melting-sea-ice-opens-the-floodgate-for-a-new-fiber-optic-cable-route

Cinia and MegaFon’s proposed Arctic cable would bring lower latencies and geographical diversity

The most reliable way to reduce latency is to make sure your signal travels over the shortest physical distance possible. Nowhere is that more evident than the fiber optic cables that cross oceans to connect continents. There will always be latency between Europe and North America, for example, but where you lay the cable connecting the two continents affects that latency a great deal.

To that end, Helsinki-based Cinia, which owns and operates about 15,000 kilometers of fiber optic cable, and MegaFon, a Russian telecommunications operator, signed a memorandum of understanding to lay a fiber optic cable across the Arctic Ocean. The cable, if built, would not only reduce latency between users in Europe, Asia, and North America, but provide some much-needed geographical diversity to the world’s undersea cable infrastructure.

The vast majority of undersea cable encircles the world along a relatively similar path: From the east coast of the United States, cables stretch across the northern Atlantic to Europe, through the Mediterranean and Red Seas, across the Indian Ocean before swinging up through the South China Sea and finally spanning the Pacific to arrive at the west coast of the U.S. Other cable routes exist, but none have anywhere near the data throughput that this world-girding trunk line has.

Ari-Jussi Knaapila, the CEO of Cinia, estimates that the planned Arctic cable, which would stretch from London to Alaska, would shorten the physical cable distance between Europe and the western coast of North America by 20 to 30 percent. Additional cable will extend the route down to China and Japan, for a planned total of 10,000 kilometers of new cable.
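The latency benefit follows directly from physics: light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 kilometers per millisecond, so one-way delay scales with route length. The route lengths in this sketch are illustrative, not Cinia’s figures:

```python
# One-way propagation delay over fiber: light in glass covers roughly
# 200 km per millisecond (about two-thirds of c).
FIBER_KM_PER_MS = 200.0

def one_way_delay_ms(route_km):
    return route_km / FIBER_KM_PER_MS

current_route_km = 14000                   # hypothetical route today
arctic_route_km = current_route_km * 0.75  # ~25 percent shorter, per Cinia

print(f"current route: {one_way_delay_ms(current_route_km):.0f} ms one way")
print(f"arctic route : {one_way_delay_ms(arctic_route_km):.0f} ms one way")
```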

Knaapila also says that the cable is an important effort to increase geographic diversity of the undersea infrastructure. Because many cables run along the same route, various events—earthquakes, tsunamis, seabed landslides, or even an emergency anchoring by a ship—can damage several cables at once. On December 19, 2008, 14 countries lost their connections to the Internet after ship anchors cut five major undersea cables in the Mediterranean Sea and Red Sea.

“Submarine cables are much more reliable than terrestrial cables with the same length,” Knaapila says. “But when a fault occurs in the submarine cable, the repair time is much longer.” The best way to avoid data crunches when a cable is damaged is to already have cables elsewhere that were unaffected by whatever event broke the original cable in the first place.

Stringing a cable across the Arctic Ocean is not a new idea, though other proposed projects, including the semi-built Arctic Fibre project, have never been completed. In the past, the navigational season in the Arctic was too short to easily build undersea cables. Now, melting sea ice due to climate change is expanding that window and making construction more feasible. (The shipping industry is coming to similar realizations as new routes open up.)

The first step for the firms under the memorandum of understanding is to establish a company by the end of the year tasked with the development of the cable. Then come route surveys of the seabed, construction permits (for both on-shore and off-shore components), and finally, the laying of the cable. According to Knaapila, 700 million euros have been budgeted for investment in the route.

The technical specifics of the cable itself have yet to be determined. Most fiber optic cables use wavelengths of light around 850, 1300, or 1550 nanometers, but for now, the goal is to remain as flexible as possible. For the same reason, the data throughput of the proposed cable remains undecided.

Of course, not all cable projects succeed. The South Atlantic Express (SAex) would be one of the first direct links between Africa and South America, connecting remote islands like St. Helena along the way. But SAex has struggled with funding and currently sits in limbo. Cinia and MegaFon hope to avoid a similar fate.