Tag Archives: Telecom/Internet

Chaos Engineering Saved Your Netflix

Post Syndicated from Michael Dumiak original https://spectrum.ieee.org/telecom/internet/chaos-engineering-saved-your-netflix

To hear Greg Orzell tell it, the original Chaos Monkey tool was simple: It randomly picked a virtual machine hosted somewhere on Netflix’s cloud and sent it a “Terminate” command. Unplugged it. Then the Netflix team would have to figure out what to do.

That was a decade ago now, when Netflix moved its systems to the cloud and subsequently navigated itself around a major U.S. East Coast service outage caused by its new partner, Amazon Web Services (AWS).

Orzell is currently a principal software engineer at GitHub and lives in Mainz, Germany. As he recently recalled the early days of Chaos Monkey, Germany got ready for another long round of COVID-related pandemic lockdowns and deathly fear. Chaos itself raced outside.

But while the coronavirus wrenched daily life upside-down and inside out, a practice called chaos engineering, applied in computer networks, might have helped many parts of the networked world limp through their coronavirus-compromised circumstances.

Chaos engineering is a kind of high-octane active analysis, stress testing taken to extremes. It is an emerging approach to evaluating distributed networks, running experiments against a system while it’s in active use. Companies do this to build confidence in their operation’s ability to withstand turbulent conditions.

Orzell and his Netflix colleagues built Chaos Monkey as a Java-based tool on top of the AWS software development kit. The tool acted almost like a random-number generator. But when Chaos Monkey told a virtual machine to terminate, it was no simulation. The team wanted systems that could tolerate host servers and pieces of application services going down. “It was a lot easier to make that real by saying, ‘No, no, no, it’s going to happen,’ ” Orzell says. “We promise you it will happen twice in the next month, because we are going to make it happen.”
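The core mechanism is easy to sketch. What follows is not Netflix’s Chaos Monkey (the original was a Java tool built on the AWS SDK) but a minimal Python illustration of the same idea using boto3; the “chaos-opt-in” tag is a made-up guardrail, and real deployments add scheduling, logging, and much stricter safety checks.

```python
# Minimal sketch of the Chaos Monkey idea, not Netflix's code:
# list running instances that opted in, pick one at random, and really terminate it.
# Assumes AWS credentials and region are already configured for boto3.
import random
import boto3

ec2 = boto3.client("ec2")

def pick_random_instance():
    """Return the ID of a random running instance tagged for chaos experiments."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "instance-state-name", "Values": ["running"]},
            {"Name": "tag-key", "Values": ["chaos-opt-in"]},  # illustrative guardrail
        ]
    )
    ids = [
        inst["InstanceId"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
    ]
    return random.choice(ids) if ids else None

def unleash_monkey():
    instance_id = pick_random_instance()
    if instance_id is None:
        print("No eligible instances; the monkey sleeps.")
        return
    # No simulation: the instance really goes away, and the service has to cope.
    ec2.terminate_instances(InstanceIds=[instance_id])
    print(f"Terminated {instance_id}")

if __name__ == "__main__":
    unleash_monkey()
```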

In controlled and small but still significant ways, chaos engineers see if systems work by breaking them—on purpose, on a regular basis. Then they try to learn from it. The results show if a system works as expected, but they also build awareness that even in an engineering organization, things fail. All the time.

As practiced today, chaos engineering is more refined and ambitious still. Subsequent tools could intentionally slow things down to a crawl, send network traffic into black holes, and turn off network ports. (One related app called Chaos Kong could scale back company servers inside an entire geographic region. The system would then need to be resilient enough to compensate.) Concurrently, engineers also developed guardrails and safety practices to contain the blast radius. And the discipline took root.
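Latency injection works along similar lines. The sketch below is not any particular vendor’s tool, just one common way to degrade a Linux host with the standard tc/netem traffic-control utility; the interface name and delay value are placeholders, it must run with root privileges on a non-critical test host, and the only “blast radius” control here is a crude auto-revert timer.

```python
# Illustrative only: add latency to a network interface with tc/netem, then restore it.
# Run as root on a non-critical test host; interface and delay are placeholders.
import subprocess
import time

INTERFACE = "eth0"
DELAY = "500ms"
DURATION_S = 60  # crude guardrail: automatically revert after one minute

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def inject_latency():
    # Delay every packet leaving INTERFACE by DELAY.
    run(["tc", "qdisc", "add", "dev", INTERFACE, "root", "netem", "delay", DELAY])

def restore():
    # Remove the netem queueing discipline, returning the interface to normal.
    run(["tc", "qdisc", "del", "dev", INTERFACE, "root", "netem"])

if __name__ == "__main__":
    inject_latency()
    try:
        time.sleep(DURATION_S)  # watch dashboards and alerts while the fault is live
    finally:
        restore()  # always clean up, even if interrupted
```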

At Netflix, chaos engineering has evolved into a platform called the Chaos Automated Platform, or ChAP, which is used to run specialized experiments. Nora Jones, a software engineer and the founder and chief executive of a startup called Jeli, says teams need to understand when and where to experiment. She helped implement ChAP while still at Netflix. “Creating chaos in a random part of the system is not going to be that useful for you,” she says. “There needs to be some sort of reasoning behind it.”

Of course, the novel coronavirus has added entirely new kinds of chaos to network traffic. Traffic fluctuations during the pandemic did not all go in one direction either, says AWS principal solutions architect Constantin Gonzalez. Travel services like the German charter giant Touristik Union International (TUI), for instance, drastically pulled in their sails as traffic ground to a halt. But the point of building resilient networks is to make them elastic, he says.

Chaos engineering is geared for this. As an engineering mind-set, it alludes to Murphy’s Law, developed during moonshot-era rocket science: If something can go wrong, it will go wrong.

It’s tough to say that the practice kept the groaning networks up and running during the pandemic. There are a million variables. But for those using chaos-engineering techniques—even at as far-flung and traditional a business as DBS Bank, a consumer and investment institution in Singapore with US $437 billion in assets—the techniques helped. DBS is three years into a network-resiliency program, site-reliability engineer Harpreet Singh says, and even as the program got off the ground in early 2018 the team was experimenting with chaos-engineering tools.

And chaos seems to be catching. Jones’s Jeli startup delivers a strategic view of what she calls catalyst events (events that might be simulated or sparked by chaos engineering), which show the difference between how an organization thinks it works and how it actually works. Gremlin, a four-year-old San Jose venture, offers chaos-engineering tools as a service. In January, the company also issued its first “State of Chaos Engineering” report, for 2021. In a blog post announcing the publication, Gremlin vice president of marketing Aileen Horgan described chaos-engineering conferences these days as drawing more than 3,500 registrants. Gremlin’s user base alone, she noted, has conducted nearly 500,000 chaos-engineering system attacks to date.

Gonzalez says AWS has been using chaos-engineering practices for a long time. This year—as the networked world, hopefully, recovers from stress that tested it like never before—AWS is launching a fault-injection service that its cloud customers can use to run their own experiments.

Who knows how they’ll be needed.

Treat Smart City Tech like Sewers, or Better

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/treat-smart-city-tech-like-sewers-or-better

Smart cities, like many things, took a beating in 2020. Alphabet, Google’s parent company, pulled its Sidewalk Labs subsidiary out of a smart-city project in Toronto. Cisco killed its plans to sell smart-city technology. And in many places, city budgets will be affected for years to come by the pandemic’s economic shock, making it more difficult to justify smart-city expenses.

That said, the pandemic also provided municipalities around the world with reason to invest in new technologies for public transportation, contact tracing, and enforcing social distancing. In a way, the present moment is an ideal time for a new understanding of smart-city infrastructure and a new way of paying for it.

Cities need to think of smart-city technology as infrastructure, like roads and sewers, and as such, they need to think about investing in it, owning it, maintaining it, and controlling how it’s used in the same ways they do for other infrastructure. Smart-city deployments affect the citizenry, and citizens will have a lot to say about any implementation. The process of including that feedback and respecting citizens’ rights means that cities should own the procurement process and control the implementation.

In some cases, citizen backlash can kill a project, as it did with Sidewalk’s Toronto effort, where the fight was over who exactly had access to the data collected by the smart-city infrastructure. Even when cities do get permission from citizens for deployments, the end results are often neighborhood-size “innovation zones” that are little more than glorified test beds. A truly smart city needs a master plan, citizen accountability, and a means of funding that grants the city ownership.

One way to do this would be for cities to create public authorities, just like they do when investing in public transportation or even health care. These authorities should have publicly appointed commissioners who manage and operate the sensors and services included in smart-city projects. They would also have the ability to raise funds using bond issues backed by the revenue created by smart-city implementation.

For example, consider a public safety project that requires sensors at intersections to reduce collisions. The project might use the gathered data to meet its own safety goals, but the insights derived from analyzing traffic patterns could also be sold to taxi companies or logistics providers.

These sales would underpin repayment of the bonds issued to pay for the technology’s deployment and management. While some municipal bonds mature in 10- to 30-year time frames, there are also bonds with 3- to 5-year terms that would be better suited to the shorter life spans of technologies like traffic-light sensors.

Even if bonds and public authorities aren’t the right way to proceed, owning and controlling the infrastructure has other advantages. Smart-city contracts could employ local contractors and act not just as a source of revenue for cities but also as an economic development tool that can create jobs and a halo effect to draw in new companies and residents.

For decades, cities have invested in their infrastructure using public debt. If cities invest in smart-city technologies the same way, they could give their citizens a bigger stake in the process, create new streams of revenue for the city, and improve quality of life. After all, people deserve to live in smarter cities, rather than innovation zones.

This article appears in the March 2021 print issue as “Smarter Smart Cities.”

Facebook’s Australian Struggles Cast Shadow on Net Policy Around the World

Post Syndicated from John Boyd original https://spectrum.ieee.org/tech-talk/telecom/internet/facebook-drops-news-ban-after-australia-compromises-on-media-bargaining-code

Facebook users in Australia last week found they couldn’t access news and government web pages. That was by design. The social network removed these links in response to a proposed Australian government bill that would require social media networks to pay news organizations for use of their content. The ban on news access is actually part of a larger struggle between Facebook and media players like Rupert Murdoch’s News Corporation over the future of news content on social media.

That battle ended (for now) with an agreement brokered after hurried bargaining in which the Australian government made concessions. Now, if digital platforms like Facebook and Google negotiate commercial deals with news organizations that can be shown to contribute “to the sustainability of the Australian news industry,” they could avoid being subject to the new law. In addition, the companies will receive at least one month’s notice before “a last resort” arbitration is proposed if such negotiations fail.

Plenty of questions, however, remain: Will Facebook consent to other countries enacting similar policy on them? What might other countries’ deals with other players (including Google and Microsoft) look like? And will any of this matter to Facebook and Internet users beyond Australia’s borders?

The answer to that final question, at least, is very likely Yes.    

The battle began brewing back in April 2020 when the Australian government asked its Competition & Consumer Commission to draw up a news media bargaining code that would “address bargaining power imbalance between Australian news media businesses and digital platforms, specifically Google and Facebook.”

As the Media Code neared enactment, Google threatened in January to pull its search engine from Australia. In the same month, Mel Silva, managing director of Google Australia and New Zealand, stated before the Australian Senate Economics Legislation Committee that, “The principle of unrestricted linking between websites is fundamental to Search. Coupled with the unmanageable financial and operational risk if this version of the Code were to become law, it would give us no real choice but to stop making Google Search available in Australia.” 

Australian prime minister Scott Morrison’s reply was immediate. “We don’t respond to threats,” he told the press the same day. 

Microsoft, a long-time competitor of Google and Facebook, was quick to exploit its rivals’ difficulties. After endorsing the government’s Media Code, Microsoft president Brad Smith wrote on his blog in February that Microsoft “committed that its Bing search service would remain in Australia and that it is prepared to share revenues with news organizations under the rules that Google and Facebook are rejecting.” He added that the company would support “a similar proposal in the United States, Canada, the European Union, and other countries.”

Such developments apparently gave Google second thoughts, and it began negotiating directly with several Australian media companies, including News Corp., which announced on February 17 that “it has agreed to an historic multi-year partnership with Google to provide trusted journalism from its news sites around the world in return for significant payments by Google.”

The Facebook Fight

In a formal announcement on Tuesday, William Easton, managing director of Facebook Australia & New Zealand, said that after discussions, “We are satisfied that the Australian government has agreed to a number of changes and guarantees that address our core concerns about allowing commercial deals that recognize the value our platform provided to publishers relative to the value we receive from them.”

Campbell Brown, Facebook’s vice president of global news partnerships, added in a separate announcement that, “We’re restoring news on Facebook in Australia in the coming days. From now on, the government has clarified we will retain the ability to decide if news appears on Facebook so that we won’t automatically be subject to a forced negotiation.”

The amendments may help diminish the criticism that deals like Google’s News Corp. agreement will only stuff the pockets of media owners such as Rupert Murdoch, who, critics like Kara Swisher claim, has unduly influenced the Australian government’s legislation.

Certainly, News Corp., which owns The Wall Street Journal, The New York Post, and The Australian, as well as the U.K.’s The Times and The Sun, has suffered from the online dominance of Google and Facebook in Australia and around the world. Before the ascendancy of both these online giants, newspapers were a thriving business, making money by selling copies and advertisements. In Australia today, according to the country’s competition watchdog, for every A$100 in media advertising revenues, Google accounts for A$53, Facebook for A$28, and the remaining A$19 is shared by content providers like News Corp.

Meanwhile, Facebook will now have to deal with the fallout resulting from its news blackout. U.K. member of parliament Julian Knight described the ban as a “crass move” and “the worst type of corporate culture,” while the U.K. News Media Association said the move “demonstrates why robust regulation is urgently needed.”

Canada also condemned the Facebook move. Heritage Minister Steven Guilbeault, who is drawing up the country’s own media code, said he has talked with French, German and Finnish counterparts about collaborating to ensure published content would be properly compensated. He added that he expected “ten, fifteen countries [would soon be] adopting similar rules.”

To Close the Digital Divide, the FCC Must Redefine Broadband Speeds

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/to-close-the-digital-divide-the-fcc-must-redefine-broadband-speeds

The coronavirus pandemic has brought the broadband gap in the United States into stark relief—5.6 percent of the population has no access to broadband infrastructure. But for an even larger percentage of the population, the issue is that they can’t afford access, or they get by on mobile phone plans. Recent estimates, for example, suggest that 15 million to 16 million students—roughly 30 percent of the grade-school population in the United States—lack broadband access for one reason or another.

The Federal Communications Commission (FCC) has punted on broadband access for at least a decade. With the recent change in the regulatory regime, it’s time for the country that created the ARPANET to fix its broadband access problem. While the lack of access is driven largely by broadband’s high cost, the reason cost is driving the broadband gap is that the FCC’s current definition of broadband is stuck in the early 2000s.

The FCC defines broadband as a download speed of 25 megabits per second and an upload speed of 3 Mb/s. The agency set this definition in 2015, and it was outdated even then. At that time, I was already stressing a 50 Mb/s connection just from a couple of Netflix streams and working from home. Before 2015, the defined broadband speeds in the United States were an anemic 4 Mb/s down and 1 Mb/s up, set in 2010.

If the FCC wants to address the broadband gap rather than placate the telephone companies it’s supposed to regulate, it should again redefine broadband. The FCC could easily establish broadband as 100 Mb/s down and at least 10 Mb/s up. This isn’t a radical proposal: As of 2018, 90.5 percent of the U.S. population already had access to 100 Mb/s speeds, but only 45.7 percent were tapping into it, according to the FCC’s 2020 Broadband Deployment Report.

Redefining broadband will force upgrades where necessary and also reveal locations where competition is low and prices are high. As things stand, most people in need of speeds above 100 Mb/s have only one option: cable providers. Fiber is an alternative, but most U.S. fiber deployments are in wealthy suburban and dense urban areas, leaving rural students and those living on reservations behind. A lack of competition leaves cable providers able to impose data caps and raise fees.

What seems like a lack of demand is more likely a rejection of a high-cost service, even as more people require 100 Mb/s for their broadband needs. In the United States, 100 Mb/s plans cost $81.19 per month on average, according to data from consumer interest group New America. The group gathered broadband prices across 760 plans in 28 cities around the world, including 14 cities in the United States. When compared with other countries, prices in the United States are much higher. In Europe, the average cost of a 100/10 Mb/s plan is $48.48, and in Asia, a similar plan would cost $69.76.

Closing the broadband gap will still require more infrastructure and fewer monopolies, but redefining broadband is a start. With a new understanding of what constitutes reasonable broadband, the United States can proactively create new policies that promote the rollout of plans that will meet the needs of today and the future.

This article appears in the February 2021 print issue as “Redefining Broadband.”

St. Helena’s New Undersea Cable Will Deliver 18 Gbps Per Person

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/telecom/internet/st-helenas-new-undersea-cable-will-deliver-18-gbps-per-person

Since 1989, the island of St. Helena in the South Atlantic has relied on a single 7.6-meter satellite dish to connect the island’s residents to the rest of the world. While the download and upload speeds have increased over the years, the roughly 4,500 Saints, as the island’s residents call themselves, still share a measly 40-megabit-per-second downlink and a 14.4-Mbps uplink to stay connected. 

But come April, they’ll be getting quite an upgrade: An undersea cable with a maximum capacity of 80 terabits per second will make landfall on the island. That’s a higher data rate than residents can use or afford, so the island’s government is looking to satellite operators to defray the costs of tapping into the cable’s transmissions. However, an incumbent telecom monopoly, and the outdated infrastructure it maintains, could turn the entire project into a cable to nowhere.
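The headline figure is straightforward arithmetic: divide the cable’s maximum capacity by the island’s population and each of the roughly 4,500 Saints could, in principle, claim about 18 gigabits per second. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "18 Gbps per person" headline.
cable_capacity_bps = 80e12              # 80 terabits per second
residents = 4_500                       # approximate number of Saints
per_person_gbps = cable_capacity_bps / residents / 1e9
print(f"{per_person_gbps:.1f} Gb/s per person")  # ~17.8, i.e. roughly 18 Gbps
```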

Laying an undersea cable to an island with fewer than 5,000 inhabitants is obviously a terrible business idea. Someone has to pay for the seafloor route surveys, the manufacturing and laying of the fiber-optic cable and other hardware, and the ongoing operational costs. St. Helena, part of the British Overseas Territories, was able to pay for its cable only because the European Union’s European Development Fund granted the island €21.5 million in 2018 for that exact purpose.

St. Helena signed a contract with Google in December 2019 under which it would pay for a branch off the Equiano cable, which Google is currently laying from Portugal to South Africa. All that remained was finding someone to use and pay for the bulk of the data once it was flowing to the island. The minimum bandwidth that can be delivered by a single wavelength over the cable is 100 gigabits per second, still too much for St. Helena.

In recent years, several companies have begun launching low Earth orbit (LEO) satellite constellations to provide broadband service, with SpaceX’s Starlink being perhaps the most prominent example. Like any communications satellite, these connect users who can’t tap into terrestrial networks, whether because of distance or geography, by acting as a relay between the user and a ground-station antenna elsewhere that connects to an Internet backbone. The roughly 150-kilogram satellites in LEO constellations offer lower-latency connections compared with those of larger satellites up higher in geostationary orbits, but the trade-off is that they cannot see nearly as much of Earth’s surface. This limitation means the satellites need a scattering of regularly spaced ground stations to complete their connections. 

The scheme also creates ocean-size problems in maintaining coverage for airplanes, ships, and islands. These can connect directly to a LEO satellite, but then there needs to be a connection from the satellite to Earth’s terrestrial networks. Hence the need for a ground station with access to a long-haul cable. “It’s critical to find places where there’s a little bit of land,” says Michele Franci, who is in charge of operations at the LEO satellite company OneWeb, one of the companies interested in building on the island. “Without anything, there will be a hole in the coverage.”

St. Helena is one of the few little bits of land in the South Atlantic. So when OneWeb learned of the proposed Equiano cable, the benefits of having a ground station on the island became apparent. “Without that cable, it would not have been a feasible option,” Franci says. OneWeb filed for bankruptcy in March 2020 but has been bought jointly by the British government and the Indian telecom company Bharti Global.

Christian von der Ropp, who launched a campaign in 2011 to connect St. Helena to an undersea cable, sees ground stations as the ideal way to offset the cable’s operating costs. With OneWeb (and other companies that have expressed some level of interest, such as SpaceX and Maxar) paying for the bulk of the throughput, Saints can siphon off the bit of data they need to conduct business and stay in touch with family and friends working off-island.

But von der Ropp, a satellite-telecom consultant, does foresee a last-mile problem in bringing high-speed connections to Saints, and he blames it on the island’s telecom monopoly, Sure. “They are terribly exploiting the islands,” von der Ropp declares. Pay-TV packages can cost £40 per month, which is a lot on an island where the average monthly income comes to about £700. Sure also charges up to 600 percent premiums for any data usage that exceeds the amount in a subscriber’s plan. Von der Ropp says Sure’s infrastructure is insufficient and that Saints would experience throttled connections in the last mile. However, Christine Thomas, the chief executive for Sure’s operations on St. Helena, says that Sure’s prices have been approved by the island government. Thomas also says that the company has invested £3 million in building out the island’s infrastructure since 2013, and recognizes the need for further upgrades to match the cable’s throughput.

Sure’s current contract with the St. Helena government runs through 31 December 2022. While the cable spur off the Equiano cable to St. Helena will land on the island in April, it will not carry data until early 2022, when the entire cable is completed. The island’s government is currently drafting new telecom regulations and exploring the possibility of issuing a license to a different provider after Sure’s term expires.

Meanwhile, von der Ropp, along with island resident Karl Thrower, who worked on communications infrastructure in Europe before moving to the island, plans to create a nonprofit service provider called Saintel as an alternative to Sure. The two propose that Saintel build new millimeter-wave and laser-link infrastructure to provide high-speed connections from the cable’s endpoint and fund it in part by offering its network as a test bed for those technologies.

Despite an entrenched monopoly and poor infrastructure—not to mention the coronavirus pandemic, which has severely restricted travel to the island for preliminary cable and ground-station work—St. Helena can act as a model for how to connect other remote islands. OneWeb’s Franci notes that building ground stations on St. Helena will also improve satellite connections for the island of Tristan da Cunha, about 2,400 kilometers to St. Helena’s south. And other parts of the world’s oceans need coverage, too: For example, there are large gaps in LEO coverage below the 45th parallel south that could be plugged by islands with ground stations.

“The places that used to be isolated and not really part of the mainstream now become relevant,” Franci says. While St. Helena will be as remote as ever, at least it will no longer be isolated.

A version of this article appears in the January 2021 print issue as “A Small Island Waits On Big Data Rates.”

Quantum Memory Milestone Boosts Quantum Internet Future

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/milestone-for-quantum-memory-efficiency-makes-quantum-internet-possible

An array of cesium atoms just 2.5 centimeters long has demonstrated a record level of storage-and-retrieval efficiency for quantum memory. It’s a pivotal step on the road to eventually building large-scale quantum communication networks spanning entire continents.

Fake News Is a Huge Problem, Unless It’s Not

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/fake-news-is-a-huge-problem-unless-its-not

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

Jonathan Swift in 1710 definitely said, “Falsehood flies, and the truth comes limping after it.” Mark Twain, on the other hand, may or may not have said, “A lie can travel halfway around the world while the truth is putting on its shoes.”

Especially in the context of politics, we lately use the term “fake news” instead of “political lies” and the problem of fake news—especially when it originates abroad—seems to be much with us these days. It’s believed by some to have had a decisive effect upon the 2016 U.S. Presidential election and fears are widespread that the same foreign adversaries are at work attempting to influence the vote in the current contest.

A report in 2018 commissioned by the U.S. Senate Intelligence Committee centered its attention on the Internet Research Agency, a shadowy arm of Russia’s intelligence services. The report offers, to quote an account of it in Wired magazine, “the most extensive look at the IRA’s attempts to divide Americans, suppress the vote, and boost then-candidate Donald Trump before and after the 2016 presidential election.”

Countless hours of research have gone into identifying and combating fake news. A recent study found more than 2000 articles about fake news published between 2017 and 2020.

Nonetheless, there’s a dearth of actual data when it comes to the magnitude, extent, and impact, of fake news.

For one thing, we get news that might be fake in various ways—from the Web, from our phones, from television—yet it’s hard to aggregate these disparate sources. Nor do we know what portion of all our news is fake news. Finally, the impact of fake news may or may not exceed its prevalence—we just don’t know.

A new study looks into these very questions. Its authors include two researchers at Microsoft who listeners of the earlier incarnation of this podcast will recognize: David Rothschild and Duncan Watts were both interviewed here back in 2012. The lead author, Jennifer Allen, was a software engineer at Facebook before becoming a researcher at Microsoft in its Computational Social Science Group and she is also a Ph.D. student at the MIT Sloan School of Management and the MIT Initiative on the Digital Economy. She’s my guest today via Skype.

Jenny, welcome to the podcast.

Jennifer Allen Thank you, Steven. Happy to be here.

Steven Cherry Jenny, Wikipedia defines “fake news” as “a type of yellow journalism or propaganda that consists of deliberate misinformation or hoaxes spread via traditional print and broadcast news media or online social media.” The term made its way into the online version of the Random House dictionary in 2017 as “false news stories, often of a sensational nature, created to be widely shared or distributed for the purpose of generating revenue, or promoting or discrediting a public figure, political movement, company, etc.” Jenny, are you okay with either of these definitions? More simply, what is fake news?

Jennifer Allen Yeah. Starting off with a tough question. I think the way that we define fake news really changes whether or not we consider it to be a problem or the magnitude of the problem. So the way that we define fake news in our research—and how the academic community has defined fake news—is that it is false or misleading information masquerading as legitimate news. I think the way that, you know, we’re using fake news is really sort of this hoax news that’s masquerading as true news. And that is the definition that I’m going to be working with today.

Steven Cherry The first question you tackled in this study and here I’m quoting it: “Americans consume news online via desktop computers and increasingly mobile devices as well as on television. Yet no single source of data covers all three modes.” Was it hard to aggregate these disparate sources of data?

Jennifer Allen Yes. So that was one thing that was really cool about this paper, is that usually when people study fake news or misinformation, they do so in the context of a single platform. So there’s a lot of work that happens on Twitter, for example. And Twitter is interesting and it’s important for a lot of reasons, but it certainly does not give a representative picture of the way that people consume information today or consume news today. It might be popular among academics and journalists. But the average person is not necessarily on Twitter. And so one thing that was really cool and important, although, as you mentioned, it was difficult as well, was to combine different forms of data.

And so we looked at a panel of Nielsen TV data, as well as a desktop panel of individual Web traffic, also provided by Nielsen. And then finally, we also looked at mobile traffic with an aggregate data set provided to us by ComScore. And so we have these three different datasets that really allow us to triangulate the way that people are consuming information and give sort of a high-level view.

Steven Cherry You found a couple of interesting things. One was that most media consumption is not news-related—maybe it isn’t surprising—and there’s a big difference across age lines.

Jennifer Allen Yes, we did find that. And so—as perhaps it might not be surprising—older people consume a lot more television than younger people do. And younger people spend more time on mobile and online than older people do. However, what might be surprising to you is that no matter whether it’s old people or younger people, the vast majority of people are consuming more news on TV. And so that is a stat that surprises a lot of people, even as we look across age groups—that television is the dominant news source, even among people age 18 to 24.

Steven Cherry When somebody looked at a television-originating news piece on the web instead of actually on television, you characterized it as online. That is to say, you characterized by the consumption of the news, not its source. How did you distinguish news from non-news, especially on social media?

Jennifer Allen Yes. So there are a lot of different definitions of news here that you could use. We tried to take the widest definition possible across all of our platforms. So on television, we categorized as news anything that Nielsen categorizes as news as part of their dataset. And they are the gold standard dataset for TV consumption. And so, think Fox News, think the Today show. But then we also added things that maybe they wouldn’t consider news. So Saturday Night Live often contains news clips and touches on the topical events of the day. And so we also included that show as news. And so, again, we tried to take a really wide definition. And the same online.

And so online, we also aggregated a list of, I think, several thousand Web sites that were both mainstream news and hyper-partisan news, as well as fake news. And we find hyper-partisan news and fake news using news lists that have emerged from the large body of research that came out of the 2016-election fake-news phenomenon. And so there again, we tried to take the widest definition of fake news. And so we categorize not only your crappy single-article sites but also things like Breitbart and the Daily Wire as hyper-partisan sites.

Steven Cherry Even though we associate online consumption with young people and television with older people, you found that fake news stories were more likely to be encountered on social media and that older viewers were heavier consumers than younger ones.

Jennifer Allen Yes, we did find that. This is a typical finding within the fake news literature, which is that older people tend to be more drawn to fake news for whatever reason. And there’s been work looking at why that might be. Maybe it’s digital literacy. Maybe it’s just more interest in news generally. And it’s true that on social media, there’s more fake and hyper-partisan news than, you know, on the open web.

That being said, I would just emphasize that the dominant … that the majority of news that is consumed even on social media—and even among older Americans—is still mainstream. And so, think your New York Times or your Washington Post instead of your Daily Wire or Breitbart.

Steven Cherry You didn’t find much in the way of fake news on television at all.

Jennifer Allen Yes. And so this is sort of a function, as I was saying before, of the way that we defined fake news. We, by definition, did not find any fake news on television, because the way that fake news has really been studied in the literature and also talked about in the mainstream media is as this phenomenon of Web sites masquerading as legitimate news outlets. That being said, I definitely believe that there is misinformation that occurs on television. You know, a recent study came out looking at who the biggest spreader of misinformation around the coronavirus was and found it to be Donald Trump. And just because we aren’t defining that content as fake news—because it’s not deceptive in the way that it is presenting itself—doesn’t mean that it is necessarily legitimate information. It could still be misinformation, even though we do not define it as fake news.

Steven Cherry I think the same thing would end up being true about radio. I mean, there certainly seems to be a large group of voters—including, it’s believed, the core supporters of one of the presidential candidates—who are thought to get a lot of their information, including fake information from talk radio.

Jennifer Allen Yeah, talk radio is unfortunately a hole in our research; we were not able to get a good dataset looking at talk radio. And indeed, talk radio, and Rush Limbaugh’s talk show, for example, can really be seen as the source of a lot of the polarization in the news environment.

And there’s been work done by Yochai Benkler at the Harvard Berkman Klein Center that looks at the origins of talk radio in creating a polarized and swampy news environment.

Steven Cherry Your third finding, and maybe the most interesting or important one, is and I’m going to quote again, “fake news consumption is a negligible fraction of Americans’ daily information diet.”

Jennifer Allen Yes. So it might be a stat that surprises people. We find that fake news comprises only 0.15 percent of Americans’ daily media diet. Despite the outsized attention that fake news gets in the mainstream media and especially within the academic community (more than half of recent journal articles that contain the word “news” are about fake news), it is actually just a small fraction of the news that people consume, and also a small fraction of the information that people consume. The vast majority of the content that people are engaging with online is not news at all. It’s YouTube music videos. It’s entertainment. It’s Netflix.

And so I think that it’s an important reminder that when we consider conversations around fake news and its potential impact, for example, on the 2016 election, that we look at this information in the context of the information ecosystem and we look at it not just in terms of the numerator and the raw amount of fake news that people are consuming, but with the denominator as well. So how much of the news that people consume is actually fake news?

Steven Cherry So fake news ends up being only one percent or even less of our overall media diet. What percentage is it of news consumption?

Jennifer Allen It occupied less than one percent of overall news consumption. So that is, including TV. Of course, when you zoom in, for example, to fake news on social media, the scale of the problem gets larger. And so maybe seven to 10 percent—and perhaps more, depending on your definition of fake news—of news that is consumed on social media could be considered what we say is hyper-partisan or fake news. But still, again, to emphasize, the majority of people on Facebook are not seeing any news at all. So, you know, over 50 percent of people on Facebook and in our data don’t click on any news articles that we can see.

Steven Cherry You found that our diet is pretty deficient in news in general. The one question that you weren’t able to answer in your study is whether fake news, albeit just a fraction of our news consumption—and certainly a tiny fraction of our media consumption—still might have an outsized impact compared with regular news.

Jennifer Allen Yeah, that’s very true. And I think here it’s important to distinguish between the primary and secondary impact of fake news. And so in terms of, you know, the primary exposure of people consuming fake news online and seeing a news article about Hillary Clinton running a pedophile ring out of a pizzeria and then changing their vote, I think we see very little data to show that that could be the case. 

That being said, I think there’s a lot we don’t know about the secondary sort of impact of fake news. So what does it mean for our information diets that we now have this concept of fake news that is known to the public and can be used and weaponized?

And so, the extent to which fake news is covered and pointed to by the mainstream media as a problem also gives ammunition to people who oppose journalists, you know, mainstream media and want to erode trust in journalism and give them ammunition to attack information that they don’t agree with. And I think that is a far more dangerous and potentially negatively impactful effect of fake news and perhaps its long-lasting legacy.

The impetus behind this paper was that there’s all this conversation around fake news out of the 2016 election. There is a strong sense that was perpetuated by the mainstream media that fake news on Facebook was responsible for the election of Trump. And that people were somehow tricked into voting for him because of a fake story that they saw online. And I think the reason that we wanted to write this paper is to contradict that narrative because you might read those stories and think people are just living in an alternate fake news reality. I think that this paper really shows that that just isn’t the case.

To the extent that people are misinformed or they make voting decisions that we think are bad for democracy, it is more likely due to the mainstream media or the fact that people don’t read news at all than it is to a proliferation of fake news on social media. And, you know, one piece of research that David [Rothschild] and Duncan [Watts] did prior to this study, which I thought was really resonant, was to say: Let’s look at the New York Times. In the lead-up to the 2016 election, there were more stories about Hillary Clinton’s email scandal in the seven days before the election than there were about policy over the whole scope of the election process. And so instead of zeroing in on fake news, we should push our attention toward taking a hard look at the way the mainstream media operates, and also, you know, at what happens in this news vacuum where people aren’t consuming any news at all.

Steven Cherry So people complain about people living inside information bubbles. What your study shows is fake news, if it’s a problem at all, is really the smallest part of the problem. A bigger part of the problem would be false news—false information that doesn’t rise to the level of fake news. And then finally, the question that you raise here of balance when it comes to the mainstream media. “Balance”—I should even say “emphasis.”

Jennifer Allen Yes. So I think, again, to the extent that people are misinformed, I think that we can look to the mainstream news and, you know, for example, its overwhelming coverage of Trump and the lies that he often spreads. And I think some of the new work that we’re doing is trying to look at the mainstream media and its potential role, not in reporting false news that is masquerading as true, but in reporting on people who say false things without appropriately taking the steps to discredit those things and really strongly punch back against them. And so I think that is an area that is really understudied. And I would hope that researchers look at this research and sort of look at the conversation that is happening around Covid and, you know, mail-in voting and the 2020 election, and really take a hard look at the mainstream media and, you know, so-called experts or politicians making wild claims that we would not consider to be fake news but that are still very dangerous.

Steven Cherry Well, Jenny, it’s all too often true that the things we all know to be true aren’t so true. And as usual, the devil is in the details. Thanks for taking a detailed look at fake news, maybe with a better sense of it quantitatively, people can go on and get a better sense of its qualitative impact. So thank you for your work here and thanks for joining us today.

Jennifer Allen Thank you so much. Happy to be here.

We’ve been speaking with Jennifer Allen, lead author of an important new study, “Evaluating the Fake News Problem at the Scale of the Information Ecosystem.” This interview was recorded October 7, 2020. Our thanks to Raul at Gotham Podcast Studio for our engineering today and to Chad Crouch for our music.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

The Problem of Filter Bubbles Hasn’t Gone Away

Post Syndicated from Steven Cherry original https://spectrum.ieee.org/podcast/telecom/internet/the-problem-of-filter-bubbles-hasnt-gone-away

Steven Cherry Hi, this is Steven Cherry for Radio Spectrum.

In 2011, the former executive director of MoveOn gave a widely viewed TED talk, “Beware Online Filter Bubbles,” that became a 2012 book and a startup. In all the talk of fake news these days, many of us have forgotten the unseen power of filter bubbles in determining the ways in which we think about politics, culture, and society. That startup tried to get people to read news they might otherwise not see by repackaging stories with new headlines.

A recent app, called Ground News, has a different approach. It lets you look up a topic and see how it’s covered by media outlets with identifiably left-leaning or right-leaning slants. You can read the coverage itself, whether left, right, moderate, or international; look at its distribution; or track a story’s coverage over time. Most fundamentally, it’s a way of seeing stories that you wouldn’t ordinarily come across.

My guest today is Sukh Singh, the chief technology officer of Ground News and one of its co-founders.

Sukh, welcome to the podcast.

Sukh Singh Thanks for having me on, Steven.

Steven Cherry Back in the pre-Internet era, newspapers flourished, but overall news sources were limited. Besides a couple of newspapers in one’s area, there would be two or three television stations you could get, and a bunch of radio stations that were mainly devoted to music. Magazines were on newsstands and delivered by subscription, but only a few concentrated on weekly news. That world is gone forever in favor of a million news sources, many of them suspect. And it seems the Ground News strategy is to embrace that diversity instead of lamenting it, putting stories in the context of who’s delivering them and what their agendas might be, and most importantly, break us out of the bubbles we’re in.

Sukh Singh That’s true, we are embracing the diversity, as you mentioned, in moving from the print era and the TV era to the Internet era. The costs of having a news outlet, or any kind of media distribution outlet, have dropped dramatically, to the point where a single-person operation becomes viable. That has had the positive benefit of allowing people to cater to very niche interests that were previously glossed over. But on the negative side, the explosion in the number of outlets out there certainly has a lot of drawbacks. Our approach at Ground News is to take as wide a swath as we meaningfully can and put it in one destination for our subscribers.

Steven Cherry So has the problem of filter bubbles gotten worse since that term was coined a decade ago?

Sukh Singh It has. It has certainly gotten much worse. In fact, I would say that by the time the term was even coined, the development of filter bubbles was well underway before the phenomenon was observed.

And that’s largely because it’s a natural outcome of the algorithms and, later, machine learning models used to determine what content is being served to people. In the age of the Internet, personalization of a news feed became possible. And that meant more and more individual personalization, and machines picking—virtually handpicking—what anybody gets served. By and large, we saw the upstream effect of that being that news publications found out what type of content appealed to a large enough number of people to make them viable. And that has resulted in a shift from a sort of aspiration of every news outlet to be the universal record of truth to an erosion of that. And now many outlets, certainly not all of them, but many outlets, embrace the fact that they’re not going to cater to everyone; they are going to cater to a certain set of people who agree with their worldview. And their mission then becomes reinforcing that worldview, that agenda, that specific set of beliefs through reiterated and repeated content for everything that’s happening in the world.

Steven Cherry People complain about the filtering of social media. But that original TED talk was about Google and its search results, which seems even more insidious. Where is the biggest problem today?

Sukh Singh I would say that social media has shocked us all in terms of how bad the problem can get. If you think back 10 years, 15 years … Social media as we have it today would not have been the prediction of most people. If we think back to the origin of, say, Facebook, it was very much in the social-networking era, where most of the content you were receiving was from friends and family. It’s a relatively recent phenomenon, certainly in this decade, not the last one, where we saw this chimera of social network plus digital media becoming social media, and the news feed there—going back to the personalization—catered to one engagement metric, one success metric: how much time on the platform they could get the user to spend. Compare that against Google, where it isn’t as much about the success metrics; it’s more about getting you to the right page, or to the most relevant page, as quickly as possible. But with social media, when that becomes a multi-hour-per-day activity, it certainly has more wide-reaching and deeper consequences.

Steven Cherry So I gave just a little sketch of how Ground News works. Do you want to say a little bit more about that?

Sukh Singh Absolutely. So I think you alluded to this in your earlier questions … going from the past age of print and TV media to the Internet age. What we’ve seen is that news delivery today has really bifurcated into two types, into two categories. We have the more traditional, more legacy based news outlets coming to the digital age, to the mobile age, with websites and mobile apps that are tailored along the conventional sections of, here’s the sports section, here’s the entertainment section, here’s a politics section—that was roughly how the world of information was divided up by these publications, and that has carried over. And on the other side, we see the social media feed, which has in many ways blown the legacy model out of the water.

It’s a single drip feed where the user has to do no work and can just scroll and keep finding more and more and more … an infinite supply of engaging content. That divide doesn’t map exactly to education versus entertainment. Entertainment and sensationalism have been a part of media for as far back as media goes. But there certainly is more affinity toward entertainment in a social media feed that caters to engagement.

So we at Ground News serve both those needs, through both those models, with two clearly labeled and divided feeds. One is Top Stories and the other is My Feed. Top Stories follows the legacy model: here are universally important news events that you should know about, no matter which walk of life you come from, no matter where you are located, no matter where your interests lie. And the second, My Feed, is the recognition that ultimately people will care more about certain interests, certain topics, certain issues than about others. So it is a nod to that personalization, within limits that avoid delving down into the same spiral of filter bubbles.

Steven Cherry There’s only so many reporters at a newspaper. There’s only so many minutes we have in a day to read the news. So in all of the coverage, for example, of the protests this past month—coverage we should be grateful for of an issue that deserves all the prominence it can get—a lot of other stories got lost. For example, there was a dramatic and sudden announcement of a reduction in our U.S. troop count in Germany. [Note: This episode was recorded June 18, 2020 — Ed.] I happened to catch that story in the New York Times myself. But it was a pretty small headline, buried pretty far down. It was covered in your weekly newsletter, though. I take it you see yourself as having a second mission besides one-sided news. The problem of under-covered news.

Sukh Singh Yes, we do, and that’s been a realization as we’ve made the journey of Ground News. It wasn’t something that we recognized from the outset, but something that we discovered as our throughput of news increased. We spoke about the problem of filter bubbles. And we initially thought the problem was bias. The problem was that a news event happens, some real concrete event happens in the real world, and then it is passed on as information through various news outlets, each one spinning it or at least wording it in a way that aligns with either their core agenda or the likings of their audience. More and more, we found that the problem isn’t just bias and spin, it’s also omission.

So if we look at the wide swath of left-leaning and right-leaning news publications in America today, and go to the home pages of two publications fairly far apart on the political spectrum, you would not just find the same news stories with different headlines or different lenses, but an entirely different set of news stories. So much so that—you mentioned our newsletter, the Blindspot report—in the Blindspot report we pick each week five to six stories that were covered massively on one side of the political spectrum but entirely omitted from the other.

So in this case—the event that you mentioned about the troop withdrawal from Germany—it did go largely unnoticed by certain parts of the political spectrum. So as a consumer who wants to be informed, going to one or two news sources, no matter how valuable, no matter how rigorous they are, will inevitably mean that very large parts of the news out there will be omitted from your field of view. Whether you’re going to the right set of publications or not is a secondary conversation. The more primary and more concerning conversation is: How do you communicate with your neighbor when they’re coming from a completely different set of news stories and a different worldview informed by them?

Steven Cherry The name Ground News seems to be a reference to the idea that there’s a ground truth. There are ground truths in science and engineering; it would be wonderful, for example, if we could do some truly random testing for coronavirus and get the ground truth on rates of infection. But are there ground truths in the news business anymore? Or are there only counterbalancings of partial truths?

Sukh Singh That’s a good question. I wouldn’t be so cynical as to say that there are no news publications out there reporting what they truly believe to be the ground truth. But we do find ourselves in a world of counterbalances. We do turn on the TV news networks and we do see a set of three talking heads with a moderator in the middle and differing opinions on either side. So what we do at Ground News—as you said, the reference in the name—is try to have that flat, even playing field where different perspectives can come and make their case.

So our aspiration is always to take the ground truth—whether that’s in the world of science or in the world of philosophy, whatever you want to call an atomic fact—take the real event and then have dozens of different news perspectives (typically, on average, we have about 20), local, national, international, left, right, all across the board, covering the same news event. And our central thesis is that the ultimate solution is reader empowerment, that no publication or technology can truly come to the conclusions for a person. And there’s perhaps a “shouldn’t” in there as well. So our mission really is to take the different news perspectives, present them on an even playing field to the user, to our subscribers, and then allow them to come to their own conclusions.

Steven Cherry So without getting entirely philosophical about this, it seems to me that—let’s say in the language of Plato’s Republic and the allegory of the cave—you’re able to help us look at more than just the shadows that are projected on the wall of the cave. We get to see all of the different people projecting the shadows, but we’re still not going to get to the Platonic forms of the actual truth of what’s happening in the world. Is that fair to say?

Sukh Singh Yes. Keeping with that allegory, I would say that our assertion is not that every single perspective is equally valid. That’s not a value judgment that we ever make. We don’t ourselves label the left-right-moderate biases on any news publication or platform. We actually source them from three arm’s-length nonprofit agencies that have the mission of labeling news publications by their demonstrated bias. So we aggregate and use those as labels in our platform. So we never pass a value judgment on any perspective. But my hope personally, and ours as a company, really is that some perspectives get you closer to a glimpse of the outside rather than just being another shadow on the wall. The onus really is on the reader to be able to say which perspective or which coverage they think most closely resembles the ground truth.

Steven Cherry I think that’s fair enough. And I think it would also be fair to add that even for issues for which there really isn’t a pairing of two opposing sides—for example, climate change, where responsible journalists pretty much ignore the idea of there being no climate change—it’s still important for people politically to understand that there are people out there who have not accepted climate change and that they’re still writing about it and still sharing views and so forth. And so it seems to me that what you’re doing is shining a light on that aspect of it.

Sukh Singh Absolutely, and one of our key aspirations, and our mission, is to enable people to have those conversations. So even if you are 100 percent convinced that you are going to credible news publications and you’re getting the most vetted and journalistically rigorous news coverage that is available on the free market, it may still be that you can’t reach across the aisle, or just go next door and talk to your neighbor or your friend, who is living with a very different worldview. For better or worse, again, we won’t pass judgment, but just having a more expanded scope of news stories come into your field of view, onto your radar, does enable you to have those conversations, even if you feel some of your peers may be misguided.

Steven Cherry The fundamental problem in news is that there are financial incentives for aggregators like Google and Facebook and for the news sources themselves to keep us in the bubbles that we’re in, feeding us only stories that fit our world view and giving us extreme versions of the news instead of more moderate ones. You yourself noted that with Facebook and other social networks, the user does no work in those cases. Using Ground News is something you have to do actively. Do you ever fear that it’s just a sort of Band-Aid that we can place on this gaping social wound?

Sukh Singh So let me deal with that in two parts. The first part is the financial sustainability of journalism. There certainly is a crisis there, and I think we could have several more of these conversations about that crisis and the solutions to it.

But one very easily identifiable problem is the reliance on advertising. I think a lot of news publications all too willingly started publicizing their content on the Internet to increase their reach, and any advertising revenue that they could get off of that from Google, and later Facebook, was incremental revenue on top of their print subscriptions. On the whole, they were very chipper to get incremental revenue by using the Internet. As we’ve seen, that arrangement has become more and more of a stranglehold on news publications and media publications in general, as they fight for those ad dollars. And the natural end of that competition is sensationalism and clickbait. That’s speaking to the financial sustainability of journalism.

The path we’ve chosen to go down, exactly for that reason, is to charge subscriptions directly to our users. So we have thousands of paying subscribers now, paying a dollar a month or ten dollars a year to access the features on Ground News. That’s a nominal price point, but there is also an ulterior motive to it. It really is about habit-building and getting people to pay for news again. Many of us have forgotten, over the last couple of decades, that sense of having to pay for news, which almost used to be the same as paying for electricity or water; it has disappeared. We’re trying to revive that, which again will hopefully pay dividends down the line for the financial sustainability of journalism.

In terms of being a Band-Aid solution, we do think there is more of a movement toward people accepting the responsibility to do the work to inform themselves, which stands in direct contrast to the social media feed, which I think most of us have come to distrust, especially in recent years. There was, I believe, a Reuters study two years ago showing that 2018 was the first year in which fewer people went to Facebook for their news than in the year before, the first such decline in a decade. So I do think there’s a recognition that a social media feed is no longer a viable news delivery mechanism. We do see people doing that little bit of work, and on our part we make it as accessible as possible. Your question reminds me of the adage that as a consumer, if you’re not the customer, you’re the product. And that really is the divide between using a free social media feed and paying for a news delivery mechanism.

Steven Cherry Ground News is actually a service of your earlier startup Snapwise. Do you want to say a little bit about it and what it does?

Sukh Singh My co-founder is a former NASA satellite engineer who worked on Earth-observation satellites. She was working on a constellation of satellites that passed over the planet every 24 hours and mapped every square foot of it, capturing literally the ground truth of what was happening everywhere on the planet. Once she left her space career, she and I started talking about the impact of technology on journalism, and we realized that if we can map the entire planet every 24 hours and have an undeniable record of what’s happening in the world, why can’t we have the same in the news industry? So our earliest iteration of what is now Ground News was much more focused on citizen journalism, getting folks to use their phones to communicate what was happening in the world around them and getting that firsthand data into the information stream that we consume as news consumers.

If this is starting to sound like Twitter, we ran into several of the same drawbacks, especially when it came to news integrity, verifying the facts, and making sure that the information people were contributing was up to the same grade as professional journalism. More and more, we realized we couldn’t diminish the role of professional journalists in delivering the news. So we started to aggregate more and more vetted, credible news publications from across the world. And before we knew it, we had fifty thousand different unique sources of news, local, national, international, left to right, all the way from your town newspaper up to a giant multinational press wire service like Thomson Reuters. We were taking all those different news sources and putting them on the same platform. So that’s really been our evolution, as people trying to solve some of these problems in the journalism industry.

Steven Cherry How do you identify publications as being on the left or on the right?

Sukh Singh As we started aggregating more and more news sources, we got past the 10,000 mark, and before we knew it we were up to 50,000 news sources. It’s humanly impossible for our small team, or even a much, much larger team, to carefully go and label each of them. So we’ve taken that from a number of news monitoring agencies whose entire mission and purpose as organizations is to review and rate news publications.

So we use three different ones today: Media Bias Fact Check, AllSides, and Ad Fontes Media. I would call all three of them rating agencies, if you want to use the stock market analogy; they rate the political leanings and the demonstrated factuality of these news organizations. We take those as inputs and aggregate them, but we do make their original labels available on our platform. To use an analogy from the movie world, we’re sort of like Metacritic, aggregating ratings from IMDb and Rotten Tomatoes and different platforms and making that all transparently available for consumers.

Steven Cherry You’re based in Canada, in Kitchener, which is a small city about an hour from Toronto. I think Americans think of Canada as having avoided some of the extremisms of the U.S. I mean, other than maybe burning down the White House a couple of centuries ago, it’s been a pretty easy-going get-along kind of place. Do you think being Canadian and looking at the U.S. from a bit of a distance contributed to what you’re doing?

Sukh Singh I don’t think we’ve had a belligerent reputation since the War of 1812. As Canadians, we do enjoy a generally nice-person kind of stereotype. We are, as you said, at arm’s length, sitting not quite a safe distance away but across the border from everything that happens in the U.S. But with frequent trips down, and being deeply integrated with the United States as a country, we do get a very, very close view of what’s happening.

North of the border, we do have our own political system to deal with, with all of its workings and all of its ins and outs. But in terms of where we’ve really seen Ground News deliver value, it certainly has been in the United States. That is both our biggest market and our largest set of subscribers by far.

Steven Cherry Thank you so much for giving us this time today and explaining a service that’s really providing an essential function in this chaotic news and political world.

Sukh Singh Thanks, Steven.

Steven Cherry We’ve been speaking with Sukh Singh, CTO and co-founder of Ground News, an app that helps break us out of our filter bubbles and tries to provide a 360-degree view of the news.

Our audio engineering was by Gotham Podcast Studio in New York. Our music is by Chad Crouch.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers.

For Radio Spectrum, I’m Steven Cherry.

Note: Transcripts are created for the convenience of our readers and listeners. The authoritative record of IEEE Spectrum’s audio programming is the audio version.

We welcome your comments on Twitter (@RadioSpectrum1 and @IEEESpectrum) and Facebook.

This interview was recorded June 18, 2020.

Resources

Spotify, Machine Learning, and the Business of Recommendation Engines

https://mediabiasfactcheck.com/

AllSides

Ad Fontes Media

“Beware Online Filter Bubbles” (TED talk)

The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think, by Eli Pariser (Penguin, 2012)

100 Million Zoom Sessions Over a Single Optical Fiber

Post Syndicated from Jeff Hecht original https://spectrum.ieee.org/tech-talk/telecom/internet/single-optical-fibers-100-million-zoom

A team at the Optical Networks Group at University College London has sent 178 terabits per second through a commercial singlemode optical fiber that has been on the market since 2007. It’s a record for the standard singlemode fiber widely used in today’s networks, and twice the data rate of any system now in use. The key to their success was transmitting it across a spectral range of 16.8 terahertz, more than double the broadest range in commercial use.

The goal is to expand the capacity of today’s installed fiber network to serve the relentless demand for more bandwidth for Zoom meetings, streaming video, and cloud computing. Digging holes in the ground to lay new fiber-optic cables can run over $500,000 a kilometer in metropolitan areas, so upgrading transmission of fibers already in the ground by installing new optical transmitters, amplifiers, and receivers could save serious money. But it will require a new generation of optoelectronic technology.

A new generation of fibers has been in development for the past few years that promises higher capacity by carrying signals on multiple paths through a single fiber. Called spatial division multiplexing, the idea has been demonstrated in fibers with multiple cores, with multiple modes through individual cores, or by combining multiple modes in multiple fibers. It has demonstrated record capacity for single fibers, but the technology is immature and would require the expensive laying of new fiber. Boosting the capacity of fibers already in the ground would be faster and cheaper. Moreover, many installed fibers remain dark, carrying no traffic, or transmit on only a few of the roughly 100 available wavelengths, making them a hot commodity for data networks.

“The fundamental issue is how much bandwidth we can get” through installed fibers, says Lidia Galdino, a University College lecturer who leads a team including engineers from equipment maker Xtera and Japanese telecom firm KDDI. For a baseline they tested Corning Inc.’s SMF-28 ULL (ultra-low-loss) fiber, which has been on the market since 2007. With a pure silica core, its attenuation is specified at no more than 0.17 dB/km at the 1550-nanometer minimum-loss wavelength, close to the theoretical limit. It can carry 100-gigabit/second signals more than a thousand kilometers through a series of amplifiers spaced every 125 km.

Generally, such long-haul fiber systems operate in the C band of wavelengths from 1530 to 1565 nm. A few also operate in the L band from 1568 to 1605 nm, most notably the world’s highest-capacity submarine cable, the 13,000-km Pacific Light Cable, with nominal capacity at 24,000 gigabits per second on each of six fiber pairs. Both bands use well-developed erbium-doped fiber amplifiers, but that’s about the limit of their spectral range.

To cover a broader spectral range, UCL added the largely unused wavelengths of 1484 to 1520 nm in the shorter-wavelength S band. That required new amplifiers that used thulium to amplify those wavelengths. Because only two thulium amplifiers were available, they also added Raman-effect fiber amplifiers to balance gain across that band. They also used inexpensive semiconductor optical amplifiers to boost signals reaching the receiver after passing through 40 km of fiber. 

Another key to success is the modulation format. “We encoded the light in the best possible way,” using a geometrically coded quadrature amplitude modulation (QAM) format to take advantage of differences in signal quality between bands. “Usually commercial systems use 64 points, but we went to 1024 [QAM levels]…an amazing achievement” for the best-quality signals, Galdino said.

This experiment, reported in IEEE Photonics Technology Letters, is only the first in a planned series. Their results are close to the Shannon limit on communication rates imposed by noise in the channel. The next step, she says, will be buying more optical amplifiers so they can extend transmission beyond 40 km.
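The arithmetic behind those figures is easy to check. Below is a quick back-of-the-envelope sketch in Python; the throughput, bandwidth, and QAM sizes come from the article, while the 30-dB signal-to-noise ratio used for the Shannon-capacity line is an assumed value for illustration only, not a measured one.

```python
import math

# Figures reported in the article
throughput_bps = 178e12   # 178 terabits per second
bandwidth_hz = 16.8e12    # 16.8 terahertz of optical spectrum

# Spectral efficiency achieved across the whole band
efficiency = throughput_bps / bandwidth_hz
print(f"Spectral efficiency: {efficiency:.1f} bits/s/Hz")  # about 10.6 b/s/Hz

# Shannon capacity C = B * log2(1 + SNR) for a hypothetical SNR.
# 30 dB is an assumption for illustration, not a value from the experiment.
snr_db = 30.0
snr_linear = 10 ** (snr_db / 10)
capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Shannon capacity at {snr_db:.0f} dB SNR: {capacity_bps / 1e12:.0f} Tb/s")

# Bits per symbol for the QAM formats mentioned: 64-QAM vs. 1024-QAM
for m in (64, 1024):
    print(f"{m}-QAM carries {int(math.log2(m))} bits per symbol")
```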

“This is fundamental research on the maximum capacity per channel,” Galdino says. The goal is to find limits, rather than to design new equipment. Their complex system used more than US$2.6 million of equipment, including multiple types of amplifiers and modulation schemes. It’s a testbed, optimized not for cost, performance, or reliability, but for experimental flexibility. Industry will face the challenge of developing detectors, receivers, amplifiers, and high-quality lasers at new wavelengths, a challenge it has already begun to take on. If it succeeds, a single fiber pair will be able to carry enough video for all 50 million school-age children in the US to be on two Zoom video channels at once.

Predicting the Lifespan of an App

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/internet/predicting-the-lifespan-of-an-app

The number of apps smartphone users have to choose from is daunting, with roughly 2 million available through the Apple Store alone. But survival of the fittest applies to the digital world too, and not all of these apps will go on to become the next TikTok. In a study published 29 July in IEEE Transactions on Mobile Computing, researchers describe a new model for predicting the long-term survival of apps, which outperforms seven existing designs.

“For app developers, understanding and tracking the popularity of an app is helpful for them to act in advance to prevent or alleviate the potential risks caused by the dying apps,” says Bin Guo, a professor at Northwestern Polytechnical University who helped develop the new model.

“Furthermore, the prediction of app life cycle is crucial for the decision-making of investors. It helps evaluate and assess whether the app is promising for the investors with remarkable rewards, and provides in advance warning to avoid investment failures.”

In developing their new model, AppLife, Guo’s team took a Multi-Task Learning (MTL) approach. This involves dividing data on apps into segments based on time, and analyzing factors – such as download history, ratings, and reviews – at each time interval. AppLife then predicts the likelihood of an app being removed within the next one or two years.
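As a rough illustration of that idea only (this is not the authors’ AppLife implementation), a time-binned feature matrix feeding an off-the-shelf classifier might look like the sketch below; the features, labels, and single-task logistic regression are all stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: each app is described by per-quarter features
# (downloads, average rating, review count), flattened into one row.
# These synthetic numbers are placeholders, not data from the study.
rng = np.random.default_rng(0)
n_apps, n_quarters, n_feats = 200, 4, 3
X = rng.random((n_apps, n_quarters * n_feats))

# Label: 1 if the app was removed from the store within the horizon, else 0.
y = rng.integers(0, 2, size=n_apps)

# A single-task classifier standing in for the paper's multi-task model.
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated removal probability for a new app's time-binned feature vector.
new_app = rng.random((1, n_quarters * n_feats))
print("Estimated removal probability:", clf.predict_proba(new_app)[0, 1])
```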

The researchers evaluated AppLife using a real-world dataset with more than 35,000 apps from the Apple Store that were available in 2016, but had been released the previous year. “Experiments show that our approach outperforms seven state-of-the-art methods in app survival prediction. Moreover, the precision and the recall reach up to 84.7% and 95.1%, respectively,” says Guo.

Intriguingly, AppLife was particularly good at predicting the survival of apps for tools—even more so than apps for news and video. Guo says this could be because more apps for tools exist in the dataset, feeding the model with more data to improve its performance in this respect. Or, he says, it could be caused by greater competition among tool apps, which in turn leads to more detailed and consistent user feedback.

Moving forward, Guo says he plans on building upon this work. While AppLife currently looks at factors related to individual apps, Guo is interested in exploring interactions among apps, for example which ones complement each other. Analyzing the usage logs of apps is another area of interest, he says.

Twitter Bots Are Spreading Massive Amounts of COVID-19 Misinformation

Post Syndicated from Thor Benson original https://spectrum.ieee.org/tech-talk/telecom/internet/twitter-bots-are-spreading-massive-amounts-of-covid-19-misinformation

Back in February, the World Health Organization called the flood of misinformation about the coronavirus flowing through the Internet a “massive infodemic.” Since then, the situation has not improved. While social media platforms have promised to detect and label posts that contain misleading information related to COVID-19, they haven’t stopped the surge.

But who is responsible for all those misleading posts? To help answer the question, researchers at Indiana University’s Observatory on Social Media used a tool of their own creation called BotometerLite that detects bots on Twitter. They first compiled a list of what they call “low-credibility domains” that have been spreading misinformation about COVID-19, then used their tool to determine how many bots were sharing links to this misinformation. 

Their findings, which they presented at this year’s meeting of the Association for the Advancement of Artificial Intelligence, revealed that bots overwhelmingly spread misinformation about COVID-19 as opposed to accurate content. They also found that some of the bots were acting in “a coordinated fashion” to amplify misleading messages.  

The scale of the misinformation problem on Twitter is alarming. The researchers found that overall, the number of tweets sharing misleading COVID-19 information was roughly equivalent to the number of tweets that linked to New York Times articles. 

We talked with Kai-Cheng Yang, a PhD student who worked on this research, about the bot-detection game.

This conversation has been condensed and edited for clarity.

IEEE Spectrum: How much of the overall misinformation is being spread by bots?

Kai-Cheng Yang: For the links to the low-credibility domains, we find about 20 to 30 percent are shared by bots. The rest are likely shared by humans.

Spectrum: How much of this activity is bots sharing links themselves, and how much is them amplifying tweets that contain misinformation?

Yang: It’s a combination. We see some of the bots sharing the links directly and other bots are retweeting tweets containing those links, so they’re trying to interact with each other.

Spectrum: How do your Botometer and BotometerLite tools identify bots? What are they looking for? 

Yang: Both Botometer and BotometerLite are implemented as supervised machine learning models. We first collect a group of Twitter accounts that are manually annotated as bots or humans. We extract characteristics from their profiles (number of friends, number of followers, whether a background image is used, and so on), and we collect data on content, sentiment, social network, and temporal behaviors. We then train our machine learning models to learn how bots differ from humans in terms of these characteristics. The difference between Botometer and BotometerLite is that Botometer considers all these characteristics, whereas BotometerLite focuses only on the profiles, for efficiency.
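To make that concrete, a bare-bones, profile-only classifier in the spirit of BotometerLite might look like the sketch below. The feature set, the random-forest model, and the synthetic labels are illustrative assumptions, not the actual Botometer pipeline or its training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Profile-level features of the kind Yang describes: follower count,
# friend count, whether a background image is set, account age in days.
# The numbers below are synthetic placeholders, not real training data.
rng = np.random.default_rng(42)
n_accounts = 500
X = np.column_stack([
    rng.integers(0, 100_000, n_accounts),  # followers
    rng.integers(0, 10_000, n_accounts),   # friends
    rng.integers(0, 2, n_accounts),        # has background image
    rng.integers(1, 5_000, n_accounts),    # account age (days)
])
y = rng.integers(0, 2, n_accounts)         # 1 = annotated as bot, 0 = human

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score an unseen account: estimated probability that it is a bot.
unseen = np.array([[250, 2_000, 0, 30]])
print("Bot probability:", model.predict_proba(unseen)[0, 1])
```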

Spectrum: The links these bots are sharing: Where do they lead?

Yang: We have compiled a list of 500 or so low-credibility domains. They’re mostly news sites, but we would characterize many of them as ‘fake news.’ We also consider extremely hyper-partisan websites as low-credibility.

Spectrum: Can you give a few examples of the kinds of COVID-related misinformation that appear on these sites? 

Yang: Common themes include U.S. politics, status of the outbreak, and economic issues. A lot of the articles are not necessarily fake, but they can be hyper-partisan and misleading in some sense. We also see false information like: the virus is weaponized, or political leaders have already been vaccinated.

Spectrum: Did you look at whether the bots spreading misinformation have followers, and whether those followers are humans or other bots? 

Yang: Examining the followers of Twitter accounts is much harder due to the API rate limit, and we didn’t conduct such an analysis this time.

Spectrum: In your paper, you write that some of the bots seem to be acting in a coordinated fashion. What does that mean? 

Yang: We find that some of the accounts (not necessarily all bots) were sharing information from the same set of low-credibility websites. For two arbitrary accounts, this is very unlikely, yet we found some accounts doing so together. The most plausible explanation is that these accounts were coordinated to push the same information. 

Spectrum: How do you detect bot networks? 

Yang: I’m assuming you are referring to the network shown in the paper. For that, we simply extract the list of websites each account shares and then find the accounts that have very similar lists and consider them to be connected.
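A minimal sketch of that kind of check, assuming a simple Jaccard overlap between each pair of accounts’ shared-domain sets and an arbitrary similarity threshold (neither is necessarily what the researchers used):

```python
from itertools import combinations

# Domains shared by each account (toy data; the domains are placeholders).
shared = {
    "acct_a": {"example-news1.com", "example-news2.com", "example-news3.com"},
    "acct_b": {"example-news1.com", "example-news2.com", "example-news3.com"},
    "acct_c": {"example-news9.com"},
}

def jaccard(s1, s2):
    """Overlap of two domain sets: |intersection| / |union|."""
    return len(s1 & s2) / len(s1 | s2)

# Connect accounts whose shared-domain lists are nearly identical.
THRESHOLD = 0.8  # illustrative cutoff
edges = [
    (a, b)
    for a, b in combinations(shared, 2)
    if jaccard(shared[a], shared[b]) >= THRESHOLD
]
print("Possibly coordinated pairs:", edges)
```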

Spectrum: What do you think can be done to reduce the amount of misinformation we’re seeing on social media?

Yang: I think it has to be done by the platforms. They can do flagging, or if they know a source is low-credibility, maybe they can do something to reduce the exposure. Another thing we can do is improve the average person’s journalism literacy: Try to teach people that there might be those kinds of low-credibility sources or fake news online and to be careful. We have seen some recent studies indicating that if you tell the user what they’re seeing might be from low-credibility sources, they become much more sensitive to such things. They’re actually less likely to share those articles or links. 

Spectrum: Why can’t Twitter prevent the creation and proliferation of bots? 

Yang: My understanding is that when you try to make your tool or platform easy to use for real users, it opens doors for the bot creators at the same time. So there is a trade-off.

In fact, according to my own experience, recently Twitter started to ask the users to put in their phone numbers and perform more frequent two-step authentications and recaptcha checks. It’s quite annoying for me as a normal Twitter user, but I’m sure it makes it harder, though still possible, to create or control bots. I’m happy to see that Twitter has stepped up.

Indian Mobile Service Providers Suspected of Providing Discriminatory Services

Post Syndicated from Payal Dhar original https://spectrum.ieee.org/tech-talk/telecom/internet/indian-mobile-service-providers-suspected-of-providing-discriminatory-services

India’s Telecom Disputes Settlement and Appellate Tribunal (TDSAT) has granted interim relief to telecom companies Bharti Airtel and Vodafone Idea, allowing them to continue with their premium-service plans. The TDSAT order came on 18 July, exactly a week after the country’s telecom regulatory authority had blocked the two companies from offering better speeds to higher-paying customers, citing net neutrality violations.

“This is not a final determination by the TDSAT,” says Apar Gupta of the Internet Freedom Foundation, a digital liberties organization that has been at the forefront of the fight for online freedom, privacy, and innovation in India. While the Telecom Regulatory Authority of India (TRAI) continues its inquiry, the two providers will not be prevented from rolling out their plans.

The matter was brought to TRAI’s notice on 8 July by a rival mobile service provider, Reliance Jio, which wrote to the regulatory body asking about Airtel’s and Vodafone Idea’s Platinum and RedX plans, respectively. “Before offering any such plans ourselves…we would like to seek the Authority’s views on whether [these] tariff offerings…are in compliance with the extant regulatory framework,” the letter said.

Three days later, TRAI asked for the respective Airtel and Vodafone Idea plans to be blocked while these claims were investigated. It also sent both telcos a 10-point questionnaire related to various elements of their services, seeking clarification on how they defined “priority 4G network” and “faster speeds,” among other things. Following the blocking of the plans, Vodafone Idea approached TDSAT, arguing that TRAI’s order was illegal and arbitrary, considering that their RedX plan had been rolled out over eight months earlier. When contacted for comment on the matter, Vodafone declined, “as the matter is in TDSAT court.” Airtel, meanwhile, has agreed to comply with TRAI’s directive and not take new customers for its Platinum plan until the matter has been fully investigated.

Although it is being framed as such by media coverage and in the court of public opinion, strictly speaking, the offering of new tariffs by Airtel and Vodafone Idea are not net neutrality concerns, says Nikhil Pahwa, co-founder of Save the Internet, the campaign that played a key role in framing India’s net neutrality rules. “In India, net neutrality regulation covers…whether specific internet services or apps are either being priced differentially or being offered at speeds different from the rest of the Internet.” However, from a consumer perspective, he adds, “I think it is important for the TRAI to investigate these plans because…it is impossible for telecom operators to guarantee speeds for customers. What needs to be investigated is whether speeds are effectively deprecated for a particular set of consumers, because the throughput from a mobile base station is limited.”

Since July 2018, India has had stringent net neutrality regulations in place—possibly among the strongest in the world—at least on paper. Any form of data discrimination is banned; blocking, degrading, slowing down or granting preferential speeds or treatment by providers is prohibited; and Internet service providers stand to lose their licenses if found in violation. This was the result of a massive, public, volunteer-driven campaign since 2015. Save the Internet estimates that over 1 million citizens were part of the campaign at one point or another.

The concept of net neutrality captured public imagination when, in 2014, Airtel decided it would charge extra for VoIP services. The company pulled its plan after public outcry, but the wheels of differential pricing were set in motion. This resulted in TRAI prohibiting discriminatory tariffs for data services in 2016—a precursor to the net neutrality principles adopted two years later. These developments also forced Facebook to withdraw its zero-rated Free Basics service in India.

“We have not seen net neutrality enforcement in India till now in a very clear manner,” says Gupta, adding that TRAI is in the process of coming up with an enforcement mechanism. “They opened a consultation on it, and invited views from people… Right now they’re in the process of making…recommendations to the Department of Telecom, which can then frame them under the Telegraph Act.” The telecom department exercises wider powers under this Act, even though TRAI also has specific powers in administering certain licensing conditions, including quality of service and interconnection.

“[The] internet is built around the idea that all users have equal right to create websites, applications, and services for the rest of the world, and enables innovation because it is a space with infinite competition,” Pahwa says. And net neutrality is at the core of that freedom.

Infinera and Windstream Beam 800 Gigabits Per Second Through a Single Optical Fiber

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/infinera-and-windstream-beam-800-gigabits-per-second-through-a-single-optical-fiber

For the first time, an 800 gigabit per second connection has been made over a live fiber optic link. The connection, a joint test conducted in June by Infinera and Windstream, beamed through a fiber optic line stretching from San Diego to Phoenix. If widely implemented, 800G connections could reduce the costs of operating long-haul fiber networks.

800G should not be confused with the more commonly-known 5G cellular service. In the latter, the “G” refers to the current generation of wireless technology. In fiber optics, the “G” indicates how many gigabits per second an individual cable can carry. For most long-haul routes today, 100G is standard.

The test conducted by Infinera, an optical transmission equipment manufacturer, and Windstream, a service provider, is not the first 800G demonstration, nor is it even the first 800G over long distances. It is, however, the first demonstration over a live network, where conditions are rarely, if ever, as ideal as a laboratory.

“We purposely selected this travel route because of how typical it looks,” says Art Nichols, Windstream’s vice president of architecture and technology.

In a real-world route, amplifiers and repeaters, which boost and regenerate optical signals respectively, are not placed regularly along the route for optimal performance. Instead, they’re placed near where people actually live, work, and transmit data. This means that a setup that might deliver 800 Gbps in a lab may not necessarily work over an irregular live network.

For 800G fiber, 800 Gbps is the maximum data rate possible and usually is not sustainable over very long distances, often falling off after about 100 kilometers. The 800G test conducted by Infinera and Windstream successfully delivered the maximum data rate through a single fiber across more than 730 km. “There’s really a fundamental shift in the underlying technology that made this happen,” says Rob Shore, the senior vice president of marketing at Infinera.

Shore credits Infinera’s Nyquist subcarriers [PDF] for sustaining maximum data rates over long distances. Named for electrical engineer Harry Nyquist, the subcarriers digitally divide a single laser beam into 8 components.

“It’s the same optical signal, and we’re essentially dividing it or compartmentalizing it into separate individual data streams,” Shore says.

Infinera’s use of Nyquist subcarriers amplifies the effect of another, widely-adopted optical technique: probabilistic constellation shaping. According to Shore, the technique, originally pioneered by Nokia, is a way to “groom” individual optical signals for better performance—including traveling longer distances before suffering from attenuation. Shore says that treating each optical signal as 8 separate signals thanks to the Nyquist subcarriers essentially compounds the effects of probabilistic constellation shaping, allowing Infinera’s 800G headline data rates to travel much further than is typically possible.
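The shaping idea itself can be sketched compactly: rather than transmitting every constellation point with equal probability, lower-energy points are sent more often, typically following a Maxwell-Boltzmann distribution. The snippet below illustrates that weighting for 16-QAM; the shaping parameter is an arbitrary assumption, and none of this reflects Infinera’s actual implementation.

```python
import numpy as np

# 16-QAM constellation: real and imaginary parts in {-3, -1, +1, +3}.
levels = np.array([-3, -1, 1, 3])
points = np.array([complex(i, q) for i in levels for q in levels])

# Maxwell-Boltzmann weighting: p(x) proportional to exp(-nu * |x|^2),
# so low-energy inner points are transmitted more often than corner points.
nu = 0.08  # shaping strength (arbitrary value, for illustration only)
weights = np.exp(-nu * np.abs(points) ** 2)
probs = weights / weights.sum()

# Draw a shaped symbol stream and compare its average energy with uniform QAM.
rng = np.random.default_rng(1)
symbols = rng.choice(points, size=100_000, p=probs)
print("Average energy, shaped :", np.mean(np.abs(symbols) ** 2))
print("Average energy, uniform:", np.mean(np.abs(points) ** 2))
```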

What’s next for 800G after this test? “Obviously, the very first thing we need to do is to actually release the product” used for the demonstration, Shore says, which he expects Infinera to do before the end of the year. 800G fiber could come to play an important part in network backhaul, especially as 5G networks come online around the world. All that wireless data will have to travel through the wired infrastructure somehow, and 800G fiber could ensure there will be bandwidth to spare.

Researchers Use Lasers to Bring the Internet Under the Sea

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/researchers-lasers-bring-internet-underwater

Communicating underwater has always been a hassle. Radio transmissions, the ubiquitous wireless standard above the waves, can’t transmit very far before being entirely absorbed by the water. Acoustic transmissions (think sonar) are the preferred choice underwater, but they suffer from very low data rates. Wouldn’t it be nice if we could all just have Wi-Fi underwater instead?

Underwater Wi-Fi is exactly what researchers at the King Abdullah University of Science and Technology (KAUST) in Thuwal, Saudi Arabia, have developed. The system, which they call Aqua-Fi, uses a combination of lasers and off-the-shelf components to create a bi-directional wireless connection for underwater devices. The system is fully compliant with  IEEE 802.11 wireless standards, meaning it can easily connect to and function as part of the broader Internet.

Here’s how it works: Say you have a device underwater that needs to transmit data (for the KAUST researchers, it was waterproofed smartphones). They then used a regular old Wi-Fi signal to connect that device to an underwater “modem.” Specifically, they used a Raspberry Pi to function as that modem. The Raspberry Pi converted the wireless signal to an optical signal (in this case, a laser) that was beamed to a receiver attached to a surface buoy. From there, established communications techniques were used to send the signal to an orbiting satellite. For the underwater device to receive data, the process is simply reversed.

Aqua-Fi stems from work that the KAUST researchers did back in 2017, when they used a blue laser to transmit a 1.2-gigabit file underwater. But that wasn’t interesting enough, according to Basem Shihada, an associate professor of computer science at KAUST and one of the researchers on the Aqua-Fi project. “Who cares about submitting just a file?” he says. “Let’s do something with a bit more life.”

It was that thinking that spurred the team to start looking at bi-directional communications, with the ultimate goal of building a system that can transfer high-resolution video.

Shihada says it was important to him that all of the components be off the shelf. “My first rule when we started this project: I do not want to have something that is [custom made for this],” he says. The only exception is the circuit in the Raspberry Pi that converts the wireless signal to an optical signal and vice versa.

The team used LEDs instead of lasers in their first design, but found the LEDs were not powerful enough for high data rates. With LEDs, the beams were limited to distances of about 7 meters and data rates of about 1 kilobit per second. When they upgraded to blue and green lasers, they achieved 2.11 megabits per second over 20 meters.

Shihada says that currently, the system is limited by the capabilities of the Raspberry Pi. The team burned out the custom circuit responsible for converting optical and wireless signals when, on two occasions, they used a laser that was too powerful. He says that in order for this setup to incorporate more powerful lasers that can both communicate farther and transmit more data, the Raspberry Pi will need to be swapped out for a dedicated optical modem.

Even with the limitations of the Raspberry Pi, the KAUST researchers were able to use Aqua-Fi to place Skype calls and transfer files.

But there’s still a big problem that needs to be addressed in order to make a system like Aqua-Fi commercially viable; and it can’t be solved as easily as swapping out the Raspberry Pi. “If you want to imagine how to build the Internet underwater,” says Shihada, “laser alignment remains the most challenging part.” Because lasers are so precise, even mildly turbulent waters can knock a beam off course and cause it to miss a receptor.

The KAUST researchers are exploring two options to solve the alignment problem. The first is to use a technique similar to the “photonic fence” developed to kill mosquitoes. A low-power guide laser would scan for the receptor. When a connection is made, it would inform another, higher-powered laser to begin sending data. If the waves misaligned the system again, the high-power laser would shut off and the guide laser would kick in, initiating another search.

The other option is a MIMO-like solution using a small array of receptors, so that even if the laser emitter is jostled a bit by the water, it will still maintain a connection.

You might still be asking yourself at this point why anyone even needs the Internet underwater. First, there’s plenty of need in underwater conservation for remote monitoring of sea life and coral reefs, for example. High-definition video collected and transmitted by wireless undersea cameras can be immensely helpful to conservationists.

But it’s also helpful to the high-tech world. Companies like Microsoft are exploring the possibility of placing data centers offshore and underwater. Placing data centers on the ocean floor can perhaps save money both on cooling the equipment as well as energy costs, if the kinetic energy of the waves can be harvested and converted to electricity. And if there are data centers underwater, the Internet will need to be there too.

Analysis of COVID-19 Tweets Reveals Who Uses Racially Charged Language

Post Syndicated from Michelle Hampson original https://spectrum.ieee.org/tech-talk/telecom/internet/analysis-tweets-related-covid19-racially-charged-language

As the COVID-19 pandemic began to spread around the globe, it was followed by an increase in media coverage of racist attacks. Some have argued that the use of racially charged language to describe the novel coronavirus, including terms such as the “Chinese flu” or “Chinese virus,” may have played a role in these attacks.

In a recent study, published 21 May in IEEE Transactions on Big Data, researchers analyzed Twitter data to better understand which users are more likely to use racially charged versus neutral terms during the pandemic. In a second study, the group analyzed the general language used by these two groups of Twitter users, shedding light on their priorities and emotional states.

COVID-19 Makes It Clear That Broadband Access Is a Human Right

Post Syndicated from Stacey Higginbotham original https://spectrum.ieee.org/telecom/internet/covid19-makes-clear-broadband-access-is-human-right

Like clean water and electricity, broadband access has become a modern-day necessity. The spread of COVID-19 and the ensuing closure of schools and workplaces and even the need for remote diagnostics make this seem like a new imperative, but the idea is over a decade old. Broadband is a fundamental human right, essential in times like now, but just as essential when the world isn’t in chaos.

A decade ago, Finland declared broadband a legal right. In 2011, the United Nations issued a report [PDF] with a similar conclusion. At the time, the United States was also debating its broadband policy and a series of policy efforts that would ensure everyone had access to broadband. But decisions made by the Federal Communications Commission between 2008 and 2012 pertaining to broadband mapping, network neutrality, data caps and the very definition of broadband are now coming back to haunt the United States as cities lock themselves down to flatten the curve on COVID-19.

While some have voiced concerns about whether the strain of everyone working remotely might break the Internet, the bigger issue is that not everyone has Internet access in the first place. Most U.S. residential networks are built for peak demand, and even the 20 to 40 percent increase in network traffic seen in locations hard hit by the virus won’t be enough to buckle networks.

An estimated 21 to 42 million people in the United States don’t have physical access to broadband, and even more cannot afford it or are reliant on mobile plans with data limits. For a significant portion of our population, this makes remote schooling and work prohibitively expensive at best and simply not an option at worst. This number hasn’t budged significantly in the last decade, and it’s not just a problem for the United States. In Hungary, Spain, and New Zealand, a similar percentage of households also lack a broadband subscription according to data from the Organization for Economic Co-operation and Development.

Faced with the ongoing COVID-19 outbreak, Internet service providers in the United States have already taken several steps to expand broadband access. Comcast, for example, has made its public Wi-Fi network available to anyone. The company has also expanded its Internet Essentials program—which provides a US $9.95 monthly connection and a subsidized laptop—to a larger number of people on some form of government assistance.

To those who already have access but are now facing financial uncertainty, AT&T, Comcast, and more than 200 other U.S. ISPs have pledged not to cut off subscribers who can’t pay their bills and not to charge late fees, as part of an FCC plan called Keep Americans Connected. Additionally, AT&T, Comcast, and Verizon have also promised to eliminate data caps for the near future, so customers don’t have to worry about blowing past a data limit while learning and working remotely.

It’s good to keep people connected during quarantines and social distancing, but going forward, some of these changes should become permanent. It’s not enough to say that broadband is a basic necessity; we have to push for policies that ensure companies treat it that way.

“If it wasn’t clear before this crisis, it is crystal clear now that broadband is a necessity for every aspect of modern civic and commercial life. U.S. policymakers need to treat it that way,” FCC Commissioner Jessica Rosenworcel says. “We should applaud public spirited efforts from our companies, but we shouldn’t stop there.” 

This article appears in the May 2020 print issue as “We All Deserve Broadband.”

How the Internet Can Cope With the Explosion of Demand for “Right Now” Data During the Coronavirus Outbreak

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/everyone-staying-home-because-of-covid19-is-targeting-the-internets-biggest-weak-spot

The continuing spread of COVID-19 has forced far more people to work and learn remotely than ever before. And more people in self-isolation and quarantine means more people are streaming videos and playing online games. The spike in Internet usage has some countries looking at ways to curb streaming data to avoid overwhelming the Internet.

But the amount of data we’re collectively using now is not actually the main cause of the problem. It’s the fact that we’re all suddenly using many more low-latency applications: teleconferencing, video streaming, and so on. The issue is not that the Internet is running out of throughput. It’s that there’s a lot more demand for data that needs to be delivered without any perceivable delay.

“The Internet is getting overwhelmed,” says Bayan Towfiq, the founder and CEO of Subspace, a startup focusing on improving the delivery of low-latency data. “The problem is going to get worse before it gets better.” Subspace wasn’t planning to come out of stealth mode until the end of this year, but the COVID-19 crisis has caused the company to rethink those plans.

“What’s [been a noticeable problem while everyone is at home streaming and videoconferencing] is less than one percent of data. [For these applications] it’s more important than the other 99 percent,” says Towfiq. While we all collectively use far more data loading webpages and browsing social media, we don’t notice if a photo takes half a second to load in the same way we notice a half-second delay on a video conference call.

So if we’re actually not running out of data throughput, why the concern over streaming services and teleconferencing overloading the Internet?

“The Internet doesn’t know about the applications running on it,” says Towfiq. Put another way, the Internet is agnostic about the type of data moving from point A to point B. What matters most, based on how the Internet has been built, is moving as much data as possible.

And normally that’s fine, if most of the data is in the form of emails or people browsing Amazon. If a certain junction is overwhelmed by data, load times may be a little slower. But again, we barely notice a delay in most of the things for which we use the Internet.

The growing use of low-latency applications, however, means those same bottlenecks are painfully apparent. When a staff Zoom meeting has to contend with someone trying to watch the Mandalorian, the Internet sees no difference between your company’s videochat and Baby Yoda.

For Towfiq, the solution to the Internet’s current stress is not to cut back on the amount of video-conferencing, streaming, and online gaming, as has been suggested. Instead, the solution is what Subspace has been focused on since its founding last year: changing how the Internet works by forcing it to prioritize that one percent of data that absolutely, positively has to get there right away.
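In the abstract, prioritizing that one percent means a network hop drains latency-sensitive packets before bulk ones. The toy scheduler below illustrates only that general idea; the two traffic classes and the packet format are assumptions for illustration and are not Subspace’s technology.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    priority: int  # 0 = latency-sensitive (video call), 1 = bulk transfer
    seq: int       # tie-breaker that preserves arrival order within a class
    payload: str = field(compare=False)

queue = []
arrivals = [
    (1, "web page image"),
    (0, "video-conference frame"),
    (1, "software update chunk"),
    (0, "game state update"),
]
for seq, (prio, payload) in enumerate(arrivals):
    heapq.heappush(queue, Packet(prio, seq, payload))

# Latency-sensitive traffic is forwarded first; bulk traffic waits its turn.
while queue:
    pkt = heapq.heappop(queue)
    print(f"forwarding (class {pkt.priority}): {pkt.payload}")
```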

Subspace has been installing both software and hardware for ISPs in cities around the world designed to do exactly that. Towfiq says ISPs already saw the value in Subspace’s tech after the company demonstrated that it could make online gaming far smoother for players by reducing the amount of lag they dealt with.

Initially Subspace was sending out engineers to personally install its equipment and software for ISPs and cities they were working with. But with the rising demand and the pandemic itself, the company is transitioning to “palletizing” its equipment: making it so that, after shipping it, the city or ISP can plug in just a few cables and change how their networks function.

Now, Towfiq says, the pandemic has made it clear that the startup needed to immediately come out of stealth. Even though Subspace was already connecting its new tech to cities’ network infrastructure at a rate of five per week in February, coming out of stealth will allow the company to publicly share information about what it’s working on. The urgency, says Towfiq, outweighed the company’s original plans to conduct some proof-of-concept trials and build out a customer base.

“There’s a business need that’s been pulled out of us to move faster and unveil right now,” Towfiq says. He adds that Subspace didn’t make the decision to come out of stealth until last Tuesday. “There’s a macro thing happening with governments and Internet providers not knowing what to do.”

Subspace could offer the guidance these entities need to avoid overwhelming their infrastructure. And once we’re all back to something approximating normal after the COVID-19 outbreak, the Internet will still benefit from the types of changes Subspace is making. As Towfiq says, “We’re becoming a new kind of hub for the Internet.”

How to Detect a Government’s Hand Behind Internet Shutdowns

Post Syndicated from Jeremy Hsu original https://spectrum.ieee.org/tech-talk/telecom/internet/how-to-detect-a-governments-hand-behind-internet-shutdowns

Internet shutdowns that affect entire regions or countries and cost billions of dollars annually have become a widespread phenomenon, especially as various governments wield them like a blunt club to restrict citizens’ access to online information.

Some governments deploy Internet shutdowns in an attempt to suppress protests, while Iraq’s Ministry of Education even orders shutdowns to prevent cheating during national school exams. The trick for independent observers trying to keep track of it all involves figuring out the difference between government-ordered shutdowns versus other causes of Internet outages.

In early 2020, the five-person team behind the nongovernmental organization NetBlocks was watching dips in Internet connectivity happening in a particular region of China over several months. That could have sparked suspicion that China’s online censors—who restrict access to certain online content as part of China’s “Great Firewall”—were perhaps throttling some popular online services or social media networks. But the NetBlocks team’s analysis showed that such patterns likely had to do with businesses shutting down or limiting operations to comply with government efforts aimed at containing the coronavirus outbreak that has since become a pandemic.

“When you’re investigating an internet shutdown, you need to work from both ends to conclusively verify that incident has happened, and to understand why it’s happened,” says Alp Toker, executive director of NetBlocks. “This means ruling out different types of outages.”

NetBlocks is among the independent research groups trying to keep an eye on the growing prevalence of Internet shutdowns. Since it formed in 2016, the London-based NetBlocks has expanded its focus from Turkey and the Middle East to other parts of the world by using remote measurement techniques. These include analytics software that monitors how well millions of phones and other devices can access certain online websites and services, along with both hardware probes plugged into local routers and an Internet browser probe that anyone can use to check their local connectivity.

But NetBlocks also relies upon what Toker describes as a more hands-on investigation to manually check out various incidents. That could mean checking in with local engineers or Internet service providers who are in a position to help confirm or rule out certain lines of inquiry. This combined approach has helped NetBlocks investigate all sorts of causes of Internet shutdowns, including major hurricanes, nationwide power outages in Venezuela and cuts in undersea Internet cables affecting Africa and the Middle East. Each of these types of outages provides data that NetBlocks is using to train machine learning algorithms in hopes of better automating detection and analysis of different events.

“Each of the groups that’s currently monitoring Internet censorship uses a different technical approach and can observe different aspects of what’s happening,” says Zachary Weinberg, a postdoctoral researcher at the University of Massachusetts Amherst and a member of the Information Controls Lab (ICLab) project. “We’re working with them on combining all of our data sets to get a more complete picture.”

ICLab relies heavily on a network of commercial virtual private networks (VPNs) to gain observation points that provide a window into Internet connectivity in each country, along with a handful of human volunteers based around the world. These VPN observation points can do bandwidth-intensive tests and collect lots of data on network traffic without endangering volunteers in certain countries. But one limitation of this approach is that VPN locations in commercial data centers are sometimes not subject to the same Internet censorship affecting residential networks and mobile networks.

If a check turns up possible evidence of a network shutdown, ICLab’s internal monitoring alerts the team. The researchers use manual confirmation checks to make sure it’s a government-ordered shutdown action and not something like a VPN service malfunction. “We have some ad-hoc rules in our code to try to distinguish these possibilities, and plans to dig into the data [collected] so far and come up with something more principled,” Weinberg says.

The Open Observatory of Network Interference (OONI) takes a more decentralized, human-reliant approach to measuring Internet censorship and outages. OONI’s six-person team has developed and refined a software tool called OONI probe that people can download and run to check local Internet connectivity with a number of websites, including a global test list of internationally relevant websites (such as Facebook) and a country-specific test list.
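The spirit of a test-list check can be sketched in a few lines, though OONI’s real methodology is far more careful (it compares results against control measurements, among other things). The URLs and timeout below are placeholders.

```python
import urllib.request

# A stand-in for a censorship test list; real lists are curated per country.
test_list = [
    "https://www.wikipedia.org",
    "https://www.bbc.com",
    "https://example.com",
]

results = {}
for url in test_list:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            results[url] = f"reachable (HTTP {resp.status})"
    except Exception as exc:  # DNS failure, reset, timeout, block page, etc.
        results[url] = f"failed ({exc.__class__.__name__})"

for url, outcome in results.items():
    print(url, "->", outcome)
```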

The OONI project began when members of the Tor Project, the nonprofit organization that oversees the Tor network designed to enable people to use the Internet anonymously, began creating “ad hoc scripts” to investigate blocking of Tor software and other examples of Internet censorship, says Arturo Filasto, lead developer of OONI. Since 2012, that has evolved into the free and open-source OONI probe with an openly-documented methodology explaining how it measures Internet censorship, along with a frequently updated database that anyone can search.

“We eventually consolidated [that] into the software that now tens of thousands of people run all over the world to collect their own evidence of Internet censorship and contribute to this growing pool of open data that anybody can use to research and investigate various forms of information controls on the Internet,” Filasto says.

Beyond the tens of thousands of active monthly users, hundreds of millions of people have downloaded the OONI probe. That probe is currently available as a mobile app and for desktop Linux and macOS users who don’t mind using the command-line interface, but the team aims to launch a more user-friendly desktop program for Windows and macOS users in April 2020. 

Other groups have their own approaches. The CensoredPlanet lab at the University of Michigan uses echo servers that exist primarily to bounce messages back to senders as observation points. The Cooperative Association for Internet Data Analysis (CAIDA) at the University of California in San Diego monitors global online traffic involving the Border Gateway Protocol, which backbone routers use to communicate with each other. 

On the low-tech side, news articles and word-of-mouth reports from ordinary people can also provide valuable internet outage data for websites such as the Internet Shutdown Tracker run by the Software Freedom Law Centre in New Delhi, India. But the Internet Shutdown Tracker website also invites mobile users to download and install the OONI probe tool as a way of helping gather more data on regional and city-level Internet shutdowns ordered by India’s government.

Whatever their approach, most of the groups tracking Internet shutdowns and online censorship still consist of small teams with budget constraints. For example, ICLab’s team would like to speed up and automate much of their process, but their budget is reliant in large part upon getting grants from the U.S. National Science Foundation. They also have limited data storage that restricts them to checking each country about two or three times a week on average to collect detailed cycles of measurements—amounting to about 500 megabytes of raw data per country. 

Another challenge comes on the data collection side. People may face personal risk in downloading and using OONI probe or similar tools in some countries, especially if the government’s laws regard such actions as illegal or even akin to espionage. This is why the OONI team openly warns about the risk up front as part of what they consider their informed consent process, and even require mobile users to complete a quiz before starting to use the OONI probe app.

“Thanks to the fact that many people are running OONI probe in China and Iran, we’ve been able to uncover a lot of really interesting and important cases of Internet censorship that we wouldn’t otherwise have known,” Filasto says. “So we are very grateful to the brave users of OONI probe that have gathered these important measurements.”

Recent trends in both government information control strategies and the broader Internet landscape may also complicate the work of such groups. Governments in countries such as China, Russia, and Iran have begun moving away from network-level censorship toward embedding censorship policies within large social media platforms and chat systems such as Tencent’s WeChat in China. Detecting more subtle censorship within these platforms represents an even bigger challenge than collecting evidence of a region-wide Internet shutdown.

“We have to create accounts on all these systems, which in some cases requires proof of physical-space identity, and then we have to automate access to them, which the platforms intentionally make as difficult as possible,” Weinberg says. “And then we have to figure out whether someone’s post isn’t showing up because of censorship, or because the ‘algorithm’ decided our test account wouldn’t be interested in it.”

In 2019, large-scale Internet shutdowns affecting entire countries occurred alongside a shift toward “more nuanced Internet disruptions that happen on different layers,” Toker says. The NetBlocks team is refining its analytical capability to home in on different types of outages by learning more about the daily pattern of Internet traffic that reflects each country’s normal economic activity. Toker also hopes that his group and others can continue forging international cooperation to study these issues together; for now, NetBlocks relies on contributions from the broader technical community and on volunteers.

“There are bubbles of expertise in different parts of the world, and those haven’t necessarily combined, so from where we’ve been coming I think those bridges are just starting to be built,” Toker says. “And that means really getting engineers together from different fields and different backgrounds, whether it’s electrical engineering or Internet engineering.”
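
As a toy illustration of the baseline-versus-observed idea behind the kind of traffic-pattern analysis Toker describes, the Python sketch below flags hours in which a hypothetical reachability score drops far below its historical norm for that hour. The data, threshold, and function names are invented for the example and do not reflect NetBlocks’ actual analytics.

from statistics import mean, stdev

def flag_outage_hours(history: dict, today: dict, threshold: float = 3.0) -> list:
    """Return the hours (0-23) where today's reachability score falls more
    than `threshold` standard deviations below that hour's historical mean."""
    flagged = []
    for hour, observed in today.items():
        baseline = history.get(hour, [])
        if len(baseline) < 2:
            continue  # not enough history to judge this hour
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and observed < mu - threshold * sigma:
            flagged.append(hour)
    return flagged

if __name__ == "__main__":
    # Hypothetical reachability scores (fraction of probes answered) per hour.
    history = {h: [0.97, 0.95, 0.96, 0.98] for h in range(24)}
    today = {h: 0.96 for h in range(24)}
    today[14] = 0.40  # a sharp afternoon drop
    print(flag_outage_hours(history, today))  # -> [14]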

Facebook Switches to New Timekeeping Service

Post Syndicated from Amy Nordrum original https://spectrum.ieee.org/tech-talk/telecom/internet/facebook-new-time-keeping-service

Facebook recently switched millions of its own servers and consumer products (including Portal and Oculus VR headsets) over to a new timekeeping service. The company says the new service, built in-house by its engineers using open-source tools, is more scalable than the one it used previously. What’s more, it will improve the accuracy of devices’ internal clocks from 10 milliseconds to 100 microseconds.

To figure out what time it is, Internet-connected devices look to timekeeping services maintained by companies or government agencies such as the U.S. National Institute of Standards and Technology (NIST). There are dozens of such services available. Devices constantly ping them for fresh timestamps formatted in the Network Time Protocol (NTP), and use the info to set or recalibrate their internal clocks.

With the announcement, Facebook joins other tech companies, including Apple and Google, that operate publicly available timekeeping services of their own. Facebook’s service is now available to the public for free at time.facebook.com.
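
For readers curious what querying such a service looks like, here is a minimal SNTP client in Python, using only the standard library. It sends the classic 48-byte client request and reads back the server’s transmit timestamp; time.facebook.com is the public endpoint mentioned above, though any public NTP server should answer the same way. This is a bare-bones sketch, not Facebook’s own tooling, and it skips the round-trip and offset corrections a real NTP client performs.

import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def query_ntp(server: str = "time.facebook.com", port: int = 123) -> float:
    """Return the server's transmit timestamp as Unix time (seconds)."""
    # 48-byte request; first byte 0x1B = leap indicator 0, version 3, mode 3 (client).
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5.0)
        sock.sendto(packet, (server, port))
        response, _ = sock.recvfrom(1024)
    # Transmit timestamp: integer seconds in bytes 40-43, fractional part in 44-47.
    seconds, fraction = struct.unpack("!II", response[40:48])
    return seconds - NTP_EPOCH_OFFSET + fraction / 2**32

if __name__ == "__main__":
    server_time = query_ntp()
    print("offset vs. local clock: %.6f s" % (server_time - time.time()))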

Snag With Linking Google’s Undersea Cable to Saint Helena Could Leave Telecom Monopoly Entrenched

Post Syndicated from Michael Koziol original https://spectrum.ieee.org/tech-talk/telecom/internet/googles-planned-equiano-undersea-cable-branch-to-saint-helena-hits-a-snag

Last June, Google announced an addition to the company’s planned Equiano undersea cable: Besides stretching down the length of Africa’s western coastline, the cable would have a branch splitting off to connect the remote island of Saint Helena, a British overseas territory. The cable would be an incredible gain for the island’s roughly 4,500 people, who today rely on a shared 50-megabit-per-second satellite link for Internet access.

The cable is expected to deliver a minimum of several hundred gigabits per second. That is far more capacity than the island’s residents can use, and it would in fact be prohibitively expensive for Helenians on their own. To make the cable’s costs feasible, Christian von der Ropp, who started the Connect Saint Helena campaign in 2011, worked on the possibility of getting satellite ground stations installed on the island.

These ground stations would be crucial links between the growing number of satellites in orbit and our global network infrastructure. One of the biggest problems with satellites, especially lower-orbiting ones, is that they spend significant chunks of time without a good backhaul connection—usually because they’re over an ocean or a remote area on land. The southern Atlantic, as you can surely guess, is one such spot. Saint Helena happens to be right in the middle of it.

Von der Ropp found satellite companies receptive: OneWeb, Spire Global, and Laser Light have all expressed interest in building ground stations on the island. Those stations would be a perfect match for the cable’s throughput, taking up the bulk of its capacity and effectively subsidizing the cost of high-speed access for Helenians.

But what seemed like smooth sailing for the cable has run into another bump. The island government is currently at odds with the island’s telecom monopoly, Sure South Atlantic. If the dispute cannot be resolved by the time the cable lands in late 2021 or early 2022, Helenians could see incredibly fast Internet speeds come to their shores, only to go nowhere once they arrive.

“The arrival of unlimited broadband places [Sure’s] business at risk,” says von der Ropp. He points out that, in general, when broadband access becomes essentially unlimited, users move away from traditional phone and television services in favor of messaging and streaming services like Skype, WhatsApp, and Netflix. If fewer Helenians are paying for Sure’s service packages, the company may instead jack up the prices on Internet access—which Helenians would then be forced to pay.

Most pressing, however, is that the island’s infrastructure simply cannot handle the data rates the Equiano branch will deliver. Because Sure is a monopoly, the company has little incentive to upgrade or repair infrastructure in any but the direst circumstances (Sure did not respond to a request for comment for this story).

That could give satellite operators cold feet as well. Under Sure’s current contract with the island government, satellite operators would be forbidden from running their own fiber from their ground stations to the undersea cable’s terminus. They would be reliant on Sure’s existing infrastructure to make the connection.

Sure’s current monopoly contract is due to expire on December 31, 2022, about a year after the cable is due to land on the island, assuming that the Saint Helena government does not renew it. Given the dissatisfaction of many on the island with the quality of service, nonrenewal appears to be a distinct possibility. Right now, for example, Helenians pay 82 pounds per month for 11 gigabytes of data under Sure’s Gold Package, an effective rate of roughly 0.75 pence per megabyte. The moment they exceed that cap, Sure charges them 5 pence per megabyte, roughly 6.7 times the in-bundle rate.

Eleven gigabytes per month may seem hard to burn through, but remember that for Helenians, that allowance covers everything: streaming, Web browsing, phone calls, and texting. For a Helenian who has already exceeded the cap, a routine 1.5-GB iPhone update could cost an additional 75 pounds.
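
For the arithmetic behind those figures, here is a quick back-of-the-envelope calculation in Python, assuming decimal units (1 GB = 1,000 MB) and the Gold Package pricing quoted above.

# Back-of-the-envelope check of the pricing figures quoted in the text.
PACKAGE_PRICE_GBP = 82.0       # pounds per month
PACKAGE_ALLOWANCE_MB = 11_000  # 11 GB, assuming 1 GB = 1,000 MB
OVERAGE_PENCE_PER_MB = 5.0     # out-of-bundle rate

in_bundle_pence_per_mb = PACKAGE_PRICE_GBP * 100 / PACKAGE_ALLOWANCE_MB
print(f"in-bundle rate:   {in_bundle_pence_per_mb:.2f} pence/MB")                   # ~0.75
print(f"overage multiple: {OVERAGE_PENCE_PER_MB / in_bundle_pence_per_mb:.1f}x")    # ~6.7

iphone_update_mb = 1_500  # a routine 1.5-GB update
overage_cost_gbp = iphone_update_mb * OVERAGE_PENCE_PER_MB / 100
print(f"cost of that update over the cap: {overage_cost_gbp:.0f} pounds")           # 75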

But it could be hard to remove Sure as a monopoly. If the island government ends the contract, Sure has a right to compensation for all of its assets on the island. Von der Ropp estimates the government would have to pay Sure somewhere in the ballpark of 4 million to 5 million pounds, an extremely hefty sum considering that the government’s total annual budget is between 10 million and 20 million pounds.

“They will need cash to pay the monopoly’s ransom,” says von der Ropp, adding that it will likely be up to the United Kingdom to foot the bill. Meanwhile, the island will need to look for new providers to replace Sure, ones that will hopefully invest in upgrading the island’s deteriorating infrastructure.

There is interest in doing just that. As Councilor Cyril Leo put it in a recent speech to the island’s Legislative Council, “Corporate monopoly in St Helena cannot have the freedom to extract unregulated profits, from the fiber-optic cable enterprise, at the expense of the people of Saint Helena.” What remains to be seen is whether the island can actually find a way to remove that corporate monopoly.