Tag Archives: Verizon

Phone Store Employee Sued For Promoting ‘Pirate’ App Showbox

Post Syndicated from Ernesto original https://torrentfreak.com/phone-store-employee-sued-for-promoting-pirate-app-showbox-180523/

In recent years, a select group of companies has pressured hundreds of thousands of alleged pirates to pay significant settlement fees or face legal repercussions.

Traditionally, the companies go after BitTorrent users, as they are easy to track down by their IP-addresses. In Hawaii, however, a newly filed case adds a twist to this scheme.

The studios ME2 Productions and Headhunter, who own the rights to the movies ‘Mechanic: Resurrection‘ and ‘A Family Man‘ respectively, are suing an employee of a phone store who allegedly promoted and installed the ‘pirate’ application Showbox on a customer’s device.

Showbox is a movie and TV-show streaming application that’s particularly popular among Android users. The app is capable of streaming torrents and works on a wide variety of devices.

While it can be used to stream legitimate content, many people use it to stream copyrighted works. In fact, the application itself displays this infringing use on its homepage, showing off pirated movies.

In a complaint filed at the US District Court of Hawaii, the studios accuse local resident Taylor Wolf of promoting Showbox and its infringing uses.

According to the studios, Wolf works at the Verizon-branded phone store Victra, where she helped customers set up and install phones, tablets and other devices. In doing so, the employee allegedly recommended the Showbox application.

“The Defendant promoted the software application Show Box to said members of the general public, including Kazzandra Pokini,” the complaint reads, adding that Wolf installed the Showbox app on the customer’s tablet, so she could watch pirated content.

From the complaint

The movie studios note that the defendant told the customer in question that her tablet could be used to watch free movies. The employee allegedly installed the Showbox app on the device in the store and showed the customer how to use it.

“Defendant knew that the Show Box app would cause Kazzandra Pokini to make copies of copyrighted content in violation of copyright laws of the United States,” the complaint adds.

The lawsuit is unique in that the studios are going after someone who isn’t directly accused of sharing their films. In traditional lawsuits, they target the people who actually share their work.

The complaint doesn’t mention why they chose this tactic. One option is that they initially went after the customer, who then pointed ME2 and Headhunter toward the phone store employee.

Neither studio is new to the piracy lawsuit game. ME2 is connected to Millennium Films and Headhunter is an affiliate of Voltage Pictures, one of the pioneers of so-called copyright trolling cases in the US.

As in most other cases, the copyright holders demand a preliminary injunction to stop Wolf from engaging in any infringing activities, as well as statutory damages, which theoretically can go up to $150,000 per pirated film, but are usually settled for a fraction of that.

A copy of the complaint filed against Taylor Wolf at the US District Court of Hawaii is available here (pdf).


Accessing Cell Phone Location Information

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/05/accessing_cell_.html

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

Key Internet Players Excoriate Canadian Pirate Site Blocking Plan

Post Syndicated from Ernesto original https://torrentfreak.com/key-internet-players-excoriate-canadian-pirate-site-blocking-plan-180323/

In January, a coalition of Canadian companies called on the country’s telecom regulator CRTC to establish a local pirate site blocking program, which would be the first of its kind in North America.

The plan is backed by FairPlay Canada, a coalition of copyright holders and major players in the telco industry, such as Bell and Rogers, which also have media companies of their own.

Before making any decisions, the CRTC has asked the public for comments. Last week we highlighted a few from both sides, but in recent days two Internet heavyweights have chimed in.

The first submission comes from the Internet Infrastructure Coalition (i2Coalition), which counts Google, Amazon, Cogeco PEER1, and Tucows among its members. These are all key players in the Internet ecosystem, and they have a rather strong opinion on the plan.

In a strongly-worded letter, the coalition urges the CRTC to reject the proposed “government-backed internet censorship committee” which they say will hurt the public as well as various companies that rely on a free and open Internet.

“The not-for-profit organization envisioned by the FairPlay Canada proposal lacks accountability and oversight, and is certain to cause tremendous collateral damage to innocent Internet business owners,” they write.

“There is shockingly little judicial review or due process in establishing and approving the list of websites being blocked — and no specifics of how this blocking is actually to be implemented.”

According to the coalition, the proposal would stifle innovation, shutter legitimate businesses through overblocking, and harm Canada’s Internet economy.

In addition, they fear that it may lead to broad blockades of specific technologies. This includes VPNs, which Bell condemned in the past, as well as BitTorrent traffic.

“VPN usage itself could be targeted by this proposal, as could the use of torrents, another technology with wide legitimate usage, including digital security on public wifi, along with myriad other business requirements,” the coalition writes.

“We caution that this proposal could be used to attempt to restrict technology innovation. There are no provisions within the FairPlay proposal to avoid vilification of specific technologies. Technologies themselves cannot be bad actors.”

According to the i2Coalition, Canada’s Copyright Modernization Act is already one of the toughest anti-piracy laws in the world and they see no need to go any further. As such, they urge the authorities to reject the plan.

“The government and the CRTC should not hesitate to firmly reject the website blocking plan as a disproportionate, unconstitutional proposal sorely lacking in due process that is inconsistent with the current communications law framework,” the letter concludes.

The second submission we want to highlight comes from the Internet Society. In addition to many individual members, it is supported by dozens of major companies. This includes Google and Facebook, but also ISPs such as Verizon and Comcast, and even copyright holders such as 21st Century Fox and Disney.

While the Internet Society’s Hollywood members have argued in favor of pirate site blockades in the past, even in court, the organization’s submission argues fiercely against this measure.

Pointing to an extensive report the Internet Society published last spring, they inform the CRTC that website blocking techniques “do not solve the problem” and “inflict collateral damage.”

The Internet Society calls on the CRTC to carefully examine the proposal’s potential negative effects on the security of the Internet, the privacy of Canadians, and how it may inadvertently block legitimate websites.

“In our opinion, the negative impacts of disabling access greatly outweigh any benefits,” the Internet Society writes.

Thus far, nearly 10,000 responses have been submitted to the CRTC. The official deadline is March 29, after which it is up to the telecoms regulator to factor the different opinions into its final decision.

The i2Coalition submission is available here (pdf) and the Internet Society’s comments can be found here (pdf).


Success at Apache: A Newbie’s Narrative

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/170536010891


Kuhu Shukla (bottom center) and team at the 2017 DataWorks Summit


By Kuhu Shukla

This post first appeared here on the Apache Software Foundation blog as part of ASF’s “Success at Apache” monthly blog series.

As I sit at my desk on a rather frosty morning with my coffee, looking up new JIRAs from the previous day in the Apache Tez project, I feel rather pleased. The latest community release vote is complete, the bug fixes that we so badly needed are in and the new release that we tested out internally on our many thousand strong cluster is looking good. Today I am looking at a new stack trace from a different Apache project process and it is hard to miss how much of the exceptional code I get to look at every day comes from people all around the globe. A contributor leaves a JIRA comment before he goes on to pick up his kid from soccer practice while someone else wakes up to find that her effort on a bug fix for the past two months has finally come to fruition through a binding +1.

Yahoo – which joined AOL, HuffPost, Tumblr, Engadget, and many more brands to form the Verizon subsidiary Oath last year – has been at the frontier of open source adoption and contribution since before I was in high school. So while I have no historical trajectories to share, I do have a story on how I found myself on an epic journey of migrating all of Yahoo’s jobs from Apache MapReduce to Apache Tez, a then-new DAG-based execution engine.

Oath’s grid infrastructure is driven through and through by Apache technologies, be it storage through HDFS, resource management through YARN, job execution frameworks with Tez, or user interface engines such as Hive, Hue, Pig, Sqoop, Spark, and Storm. Our grid solution is specifically tailored to Oath’s business-critical data pipeline needs using the polymorphic technologies hosted, developed, and maintained by the Apache community.
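
To make the “job execution with Tez” part concrete, here is a hypothetical sketch of what submitting work to such a stack can look like from a user’s side: a Hive query executed with Tez as the engine, via the PyHive client. The host, port, user, and table names are placeholders, and PyHive is an assumed client library; none of this is taken from Oath’s actual setup.

```python
# Hypothetical sketch: running a Hive query on the Tez execution engine.
# Host, port, username, and table names are placeholders; PyHive is an
# assumed client library (pip install "pyhive[hive]").
from pyhive import hive

conn = hive.Connection(host="hive-gateway.example.com", port=10000,
                       username="analyst")
cur = conn.cursor()

# Tell Hive to execute this session's queries as Tez DAGs rather than
# classic MapReduce jobs.
cur.execute("SET hive.execution.engine=tez")

# A typical aggregation that Hive compiles into a Tez DAG behind the scenes.
cur.execute("""
    SELECT page, COUNT(*) AS views
    FROM   web_logs
    WHERE  dt = '2018-05-01'
    GROUP  BY page
    ORDER  BY views DESC
    LIMIT  10
""")
for page, views in cur.fetchall():
    print(page, views)

conn.close()
```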

On the third day of my job at Yahoo in 2015, I received a YouTube link to An Introduction to Apache Tez. I watched it carefully, trying to keep up with all the questions I had, and recognized a few names from my academic readings of YARN ACM papers. I continued to ramp up on YARN and HDFS, the foundational Apache technologies Oath heavily contributes to even today. For the first few weeks I spent time picking out my favorite (necessary) mailing lists to subscribe to and getting started on setting up a pseudo-distributed Hadoop cluster. I continued to find my footing with newbie contributions and being ever more careful with whitespace in my patches. One thing was clear – Tez was the next big thing for us. By the time I could truly call myself a contributor in the Hadoop community, nearly 80-90% of Yahoo’s jobs were running on Tez. But just like hiking up the Grand Canyon, the last 20% is where all the pain was. Being a part of the solution to this challenge was a happy prospect, and thankfully contributing to Tez became a goal in my next quarter.

The next sprint planning meeting ended with me getting my first major Tez assignment – progress reporting. The progress reporting in Tez was non-existent – “Just needs an API fix,”  I thought. Like almost all bugs in this ecosystem, it was not easy. How do you define progress? How is it different for different kinds of outputs in a graph? The questions were many.

I, however, did not have to go far to get answers. The Tez community actively came to a newbie’s rescue, finding answers and posing important questions. I started attending the bi-weekly Tez community sync-up calls and asking existing contributors and committers for course corrections. Suddenly the team was much bigger, the goals much more chiseled. This was new to anyone like me who came from the networking industry, where the most open parts of the code are the RFCs and the implementation details are often hidden. These meetings served as a clean room for our coding ideas and experiments. Ideas were shared, down to which data structure we should pick and what a future user of Tez would get from it. In between the usual status updates, extensive knowledge transfers were made.

Oath uses Apache Pig and Apache Hive extensively and most of the urgent requirements and requests came from Pig and Hive developers and users. Each issue led to a community JIRA and as we started running Tez at Oath scale, new feature ideas and bugs around performance and resource utilization materialized. Every year most of the Hadoop team at Oath travels to the Hadoop Summit where we meet our cohorts from the Apache community and we stand for hours discussing the state of the art and what is next for the project. One such discussion set the course for the next year and a half for me.

We needed an innovative way to shuffle data. Frameworks like MapReduce and Tez have a shuffle phase in their processing lifecycle wherein the data from upstream producers is made available to downstream consumers. Even though Apache Tez was designed with a feature set corresponding to optimization requirements in Pig and Hive, the Shuffle Handler Service was retrofitted from MapReduce at the time of the project’s inception. With several thousand jobs on our clusters leveraging these features in Tez, the Shuffle Handler Service became a clear performance bottleneck. So as we stood talking about our experience with Tez with our friends from the community, we decided to implement a new Shuffle Handler for Tez. All the conversation points were now tracked through an umbrella JIRA, TEZ-3334, and the to-do list was long. I picked a few JIRAs, and as I started reading through them I realized this was all new code I would get to contribute to and review. There might be a better way to put this, but to be honest it was just a lot of fun! All the whiteboards were full, the team took walks post lunch and discussed how to go about defining the API. Countless hours were spent debugging hangs while fetching data and looking at stack traces and Wireshark captures from our test runs. Six months in, we had the feature on our sandbox clusters. There were moments ranging from sheer frustration to absolute exhilaration, with high fives, as we continued to address review comments and fix big and small issues with this evolving feature.
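
The shuffle phase described above can be pictured with a tiny, self-contained sketch: map-side records are partitioned by key so that each downstream consumer fetches only its own partition. This is a teaching toy under simplified assumptions (in-memory lists, a plain hash partitioner), not the actual Tez Shuffle Handler implementation.

```python
# Toy illustration of a shuffle: outputs from upstream producers are
# partitioned by key so each downstream consumer receives exactly the
# partition it owns. Not the Tez Shuffle Handler's real implementation.
from collections import defaultdict

def partition_for(key: str, num_consumers: int) -> int:
    """Map a key to the consumer that owns it."""
    return hash(key) % num_consumers

def shuffle(map_outputs, num_consumers: int):
    """Group (key, value) pairs from all producers into per-consumer partitions."""
    partitions = [defaultdict(list) for _ in range(num_consumers)]
    for records in map_outputs:                 # one iterable per upstream producer
        for key, value in records:
            partitions[partition_for(key, num_consumers)][key].append(value)
    return partitions

# Example: word counts from two producers, shuffled to three consumers.
producers = [[("tez", 1), ("yarn", 1)], [("tez", 1), ("hdfs", 1)]]
for i, part in enumerate(shuffle(producers, 3)):
    print(f"consumer {i} receives {dict(part)}")
```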

As much as owning your code is valued everywhere in the software community, I would never go on to say “I did this!” In fact, “we did!” It is this strong sense of shared ownership and fluid team structure that makes the open source experience at Apache truly rewarding. This is just one example. A lot of the work that was done in Tez was leveraged by the Hive and Pig communities, and cross-project interaction within Apache made the work ever more interesting and challenging. Triaging and fixing issues with the Tez rollout led us to hit a 100% migration score last year, and we also rolled the Tez Shuffle Handler Service out to our research clusters. As of last year we have run around 100 million Tez DAGs with a total of 50 billion tasks over almost 38,000 nodes.

In 2018 as I move on to explore Hadoop 3.0 as our future release, I hope that if someone outside the Apache community is reading this, it will inspire and intrigue them to contribute to a project of their choice. As an astronomy aficionado, going from a newbie Apache contributor to a newbie Apache committer was very much like looking through my telescope - it has endless possibilities and challenges you to be your best.

About the Author:

Kuhu Shukla is a software engineer at Oath and did her Masters in Computer Science at North Carolina State University. She works on the Big Data Platforms team on Apache Tez, YARN and HDFS with a lot of talented Apache PMC members and Committers in Champaign, Illinois. A recent Apache Tez Committer herself, she continues to contribute to YARN and HDFS, and spoke at the 2017 DataWorks Hadoop Summit on “Tez Shuffle Handler: Shuffling At Scale With Apache Hadoop”. Prior to that she worked on Juniper Networks’ router and switch configuration APIs. She likes to participate in open source conferences and women-in-tech events. In her spare time she loves singing Indian classical and jazz, laughing, whale watching, hiking and peering through her Dobsonian telescope.

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one-hand clapping: you rarely hear the competing side, and thus that side may sound attractive. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like DropBox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and reclassification of broadband under “Title II” so they can regulate it. Trump’s FCC is putting broadband back to “Title I”, which means the FCC can’t regulate most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, that it’s only regulating these evil conspiracy theories. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5Mbps to 25Mbps. That’s because while most consumers have multiple choices at 5Mbps, fewer consumers have multiple choices at 25Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC action when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to a slower speed (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.
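
To make that contrast concrete, here is a minimal toy simulation of the two behaviors: an adaptive streaming client that steps its bitrate down under congestion, and a bulk-transfer client that never backs off. The link capacity, bitrate ladder, and ramp-up numbers are invented for illustration; this is not how Netflix’s player or any BitTorrent client is actually implemented.

```python
# Toy model of the congestion behavior described above: an adaptive
# streaming client steps its bitrate down when less bandwidth is
# available, while a greedy bulk-transfer client keeps taking more.
LINK_CAPACITY = 10.0                     # Mbps of shared link capacity (made up)
BITRATE_LADDER = [8.0, 5.0, 3.0, 1.5]    # streaming qualities, highest first (made up)

def adaptive_rate(available: float) -> float:
    """Pick the highest bitrate that fits the measured available bandwidth."""
    for rate in BITRATE_LADDER:
        if rate <= available:
            return rate
    return BITRATE_LADDER[-1]            # never drops below the lowest quality

def simulate(steps: int = 6) -> None:
    greedy = 1.0                          # bulk client ramps up and never backs off
    for step in range(steps):
        available = max(LINK_CAPACITY - greedy, 0.0)
        stream = adaptive_rate(available)  # streaming client backs off under congestion
        print(f"step {step}: greedy={greedy:4.1f} Mbps, stream drops to {stream:.1f} Mbps")
        # Greedy client expands each step, up to everything the stream can give up.
        greedy = min(greedy + 2.0, LINK_CAPACITY - BITRATE_LADDER[-1])

if __name__ == "__main__":
    simulate()
```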

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have a great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC for permission first. And with the new heavy-handed FCC rules, the FCC has become hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free-market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism and libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity come from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet” order, or with the vague things people mean by the slogan “net neutrality”.

NetNeutrality vs. Verizon censoring Naral

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/11/netneutrality-vs-verizon-censoring-naral.html

People keep retweeting this ACLU graphic in support of net neutrality. It’s wrong. In this post, I debunk the second item. I debunk other items in other posts [1] [4].

Firstly, it’s not a NetNeutrality issue (which applies only to the Internet), but an issue with text-messages. In other words, it’s something that will continue to happen even with NetNeutrality rules. People relate this to NetNeutrality as an analogy, not because it actually is such an issue.

Secondly, it’s an edge/content issue, not a transit issue. The detail in this case is that Verizon provides a program for sending bulk messages to its customers from the edge of the network. Verizon isn’t censoring text messages in transit, but at the edge. You can send a text message to your friend on the Verizon network, and it won’t be censored. Thus the analogy is incorrect: the correct analogy would be with content providers like Twitter and Facebook, not ISPs like Comcast.

Like all cell phone vendors, Verizon polices this content, canceling accounts that abuse the system, like spammers. We all agree such censorship is a good thing, and that such censorship of content providers is not remotely a NetNeutrality issue. Content providers do this not because they disapprove of the content of spam so much as because of the distaste their customers have for spam.

Content providers that are political, rather than neutral to politics, are indeed worrisome. It’s not a NetNeutrality issue per se, but it is a general “neutrality” issue. We free-speech activists want all content providers (Twitter, Facebook, Verizon mass-texting programs) to be free of political censorship, though we don’t want government to mandate such neutrality.

But even here, Verizon may be off the hook. They appear not to be censoring one political view over another, but the controversial/unsavory way Naral expresses its views. Presumably, Verizon would be okay with less controversial political content.

In other words, as Verizon expresses its principles, it wants to block content that drives away customers, but is otherwise neutral to the content. While this may unfairly target controversial political content, it’s at least basically neutral.

So in conclusion, while activists portray this as a NetNeutrality issue, it isn’t. It’s not even close.

SOPA Ghosts Hinder U.S. Pirate Site Blocking Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/sopa-ghosts-hinder-u-s-pirate-site-blocking-efforts-171008/

Website blocking has become one of the entertainment industries’ favorite anti-piracy tools.

All over the world, major movie and music industry players have gone to court demanding that ISPs take action, often with great success.

Internal MPAA research showed that website blockades help to deter piracy, and former boss Chris Dodd said that they are one of the most effective anti-piracy tools available.

While not everyone is in agreement on this, the numbers are used to lobby politicians and convince courts. Interestingly, however, nothing is happening in the United States, which is where most pirate site visitors come from.

This is baffling to many people. Why would US-based companies go out of their way to demand ISP blocking in the most exotic locations, but fail to do the same at home?

We posed this question to Neil Turkewitz, RIAA’s former Executive Vice President International, who currently runs his own consulting group.

The main reason why pirate site blocking requests have not yet been made in the United States is down to SOPA. When the proposed SOPA legislation made headlines five years ago there was a massive backlash against website blocking, which isn’t something copyright groups want to reignite.

“The legacy of SOPA is that copyright industries want to avoid resurrecting the ghosts of SOPA past, and principally focus on ways to creatively encourage cooperation with platforms, and to use existing remedies,” Turkewitz tells us.

Instead of taking the likes of Comcast and Verizon to court, the entertainment industries focused on voluntary agreements, such as the now-defunct Copyright Alerts System. However, that doesn’t mean that website blocking and domain seizures are not an option.

“SOPA made ‘website blocking’ as such a four-letter word. But this is actually fairly misleading,” Turkewitz says.

“There have been a variety of civil and criminal actions addressing the conduct of entities subject to US jurisdiction facilitating piracy, regardless of the source, including hundreds of domain seizures by DHS/ICE.”

Indeed, there are plenty of legal options already available to do much of what SOPA promised. ABS-CBN has taken over dozens of pirate site domain names through the US court system. Most recently even through an ex-parte order, meaning that the site owners had no option to defend themselves before they lost their domains.

ISP and search engine blocking is also around the corner. As we reported earlier this week, a Virginia magistrate judge recently recommended an injunction which would require search engines and Internet providers to prevent users from accessing Sci-Hub.

Still, the major movie and music companies are not yet using these tools to take on The Pirate Bay or other major pirate sites. If it’s so easy, then why not? Apparently, SOPA may still be in the back of their minds.

Interestingly, the RIAA’s former top executive wasn’t a fan of SOPA when it was first announced, as it wouldn’t do much to extend the legal remedies that were already available.

“I actually didn’t like SOPA very much since it mostly reflected existing law and maintained a paradigm that didn’t involve ISP’s in creative interdiction, and simply preserved passivity. To see it characterized as ‘copyright gone wild’ was certainly jarring and incongruous,” Turkewitz says.

Ironically, it looks like a bill that failed to pass, and didn’t impress some copyright holders to begin with, is still holding them back after five years. They’re certainly not using all the legal options available to avoid SOPA comparison. The question is, for how long?


Six Strikes Piracy Scheme May Be Dead But Those Warnings Keep on Coming

Post Syndicated from Andy original https://torrentfreak.com/six-strikes-piracy-scheme-may-be-dead-but-those-warnings-keep-on-coming-171001/

After at least 15 years of Internet pirates being monitored by copyright holders, one might think that the message would’ve sunk in by now. For many, it definitely hasn’t.

Bottom line: when people use P2P networks and protocols (such as BitTorrent) to share files including movies and music, copyright holders are often right there, taking notes about what is going on, perhaps in preparation for further action.

That can take a couple of forms, including suing users or, more probably, firing off a warning notice to their Internet service providers. Those notices are a little like a speeding ticket, telling the subscriber off for sharing copyrighted material but letting them off the hook if they promise to be good in future.

In 2013, the warning notice process in the US was formalized into what was known as the Copyright Alert System, a program through which most Internet users could receive at least six piracy warning notices without having any serious action taken against them. In January 2017, without having made much visible progress, it was shut down.

In some corners of the web there are still users under the impression that since the “six strikes” scheme has been shut down, all of a sudden US Internet users can forget about receiving a warning notice. In reality, the complete opposite is true.

While it’s impossible to put figures on how many notices get sent out (ISPs are reluctant to share the data), monitoring of various piracy-focused sites and forums indicates that plenty of notices are still being sent to ISPs, who are cheerfully sending them on to subscribers.

Also, over the past couple of months, there appears to have been an uptick in subscribers seeking advice after receiving warnings. Many report basic notices but there seems to be a bit of a trend of Internet connections being suspended or otherwise interrupted, apparently as a result of an infringement notice being received.

“So, over the weekend my internet got interrupted by my ISP (internet service provider) stating that someone on my network has violated some copyright laws. I had to complete a survey and they brought back the internet to me,” one subscriber wrote a few weeks ago. He added that his (unnamed) ISP advised him that seven warnings would get his account disconnected.

Another user, who named his ISP as Comcast, reported receiving a notice after downloading a game using BitTorrent. He was warned that the alleged infringement “may result in the suspension or termination of your Service account” but what remains unclear is how many warnings people can receive before this happens.

For example, a separate report from another Comcast user stated that one night of careless torrenting led to his mother receiving 40 copyright infringement notices the next day. He didn’t state which company the notices came from but 40 is clearly a lot in such a short space of time. That being said and as far as the report went, it didn’t lead to a suspension.

Of course, it’s possible that Comcast doesn’t take action if a single company sends many notices relating to the same content in a small time frame (Rightscorp is known to do this) but the risk is still there. Verizon, it seems, can suspend accounts quite easily.

“So lately I’ve been getting more and more annoyed with pirating because I get blasted with a webpage telling me my internet is disconnected and that I need to delete the file to reconnect, with the latest one having me actually call Verizon to reconnect,” a subscriber to the service reported earlier this month.

A few days ago, a Time Warner Cable customer reported having to take action after receiving his third warning notice from the ISP.

“So I’ve gotten three notices and after the third one I just went online to my computer and TWC had this page up that told me to stop downloading illegally and I had to click an ‘acknowledge’ button at the bottom of the page to be able to continue to use my internet,” he said.

Also posting this week, another subscriber of an unnamed ISP revealed he’d been disconnected twice in the past year. His comments raise a few questions that keep on coming up in these conversations.

“The first time [I was disconnected] was about a year ago and the next was a few weeks ago. When it happened I was downloading some fairly new movies so I was wondering if they monitor these new movie releases since they are more popular. Also are they monitoring what I am doing since I have been caught?” he asked.

While there is plenty of evidence to suggest that old content is also monitored, there’s little doubt that the fresher the content, the more likely it is to be monitored by copyright holders. If people are downloading a brand new movie, they should expect it to be monitored by someone, somewhere.

The second point, about whether risk increases after being caught already, is an interesting one, for a number of reasons.

Following the BMG v Cox Communications case, there is now a big emphasis on ISPs’ responsibility for dealing with subscribers who are alleged to be repeat infringers. Anti-piracy outfit Rightscorp was deeply involved in that case, and the company has a patent for detecting repeat infringers.

It’s becoming clear that the company actively targets such people in order to assist copyright holders (which now includes the RIAA) in strategic litigation against ISPs, such as Grande Communications, who are claimed to be going soft on repeat infringers.

Overall, however, there’s no evidence that “getting caught” once increases the chances of being caught again, but subscribers should be aware that the Cox case changed the position on the ground. If anecdotal evidence is anything to go by, it now seems that ISPs are tightening the leash on suspected pirates and are more likely to suspend or disconnect them in the face of repeated complaints.

The final question asked by the subscriber who was disconnected twice is a common one among people receiving notices.

“What can I do to continue what we all love doing?” he asked.

Time and time again, on sites like Reddit and other platforms attracting sharers, the response is the same.

“Get a paid VPN. I’m amazed you kept torrenting without protection after having your internet shut off, especially when downloading recent movies,” one such response reads.

Nevertheless, this still fails to help some people fully understand the notices they receive, leaving them worried about what might happen after receiving one. However, the answer is nearly always straightforward.

If the notice says “stop sharing content X”, then recipients should do so, period. And, if the notice doesn’t mention specific legal action, then it’s almost certain that no action is underway. They are called warning notices for a reason.

Also, notice recipients should consider the part where their ISP assures them that their details haven’t been shared with third parties. That is the truth and will remain that way unless subscribers keep ignoring notices. Then there’s a slim chance that a rightsholder will step in to make a noise via a lawyer. At that point, people shouldn’t say they haven’t been warned.


Director of Kim Dotcom Documentary Speaks Out on Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/director-of-kim-dotcom-documentary-speaks-out-on-piracy-170902/

When you make a documentary about Kim Dotcom, someone who’s caught up in one of the largest criminal copyright infringement cases in history, the piracy issue is unavoidable.

And indeed, the topic is discussed in depth in “Kim Dotcom: Caught in the Web,” which enjoyed its digital release early last week.

As happens with most digital releases, a pirated copy soon followed. While no filmmaker would actively encourage people not to pay for their work, director Annie Goldson wasn’t surprised at all when she saw the first unauthorized copies appear online.

The documentary highlights that piracy is in part triggered by lacking availability, so it was a little ironic that the film itself wasn’t released worldwide on all services. However, Goldson had no direct influence on the distribution process.

“It was inevitable really. We have tried to adopt a distribution model that we hope will encourage viewers to buy legal copies making it available as widely as possible,” Goldson informs TorrentFreak.

“We had sold the rights, so didn’t have complete control over reach or pricing which I think are two critical variables that do impact on the degree of piracy. Although I think our sales agent did make good strides towards a worldwide release.”

Now that millions of pirates have access to her work for free, it will be interesting to see how this impacts sales. For now, however, there’s still plenty of legitimate interest, with the film now appearing in the iTunes top ten of independent films.

In any case, Goldson doesn’t subscribe to the ‘one instance of piracy is a lost sale’ theory and notes that views about piracy are sharply polarized.

“Some claim financial devastation while others argue that infringement leads to ‘buzz,’ that this can generate further sales – so we shall see. At one level, watching this unfold is quite an interesting research exercise into distribution, which ironically is one of the big themes of the film of course,” Goldson notes.

Piracy overall doesn’t help the industry forward though, she says, as it hurts the development of better distribution models.

“I’m opposed to copyright infringement and piracy as it muddies the waters when it comes to devising a better model for distribution, one that would nurture and support artists and creatives, those that do the hard yards.”

Kim Dotcom: Caught in the Web trailer

The director has no issues with copyright enforcement either. Not just to safeguard financial incentives, but also because the author does have moral and ethical rights about how their works are distributed. That said, instead of pouring money into enforcement, it might be better spent on finding a better business model.

“I’m with Wikipedia founder Jimmy Wales who says [in the documentary] that the problem is primarily with the existing business model. If you make films genuinely available at prices people can afford, at the same time throughout the world, piracy would drop to low levels.

“I think most people would prefer to access their choice of entertainment legally rather than delving into dark corners of the Internet. I might be wrong of course,” Goldson adds.

In any case, ‘simply’ enforcing piracy into oblivion seems to be an unworkable prospect – not without massive censorship, or the shutdown of the entire Internet.

“I feel the risk is that anti-piracy efforts will step up and erode important freedoms. Or we have to close down the Internet altogether. After all, the unwieldy beast is a giant copying machine – making copies is what it does well,” Goldson says.

The problem is that the industry is keeping piracy intact through its own business model. When people can’t get what they want, when and where they want it, they often turn to pirate sites.

“One problem is that the industry has been slow to change and hence we now have generations of viewers who have had to regularly infringe to be part of a global conversation.

“I do feel if the industry is promoting and advertising works internationally, using globalized communication and social media, then denying viewers from easily accessing works, either through geo-blocking or price points, obviously, digitally-savvy viewers will find them regardless,” Goldson adds.

And yes, this ironically also applies to her own documentary.

The solution is to continue to improve the legal options. This is easier said than done, as Goldson and her team can attest, and it won’t happen overnight. However, universal access for a decent price would seem to be the future.

Unless the movie industry prefers to shut down the Internet entirely, of course.

For those who haven’t seen “Kim Dotcom: Caught in the Web” yet, the film is available globally on Vimeo OnDemand, and in a lot of territories on iTunes, the PlayStation Store, Amazon, Google Play, and the Microsoft/Xbox Store. In the US there is also Vudu, Fandango Now & Verizon.

If that doesn’t work, then…


“Top ISPs” Are Discussing Fines & Browsing Hijacking For Pirates

Post Syndicated from Andy original https://torrentfreak.com/top-isps-are-discussing-fines-browsing-hijacking-for-pirates-170614/

For the past several years, anti-piracy outfit Rightscorp has been moderately successful in forcing smaller fringe ISPs in the United States to collaborate in a low-tier copyright trolling operation.

The way it works is relatively simple. Rightscorp monitors BitTorrent networks, captures the IP addresses of alleged infringers, and sends DMCA notices to their ISPs. Rightscorp expects ISPs to forward these to their customers along with an attached cash settlement demand.

These demands are usually for small amounts ($20 or $30) but most of the larger ISPs don’t forward them to their customers. This deprives Rightscorp (and clients such as BMG) of the opportunity to generate revenue, a situation that the anti-piracy outfit is desperate to remedy.

One of the problems is that when people who receive Rightscorp ‘fines’ refuse to pay them, the company does nothing, leading to a lack of respect for the company. With this in mind, Rightscorp has been trying to get ISPs involved in forcing people to pay up.

In 2014, Rightscorp said that its goal was to have ISPs place a redirect page in front of ‘pirate’ subscribers until they pay a cash fine.

“[What] we really want to do is move away from termination and move to what’s called a hard redirect, like, when you go into a hotel and you have to put your room number in order to get past the browser and get on to browsing the web,” the company said.

In the three years since that statement, the company has raised the issue again but nothing concrete has come to fruition. However, there are now signs of fresh movement which could be significant, if Rightscorp is to be believed.

“An ISP Good Corporate Citizenship Program is what we feel will drive revenue associated with our primary revenue model. This program is an attempt to garner the attention and ultimately inspire a behavior shift in any ISP that elects to embrace our suggestions to be DMCA-compliant,” the company told shareholders yesterday.

“In this program, we ask for the ISPs to forward our notices referencing the infringement and the settlement offer. We ask that ISPs take action against repeat infringers through suspensions or a redirect screen. A redirect screen will guide the infringer to our payment screen while limiting all but essential internet access.”

At first view, this sounds like a straightforward replay of Rightscorp’s wishlist of three years ago, but it’s worth noting that the legal landscape has shifted fairly significantly since then.

Perhaps the most important development is the BMG v Cox Communications case, in which the ISP was sued for not doing enough to tackle repeat infringers. In that case (for which Rightscorp provided the evidence), Cox was held liable for third-party infringement and ordered to pay damages of $25 million alongside $8 million in legal fees.

All along, the suggestion has been that if Cox had taken action against infringing subscribers (primarily by passing on Rightscorp ‘fines’ and/or disconnecting repeat infringers) the ISP wouldn’t have ended up in court. Instead, it chose to sweat it out to a highly unfavorable decision.

The BMG decision is a potentially powerful ruling for Rightscorp, particularly when it comes to seeking ‘cooperation’ from other ISPs who might not want a similar legal battle on their hands. But are other ISPs interested in getting involved?

According to Rightscorp, preliminary negotiations are already underway with some big players.

“We are now beginning to have some initial and very thorough discussions with a handful of the top ISPs to create and implement such a program that others can follow. We have every reason to believe that the litigations referred to above are directly responsible for the beginning of a change in thinking of ISPs,” the company says.

Rightscorp didn’t identify these “top ISPs” but by implication, these could include companies such as Comcast, AT&T, Time Warner Cable, CenturyLink, Charter, Verizon, and/or even Cox Communications.

With cooperation from these companies, Rightscorp predicts that a “cultural shift” could be brought about which would significantly increase the numbers of subscribers paying cash demands. It’s also clear that while it may be seeking cooperation from ISPs, a gun is being held under the table too, in case any feel hesitant about putting up a redirect screen.

“This is the preferred approach that we advocate for any willing ISP as an alternative to becoming a defendant in a litigation and facing potential liability and significantly larger statutory damages,” Rightscorp says.

A recent development suggests the company may not be bluffing. Back in April the RIAA sued ISP Grande Communications for failing to disconnect persistent pirates. Yet again, Rightscorp is deeply involved in the case, having provided the infringement data to the labels for a considerable sum.

Whether the “top ISPs” in the United States will cave in to the pressure and implied threats remains to be seen, but there’s no doubting the rising confidence at Rightscorp.

“We have demonstrated the tenacity to support two major litigation efforts initiated by two of our clients, which we feel will set a precedent for the entire anti-piracy industry led by Rightscorp. If you can predict the law, you can set the competition,” the company concludes.

Meanwhile, Rightscorp appears to continue its use of disingenuous tactics to extract money from alleged file-sharers.

In the wake of several similar reports, this week a Reddit user reported that Rightscorp asked him to pay a single $20 fine for pirating a song. After paying up, the next day the company allegedly called the user back and demanded payment for a further 200 notices.


AWS Hot Startups – May 2017

Post Syndicated from Tina Barr original https://aws.amazon.com/blogs/aws/aws-hot-startups-may-2017/

April showers bring May startups! This month we have three hot startups for you to check out. Keep reading to find out what they’re up to, and how they’re using AWS to do it.

Today’s post features the following startups:

  • Lobster – an AI-powered platform connecting creative social media users to professionals.
  • Visii – helping consumers find the perfect product using visual search.
  • Tiqets – a curated marketplace for culture and entertainment.

Lobster (London, England)

Every day, social media users generate billions of authentic images and videos to rival typical stock photography. Powered by Artificial Intelligence, Lobster enables brands, agencies, and the press to license visual content directly from social media users so they can find that piece of content that perfectly fits their brand or story. Lobster does the work of sorting through major social networks (Instagram, Flickr, Facebook, Vk, YouTube, and Vimeo) and cloud storage providers (Dropbox, Google Photos, and Verizon) to find media, saving brands and agencies time and energy. Using filters like gender, color, age, and geolocation can help customers find the unique content they’re looking for, while Lobster’s AI and visual recognition finds images instantly. Lobster also runs photo challenges to help customers discover the perfect image to fit their needs.

Lobster is an excellent platform for creative people to get their work discovered while also protecting their content. Users are treated as copyright holders and earn 75% of the final price of every sale. The platform is easy to use: new users simply sign in with an existing social media or cloud account and can start showcasing their artistic talent right away. Lobster allows users to connect to any number of photo storage sources so they’re able to choose which items to share and which to keep private. Once users have selected their favorite photos and videos to share, they can sit back and watch as their work is picked to become the signature for a new campaign or featured on a cool website – and start earning money for their work.

Lobster is using a variety of AWS services to keep everything running smoothly. The company uses Amazon S3 to store photography that was previously ordered by customers. When a customer purchases content, the respective piece of content must be available at any given moment, independent from the original source. Lobster is also using Amazon EC2 for its application servers and Elastic Load Balancing to monitor the state of each server.
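
As a rough sketch of how the S3 piece of such a setup might look (the bucket name, key layout, and region below are assumptions, not details from Lobster’s actual system), a purchased asset could be archived and served back through a pre-signed URL with boto3:

```python
# Hypothetical sketch of archiving purchased content in Amazon S3 and
# serving it back via a pre-signed URL. Bucket, key, and region are
# placeholders, not details from Lobster's actual setup.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

def archive_purchase(order_id: str, local_path: str) -> str:
    """Upload the purchased media file and return a temporary download link."""
    key = f"purchases/{order_id}/original.jpg"
    with open(local_path, "rb") as f:
        s3.put_object(Bucket="example-purchased-content", Key=key, Body=f)

    # A pre-signed URL lets the buyer fetch the file even if the original
    # social-media source disappears, without making the bucket public.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "example-purchased-content", "Key": key},
        ExpiresIn=3600,  # link stays valid for one hour
    )
```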

To learn more about Lobster, check them out here!

Visii (London, England)

In today’s vast web, a growing number of products are being sold online and searching for something specific can be difficult. Visii was created to cater to businesses and help them extract value from an asset they already have – their images. Their SaaS platform allows clients to leverage an intelligent visual search on their websites and apps to help consumers find the perfect product for them. With Visii, consumers can choose an image and immediately discover more based on their tastes and preferences. Whether it’s clothing, artwork, or home decor, Visii will make recommendations to get consumers to search visually and subsequently help businesses increase their conversion rates.

There are multiple ways for businesses to integrate Visii on their website or app. Many of Visii’s clients choose to build against their API, but Visii also works closely with many clients to figure out the most effective way to do this for each unique case. This has led Visii to help build innovative user interfaces and figure out the best integration points to get consumers to search visually. Businesses can also integrate Visii on their website with a widget – they just need to provide a list of links to their products and Visii does the rest.

Visii runs their entire infrastructure on AWS. Their APIs and pipeline all sit in auto-scaling groups, with ELBs in front of them, sending things across into Amazon Simple Queue Service and Amazon Aurora. Recently, Visii moved from Amazon RDS to Aurora and noted that the process was incredibly quick and easy. Because they make heavy use of machine learning, it is crucial that their pipeline only runs when required and that they maximize the efficiency of their uptime.
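
As a rough sketch of that kind of decoupled pipeline (illustrative only: the queue name, region, and payload are made up, and this is not Visii’s code), an API tier can push work into SQS for auto-scaled workers to pull:

```python
# Illustrative only; the queue name and region are made up and this is not Visii's code.
import json
import boto3

sqs = boto3.client("sqs", region_name="eu-west-1")
queue_url = sqs.get_queue_url(QueueName="image-index-tasks")["QueueUrl"]

# API tier: enqueue a unit of work instead of processing it in the request path.
sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps({"image_id": "abc123"}))

# Worker tier (in an auto-scaling group): pull work only when it exists.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for message in resp.get("Messages", []):
    task = json.loads(message["Body"])
    # ...run the expensive visual-indexing job here, write results to the database...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```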

To see how companies are using Visii, check out Style Picker and Saatchi Art.

Tiqets (Amsterdam, Netherlands)

Tiqets is making the ticket-buying experience faster and easier for travelers around the world.  Founded in 2013, Tiqets is one of the leading curated marketplaces for admission tickets to museums, zoos, and attractions. Their mission is to help travelers get the most out of their trips by helping them find and experience a city’s culture and entertainment. Tiqets partners directly with vendors to adapt to a customer’s specific needs, and is now active in over 30 cities in the US, Europe, and the Middle East.

With Tiqets, travelers can book tickets either ahead of time or at their destination for a wide range of attractions. The Tiqets app provides real-time availability and delivers tickets straight to customers’ phones via email, direct download, or in the app. Customers save time skipping long lines (a perk of the app!), save trees (no need to physically print tickets), and most importantly, they can make the most out of their leisure time. For each attraction featured on Tiqets, there is a lot of helpful information including best modes of transportation, hours, commonly asked questions, and reviews from other customers.

The Tiqets platform consists of the consumer-facing website, the internal and external-facing APIs, and the partner self-service portals. For the app hosting and infrastructure, Tiqets uses AWS services such as Elastic Load Balancing, Amazon EC2, Amazon RDS, Amazon CloudFront, Amazon Route 53, and Amazon ElastiCache. Because their AWS infrastructure is orchestrated through configuration, they can easily set up separate development and test environments that stay close to the production environment.

Tiqets is hiring! Be sure to check out their jobs page if you are interested in joining the Tiqets team.

Thanks for reading and don’t forget to check out April’s Hot Startups if you missed it.

-Tina Barr

John Oliver is wrong about Net Neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/john-oliver-is-wrong-about-net.html

People keep linking to John Oliver bits. We should stop doing this. This is comedy, but people are confused into thinking Oliver is engaging in rational political debate:
Enlightened people know that reasonable people disagree, that there’s two sides to any debate. John Oliver’s bit erodes that belief, making one side (your side) sound smart, and the other side sound unreasonable.
The #1 thing you should know about Net Neutrality is that reasonable people disagree. It doesn’t mean they are right, only that they are reasonable. They aren’t stupid. They aren’t shills for the telecom lobby, or confused by the telecom lobby. Indeed, those opposed to Net Neutrality are the tech experts who know how packets are routed, whereas the supporters tend only to be lawyers, academics, and activists. If you think that the anti-NetNeutrality crowd is unreasonable, then you are in a dangerous filter bubble.
Most everything in John Oliver’s piece is incorrect.
For example, he says that without Net Neutrality, Comcast can prefer original shows it produces, and slow down competing original shows by Netflix. This is silly: Comcast already does that, even with NetNeutrality rules.
Comcast owns NBC, which produces a lot of original shows. During prime time (8pm to 11pm), Comcast delivers those shows at 6-mbps to its customers, while Netflix is throttled to around 3-mbps. Because of this, Comcast original shows are seen at higher quality than Netflix shows.
Comcast can do this, even with NetNeutrality rules, because it separates its cables into “channels”. One channel carries public Internet traffic, like Netflix. The other channels carry private Internet traffic, for broadcast TV shows and pay-per-view.
All NetNeutrality means is that if Comcast wants to give preference to its own content/services, it has to do so using separate channels on the wire, rather than pushing everything over the same channel. This is a detail nobody tells you because NetNeutrality proponents aren’t techies. They are lawyers and academics. They maximize moral outrage, while ignoring technical details.
Another example in Oliver’s show is whether search engines like Google or the (hypothetical) Bing can pay to get faster access to customers. They already do that. The average distance a packet travels on the web is less than 100-miles. That’s because the biggest companies (Google, Facebook, Netflix, etc.) pay to put servers in your city close to you. Smaller companies, such as search engine DuckDuckGo.com, also pay third-party companies like Akamai or Amazon Web Services to get closer to you. The smallest companies, however, get poor performance, being a thousand miles away.
You can test this out for yourself. Run a packet-sniffer on your home network for a week, then for each address, use mapping tools like ping and traceroute to figure out how far away things are.
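
As a rough illustration of that exercise (not from the original post; the hostnames are arbitrary examples), a few lines of Python can average ping round-trip times for a list of hosts, where lower RTT generally means a closer server:

```python
# Rough sketch: hostnames are arbitrary examples, not endpoints named in the post.
import re
import subprocess
from typing import Optional

HOSTS = ["www.google.com", "duckduckgo.com", "example.com"]

def avg_rtt_ms(host: str, count: int = 5) -> Optional[float]:
    # "-c" is the count flag on Linux/macOS ping; Windows uses "-n" instead.
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True).stdout
    times = [float(m) for m in re.findall(r"time[=<]([\d.]+)", out)]
    return sum(times) / len(times) if times else None

for host in HOSTS:
    rtt = avg_rtt_ms(host)
    print(f"{host:20s} {'unreachable' if rtt is None else f'{rtt:.1f} ms'}")
```
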
The Oliver bit mentioned how Verizon banned Google Wallet. Again, technical details are important here. It had nothing to do with Net Neutrality issues blocking network packets, but only had to do with Verizon-branded phones blocking access to the encrypted enclave. You could use Google Wallet on unlocked phones you bought separately. Moreover, market forces won in the end, with Google Wallet (aka. Android Wallet) now the preferred wallet on their network. In other words, this incident shows that the “free market” fixes things in the long run without the heavy hand of government.
Oliver shows a piece where FCC chief Ajit Pai points out that Internet companies didn’t do evil without Net Neutrality rules, and thus NetNeutrality rules were unneeded. Oliver claimed this was a “disingenuous” argument. No, it’s not “disingenuous”, it’s entirely the point of why Net Neutrality is bad. It’s chasing the theoretical possibility of abuse, not the real thing. Sure, Internet companies will occasionally go down misguided paths. If it’s truly bad, customers will rebel. In some cases, it’s not actually a bad thing, and will end up being a benefit to customers (e.g. throttling BitTorrent during primetime would benefit most BitTorrent users). It’s the pro-NetNeutrality side that’s being disingenuous, knowingly trumping up things as problems that really aren’t.
The point is this. The argument here is a complicated one, between reasonable sides. For humor, John Oliver has created a one-sided debate that falls apart under any serious analysis. Those like the EFF should not mistake such humor for intelligent technical debate.

President Trump Signs Internet Privacy Repeal Into Law

Post Syndicated from Andy original https://torrentfreak.com/president-trump-signs-internet-privacy-repeal-into-law-170404/

In a major setback to those who value their online privacy in the United States, last week the House of Representatives voted to grant Internet service providers permission to sell subscribers’ browsing histories to third parties.

The bill repeals broadband privacy rules adopted last year by the Federal Communications Commission, which required ISPs to obtain subscribers’ consent before using their browsing records for advertising or marketing purposes.

Soon after, the Trump Administration officially announced its support for the bill, noting that the President’s advisers would recommend that he sign it, should it be presented. Yesterday, that’s exactly what happened.

To howls of disapproval from Internet users and privacy advocates alike, President Trump signed into law a resolution that seriously undermines the privacy of all citizens using ISPs to get online in the US. The bill removes protections that were approved by the FCC in the final days of the Obama administration but had not yet gone into effect.

The dawning reality is that telecoms giants including Comcast, AT&T, and Verizon, are now free to collect and leverage the browsing histories of subscribers – no matter how sensitive – in order to better target them with advertising and other marketing.

The White House says that the changes will simply create an “equal playing field” between ISPs and Internet platforms such as Google and Facebook, who are already able to collect data for advertising purposes.

The repeal has drawn criticism from all sides, with Mozilla’s Executive Director Mark Surman openly urging the public to fight back.

“The repeal should be a call to action. And not just to badger our lawmakers,” Surman said.

“It should be an impetus to take online privacy into our own hands.”

With the bill now signed into law, that’s the only real solution if people want to claw back their privacy. Surman has a few suggestions, including the use of Tor and encrypted messaging apps like Signal. But like so many others recently, he leads with the use of VPN technology.

As reported last week, Google searches for the term VPN reached unprecedented levels when the public realized that their data would soon be up for grabs.

That trend continued through the weekend, with many major VPN providers reporting increased interest in their products.

Only time will tell if interest from the mainstream will continue at similar levels. However, in broad terms, the recent public outcry over privacy is only likely to accelerate the uptake of security products and the use of encryption as a whole. It could even prove to be the wake-up call the Internet needed.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Congress Removes FCC Privacy Protections on Your Internet Usage

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/03/congress_remove.html

Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T, and Verizon collected all of your browsing history and sold it on to the highest bidder. That’s what will probably happen if Congress has its way.

This week, lawmakers voted to allow Internet service providers to violate your privacy for their own profit. Not only have they voted to repeal a rule that protects your privacy, they are also trying to make it illegal for the Federal Communications Commission to enact other rules to protect your privacy online.

That this is not provoking greater outcry illustrates how much we’ve ceded any willingness to shape our technological future to for-profit companies and are allowing them to do it for us.

There are a lot of reasons to be worried about this. Because your Internet service provider controls your connection to the Internet, it is in a position to see everything you do on the Internet. Unlike a search engine or social networking platform or news site, you can’t easily switch to a competitor. And there’s not a lot of competition in the market, either. If you have a choice between two high-speed providers in the US, consider yourself lucky.

What can telecom companies do with this newly granted power to spy on everything you’re doing? Of course they can sell your data to marketers — and the inevitable criminals and foreign governments who also line up to buy it. But they can do more creepy things as well.

They can snoop through your traffic and insert their own ads. They can deploy systems that remove encryption so they can better eavesdrop. They can redirect your searches to other sites. They can install surveillance software on your computers and phones. None of these are hypothetical.

They’re all things Internet service providers have done before, and they are some of the reasons the FCC tried to protect your privacy in the first place. And now they’ll be able to do all of these things in secret, without your knowledge or consent. And, of course, governments worldwide will have access to these powers. And all of that data will be at risk of hacking, either by criminals or by other governments.

Telecom companies have argued that other Internet players already have these creepy powers — although they didn’t use the word “creepy” — so why should they not have them as well? It’s a valid point.

Surveillance is already the business model of the Internet, and literally hundreds of companies spy on your Internet activity against your interests and for their own profit.

Your e-mail provider already knows everything you write to your family, friends, and colleagues. Google already knows our hopes, fears, and interests, because that’s what we search for.

Your cellular provider already tracks your physical location at all times: it knows where you live, where you work, when you go to sleep at night, when you wake up in the morning, and — because everyone has a smartphone — who you spend time with and who you sleep with.

And some of the things these companies do with that power are no less creepy. Facebook has run experiments in manipulating your mood by changing what you see on your news feed. Uber used its ride data to identify one-night stands. Even Sony once installed spyware on customers’ computers to try and detect if they copied music files.

Aside from spying for profit, companies can spy for other purposes. Uber has already considered using data it collects to intimidate a journalist. Imagine what an Internet service provider can do with the data it collects: against politicians, against the media, against rivals.

Of course the telecom companies want a piece of the surveillance capitalism pie. Despite dwindling revenues, increasing use of ad blockers, and increases in clickfraud, violating our privacy is still a profitable business — especially if it’s done in secret.

The bigger question is: why do we allow for-profit corporations to create our technological future in ways that are optimized for their profits and anathema to our own interests?

When markets work well, different companies compete on price and features, and society collectively rewards better products by purchasing them. This mechanism fails if there is no competition, or if rival companies choose not to compete on a particular feature. It fails when customers are unable to switch to competitors. And it fails when what companies do remains secret.

Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their Internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful.

Today, technology is changing the fabric of our society faster than at any other time in history. We have big questions that we need to tackle: not just privacy, but questions of freedom, fairness, and liberty. Algorithms are making decisions about policing and healthcare.

Driverless vehicles are making decisions about traffic and safety. Warfare is increasingly being fought remotely and autonomously. Censorship is on the rise globally. Propaganda is being promulgated more efficiently than ever. These problems won’t go away. If anything, the Internet of things and the computerization of every aspect of our lives will make it worse.

In today’s political climate, it seems impossible that Congress would legislate these things to our benefit. Right now, regulatory agencies such as the FTC and FCC are our best hope to protect our privacy and security against rampant corporate power. That Congress has decided to reduce that power leaves us at enormous risk.

It’s too late to do anything about this bill — Trump will certainly sign it — but we need to be alert to future bills that reduce our privacy and security.

This post previously appeared on the Guardian.

EDITED TO ADD: Former FCC Commissioner Tom Wheeler wrote a good op-ed on the subject. And here’s an essay laying out what this all means to the average Internet user.

IoT Attack Against a University Network

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/iot_attack_agai.html

Verizon’s Data Breach Digest 2017 describes an attack against an unnamed university by attackers who hacked a variety of IoT devices and had them spam network targets and slow them down:

Analysis of the university firewall identified over 5,000 devices making hundreds of Domain Name Service (DNS) look-ups every 15 minutes, slowing the institution’s entire network and restricting access to the majority of internet services.

In this instance, all of the DNS requests were attempting to look up seafood restaurants — and it wasn’t because thousands of students all had an overwhelming urge to eat fish — but because devices on the network had been instructed to repeatedly carry out this request.

“We identified that this was coming from their IoT network, their vending machines and their light sensors were actually looking for seafood domains; 5,000 discreet systems and they were nearly all in the IoT infrastructure,” says Laurance Dine, managing principal of investigative response at Verizon.
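
As a toy illustration of spotting that kind of behaviour (this is not Verizon’s tooling, and the log format, filename, and threshold are assumptions), one could tally DNS queries per client from a firewall or resolver log and flag the noisy devices:

```python
# Toy illustration (not Verizon's tooling). Assumes a plain-text log with one query per
# line in the form "<timestamp> <client-ip> <query-name>"; the filename is a placeholder.
from collections import Counter

THRESHOLD = 100  # queries per 15-minute window; arbitrary example value

def noisy_clients(log_lines, threshold=THRESHOLD):
    per_client = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3:
            per_client[parts[1]] += 1  # count queries per client IP
    return [(ip, n) for ip, n in per_client.most_common() if n > threshold]

with open("dns-window.log") as log:
    for ip, count in noisy_clients(log):
        print(f"{ip} made {count} DNS queries this window")
```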

The actual Verizon document doesn’t appear to be available online yet, but there is an advance version that only discusses the incident above, available here.

AWS Direct Connect Update – Link Aggregation Groups, Bundles, and re:Invent Recap

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-direct-connect-update-link-aggregation-groups-bundles-and-reinvent-recap/

AWS Direct Connect helps our large-scale customers to create private, dedicated network connections to their office, data center, or colocation facility. Our customers create 1 Gbps and 10 Gbps connections in order to reduce their network costs, increase data transfer throughput, and to get a more consistent network experience than is possible with an Internet-based connection.

Today I would like to tell you about a new Link Aggregation feature for Direct Connect. I’d also like to introduce our new Direct Connect Bundles and share how we used Direct Connect to provide a first-class customer experience at AWS re:Invent 2016.

Link Aggregation Groups
Some of our customers would like to set up multiple connections (generally known as ports) between their location and one of the 46 Direct Connect locations. Some of them would like to create a highly available link that is resilient in the face of network issues outside of AWS; others simply need more data transfer throughput.

In order to support this important customer use case, you can now purchase up to 4 ports and treat them as a single managed connection, which we call a Link Aggregation Group or LAG. After you have set this up, traffic is load-balanced across the ports at the level of individual packet flows. All of the ports are active simultaneously, and are represented by a single BGP session. Traffic across the group is managed via Dynamic LACP (Link Aggregation Control Protocol – or ISO/IEC/IEEE 8802-1AX:2016). When you create your group, you also specify the minimum number of ports that must be active in order for the connection to be activated.

You can order a new group with multiple ports and you can aggregate existing ports into a new group. Either way, all of the ports must have the same speed (1 Gbps or 10 Gbps).

All of the ports in a group will connect to the same device on the AWS side. You can add additional ports to an existing group as long as there’s room on the device (this information is now available in the Direct Connect Console). If you need to expand an existing group and the device has no open ports, you can simply order a new group and migrate your connections.
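
The Console flow shown below can also be scripted. Here is a minimal sketch using boto3’s Direct Connect client; the location code, LAG name, and port count are placeholders rather than values from this post:

```python
# Minimal sketch using boto3's Direct Connect client; the location code, name, and
# port count are placeholders.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# Order a new LAG made up of two 10 Gbps ports at a Direct Connect location.
lag = dx.create_lag(
    numberOfConnections=2,
    location="EqDC2",               # example location code
    connectionsBandwidth="10Gbps",
    lagName="prod-lag",
)

# Require at least one active port before the LAG is treated as up.
dx.update_lag(lagId=lag["lagId"], minimumLinks=1)

print(lag["lagId"], lag["lagState"])
```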

Here’s how you can make use of link aggregation from the Console. First, creating a new LAG from scratch:

And second, creating a LAG from existing connections:


Link Aggregation Groups are now available in the US East (Northern Virginia), US West (Northern California), US East (Ohio), US West (Oregon), Canada (Central), South America (São Paulo), Asia Pacific (Mumbai), and Asia Pacific (Seoul) Regions and you can create them today. We expect to make them available in the remaining regions by the end of this month.

Direct Connect Bundles
We announced some powerful new Direct Connect Bundles at re:Invent 2016. Each bundle is an advanced, hybrid reference architecture designed to reduce complexity and to increase performance. Here are the new bundles:

Level 3 Communications Powers Amazon WorkSpaces – Connects enterprise applications, data, user workspaces, and end-point devices to offer reliable performance and a better end-user experience:

SaaS Architecture enhanced by AT&T NetBond – Enhances quality and user experience for applications migrated to the AWS Cloud:

Aviatrix User Access Integrated with Megaport DX – Supports encrypted connectivity between AWS Cloud Regions, between enterprise data centers and AWS, and on VPN access to AWS:

Riverbed Hybrid SDN/NFV Architecture over Verizon Secure Cloud Interconnect – Allows enterprise customers to provide secure, optimized access to AWS services in a hybrid network environment:

Direct Connect at re:Invent 2016
In order to provide a top-notch experience for attendees and partners at re:Invent, we worked with Level 3 to set up a highly available and fully redundant set of connections. This network was used to support breakout sessions, certification exams, the hands-on labs, the keynotes (including the live stream to over 25,000 viewers in 122 countries), the hackathon, bootcamps, and workshops. The re:Invent network used four 10 Gbps connections, two each to US West (Oregon) and US East (Northern Virginia):

It supported all of the re:Invent venues:

Here are some video resources that will help you to learn more about how we did this, and how you can do it yourself:

Jeff;

No, Yahoo! isn’t changing its name

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/no-yahoo-isnt-changing-its-name.html

Trending on social media is how Yahoo is changing its name to “Altaba” and CEO Marissa Mayer is stepping down. This is false.

What is happening instead is that everything we know of as “Yahoo” (including the brand name) is being sold to Verizon. The bits that are left are a skeleton company that holds stock in Alibaba and a few other companies. Since the brand was sold to Verizon, that investment company could no longer use it, so chose “Altaba”. Since 83% of its investment is in Alibaba, “Altaba” makes sense. It’s not like this new brand name means anything — the skeleton investment company will be wound down in the next year, either as a special dividend to investors, sold off to Alibaba, or both.

Marissa Mayer is an operations CEO. Verizon didn’t want her to run their newly acquired operations, since the entire point of buying them was to take the web operations in a new direction (though apparently she’ll still work a bit with them through the transition). And of course she’s not an appropriate CEO for an investment company. So she had no job left — she made her own job disappear.

What happened today is an obvious consequence of Alibaba going IPO in September 2014. It meant that Yahoo’s stake of 16% in Alibaba was now liquid. All told, the investment arm of Yahoo was worth $36-billion while the web operations (Mail, Fantasy, Tumblr, etc.) was worth only $5-billion.

In other words, Yahoo became a Wall Street mutual fund who inexplicably also offered web mail and cat videos.

Such a thing cannot exist. If Yahoo didn’t act, shareholders would start suing the company to get their money back. That $36-billion in investments doesn’t belong to Yahoo, it belongs to its shareholders. Thus, the moment the Alibaba IPO closed, Yahoo started planning on how to separate the investment arm from the web operations.

Yahoo had basically three choices.

  • The first choice is simply give the Alibaba (and other investment) shares as a one time dividend to Yahoo shareholders. 
  • A second choice is simply split the company in two, one of which has the investments, and the other the web operations. 
  • The third choice is to sell off the web operations to some chump like Verizon.

Obviously, Marissa Mayer took the third choice. Without a slush fund (the investment arm) to keep it solvent, Yahoo didn’t feel it could run its operations profitably without integration with some other company. That meant it either had to buy a large company to integrate with Yahoo, or sell the Yahoo portion to some other large company.

Every company, especially an Internet one, has a legacy value. It’s the amount of money you’ll get from firing everyone, stopping all investment in the future, and just raking in a stream of declining revenue year after year. It’s the fate of early Internet companies like Earthlink and Slashdot. It’s like how I documented with Earthlink [*], which continues to offer email to subscribers, but spends only enough to keep the lights on, not even upgrading to the simplest of things like SSL.

Presumably, Verizon will try to make something of a few of the properties. Apparently, Yahoo’s Fantasy sports stuff is popular, and will probably be rebranded as some new Verizon thing. Tumblr is already its own brand name, independent of Yahoo, and thus will probably continue to exist as its own business unit.

One of the weird things is Yahoo Mail. It is permanently bound to the “yahoo.com” domain, so you can’t do much with the “Yahoo” brand without bringing Mail along with it. Though at this point, the “Yahoo” brand is pretty tarnished. There’s not much new you can put under that brand anyway. I can’t see how Verizon would want to invest in that brand at all — just milk it for what it can over the coming years.

The investment company cannot long exist on its own. Investors want their money back, so they can make future investment decisions on their own. They don’t want the company to make investment choices for them.

Think about when Yahoo made its initial $1-billion investment for 40% of Alibaba in 2005: it did not do so because it was a good “investment opportunity”, but because Yahoo believed it was a good strategic investment, such as providing an entry into the Chinese market, or providing an e-commerce arm to compete against eBay and Amazon. In other words, Yahoo didn’t consider it a good way of investing its money, but a good way to create a strategic partnership — one that just never materialized. From that point of view, the Alibaba investment was a failure.

In 2012, Marissa Mayer sold off 25% of Alibaba, netting $4-billion after taxes. She then lost all $4-billion on the web operations. That stake would be worth over $50-billion today. You can see the problem: companies with large slush funds just fritter them away keeping operations going. Marissa Mayer abused her position of trust, playing with money that belonged to shareholders.

Thus, Altaba isn’t going to play with shareholders’ money. It’s a skeleton company, so there’s no strategic value to investments. It can make no better investment choices than its shareholders can with their own money. Thus, the only purpose of the skeleton investment company is to return the money back to the shareholders. I suspect it’ll choose the most tax efficient way of doing this, like selling the whole thing to Alibaba, which just exchanges the Altaba shares for Alibaba shares, with a 15% bonus representing the value of the other Altaba investments. Either way, if Altaba is still around a year from now, it’s because its board is skimming money that doesn’t belong to them.


Key points:

  • Altaba is the name of the remaining skeleton investment company, the “Yahoo” brand was sold with the web operations to Verizon.
  • The name Altaba sucks because it’s not a brand name that will stick around for a while — the skeleton company is going to return all its money to its investors.
  • Yahoo had to spin off its investments — there’s no excuse for 90% of its market value to be investments and 10% in its web operations.
  • In particular, the money belongs to Yahoo’s investors, not Yahoo the company. It’s not some sort of slush fund Yahoo’s executives could use. Yahoo couldn’t use that money to keep its flailing web operations going, as Marissa Mayer was attempting to do.
  • Most of Yahoo’s web operations will go the way of Earthlink and Slashdot, as Verizon milks the slowly declining revenue while making no new investments in it.

How Stack Overflow plans to survive the next DNS attack

Post Syndicated from Mark Henderson original http://blog.serverfault.com/2017/01/09/surviving-the-next-dns-attack/

Let’s talk about DNS. After all, what could go wrong? It’s just cache invalidation and naming things.

tl;dr

This blog post is about how Stack Overflow and the rest of the Stack Exchange network approaches DNS:

  • By benchmarking different DNS providers and explaining how we chose between them
  • By implementing multiple DNS providers
  • By deliberately breaking DNS to measure its impact
  • By validating our assumptions and testing implementations of the DNS standard

The good stuff in this post is in the middle, so feel free to scroll down to “The Dyn Attack” if you want to get straight into the meat and potatoes of this blog post.

The Domain Name System

DNS had its moment in the spotlight in October 2016, with a major Distributed Denial of Service (DDoS) attack launched against Dyn, which affected the ability of Internet users to connect to some of their favourite websites, such as Twitter, CNN, imgur, Spotify, and literally thousands of other sites.

But for most systems administrators or website operators, DNS is mostly kept in a little black box, outsourced to a 3rd party, and mostly forgotten about. And, for the most part, this is the way it should be. But as you start to grow to 1.3+ billion pageviews a month with a website where performance is a feature, every little bit matters.

In this post, I’m going to explain some of the decisions we’ve made around DNS in the past, and where we’re going with it in the future. I will eschew deep technical details and gloss over low-level DNS implementation in favour of the broad strokes.

In the beginning

So first, a bit of history: In the beginning, we ran our own DNS on-premises using artisanally crafted zone files with BIND. It was fast enough when we were doing only a few hundred million hits a month, but eventually hand-crafted zonefiles were too much hassle to maintain reliably. When we moved to Cloudflare as our CDN, their service is intimately coupled with DNS, so we demoted our BIND boxes out of production and handed off DNS to Cloudflare.

The search for a new provider

Fast forward to early 2016 and we moved our CDN to Fastly. Fastly doesn’t provide DNS service, so we were back on our own in that regard and our search for a new DNS provider began. We made a list of every DNS provider we could think of, and ended up with a shortlist of 10:

  • Dyn
  • NS1
  • Amazon Route 53
  • Google Cloud DNS
  • Azure DNS (beta)
  • DNSimple
  • Godaddy
  • EdgeCast (Verizon)
  • Hurricane Electric
  • DNS Made Easy

From this list of 10 providers, we did our initial investigations into their service offerings, and started eliminating services that were either not suited to our needs, outrageously expensive, had insufficient SLAs, or didn’t offer services that we required (such as a fully featured API). Then we started performance testing. We did this by embedding a hidden iFrame on 5% of the visitors to stackoverflow.com, which forced a request to a different DNS provider. We did this for each provider until we had some pretty solid performance numbers.
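
As an aside, here is a single-vantage-point sketch of how such a comparison can be scripted (this is not the methodology described above, which measured real users via the hidden iFrame). It uses dnspython, the nameserver IPs are public-resolver stand-ins, and a real comparison would target each candidate provider’s authoritative nameservers:

```python
# Illustrative single-vantage-point timing with dnspython (pip install dnspython).
# The nameserver IPs are public-resolver stand-ins, not the providers tested in the post.
import time
import dns.resolver

CANDIDATES = {
    "stand-in provider A": ["8.8.8.8"],
    "stand-in provider B": ["1.1.1.1"],
}

def median_ms(nameservers, qname="stackoverflow.com", samples=20):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = nameservers
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        resolver.resolve(qname, "A")
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]

for name, servers in CANDIDATES.items():
    print(f"{name:25s} {median_ms(servers):6.1f} ms")
```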

Using some basic analytics, we were able to measure the real-world performance, as seen by our real-world users, broken down into geographical area. We built some box plots based on these tests which allowed us to visualise the different impact each provider had.

If you don’t know how to interpret a boxplot, here’s a brief primer for you. For the data nerds, these were generated with R’s standard boxplot functions, which means the upper and lower whiskers are min(max(x), Q_3 + 1.5 * IQR) and max(min(x), Q_1 – 1.5 * IQR), where IQR = Q_3 – Q_1

This is the results of our tests as seen by our users in the United States:

DNS Performance in the United States

You can see that Hurricane Electric had a quarter of requests return in < 16ms and a median of 32ms, with the three “cloud” providers (Azure, Google Cloud DNS and Route 53) being slightly slower (being around 24ms first quarter and 45ms median), and DNS Made Easy coming in 2nd place (20ms first quarter, 39ms median).

You might wonder why the scale on that chart goes all the way to 700ms when the whiskers go nowhere near that high. This is because we have a worldwide audience, so just looking at data from the United States is not sufficient. If we look at data from New Zealand, we see a very different story:

DNS Performance in New Zealand

Here you can see that Route 53, DNS Made Easy and Azure all have healthy 1st quarters, but Hurricane Electric and Google have very poor 1st quarters. Try to remember this, as this becomes important later on.

We also have Stack Overflow in Portuguese, so let’s check the performance from Brazil:

DNS Performance in Brazil

Here we can see Hurricane Electric, Route 53 and Azure being favoured, with Google and DNS Made Easy being slower.

So how do you reach a decision about which DNS provider to choose, when your main goal is performance? It’s difficult, because regardless of which provider you end up with, you are going to be choosing a provider that is sub-optimal for part of your audience.

You know what would be awesome? If we could have two DNS providers, each one servicing the areas that they do best! Thankfully this is something that is possible to implement with DNS. However, time was short, so we had to put our dual-provider design on the back-burner and just go with a single provider for the time being.

Our initial rollout of DNS was using Amazon Route 53 as our provider: they had acceptable performance figures over a large number of regions and had very effective pricing (on that note Route 53, Azure DNS, and Google Cloud DNS are all priced identically for basic DNS services).

The DYN attack

Roll forwards to October 2016. Route 53 had proven to be a stable, fast, and cost-effective DNS provider. We still had dual DNS providers on our backlog of projects, but like a lot of good ideas it got put on the back-burner until we had more time.

Then the Internet ground to a halt. The DNS provider Dyn had come under attack, knocking a large number of authoritative DNS servers off the Internet, and causing widespread issues with connecting to major websites. All of a sudden DNS had our attention again. Stack Overflow and Stack Exchange were not affected by the Dyn outage, but this was pure luck.

We knew if a DDoS of this scale happened to our DNS provider, the solution would be to have two completely separate DNS providers. That way, if one provider gets knocked off the Internet, we still have a fully functioning second provider who can pick up the slack. But there were still questions to be answered and assumptions to be validated:

  • What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?
  • What is the performance impact for our users if one of the providers is offline?
  • What is the best number of nameservers to be using?
  • How are we going to keep our DNS providers in sync?

These were pretty serious questions – some of which were hypotheses that needed to be checked and others that were answered in the DNS standards, but we know from experience that DNS providers in the wild do not always obey the DNS standards.

What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?

This one should be fairly easy to test. We’ve already done it once, so let’s just do it again. We fired up our tests, as we did in early 2016, but this time we specified two DNS providers:

  • Route 53 & Google Cloud
  • Route 53 & Azure DNS
  • Route 53 & Our internal DNS

We did this simply by listing Name Servers from both providers in our domain registration (and obviously we set up the same records in the zones for both providers).
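
A quick way to sanity-check that the delegation really lists nameservers from both providers is to query the NS records. The following illustrative sketch (not from the original post) uses dnspython and groups nameservers crudely by parent domain:

```python
# Sanity check (illustrative): does the delegation list nameservers from >1 provider?
import dns.resolver

DOMAIN = "stackoverflow.com"

ns_hosts = sorted(str(r.target).rstrip(".") for r in dns.resolver.resolve(DOMAIN, "NS"))
providers = {host.split(".", 1)[1] for host in ns_hosts}  # crude grouping by parent domain

print("NS records:", ns_hosts)
print("Distinct provider domains:", providers)
if len(providers) < 2:
    print("Warning: every listed nameserver appears to belong to a single provider")
```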

Running with Route 53 and Google or Azure was fairly common sense – Google and Azure had good coverage of the regions that Route 53 performed poorly in. Their pricing is identical to Route 53, which would make forecasting for the budget easy. As a third option, we decided to see what would happen if we took our formerly demoted, on-premises BIND servers and put them back into production as one of the providers. Let’s look at the data for the three regions from before: United States, New Zealand and Brazil:

United States
DNS Performance for dual providers in the United States

New Zealand
DNS Performance for dual providers in New Zealand

Brazil

DNS Performance for dual providers in Brazil

There is probably one thing you’ll notice immediately from these boxplots, but there’s also another not-so obvious change:

  1. Azure is not in there (the obvious one)
  2. Our 3rd quarters are measurably slower (the not-so obvious one).

Azure

Azure has a fatal flaw in their DNS offering, as of the writing of this blog post. They do not permit the modification of the NS records in the apex of your zone:

You cannot add to, remove, or modify the records in the automatically created NS record set at the zone apex (name = “@”). The only change that’s permitted is to modify the record set TTL.

These NS records are what your DNS provider says are authoritative DNS servers for a given domain. It’s very important that they are accurate and correct, because they will be cached by clients and DNS resolvers and are more authoritative than the records provided by your registrar.

Without going too much into the actual specifics of how DNS caching and NS records work (it would take me another 2,500 words to describe this in detail), what would happen is this: Whichever DNS provider you contact first would be the only DNS provider you could contact for that domain until your DNS cache expires. If Azure is contacted first, then only Azure’s nameservers will be cached and used. This defeats the purpose of having multiple DNS providers, as in the event that the provider you’ve landed on goes offline, which is roughly 50:50, you will have no other DNS provider to fall back to.

So until Azure adds the ability to modify the NS records in the apex of a zone, they’re off the table for a dual-provider setup.

The 3rd quarter

What the third quarter represents here is the impact of latency on DNS. You’ll notice that in the results for ExDNS (which is the internal name for our on-premises BIND servers) the box plot is much taller than the others. This is because those servers are located in New Jersey and Colorado – far, far away from where most of our visitors come from. So as expected, a service with only two points of presence in a single country (as opposed to dozens worldwide) performs very poorly for a lot of users.

Performance conclusions

So our choices were narrowed for us to Route 53 and Google Cloud, thanks to Azure’s lack of ability to modify critical NS records. Thankfully, we have the data to back up the fact that Route 53 combined with Google is a very acceptable combination.

Remember earlier, when I said that the performance of New Zealand was important? This is because Route 53 performed well, but Google Cloud performed poorly in that region. But look at the chart again. Don’t scroll up, I’ll show you another chart here:

Comparison for DNS performance data in New Zealand between single and dual providers

See how Google on its own performed very poorly in NZ (its 1st quarter is 164ms versus 27ms for Route 53)? However, when you combine Google and Route 53 together, the performance basically stays the same as when there was just Route 53.

Why is this? Well, it’s due to a technique called Smoothed Round Trip Time. Basically, DNS resolvers (namely certain versions of BIND and PowerDNS) keep track of which DNS servers respond faster, and weight queries towards those DNS servers. This means that the faster provider should be favoured more often than the slower providers. There’s a nice presentation over here if you want to learn more about this. The short version is that if you have many DNS servers, DNS cache servers will favour the fastest ones. As a result, if one provider is fast in Auckland but slow in London, and another provider is the reverse, DNS cache servers in Auckland will favour the first provider and DNS cache servers in London will favour the other. This is a very little known feature of modern DNS servers but our testing shows that enough ISPs support it that we are confident we can rely on it.
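
To make the idea concrete, here is a toy model of that behaviour (this is not BIND’s or PowerDNS’s actual implementation, and the RTT distributions are made up): keep an exponentially smoothed RTT per nameserver and mostly pick the lowest:

```python
# Toy model of smoothed-RTT server selection; not BIND's or PowerDNS's code.
import random

class SrttSelector:
    def __init__(self, servers, alpha=0.3):
        self.alpha = alpha
        self.srtt = {s: 50.0 for s in servers}  # optimistic starting estimate, in ms

    def pick(self):
        # Mostly pick the currently-fastest server, but occasionally probe the others
        # so their estimates can recover if they speed up.
        if random.random() < 0.05:
            return random.choice(list(self.srtt))
        return min(self.srtt, key=self.srtt.get)

    def record(self, server, rtt_ms):
        self.srtt[server] = (1 - self.alpha) * self.srtt[server] + self.alpha * rtt_ms

# A resolver in Auckland, say, quickly converges on whichever provider answers
# fastest from Auckland (the latency numbers below are invented for the demo).
selector = SrttSelector(["ns1.provider-a.example", "ns1.provider-b.example"])
for _ in range(200):
    server = selector.pick()
    observed = random.gauss(30, 5) if "provider-a" in server else random.gauss(160, 20)
    selector.record(server, max(observed, 1.0))
print(selector.srtt)
```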

What is the performance impact for our users if one of the providers is offline?

This is where having some on-premises DNS servers comes in very handy. What we can essentially do here is send a sample of our users to our on-premises servers, get a baseline performance measurement, then break one of the servers and run the performance measurements again. We can also measure in multiple places: We have our measurements as reported by our clients (what the end user actually experienced), and we can look at data from within our network to see what actually happened. For network analysis, we turned to our trusted network analysis tool, ExtraHop. This would allow us to look at the data on the wire, and get measurements from a broken DNS server (something you can’t do easily with a pcap on that server, because, you know. It’s broken).

Here’s what healthy performance looked like on the wire (as measured by ExtraHop), with two DNS servers, both of them fully operational, over a 24-hour period (this chart is additive for the two series):

DNS performance with two healthy name servers

Blue and brown are the two different, healthy DNS servers. As you can see, there’s a very even 50:50 split in request volume. Because both of the servers are located in the same datacenter, Smoothed Round Trip Time had no effect, and we had a nice even distribution – as we would expect.

Now, what happens when we take one of those DNS servers offline, to simulate a provider outage?

DNS performance with a broken nameserver

In this case, the blue DNS server was offline, and the brown DNS server was healthy. What we see here is that the blue, broken, DNS server received the same number of requests as it did when the DNS server was healthy, but the brown, healthy, DNS server saw twice as many requests. This is because those users who were hitting the broken server eventually retried their requests to the healthy server and started to favor it. So what does this look like in terms of actual client performance?

I’m only going to share one chart with you this time, because they were all essentially the same:

Comparison of healthy vs unhealthy DNS performance

What we see here is that a substantial number of our visitors saw a performance decrease. For some it was minor, for others, quite major. This is because the 50% of visitors who hit the faulty server needed to retry their request, and the amount of time it takes to retry that request seems to vary. You can again see a large increase in the long tail, which represents clients who took over 300 milliseconds to retry their request.

What does this tell us?

What this means is that in the event of a DNS provider going offline, we need to pull that DNS provider out of rotation to provide the best performance. Until we do, our users will still receive service, but a non-trivial number of them will see a large performance impact.

What is the best number of nameservers to be using?

Based on the previous performance testing, we can assume that the number of requests a client may have to make is N/2+1, where N is the number of nameservers listed. So if we list eight nameservers, with four from each provider, the client may potentially have to make 5 DNS requests before they finally get a successful message (the four failed requests, plus a final successful one). A statistician better than I would be able to tell you the exact probabilities of each scenario you would face, but the short answer here is:

Four.

We felt that based on our use case, and the performance penalty we were willing to take, we would be listing a total of four nameservers – two from each provider. This may not be the right decision for those who have a web presence orders of magnitude larger than ours, but Facebook provides two nameservers on IPv4 and two on IPv6. Twitter provides eight, four from Dyn and four from Route 53. Google provides four.
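
For a feel of the numbers, here is a quick Monte Carlo sketch (an illustration, not analysis from the original post) of a naive client that tries the listed nameservers in random order while one provider, i.e. half the list, is down:

```python
# Quick Monte Carlo sketch: a naive client tries the listed nameservers in random
# order, and half of them (one provider) are down.
import random
from statistics import mean

def lookups_until_success(total_ns, dead_ns):
    order = random.sample(range(total_ns), total_ns)
    for attempt, ns in enumerate(order, start=1):
        if ns >= dead_ns:          # servers numbered below dead_ns are offline
            return attempt
    return total_ns

for total in (4, 8):
    dead = total // 2
    trials = [lookups_until_success(total, dead) for _ in range(100_000)]
    print(f"{total} nameservers, {dead} down: mean {mean(trials):.2f} lookups, "
          f"worst case {max(trials)} (N/2 + 1 = {dead + 1})")
```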

How are we going to keep our DNS providers in sync?

DNS has built in ways of keeping multiple servers in sync. You have domain transfers (IXFR, AXFR), which are usually triggered by a NOTIFY packet sent to all the servers listed as NS records in the zone. But these are not used in the wild very often, and have limited support from DNS providers. They also come with their own headaches, like maintaining an IP whitelist ACL that could cover hundreds of potential servers (all the different points of presence from multiple providers), none of which you control. You also lose the ability to audit who changed which record, as records could be changed on any given server.

So we built a tool to keep our DNS in sync. We actually built this tool years ago, once our artisanally crafted zone files became too troublesome to edit by hand. The details of this tool are out of scope for this blog post though. If you want to learn about it, keep an eye out around March 2017 as we plan to open-source it. The tool lets us describe the DNS zone data in one place and push it to many different DNS providers.
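
The general shape of such a tool is easy to sketch, though. The snippet below (illustrative only, not the tool described above; the zone ID and records are placeholders) pushes one canonical record set to Route 53 with boto3; a real sync tool would diff before writing and repeat the push against the second provider’s API:

```python
# Minimal sketch of pushing canonical zone data to Route 53; zone ID and records
# are placeholders, and this is not the tool described in the post.
import boto3

route53 = boto3.client("route53")
ZONE_ID = "Z123EXAMPLE"

desired = {"www.example.com.": ["192.0.2.10", "192.0.2.11"]}  # canonical zone data

changes = [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
        "Name": name,
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": ip} for ip in ips],
    },
} for name, ips in desired.items()]

route53.change_resource_record_sets(
    HostedZoneId=ZONE_ID,
    ChangeBatch={"Comment": "sync from canonical zone data", "Changes": changes},
)
```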

So what did we learn?

The biggest takeaway from all of this is that even if you have multiple DNS servers, DNS is still a single point of failure if they are all with the same provider and that provider goes offline. Until the Dyn attack this was pretty much “in theory” if you were using a large DNS provider, because until the first successful attack no large DNS provider had ever had an extended outage on all of its points of presence.

However, implementing multiple DNS providers is not entirely straightforward. There are performance considerations. You need to ensure that both of your zones are serving the same data. There can be such a thing as too many nameservers.

Lastly, we did all of this whilst following DNS best practices. We didn’t have to do any weird DNS trickery, or write our own DNS server to do non-standard things. When DNS was designed in 1987, I wonder if the authors knew the importance of what they were creating. I don’t know, but their design still stands strong and resilient today.

Attributions

  • Thanks to Camelia Nicollet for her work in R to produce the graphs in this blog post

2016: The Year In Tech, And A Sneak Peek Of What’s To Come

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/2016-year-tech-sneak-peek-whats-come/

2016 is safely in our rear-view mirrors. It’s time to take a look back at the year that was and see what technology had the biggest impact on consumers and businesses alike. We also have an eye to 2017 to see what the future holds.

AI and machine learning in the cloud

Truly sentient computers and robots are still the stuff of science fiction (and the premise of one of 2016’s most promising new SF TV series, HBO’s Westworld). Neural networks are nothing new, but 2016 saw huge strides in artificial intelligence and machine learning, especially in the cloud.

Google, Amazon, Apple, IBM, Microsoft and others are developing cloud computing infrastructures designed especially for AI work. It’s this technology that’s underpinning advances in image recognition technology, pattern recognition in cybersecurity, speech recognition, natural language interpretation and other advances.

Microsoft’s newly-formed AI and Research Group is finding ways to get artificial intelligence into Microsoft products like its Bing search engine and Cortana natural language assistant. Some of these efforts, while well-meaning, still need refinement: Early in 2016 Microsoft launched Tay, an AI chatbot designed to mimic the natural language characteristics of a teenage girl and learn from interacting with Twitter users. Microsoft had to shut Tay down after Twitter users exploited vulnerabilities that caused Tay to begin spewing really inappropriate responses. But it paves the way for future efforts that blur the line between man and machine.

Finance, energy, climatology – anywhere you find big data sets you’re going to find uses for machine learning. On the consumer end it can help your grocery app guess what you might want or need based on your spending habits. Financial firms use machine learning to help predict customer credit scores by analyzing profile information. One of the most intriguing uses of machine learning is in security: Pattern recognition helps systems predict malicious intent and figure out where exploits will come from.

Meanwhile we’re still waiting for Rosie the Robot from the Jetsons. And flying cars. So if Elon Musk has any spare time in 2017, maybe he can get on that.

AR Games

Augmented Reality (AR) games have been around for a good long time – ever since smartphone makers put cameras on them, game makers have been toying with the mix of real life and games.

AR games took a giant step forward with a game released in 2016 that you couldn’t get away from, at least for a little while. We’re talking about Pokémon GO, of course. Niantic, makers of another AR game called Ingress, used the framework they built for that game to power Pokémon GO. Kids, parents, young, old, it seemed like everyone with an iPhone that could run the game caught wild Pokémon, hatched eggs by walking, and battled each other in Pokémon gyms.

For a few weeks, anyway.

Technical glitches, problems with scale and limited gameplay value ultimately hurt Pokémon GO’s longevity. Today the game only garners a fraction of the public interest it did at peak. It continues to be successful, albeit not at the stratospheric pace it first set.

Niantic, the game’s developer, was able to tie together several factors to bring such an explosive and – if you’ll pardon the overused euphemism – disruptive – game to bear. One was its previous work with a game called Ingress, another AR-enhanced game that uses geomap data. In fact, Pokémon GO uses the same geomap data as Ingress, so Niantic had already done a huge amount of legwork needed to get Pokémon GO up and running. Niantic cleverly used Google Maps data to form the basis of both games, relying on already-identified public landmarks and other locations tagged by Ingress players (Ingress has been around since 2011).

Then, of course, there’s the Pokémon connection – an intensely meaningful gaming property that’s been popular with generations of video games and cartoon watchers since the 1990s. The dearth of Pokémon-branded games on smartphones meant an instant explosion of popularity upon Pokémon GO’s release.

2016 also saw the introduction of several new virtual reality (VR) headsets designed for home and mobile use. Samsung Gear VR and Google Daydream View made a splash. As these products continue to make consumer inroads, we’ll see more games push the envelope of what you can achieve with VR and AR.

Hybrid Cloud

Hybrid Cloud services combine public cloud storage (like B2 Cloud Storage) or public compute (like Amazon Web Services) with a private cloud platform. Specialized content and file management software glues it all together, making the experience seamless for the user.

Businesses get the instant access and speed they need to get work done, with the ability to fall back on on-demand cloud-based resources when scale is needed. B2’s hybrid cloud integrations include OpenIO, which helps businesses maintain data storage on-premise until it’s designated for archive and stored in the B2 cloud.

The cost of entry and usage of Hybrid Cloud services have continued to fall. For example, small and medium-sized organizations in the post production industry are finding Hybrid Cloud storage is now a viable strategy in managing the large amounts of information they use on a daily basis. This strategy is enabled by the low cost of B2 Cloud Storage that provides ready access to cloud-stored data.

There are practical deployment and scale issues that have kept Hybrid Cloud services from being widely used in the largest enterprise environments. Small to medium businesses and vertical markets like Media & Entertainment have found promising, economical opportunities to use it, which bodes well for the future.

Inexpensive 3D printers

3D printing, once a rarified technology, has become increasingly commoditized over the past several years. That’s been in part thanks to the “Maker Movement:” Thousands of folks all around the world who love to tinker and build. XYZprinting is out in front of makers and others with its line of inexpensive desktop da Vinci printers.

The da Vinci Mini is a tabletop model aimed at home users which starts at under $300. You can download and tweak thousands of 3D models to build toys, games, art projects and educational items. They’re built using spools of biodegradable, non-toxic plastics derived from corn starch which dispense sort of like the bobbin on a sewing machine. The da Vinci Mini works with Macs and PCs and can connect via USB or Wi-Fi.

DIY Drones

Quadcopter drones have been fun tech toys for a while now, but the new trend we saw in 2016 was “do it yourself” models. The result was Flybrix, which combines lightweight drone motors with LEGO building toys. Flybrix was so successful that they blew out of inventory for the 2016 holiday season and are backlogged with orders into the new year.

Each Flybrix kit comes with the motors, LEGO building blocks, cables and gear you need to build your own quad, hex or octocopter drone (as well as a cheerful-looking LEGO pilot to command the new vessel). A downloadable app for iOS or Android lets you control your creation. A deluxe kit includes a handheld controller so you don’t have to tie up your phone.

If you already own a 3D printer like the da Vinci Mini, you’ll find plenty of model files available for download and modification so you can print your own parts, though you’ll probably need help from one of the many maker sites to know what else you’ll need for aerial flight and control.

5D Glass Storage

Research at the University of Southampton may yield the next big leap in optical storage technology meant for long-term archival. The boffins at the Optoelectronics Research Centre have developed a new data storage technique that embeds information in glass “nanostructures” on a storage disc the size of a U.S. quarter.

A Blu-ray Disc can hold 50 GB, but one of the new 5D glass storage discs – only the size of a U.S. quarter – can hold 360 TB, or 7,200 times more. It’s like a super-stable, supercharged version of a CD. Not only is the data inscribed on much smaller structures within the glass, it’s also reflected at multiple angles, hence “5D.”
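To check the math: 360 TB is 360,000 GB, and 360,000 GB ÷ 50 GB per Blu-ray disc works out to 7,200 discs’ worth of data on a single piece of glass.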

An upside to this is an absence of bit rot: the glass medium is extremely stable, with a shelf life predicted in billions of years. The downside is that it’s still a write-once medium, so it’s best suited to long-term archival storage.

This tech is still years away from practical use, but it took a big step forward in 2016 when the University announced the development of a practical information encoding scheme to use with it.

Smart Home Tech

Are you ready to talk to your house to tell it to do things? If you’re not already, you probably will be soon. Google’s Google Home is a $129 voice-activated speaker powered by the Google Assistant. You can use it for everything from streaming music and video to a nearby TV to reading your calendar or to-do list. You can also tell it to operate other supported devices like the Nest smart thermostat and Philips Hue lights.

Amazon has its own similar wireless speaker product called the Echo, powered by Amazon’s Alexa information assistant. Amazon has differentiated its Echo offerings by making the Dot – a hockey puck-sized device that connects to a speaker you already own. So Amazon customers can begin to outfit their connected homes for less than $50.

Apple’s HomeKit isn’t a speaker like Amazon Echo or Google Home; it’s a software framework. You use the Home app on your iOS 10-equipped iPhone or iPad to connect and configure supported devices, and Siri, Apple’s own intelligent assistant, to control them from any supported Apple device. HomeKit turns on lights, turns up the thermostat, operates switches and more.

Smart home tech has been coming in fits and starts for a while – the Nest smart thermostat is already in its third generation, for example. But 2016 was the year we finally saw the “Internet of things” coalescing into a smart home that we can control through voice and gestures in a … well, smart way.

Welcome To The Future

It’s 2017 – welcome to our brave new world. While it’s anyone’s guess what the future holds, there are at least a few tech trends that are pretty safe to bet on. They include:

  • Internet of Things: More smart, connected devices are coming online in the home and at work every day, and this trend will accelerate in 2017 as more and more devices require some form of Internet connectivity to work. Expect to see a lot more appliances, devices, and accessories that use the APIs promoted by Google, Amazon, and Apple to let you control everything in your life with just your voice and a smart speaker.
  • Blockchain security: Blockchain is the distributed digital ledger technology that makes Bitcoin work. Its distributed design and validation system make it possible to verify that no one has tampered with the records, which makes it well suited for applications beyond cryptocurrency, like making sure your smart thermostat (see above) hasn’t been hacked; a minimal hash-chain sketch after this list illustrates the tamper-evidence idea. Expect 2017 to be the year we see more mainstream acceptance, use, and development of blockchain technology from financial institutions, the creation of new private blockchain networks, and improved usability aimed at making blockchain easier for regular consumers to use. Blockchain-based voting is here, too. Given all this movement, it wouldn’t surprise us to see government regulators take a much deeper interest in blockchain, either.
  • 5G: Verizon is field-testing 5G on its wireless network and says the technology delivers speeds 30-50 times faster than 4G LTE. We’ll be hearing a lot more about 5G from Verizon and other wireless players in 2017, though in fairness we’re still a few years away from wide-scale 5G deployment.
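As noted in the blockchain bullet above, here’s a minimal, illustrative hash-chain sketch in Python (not any production blockchain, and with no distribution or consensus layer). Each record carries the hash of the record before it, so altering anything earlier changes every hash that follows and the chain no longer verifies.

    import hashlib
    import json

    def record_hash(record, prev_hash):
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    def build_chain(records):
        chain, prev_hash = [], "0" * 64  # genesis placeholder
        for record in records:
            h = record_hash(record, prev_hash)
            chain.append({"record": record, "prev_hash": prev_hash, "hash": h})
            prev_hash = h
        return chain

    def verify_chain(chain):
        prev_hash = "0" * 64
        for block in chain:
            if block["prev_hash"] != prev_hash or record_hash(block["record"], prev_hash) != block["hash"]:
                return False
            prev_hash = block["hash"]
        return True

    ledger = build_chain([{"thermostat": 68}, {"thermostat": 70}])
    print(verify_chain(ledger))              # True
    ledger[0]["record"]["thermostat"] = 90   # tamper with an earlier record
    print(verify_chain(ledger))              # False: the chain no longer validates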

Your Predictions?

Enough of our bloviation. Let’s open the floor to you. What do you think were the biggest technology trends in 2016? What’s coming in 2017 that has you the most excited? Let us know in the comments!

The post 2016: The Year In Tech, And A Sneak Peek Of What’s To Come appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

What’s the Diff: Megabits and Megabytes

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/megabits-vs-megabytes/

Megabits vs. Megabytes

What is the difference between a megabit and a megabyte? The answer is obvious to computer people – it’s “a factor of eight,” since there are eight bits in a single byte. But there’s a lot more to the answer, too, involving how data moves, is stored, and the history of computing.

What are Megabits?

“Megabit” is a term we use most often when talking about the speed of our Internet connection. Megabits per second, or Mbps, is a measurement of data transfer speed. 1 Mbps is 1 million bits per second.

Take Internet service providers, for example. My cable provider has upped my maximum download speed from 25 to 75 to 150 Mbps over the years. Fiber optic connections (Verizon’s FIOS, Google Fiber) can be much faster, where you can get the service.

What is a Megabyte?

“Megabyte” is a measurement most often used to describe both hard drive space and memory storage capacity, though the term of art we throw around most frequently these days is the next unit up, the Gigabyte (GB). My computer has 8 GB of RAM, for example, and 512 GB of storage capacity.

How to Measure Megabits and Megabytes

A bit is a single piece of information, expressed at its most elementary in the computer as a binary 0 or 1. Bits are organized into units of data eight digits long – that is a byte. Kilobytes, megabytes, gigabytes, terabytes, petabytes – each unit of measurement is 1,000 times the size before it.
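Here’s a quick Python sketch of those relationships, using the decimal (powers of 1,000) convention that storage vendors use. Note that operating systems sometimes report sizes using powers of 1,024 instead.

    BITS_PER_BYTE = 8

    # Decimal (SI) units: each one is 1,000 times the unit before it.
    KB, MB, GB, TB, PB = (1000 ** n for n in range(1, 6))

    print(MB * BITS_PER_BYTE)  # 8000000: bits in one (decimal) megabyte
    print(TB // GB)            # 1000: each step up is a factor of 1,000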

So why does network bandwidth get measured in megabits, while storage gets measured in megabytes? There are a lot of theories and expositions about why. I haven’t found a “hard” answer yet, but the most reasonable explanation I’ve heard from networking engineers is that it’s because a bit is the lowest common denominator, if you will – the smallest meaningful unit of measurement to understand network transfer speed. As in bits per second. It’s like measuring the flow rate of the plumbing in your house.

As to why data is assembled in bytes, Wikipedia cites the popularity of IBM’s System/360 as one likely reason: The computer used a then-novel eight-bit data format. IBM defined computing for a generation of engineers, so it’s the standard that moved forward. The old marketing adage was, “No one ever got fired for buying IBM.”

Plausible? Absolutely. Is it the only reason? Well, Wikipedia presents an authoritative case. You’ll find a lot of conjecture but few hard answers if you look elsewhere on the Internet.

Which means aliens are behind it all, as far as I’m concerned.

What Does It All Mean?

Anyway, here we stand today, with the delineation clear: bandwidth is measured in bits, storage capacity in bytes. Simple enough, but things get confusing when we mix the two. Say your network upload speed is 8 Mbps (megabits per second); that means the absolute most you can upload is 1 MB (megabyte) of data from your hard drive per second. Megabits versus megabytes: keep the distinction in mind as you watch how fast data moves over your network or to the Internet.
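As a sanity check, here’s a small Python sketch of that conversion, plus how long an upload would take at a given line speed (the 500 MB file size is just an example):

    def mbps_to_mb_per_sec(mbps):
        """Convert a network speed in megabits per second to megabytes per second."""
        return mbps / 8  # 8 bits per byte

    def upload_seconds(file_size_mb, speed_mbps):
        """How long a file of file_size_mb megabytes takes to move at speed_mbps."""
        return file_size_mb / mbps_to_mb_per_sec(speed_mbps)

    print(mbps_to_mb_per_sec(8))   # 1.0 MB per second on an 8 Mbps link
    print(upload_seconds(500, 8))  # 500.0 seconds to move a 500 MB file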

The post What’s the Diff: Megabits and Megabytes appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.