Tag Archives: spotify

Disney Ditching Netflix Keeps Piracy Relevant

Post Syndicated from Ernesto original https://torrentfreak.com/disney-ditching-netflix-keeps-piracy-relevant-170809/

There is little doubt that, in the United States, Netflix has become the standard for watching movies on the Internet.

The subscription service is responsible for a third of all Internet traffic during peak hours, dwarfing that of online piracy and other legal video platforms.

It’s safe to assume that Netflix-type streaming services are among the best and most convenient alternatives to piracy at this point. There is a problem, though: the whole appeal of the streaming model becomes diluted when there are too many ‘Netflixes.’

Yesterday, Disney announced that it will end its partnership with Netflix in 2019. The company is working on its own Disney-branded movie streaming platform, where titles such as Frozen 2 and Toy Story 4 will end up in the future.

Disney titles are among the most-watched content on Netflix, and the company’s stock took a hit when the news came out. In a statement late yesterday, Disney CEO Bob Iger noted that the company has a good relationship with Netflix, but the companies will part ways at the end of next year.

At the moment, no decision has been made on what happens to Lucasfilm and Marvel films, but these could find a new home as well. Marvel TV shows such as Jessica Jones and Luke Cage will reportedly stay at Netflix.

Although Disney’s decision may be good for Disney, a lot of Netflix users are not going to be happy. It likely means that they need another streaming platform subscription to get what they want, which isn’t a very positive prospect.

In piracy discussions, Hollywood insiders often stress that people have no reason to pirate, as pretty much all titles are available online legally. What they don’t mention, however, is that users need subscriptions to a few dozen paid services to access them all.

In a way, this fragmentation is keeping the pirate ecosystems intact. While legal streaming services work just fine, having dozens of subscriptions is expensive, and not very practical. Especially not compared to pirate streaming sites, where everything can be accessed on the same site.

The music business has a better model, or had initially. Services such as Spotify allowed fans to access most popular music in one place, although that’s starting to crumble as well, due to exclusive deals and more fragmentation.

Admittedly, for an outside observer, it’s easy to criticize and point fingers. The TV and movie business is built on complicated licensing deals, where a single Netflix may not be able to generate enough revenue for an entire industry.

One would think, though, that there has to be a better way than simply adding more streaming platforms.

Instead of solely trying to crack down on pirate sites, it might be a good idea to take a careful look at the supply side as well. At the moment, fragmentation is keeping pirate sites relevant.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Deploying Java Microservices on Amazon EC2 Container Service

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/deploying-java-microservices-on-amazon-ec2-container-service/

This post and accompanying code graciously contributed by:

Huy Huynh
Sr. Solutions Architect
Magnus Bjorkman
Solutions Architect

Java is a popular language used by many enterprises today. To simplify and accelerate Java application development, many companies are moving from a monolithic to microservices architecture. For some, it has become a strategic imperative. Containerization technology, such as Docker, lets enterprises build scalable, robust microservice architectures without major code rewrites.

In this post, I cover how to containerize a monolithic Java application to run on Docker. Then, I show how to deploy it on AWS using Amazon EC2 Container Service (Amazon ECS), a high-performance container management service. Finally, I show how to break the monolith into multiple services, all running in containers on Amazon ECS.

Application Architecture

For this example, I use the Spring Pet Clinic, a monolithic Java application for managing a veterinary practice. It is a simple REST API, which allows the client to manage and view Owners, Pets, Vets, and Visits.

It is a simple three-tier architecture:

  • Client
    You simulate this by using curl commands.
  • Web/app server
    This is the Java and Spring-based application that you run using the embedded Tomcat. As part of this post, you run this within Docker containers.
  • Database server
    This is the relational database for your application that stores information about owners, pets, vets, and visits. For this post, use MySQL RDS.

I decided to not put the database inside a container as containers were designed for applications and are transient in nature. The choice was made even easier because you have a fully managed database service available with Amazon RDS.

RDS manages the work involved in setting up a relational database, from provisioning the infrastructure capacity that you request to installing the database software. After your database is up and running, RDS automates common administrative tasks, such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover.

Walkthrough

You can find the code for the example covered in this post at amazon-ecs-java-microservices on GitHub.

Prerequisites

You need the following to walk through this solution:

  • An AWS account
  • An access key and secret key for a user in the account
  • The AWS CLI installed

Also, install the latest versions of the following:

  • Java
  • Maven
  • Python
  • Docker

Step 1: Move the existing Java Spring application to a container deployed using Amazon ECS

First, move the existing monolith application to a container and deploy it using Amazon ECS. This is a great first step because you gain some benefits even before breaking the monolith apart:

  • An improved pipeline. The container also allows an engineering organization to create a standard pipeline for the application lifecycle.
  • No mutations to machines.

You can find the monolith example at 1_ECS_Java_Spring_PetClinic.

Container deployment overview

The following diagram is an overview of what the setup looks like for Amazon ECS and related services:

This setup consists of the following resources:

  • The client application that makes a request to the load balancer.
  • The load balancer that distributes requests across all available ports and instances registered in the application’s target group using round-robin.
  • The target group that is updated by Amazon ECS to always have an up-to-date list of all the service containers in the cluster. This includes the port on which they are accessible.
  • One Amazon ECS cluster that hosts the container for the application.
  • A VPC network to host the Amazon ECS cluster and associated security groups.

Each container has a single application process that is bound to port 8080 within its namespace. In reality, each container is exposed on a different, randomly assigned port on the host.

The architecture is containerized but still monolithic, because each container has all the same features as the rest of the containers.

The following is also part of the solution but not depicted in the above diagram:

  • One Amazon EC2 Container Registry (Amazon ECR) repository for the application.
  • A service/task definition that spins up containers on the instances of the Amazon ECS cluster.
  • A MySQL RDS instance that hosts the application’s schema. The information about the MySQL RDS instance is sent in through environment variables to the containers, so that the application can connect to the MySQL RDS instance.

I have automated setup with the 1_ECS_Java_Spring_PetClinic/ecs-cluster.cf AWS CloudFormation template and a Python script.

The Python script calls the CloudFormation template for the initial setup of the VPC, Amazon ECS cluster, and RDS instance. It then extracts the outputs from the template and uses those for API calls to create Amazon ECR repositories, tasks, services, Application Load Balancer, and target groups.
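
As a rough sketch of that flow (not the actual setup.py; the stack name, region, and capabilities here are placeholders), the CloudFormation call and output extraction with boto3 look something like this:

import boto3

cfn = boto3.client('cloudformation', region_name='us-east-1')

# Create the stack that provisions the VPC, Amazon ECS cluster, and RDS instance
with open('ecs-cluster.cf') as template:
    cfn.create_stack(StackName='petclinic-ecs',
                     TemplateBody=template.read(),
                     Capabilities=['CAPABILITY_IAM'])
cfn.get_waiter('stack_create_complete').wait(StackName='petclinic-ecs')

# Collect the stack outputs for the later ECR, ECS, and load balancer API calls
stack = cfn.describe_stacks(StackName='petclinic-ecs')['Stacks'][0]
outputs = {o['OutputKey']: o['OutputValue'] for o in stack['Outputs']}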

Environment variables and Spring properties binding

As part of the Python script, you pass in a number of environment variables to the container as part of the task/container definition:

'environment': [
    {
        'name': 'SPRING_PROFILES_ACTIVE',
        'value': 'mysql'
    },
    {
        'name': 'SPRING_DATASOURCE_URL',
        'value': my_sql_options['dns_name']
    },
    {
        'name': 'SPRING_DATASOURCE_USERNAME',
        'value': my_sql_options['username']
    },
    {
        'name': 'SPRING_DATASOURCE_PASSWORD',
        'value': my_sql_options['password']
    }
],

The preceding environment variables work in concert with the Spring property system. The value in the variable SPRING_PROFILES_ACTIVE makes Spring use the MySQL version of the application property file. The other environment variables override the following properties in that file:

  • spring.datasource.url
  • spring.datasource.username
  • spring.datasource.password

Optionally, you can also encrypt sensitive values by using Amazon EC2 Systems Manager Parameter Store. Instead of handing in the password, you pass in a reference to the parameter and fetch the value as part of the container startup. For more information, see Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks.
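
As a minimal sketch of that approach (the parameter name and wrapper below are hypothetical, not part of the sample code), a container entrypoint could resolve the password from Parameter Store and hand it to Spring through the same environment binding:

import os
import subprocess
import boto3

ssm = boto3.client('ssm', region_name='us-east-1')

# Fetch and decrypt the password; the task's IAM role needs ssm:GetParameter permission
password = ssm.get_parameter(Name='/petclinic/spring.datasource.password',
                             WithDecryption=True)['Parameter']['Value']

# Launch the JVM with the decrypted value injected as an environment variable
env = dict(os.environ, SPRING_DATASOURCE_PASSWORD=password)
subprocess.call(['java', '-jar', '/app.jar'], env=env)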

Spotify Docker Maven plugin

Use the Spotify Docker Maven plugin to create the image and push it directly to Amazon ECR. This lets you build and push the image as part of the regular Maven build, integrating image generation into the overall build process. Use an explicit Dockerfile as input to the plugin.

# Minimal Alpine Linux image with Oracle JDK 8
FROM frolvlad/alpine-oraclejdk8:slim
VOLUME /tmp
# Copy the jar produced by the Maven build into the image
ADD spring-petclinic-rest-1.7.jar app.jar
# Refresh the jar's modification time so Spring Boot treats its resources as current
RUN sh -c 'touch /app.jar'
ENV JAVA_OPTS=""
# Use /dev/urandom as the entropy source to avoid slow startup of the embedded Tomcat
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]

The Python script discussed earlier uses the AWS CLI to authenticate you with AWS. The script places the token in the appropriate location so that the plugin can work directly against the Amazon ECR repository.
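
If you want to run the image build and push outside the script, the plugin’s standard goals can also be invoked directly from Maven. Assuming the repository URL has been exported as docker_registry_host, as the POM expects, an invocation along these lines should work:

mvn clean package docker:build -DpushImage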

Test setup

You can test the setup by running the Python script:
python setup.py -m setup -r <your region>

After the script has successfully run, you can test by querying an endpoint:
curl <your endpoint from output above>/owner

You can clean this up before going to the next section:
python setup.py -m cleanup -r <your region>

Step 2: Converting the monolith into microservices running on Amazon ECS

The second step is to convert the monolith into microservices. For a real application, you would likely not do this in one step, but re-architect the application piece by piece. You would continue to run your monolith, but it would keep getting smaller with each piece that you break apart.

By migrating to microservices, you get four benefits:

  • Isolation of crashes
    If one microservice in your application is crashing, then only that part of your application goes down. The rest of your application continues to work properly.
  • Isolation of security
    When microservice best practices are followed, the result is that if an attacker compromises one service, they only gain access to the resources of that service. They can’t horizontally access other resources from other services without breaking into those services as well.
  • Independent scaling
    When features are broken out into microservices, then the amount of infrastructure and number of instances of each microservice class can be scaled up and down independently.
  • Development velocity
    In a monolith, adding a new feature can potentially impact every other feature that the monolith contains. On the other hand, a proper microservice architecture has new code for a new feature going into a new service. You can be confident that any code you write won’t impact the existing code at all, unless you explicitly write a connection between two microservices.

Find the microservices example at 2_ECS_Java_Spring_PetClinic_Microservices.
You break apart the Spring Pet Clinic application by creating a microservice for each REST API operation, as well as one for the system services.

Java code changes

Comparing the project structure between the monolith and the microservices version, you can see that each service is now its own separate build.
First, the monolith version:

You can clearly see how each API operation is its own subpackage under the org.springframework.samples.petclinic package, all part of the same monolithic application.
This changes as you break it apart in the microservices version:

Now, each API operation is its own separate build, which you can build and deploy independently. You have also duplicated some code across the different microservices, such as the classes under the model subpackage. This is intentional: you don’t want to introduce artificial dependencies among the microservices, and the duplication allows these classes to evolve differently in each microservice.

Also, make the dependencies among the API operations more loosely coupled. In the monolithic version, the components are tightly coupled and use object-based invocation.

Here is an example of this from the OwnerController operation, where the class is directly calling PetRepository to get information about pets. PetRepository is the Repository class (Spring data access layer) to the Pet table in the RDS instance for the Pet API:

@RestController
class OwnerController {

    @Inject
    private PetRepository pets;
    @Inject
    private OwnerRepository owners;
    private static final Logger logger = LoggerFactory.getLogger(OwnerController.class);

    @RequestMapping(value = "/owner/{ownerId}/getVisits", method = RequestMethod.GET)
    public ResponseEntity<List<Visit>> getOwnerVisits(@PathVariable int ownerId){
        List<Pet> petList = this.owners.findById(ownerId).getPets();
        List<Visit> visitList = new ArrayList<Visit>();
        petList.forEach(pet -> visitList.addAll(pet.getVisits()));
        return new ResponseEntity<List<Visit>>(visitList, HttpStatus.OK);
    }
}

In the microservice version, call the Pet API operation and not PetRepository directly. Decouple the components by using interprocess communication; in this case, the Rest API. This provides for fault tolerance and disposability.

@RestController
class OwnerController {

    @Value("#{environment['SERVICE_ENDPOINT'] ?: 'localhost:8080'}")
    private String serviceEndpoint;

    @Inject
    private OwnerRepository owners;
    private static final Logger logger = LoggerFactory.getLogger(OwnerController.class);

    @RequestMapping(value = "/owner/{ownerId}/getVisits", method = RequestMethod.GET)
    public ResponseEntity<List<Visit>> getOwnerVisits(@PathVariable int ownerId){
        List<Pet> petList = this.owners.findById(ownerId).getPets();
        List<Visit> visitList = new ArrayList<Visit>();
        petList.forEach(pet -> {
            logger.info(getPetVisits(pet.getId()).toString());
            visitList.addAll(getPetVisits(pet.getId()));
        });
        return new ResponseEntity<List<Visit>>(visitList, HttpStatus.OK);
    }

    private List<Visit> getPetVisits(int petId){
        List<Visit> visitList = new ArrayList<Visit>();
        RestTemplate restTemplate = new RestTemplate();
        Pet pet = restTemplate.getForObject("http://"+serviceEndpoint+"/pet/"+petId, Pet.class);
        logger.info(pet.getVisits().toString());
        return pet.getVisits();
    }
}

You now have an additional method that calls the API. You are also handing in the service endpoint that should be called, so that you can easily inject dynamic endpoints based on the current deployment.
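
In the task definition, this is just one more entry in the same 'environment' list shown earlier. The value would typically be the DNS name of the Application Load Balancer taken from the setup script’s outputs; the dictionary key below is an assumption for illustration:

{
    'name': 'SERVICE_ENDPOINT',
    'value': alb_options['dns_name']
},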

Container deployment overview

Here is an overview of what the setup looks like for Amazon ECS and the related services:

This setup consists of the following resources:

  • The client application that makes a request to the load balancer.
  • The Application Load Balancer that inspects the client request. Based on routing rules, it directs the request to an instance and port from the target group that matches the rule.
  • The Application Load Balancer that has a target group for each microservice. The target groups are used by the corresponding services to register available container instances. Each target group has a path, so when you call the path for a particular microservice, it is mapped to the correct target group. This allows you to use one Application Load Balancer to serve all the different microservices, accessed by the path. For example, https://<your endpoint>/owner/* would be mapped and directed to the Owner microservice.
  • One Amazon ECS cluster that hosts the containers for each microservice of the application.
  • A VPC network to host the Amazon ECS cluster and associated security groups.

Because you are running multiple containers on the same instances, use dynamic port mapping to avoid port clashing. By using dynamic port mapping, the container is allocated an anonymous port on the host to which the container port (8080) is mapped. The anonymous port is registered with the Application Load Balancer and target group so that traffic is routed correctly.
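
In the task/container definition, dynamic port mapping is requested by setting the host port to 0, which tells Amazon ECS to pick a free ephemeral port on the instance and register it with the target group. A minimal fragment, in the same style as the 'environment' block shown earlier, looks like this:

'portMappings': [
    {
        'containerPort': 8080,
        'hostPort': 0,
        'protocol': 'tcp'
    }
],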

The following is also part of the solution but not depicted in the above diagram:

  • One Amazon ECR repository for each microservice.
  • A service/task definition per microservice that spins up containers on the instances of the Amazon ECS cluster.
  • A MySQL RDS instance that hosts the application’s schema. The information about the MySQL RDS instance is sent in through environment variables to the containers. That way, the application can connect to the MySQL RDS instance.

I have again automated setup with the 2_ECS_Java_Spring_PetClinic_Microservices/ecs-cluster.cf CloudFormation template and a Python script.

The CloudFormation template remains the same as in the previous section. In the Python script, you are now building five different Java applications, one for each microservice (also includes a system application). There is a separate Maven POM file for each one. The resulting Docker image gets pushed to its own Amazon ECR repository, and is deployed separately using its own service/task definition. This is critical to get the benefits described earlier for microservices.
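
Conceptually, the per-microservice loop in the script boils down to something like the following sketch. The directory and repository names are assumptions for illustration; see the actual setup.py in the repository for the real logic:

import subprocess
import boto3

ecr = boto3.client('ecr', region_name='us-east-1')

# One build/push cycle per microservice, each with its own POM and Amazon ECR repository
services = ['owner', 'pet', 'visit', 'vet', 'system']
for service in services:
    # Create a dedicated repository for this microservice's image
    ecr.create_repository(repositoryName='spring-petclinic-rest-{}'.format(service))
    # Build the service and push its Docker image via the Spotify Docker Maven plugin
    subprocess.check_call(
        ['mvn', 'clean', 'package', 'docker:build', '-DpushImage'],
        cwd='spring-petclinic-rest-{}'.format(service))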

Here is an example of the POM file for the Owner microservice:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.springframework.samples</groupId>
    <artifactId>spring-petclinic-rest</artifactId>
    <version>1.7</version>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.2.RELEASE</version>
    </parent>
    <properties>
        <!-- Generic properties -->
        <java.version>1.8</java.version>
        <docker.registry.host>${env.docker_registry_host}</docker.registry.host>
    </properties>
    <dependencies>
        <dependency>
            <groupId>javax.inject</groupId>
            <artifactId>javax.inject</artifactId>
            <version>1</version>
        </dependency>
        <!-- Spring and Spring Boot dependencies -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-actuator</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-cache</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <!-- Databases - Uses HSQL by default -->
        <dependency>
            <groupId>org.hsqldb</groupId>
            <artifactId>hsqldb</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <!-- caching -->
        <dependency>
            <groupId>javax.cache</groupId>
            <artifactId>cache-api</artifactId>
        </dependency>
        <dependency>
            <groupId>org.ehcache</groupId>
            <artifactId>ehcache</artifactId>
        </dependency>
        <!-- end of webjars -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
            <plugin>
                <groupId>com.spotify</groupId>
                <artifactId>docker-maven-plugin</artifactId>
                <version>0.4.13</version>
                <configuration>
                    <imageName>${env.docker_registry_host}/${project.artifactId}</imageName>
                    <dockerDirectory>src/main/docker</dockerDirectory>
                    <useConfigFile>true</useConfigFile>
                    <registryUrl>${env.docker_registry_host}</registryUrl>
                    <!--dockerHost>https://${docker.registry.host}</dockerHost-->
                    <resources>
                        <resource>
                            <targetPath>/</targetPath>
                            <directory>${project.build.directory}</directory>
                            <include>${project.build.finalName}.jar</include>
                        </resource>
                    </resources>
                    <forceTags>false</forceTags>
                    <imageTags>
                        <imageTag>${project.version}</imageTag>
                    </imageTags>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

Test setup

You can test this by running the Python script:

python setup.py -m setup -r <your region>

After the script has successfully run, you can test by querying an endpoint:

curl <your endpoint from output above>/owner

Conclusion

Migrating a monolithic application to a containerized set of microservices can seem like a daunting task. Following the steps outlined in this post, you can begin to containerize monolithic Java apps, taking advantage of the container runtime environment, and beginning the process of re-architecting into microservices. On the whole, containerized microservices are faster to develop, easier to iterate on, and more cost effective to maintain and secure.

This post focused on the first steps of microservice migration. You can learn more about optimizing and scaling your microservices with components such as service discovery, blue/green deployment, circuit breakers, and configuration servers at http://aws.amazon.com/containers.

If you have questions or suggestions, please comment below.

Hardcore UK Pirates Dwindle But Illegal Streaming Poses New Threat

Post Syndicated from Andy original https://torrentfreak.com/hardcore-uk-pirates-dwindle-but-illegal-streaming-poses-new-threat-170707/

For as many years as ‘pirate’ services have been online it has been clear that licensed services need to aggressively compete to stay in the game.

Both the music and movie industries were initially slow to get off the mark but in recent years the position has changed. Licensed services such as Spotify and Netflix are now household names and doing well, even among people who have traditionally consumed illicit content.

This continuing trend was highlighted again this morning in a press release by the UK’s Intellectual Property Office. In a fairly upbeat tone, the IPO notes that innovative streaming models offered by both Netflix and Spotify are helping to keep online infringement stable in the UK.

“The Online Copyright Infringement (OCI) Tracker, commissioned by the UK Intellectual Property Office (IPO), has revealed that 15 per cent of UK internet users, approximately 7 million people, either stream or download material that infringes copyright,” the IPO reports.

The full tracking report, which is now on its 7th wave, is yet to be released but the government has teased a few interesting stats. While the 7 million infringer number is mostly unchanged from last year, the mix of hardcore (only use infringing sources) and casual infringers (also use legal sources) has changed.

“Consumers accessing exclusively free content is at an all-time low,” the IPO reveals, noting that legitimate streaming is also on the up, with Spotify increasing its userbase by 7% since 2016.

But despite the positive signs, the government says that there are concerns surrounding illicit streaming, both of music and video content. Unsurprisingly, ‘pirate’ set-top boxes get a prominent mention and are labeled a threat to positive trends.

“Illicitly adapted set top boxes, which allow users to illegally stream premium TV content such as blockbuster movies, threaten to undermine recent progress. 13 per cent of online infringers are using streaming boxes that can be easily adapted to stream illicit content,” the IPO says.

Again, since the report hasn’t yet been published, there are currently no additional details to be examined. However, the “boxes that can be easily adapted” comment could easily reference Amazon Firesticks, for example, which are currently being used for entirely legitimate purposes.

The IPO notes that an IPTV consultation is underway which may provide guidance on how the devices can be dealt with in the future. A government response is due to be published later in the summer.

Also heavily on the radar is a fairly steep reported increase in stream-ripping, which is the unlicensed downloading of music from streaming sources so that it can be kept on a user’s hard drive or device.

A separate report, commissioned by the IPO and PRS for Music, reveals that 15% of Internet users have stream-ripped in some way and the use of ripping services is on the up.

“The use of stream-ripping websites increased by 141.3% between 2014 and 2016,” the IPO notes.

“In a survey of over 9000 people, 57% of UK adults claimed to be aware of stream-ripping services. Those who claimed to have used a stream-ripping service were significantly more likely to be male and between the ages of 16 to 34 years.”

PRS goes into a little more detail, claiming that stream-ripping is now “the most prevalent and fastest growing form of music piracy in the UK.” The music licensing outfit claims that almost 70% of music-specific infringement is accounted for by stream-ripping.

The survey, carried out by INCOPRO and Kantar Media, looked at 80 stream-ripping services, which included apps, websites, browser plug-ins and other stand-alone software. Each supplied content from a range of sources including SoundCloud, Spotify and Deezer, but YouTube was found to be the most popular source, accounting for 75 of the 80 services.

There are several reported motivations for users to stream-rip, but interestingly the number one reason involves what some people consider to be ‘honest’ piracy. A total of 31% of stream-rippers said that they already own the music and only use ripping services to obtain it in another format.

Just over a quarter (26%) said they wanted to listen to music while not connected to the Internet, and 25% said that a permanent copy helps them while on the move. Around one in five people who stream-rip say that music is either unaffordable or overpriced.

“We hope that this research will provide the basis for a renewed and re-focused commitment to tackling online copyright infringement,” says Robert Ashcroft, Chief Executive, PRS for Music.

“The long term health of the UK’s cultural and creative sectors is in everyone’s best interests, including those of the digital service providers, and a co-ordinated industry and government approach to tackling stream ripping is essential.”

Ros Lynch, Copyright and IP Enforcement Director at the IPO, took the opportunity to praise the widespread use of legitimate platforms. However, Lynch also noted that innovation continues in piracy circles, with stream-ripping a prime example.

“It’s great that legal streaming sites continue to be a hugely popular choice for consumers. The success and popularity of these platforms show the importance of evolution and innovation in the entertainment industry,” Lynch said.

“Ironically it is innovation that also benefits those looking to undermine IP rights and benefit financially from copyright infringement. There has never been more choice or flexibility for consumers of TV and music, however illicit streaming devices and stream-ripping are threatening this progress.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MPAA & RIAA Demand Tough Copyright Standards in NAFTA Negotiations

Post Syndicated from Andy original https://torrentfreak.com/mpaa-riaa-demand-tough-copyright-standards-in-nafta-negotiations-170621/

The North American Free Trade Agreement (NAFTA) between the United States, Canada, and Mexico was negotiated more than 25 years ago. With a quarter of a century of developments to contend with, the United States wants to modernize.

“While our economy and U.S. businesses have changed considerably over that period, NAFTA has not,” the government says.

With this in mind, the US requested comments from interested parties seeking direction for negotiation points. With those comments now in, groups like the MPAA and RIAA have been making their positions known. It’s no surprise that intellectual property enforcement is high on the agenda.

“Copyright is the lifeblood of the U.S. motion picture and television industry. As such, MPAA places high priority on securing strong protection and enforcement disciplines in the intellectual property chapters of trade agreements,” the MPAA writes in its submission.

“Strong IPR protection and enforcement are critical trade priorities for the music industry. With IPR, we can create good jobs, make significant contributions to U.S. economic growth and security, invest in artists and their creativity, and drive technological innovation,” the RIAA notes.

While both groups have numerous demands, it’s clear that each seeks an environment where not only infringers can be held liable, but also Internet platforms and services.

For the RIAA, there is a big focus on the so-called ‘Value Gap’, a phenomenon found on user-uploaded content sites like YouTube that are able to offer infringing content while avoiding liability due to Section 512 of the DMCA.

“Today, user-uploaded content services, which have developed sophisticated on-demand music platforms, use this as a shield to avoid licensing music on fair terms like other digital services, claiming they are not legally responsible for the music they distribute on their site,” the RIAA writes.

“Services such as Apple Music, TIDAL, Amazon, and Spotify are forced to compete with services that claim they are not liable for the music they distribute.”

But if sites like YouTube are exercising their rights while acting legally under current US law, how can partners Canada and Mexico do any better? For the RIAA, that can be achieved by holding them to standards envisioned by the group when the DMCA was passed, not how things have panned out since.

Demanding that negotiators “protect the original intent” of safe harbor, the RIAA asks that a “high-level and high-standard service provider liability provision” is pursued. This, the music group says, should only be available to “passive intermediaries without requisite knowledge of the infringement on their platforms, and inapplicable to services actively engaged in communicating to the public.”

In other words, make sure that YouTube and similar sites won’t enjoy the same level of safe harbor protection as they do today.

The RIAA also requires any negotiated safe harbor provisions in NAFTA to be flexible in the event that the DMCA is tightened up in response to the ongoing safe harbor rules study.

In any event, NAFTA should not “support interpretations that no longer reflect today’s digital economy and threaten the future of legitimate and sustainable digital trade,” the RIAA states.

For the MPAA, Section 512 is also perceived as a problem. While noting that the original intent was to foster a system of shared responsibility between copyright owners and service providers, the MPAA says courts have subsequently let copyright holders down. Like the RIAA, the MPAA also suggests that Canada and Mexico can be held to higher standards.

“We recommend a new approach to this important trade policy provision by moving to high-level language that establishes intermediary liability and appropriate limitations on liability. This would be fully consistent with U.S. law and avoid the same misinterpretations by policymakers and courts overseas,” the MPAA writes.

“In so doing, a modernized NAFTA would be consistent with Trade Promotion Authority’s negotiating objective of ‘ensuring that standards of protection and enforcement keep pace with technological developments’.”

The MPAA also has some specific problems with Mexico, including unauthorized camcording. The Hollywood group says that 85 illicit audio and video recordings of films were linked to Mexican theaters in 2016. However, recording is not currently a criminal offense in Mexico.

Another issue for the MPAA is that criminal sanctions for commercial scale infringement are only available if the infringement is for profit.

“This has hampered enforcement against the above-discussed camcording problem but also against online infringement, such as peer-to-peer piracy, that may be on a scale that is immensely harmful to U.S. rightsholders but nonetheless occur without profit by the infringer,” the MPAA writes.

“The modernized NAFTA like other U.S. bilateral free trade agreements must provide for criminal sanctions against commercial scale infringements without proof of profit motive.”

Also of interest are the MPAA’s complaints against Mexico’s telecoms laws. Unlike in the US and many countries in Europe, Mexico’s ISPs are forbidden to hand out their customers’ personal details to rights holders looking to sue. This, the MPAA says, needs to change.

The submissions from the RIAA and MPAA can be found here and here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hollywood Sees Illegal Streaming Devices as ‘Piracy 3.0’

Post Syndicated from Ernesto original https://torrentfreak.com/hollywood-sees-illegal-streaming-devices-as-piracy-3-0-170502/

Piracy remains a major threat for the movie industry, the MPA’s Stan McCoy said yesterday during a panel session at the St. Petersburg International Economic Forum.

After McCoy praised the collaboration between the MPA(A) and Russian authorities in their fight against online piracy, the President and Managing Director of the MPA’s EMEA region noted that pirates are not standing still.

Much like Hollywood, copyright infringers are innovators who constantly change their “business models” and means of obtaining content. Where torrents were dominant a few years ago, illegal streaming devices are now the main threat, with McCoy describing their rise as Piracy 3.0.

“Piracy is not a static challenge. The pirates are great innovators in their own right. So even as we innovate in trying to pursue these issues, and pursue novel ways of fighting piracy, the pirates are out there coming up with new business models of their own,” McCoy said.

“If you think of old-fashioned peer-to-peer piracy as 1.0, and then online illegal streaming websites as 2.0, in the audio-visual sector, in particular, we now face challenge number 3.0, which is what I’ll call the challenge of illegal streaming devices.”

The panel

The MPA boss went on to explain how the new piracy ecosystem works. The new breed of pirates relies on streaming devices such as set-top boxes, which often run Kodi and are filled with pirate add-ons.

This opens the door to a virtually unlimited library of pirated content. For one movie there may be hundreds of pirate links available, which are impossible to take down in an effective manner by rightsholders, he added, while showcasing the Exodus add-on to the public.

McCoy stressed that the devices themselves, and software such as Kodi, are ‘probably’ not illegal. However, the addition of copyright-infringing pirate add-ons turns them into an unprecedented piracy threat.

“The device itself is probably not illegal, the software itself is probably not illegal, the confluence of all three of these is a major category killer for online piracy,” McCoy said.

McCoy showing Exodus

McCoy went on to say that the new “Piracy 3.0” is not that popular in Russia yet. However, in the UK, America, and several other countries, it’s already huge, matching the popularity of legal services such as Spotify.

“The result is a pirate service operating on a truly massive scale. The scale of this kind of piracy, while it’s not huge yet in the Russian Federation, has reached epidemic levels similar to major services like Spotify, in markets like the UK, and other markets in Western Europe and North America.”

“This is a new sort of global Netflix but no rightsholder gets paid,” McCoy added.

The MPA chief stresses that this new form of piracy should be dealt with through a variety of measures including legislation, regulation, consumer education, and voluntary agreements with third-party stakeholders.

He notes that in Europe, rightsholders are backed by a recent decision of the Court of Justice, which outlawed the sales of devices with pre-loaded pirate add-ons. However, there is still a lot more work to be done to crack down on this emerging piracy threat.

“This is an area where […] innovative responses are required. We have to be just as good as the pirates in thinking of new ways to tackle these challenges,” McCoy said.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

EU Votes Today On Content Portability to Reduce Piracy (Updated)

Post Syndicated from Andy original https://torrentfreak.com/eu-votes-today-on-content-portability-to-reduce-piracy-170518/

Being a fully-paid up customer of a streaming service such as Spotify or Netflix should be a painless experience, but for citizens of the EU, complexities exist.

Subscribers of Netflix, for example, have access to different libraries, depending on where they’re located. This means that a viewer in the Netherlands could begin watching a movie at home, travel to France for a weekend break, and find on arrival that the content he paid for is not available there.

A similar situation can arise with a UK citizen’s access to the BBC’s iPlayer. While at home he has free access to a service he has already paid for, if he travels to Spain for a week, access is denied, since the service believes he’s not entitled to view.

While the EU is fiercely protective of its aim to grant free movement to both people and goods, this clearly hasn’t always translated well to the digital domain. There are currently no explicit provisions under EU law which mandate cross-border portability of online content services.

Following a vote today, however, all that may change.

In a few hours’ time, Members of the European Parliament will vote on whether to introduce new ‘cross-border portability’ rules (pdf) that will give citizens the freedom to enjoy their media wherever they are in the EU, without having to resort to piracy.

“If you live for instance in Germany but you go on holiday or visit your family or work in Spain, you will be able to access the services that you had in Germany in any other country in the Union, because the text covers the EU,” says Jean-Marie Cavada, the French ALDE member responsible for steering the new rules through Parliament.

But while freedom to receive content is the aim, there will be a number of restrictions in practice. While travelers to other EU countries will get access to the same content they would back home on the same range of devices, it will only be available on a temporary basis.

People traveling on a holiday, business, or study trip will enjoy the freedom to consume “for a limited period.” Extended stays will not be catered for under the new rules so as not to upset licensing arrangements already in place between rightsholders and service providers.

So how will the system work in practice?

At the moment, services like Netflix use the current IP address of the subscriber to determine where they are and therefore which regional library they’ll have access to when they sign in.

It appears that a future system would have to consider in which country the user signed up, before checking to ensure that the user trying to access the service in another EU country is the same person. That being said, if copyright holders agree, service providers can omit the verification process.

“The draft text to be voted on calls for safeguarding measures to be included in the regulation to ensure that the data and privacy of users are respected throughout the verification process,” European Parliament news said this week.

If adopted, the new rules would come into play during the first six months of 2018 and would apply to subscriptions already in place.

Separately, MEPs are also considering new rules on geo-blocking “to ensure that online sellers do not discriminate against consumers” because of where they live in the EU.

Update: The vote has passed. Here is the full statement by Vice-President for the Digital Single Market, Andrus Ansip.

I welcome today’s positive vote of the European Parliament on the portability of online content across borders, following the agreement reached between the European Parliament, Council and Commission at the beginning of the year.

I warmly thank the European Parliament rapporteur Jean-Marie Cavada for his work in achieving this and look forward to final approval by Member States in the coming weeks.

The rules voted today mean that, as of the beginning of next year, people who have subscribed to their favourite series, music and sports events at home will be able to enjoy them when they travel in the European Union.

Combined with the end of roaming charges, it means that watching films or listening to music while on holiday abroad will not bring any additional costs to people who use mobile networks.

This is an important step in breaking down barriers in the Digital Single Market.

We now need agreements on our other proposals to modernise EU copyright rules and ensure wider access to creative content across borders and fairer rules for creators. I rely on the European Parliament and Member States to make swift progress to make this happen.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

YouTube Keeps People From Pirate Sites, Study Shows

Post Syndicated from Ernesto original https://torrentfreak.com/youtube-keeps-people-from-pirate-sites-study-shows-170511/

The music industry has witnessed some dramatic changes over the past decade and a half.

With the rise of digital, people’s music consumption habits evolved dramatically, followed by more change when subscription streaming services came along.

Another popular way for people to enjoy music nowadays is via YouTube. The video streaming platform offers free access to millions of songs, which are often uploaded by artists or the labels themselves.

Still, YouTube is getting little praise from the major labels. Instead, music insiders often characterize the video platform as a DMCA-protected piracy racketeer that exploits legal loopholes to profit from artists’ hard work.

YouTube is generating healthy profits at a minimal cost and drives people away from legal platforms, the argument goes.

In an attempt to change this perception, YouTube has commissioned a study from the research outfit RBB Economics to see how the service impacts the music industry. The first results, published today, are a positive start.

The study examined exclusive YouTube data and a survey of 1,500 users across Germany, France, Italy and the U.K., asking them about their consumption habits. In particular, they were asked if YouTube keeps them away from paid music alternatives.

According to YouTube, which just unveiled the results, the data paints a different picture.

“The study finds that this is not the case. In fact, if YouTube didn’t exist, 85% of time spent on YouTube would move to lower value channels, and would result in a significant increase in piracy,” YouTube’s Simon Morrison writes.

If YouTube disappeared overnight, roughly half of all the time spent there on music would be “lost.” Furthermore, a significant portion of YouTube users would switch to using pirate sites and services instead.

“The results suggest that if YouTube were no longer able to offer music, time spent listening to pirated content would increase by +29%. This is consistent with YouTube being a substitute for pirated content,” RBB Economics writes.

In addition, the researchers also found that blocking music on YouTube doesn’t lead to an increase in streaming on other platforms, such as Spotify.

While YouTube doesn’t highlight it, the report also finds that some people would switch to “higher value” (e.g. paid) services if YouTube weren’t available. This amounts to roughly 15% of the total.

In other words, if the music industry is willing to pass on the $1 billion YouTube currently pays out and accept a hefty increase in piracy, there would be a boost in revenue through other channels. Whether that’s worth it is up for debate of course.

YouTube believes that the results are pretty convincing though. They rely on RBB Economics’ conclusion that there is no evidence of “significant cannibalization” and believe that their service has a positive impact overall.

“The cumulative effect of these findings is that YouTube has a market expansion effect, not a cannibalizing one,” YouTube writes.

The full results are available here (pdf), courtesy of RBB Economics. YouTube announced that more of these reports will follow in the near future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Spotify’s Beta Used ‘Pirate’ MP3 Files, Some From Pirate Bay

Post Syndicated from Andy original https://torrentfreak.com/spotifys-beta-used-pirate-mp3-files-some-from-pirate-bay-170509/

While some pirates will probably never be tempted away from the digital high seas, over the past decade millions have ditched or tapered down their habit with the help of Spotify.

It’s no coincidence that from the very beginning more than a decade ago, the streaming service had more than a few things in common with the piracy scene.

Spotify CEO Daniel Ek originally worked with uTorrent creator Ludvig ‘Ludde’ Strigeus before the pair sold to BitTorrent Inc. and began work on Spotify. Later, the company told TF that pirates were their target.

“Spotify is a new way of enjoying music. We believe Spotify provides a viable alternative to music piracy,” the company said.

“We think the way forward is to create a service better than piracy, thereby converting users into a legal, sustainable alternative which also enriches the total music experience.”

The technology deployed by Spotify was also familiar. Like the majority of ‘pirate’ platforms at the time, Spotify operated a peer-to-peer (P2P) system which grew to become one of the largest on the Internet. It was shut down in 2011.

But in the clearest nod to pirates, Spotify was available for free, supported by ads if the user desired. This was the platform’s greatest asset as it sought to win over a generation that had grown accustomed to gorging on free MP3s. Interestingly, however, an early Pirate Bay figure has now revealed that Spotify also had a use for the free content floating around the Internet.

As one of the early members of Sweden’s infamous Piratbyrån (piracy bureau), Rasmus Fleischer was also one of the key figures at The Pirate Bay. Over the years he’s been a writer, researcher, debater and musician, and in 2012 he finished his PhD thesis on “music’s political economy.”

As part of a five-person team, Fleischer is now writing a book about Spotify. Titled ‘Spotify Teardown – Inside the Black Box of Streaming Music’, the book aims to shine light on the history of the famous music service and also spills the beans on a few secrets.

In an interview with Sweden’s DI.se, Fleischer reveals that when Spotify was in early beta, the company used unlicensed music to kick-start the platform.

“Spotify’s beta version was originally a pirate service. It was distributing MP3 files that the employees happened to have on their hard drives,” he reveals.

Rumors that early versions of Spotify used ‘pirate’ MP3s have been floating around the Internet for years. People who had access to the service in the beginning later reported downloading tracks that contained ‘Scene’ labeling, tags, and formats, which are the tell-tale signs that content hadn’t been obtained officially.

Solid proof has been more difficult to come by but Fleischer says he knows for certain that Spotify was using music obtained not only from pirate sites, but the most famous pirate site of all.

According to the writer, a few years ago he was involved with a band that decided to distribute their music on The Pirate Bay instead of the usual outlets. Soon after, the album appeared on Spotify’s beta service.

“I thought that was funny. So I emailed Spotify and asked how they obtained it. They said that ‘now, during the test period, we will use music that we find’,” Fleischer recalls.

For a company that has attracting pirates built into its DNA, it’s perhaps fitting that it tempted them with the same bait found on pirate sites. Certainly, the company’s history of a pragmatic attitude towards piracy means that few will be shouting ‘hypocrites’ at the streaming platform now.

Indeed, according to Fleischer the successes and growth of Spotify are directly linked to the temporary downfall of The Pirate Bay following the raid on the site in 2006, and the lawsuits that followed.

“The entire Spotify beta period and its early launch history is in perfect sync with the Pirate Bay process,” Fleischer explains.

“They would not have had as much attention if they had not been able to surf that wave. The company’s early history coincides with the Pirate Party becoming a hot topic, and the trial of the Pirate Bay in the Stockholm District Court.”

In 2013, Fleischer told TF that The Pirate Bay had “helped catalyze so-called ‘new business models’,” and it now appears that Spotify is reaping the benefits and looks set to keep doing so into the future.

An in-depth interview with Rasmus Fleischer will be published here soon, including an interesting revelation detailing how TorrentFreak readers positively affected the launch of Spotify in the United States.

Spotify Teardown – Inside the Black Box of Streaming Music will be published early 2018.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tinkernut’s do-it-yourself Pi Zero audio HAT

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-diy-pi-zero-audio/

Why buy a Raspberry Pi Zero audio HAT when Tinkernut can show you how to make your own?

Adding Audio Output To The Raspberry Pi Zero – Tinkernut Workbench

The Raspberry Pi Zero W is an amazing miniature computer piece of technology. I want to turn it into an epic portable Spotify radio that displays visuals such as Album Art. So in this new series called “Tinkernut Workbench”, I show you step by step what it takes to build a product from the ground up.

Raspberry Pi Zero audio

Unlike their grown-up siblings, the Pi Zero and Zero W lack an onboard audio jack, but that doesn’t stop you from using them to run an audio output. Various audio HATs exist on the market, from Adafruit, Pimoroni and Pi Supply to name a few, providing easy audio output for the Zero. But where would the fun be in a Tinkernut video that shows you how to attach a HAT?

Tinkernut Pi Zero Audio

“Take this audio HAT, press it onto the header pins and, errr, done? So … how was your day?”

DIY Audio: Tinkernut style

For the first video in his Hipster Spotify Radio using a Raspberry Pi Tinkernut Workbench series, Tinkernut – real name Daniel Davis – goes through the steps of researching, prototyping and finishing his own audio HAT for his newly acquired Raspberry Pi Zero W.

The build utilises the GPIO pins on the Zero W, specifically pins #18 and #13. FYI, this hidden gem of information comes from the Adafruit Pi Zero PWM Audio guide. Before he can use #18 and #13, header pins need to be soldered. If the thought of soldering pins to the Pi is somewhat daunting, check out the Pimoroni Hammer Header.

Pimoroni Hammer Header for Raspberry Pi

You’re welcome.

Once complete, with Raspbian installed on the micro SD, and SSH enabled for remote access, he’s ready to start prototyping.

Ingredients

Tinkernut uses two 270 ohm resistors, two 150 ohm resistors, two 10μF electrolytic capacitors, two 0.01μF polyester film capacitors, an audio jack and some wire. You’ll also need a breadboard for prototyping. For the final build, you’ll need a single-row female pin header and some prototyping board, if you want to join in at home.

Tinkernut audio board Raspberry Pi Zero W

It should look like this…hopefully.

Once the prototype is working to run audio through to a cheap speaker (thanks to an edit of the config.txt file), the final board can be finished.
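
For reference, the config.txt change described in the Adafruit guide mentioned above routes the Pi’s two PWM channels to GPIO 18 and GPIO 13 with a device tree overlay; the line should look roughly like the following, but check the guide for the exact syntax:

dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4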

What’s next?

The audio board is just one step in the build.

Spotify is such an awesome music service. Raspberry Pi Zero is such an awesome ultra-mini computing device. Obviously, combining the two is something I must do!!! The idea here is to make something that’s stylish, portable, can play Spotify, and hopefully also display visuals such as album art.

Subscribe to Tinkernut’s YouTube channel to keep up to date with the build, and check out some of his other Raspberry Pi builds, such as his cheap 360 video camera, security camera and digital vintage camera.

Have you made your own Raspberry Pi HAT? Show it off in the comments below!

The post Tinkernut’s do-it-yourself Pi Zero audio HAT appeared first on Raspberry Pi.

DMCA Helps YouTube Avoid Up to $1bn in Royalties Per Year, Study Claims

Post Syndicated from Andy original https://torrentfreak.com/dmca-helps-youtube-avoid-up-to-1bn-in-royalties-per-year-study-claims-170330/

With much at stake, one gets the impression that the debate over the safe harbor provisions of the DMCA is likely to boil over before it goes away.

In a nutshell, rightsholders believe that some Internet platforms that allow users to upload audio-visual content abuse their immunity in order to make money from copyrighted content for which they hold no licenses.

Given the recent hostility shown by Hollywood and the music industry towards Google, it’s no surprise that YouTube has become the focal point in this war of words.

In particular, the world’s leading record labels argue that YouTube draws a massive commercial benefit from infringing songs uploaded by its users since it avoids paying for the kinds of licenses ‘fairly’ negotiated with the likes of Spotify and Apple.

In its defense, YouTube says it does all it can to combat infringement, quickly taking down unlawful content when asked to and spending small fortunes on systems like Content ID, which allows creators to monetize otherwise infringing content, should they choose to do so. It also pays huge sums to the labels.

It’s a problem that may eventually be settled by a change in the law but in the meantime the entertainment industries are working hard to paint Google and YouTube as freeloaders making a fortune from other people’s hard work.

Exactly how much money is at stake is rarely quantified but a new study from the Phoenix Center in Washington claims to do just that. The numbers cited in ‘Safe Harbors and the Evolution of Music Retailing’ by authors T. Randolph Beard, PhD, George S. Ford, PhD, and Michael Stern, PhD, are frankly enormous.

“Music is vital to YouTube’s platform and advertising revenues, accounting for 40% of its views. Yet, YouTube pays the recording industry well-below market rates for this heavy and on-demand use of music by relying on those ‘safe harbor’ provisions,” the paper begins.

Citing figures from 2016 provided by IFPI, the study notes that 68 million global subscriptions to music services (priced as a result of regular licensing negotiations) generated $2 billion in revenues for artists and labels at around $0.008 per track play.

On the other hand, the 900 million users of ad-based services (like YouTube) are said to generate just $634 million in revenues, paying the recording industry just $0.001 per play.

“It’s plainly a huge price difference for close substitutes,” the paper notes.

What follows in the 20-page study is an economist-pleasing barrage of figures and theories that culminates in what can only be described as an RIAA-friendly conclusion. As an on-demand music service, YouTube should be paying nearer the same kinds of royalties per spin as its subscription-based rivals do, the paper suggests.

“More rational royalty policies would significantly and positively affect the recording industry, helping it recover from the devastating consequences of the Digital Age and outdated public policies affecting the industry,” the paper notes.

“Simulating royalty rate changes for YouTube, one of the nation’s largest purveyors of digital music, we estimate, using 2015 data, that a plausible royalty rate increase could produce increased royalty revenues in the U.S. of $650 million to over one billion dollars a year.

“This is a sizeable effect, and lends credence to the recording industry’s complaints about YouTube’s use of the safe harbor,” it concludes.

Given the timely nature of this report from an industry perspective, TF asked co-author George S. Ford what motivated the study and if any music industry entity had commissioned or been involved in its financing.

“We do a lot of work in copyright and I’ve run into this type of problem in numerous settings, including the recent SDARS III case before the CRB. I’ve wanted to write on this topic for ages and finally got around to it,” Ford told TF.

“The Phoenix Center does not take money to do specific projects, except for instances where a government asks us to do something, and then we indicate funding was received for that project. As noted in the paper, we relied on the RIAA for data.”

Since that did not specifically answer our question we tried again, asking whether the RIAA, IFPI, or any of their member labels are donors to and/or supporters of The Phoenix Center. We received no response.

The Phoenix Center has produced a number of pro-industry reports in recent years, including a study applauded by the MPAA which attacked earlier research concerning Megaupload.

The full paper can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

72% Of UK Broadband Users Think Piracy Warnings Will Fail

Post Syndicated from Andy original https://torrentfreak.com/72-of-uk-broadband-users-think-piracy-warnings-will-fail-170309/

This January it was revealed that after much build-up, UK ISPs and the movie and music industries had finally reached a deal to send infringement notices to allegedly pirating subscribers.

The copyright alerts program is part of the larger Creative Content UK (CCUK) initiative, which includes various PR campaigns targeted at the public and classrooms.

The notices themselves (detailed here) are completely non-aggressive, with an aim to educate rather than bully consumers. However, according to a new survey just completed by UK-based broadband comparison website Broadband Genie, progress may be difficult to come by.

The survey involved 2,047 respondents, comprised of both Broadband Genie customers and general Internet users, split roughly 50/50 male and female, the vast majority (94%) aged between 18 and 64 years old. Respondents were asked about the notice scheme and piracy in general.

Overall, a worrying 72% said that they believe that the scheme won’t achieve its aim of stopping people from accessing or sharing copyrighted content.

While ‘stopping’ piracy entirely is a fairly dramatic goal (the program would quietly settle for an all-round reduction), nearly three-quarters of respondents already having no faith in the scheme is significant. So what, if anything, might persuade Internet users to stop pirating content?

Again, the survey offers a pretty bleak outlook. A stubborn 29% believe that nothing can be done, which sounds about right in this context. Worryingly, however, just over a fifth of respondents felt that legal action would do the trick. The same proportion (22%) felt that losing a broadband connection might stop the pirates.

While the chart above indicates that a fifth of respondents believe that cheaper content is the solution to fighting piracy, an unbalanced six-out-of-ten agreed that the cost of using genuine sites and services is the main reason why people pirate in the first place.

Surprisingly, just 13% said that easy access to copyrighted content on pirate networks was the main factor, with an even lower 10% citing limited access to genuine content on official platforms. Just 9% blamed delayed release dates for fueling piracy.

Some curious responses are also evident when Broadband Genie asked respondents whether they believed certain activities are illegal. While around three-quarters of respondents said that downloading and/or sharing content without permission is illegal, almost four in ten said that simply using P2P networks such as BitTorrent falls foul of the law.

Of perhaps even greater concern is that 35% identified Spotify, Netflix and Amazon account sharing as an illegal activity. A quarter felt that streaming movies, TV or sports from an unauthorized website is illegal (it probably isn’t) while 11% said that no method of obtaining content without paying for it is against the law.

A final point of worry for Creative Content UK is the visibility of the alerts program itself. Despite boasting a TV appearance, a campaign video on YouTube, some classroom lessons, dozens of news headlines, plus thousands of notices, more than eight-out-of-ten respondents (82%) said that before the survey they had never even heard of the initiative.

Of course, the program is only targeted at the relatively small subset of people who share files but with no data being published by the scheme, it’s difficult to say whether the campaign is reaching its target audience.

That being said, Broadband Genie informs TorrentFreak that 3.5% of respondents (around 70 people) claimed to have received a notice or know someone who had, albeit with certain caveats.

“[N]early half of those said the notice was in error due to incorrect details, their belief that the content or provider was legal or a lack of knowledge about any file sharing having taken place,” the company reports.

This number sounds quite high to us and the company concedes that respondents may have confused the current notice program with earlier ISP correspondence. Nevertheless, notices are definitely going out to subscribers, and people’s social networks are very broad these days. With those variables the figures might hold weight, particularly when considering potential volumes of notices.

The notice system is believed to have launched in the last few days of January and ISPs are reportedly sending around 48,000 notices per week (2.5m notices per year). The survey took place between 17th February and 6th March.

So, if launched at anything like full speed, a maximum of around 250,000 notices could have gone out up until the first week of March. Again, it’s important to note that no hard data is available so it’s impossible to be accurate, but volumes could be quite high.
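
As a quick back-of-the-envelope check on those numbers (the weekly rate and late-January launch come from the paragraphs above; the rest is simple arithmetic):

    # Rough sanity check of the notice-volume estimate above.
    weekly_notices = 48_000
    print(weekly_notices * 52)   # roughly 2.5 million notices per year
    weeks_at_full_speed = 5      # late January to the first week of March
    print(weekly_notices * weeks_at_full_speed)  # around 240,000 notices so far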

The full report from Broadband Genie can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Torrent Legend Mininova Will Shut Down For Good

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-legend-mininova-will-shut-down-for-good-170226/

In December 2004, the demise of the mighty Suprnova left a meteor crater in the fledgling BitTorrent landscape.

This gaping hole was soon filled by the dozens of new sites that emerged to fulfill the public’s increasing demands for torrents. Mininova soon became the most successful of them all.

Mininova was founded by five Dutch students just a month after Suprnova closed its doors. The site initially began as a hobby project, but in the years that followed the site’s founders managed to turn it into a successful business that generated millions of dollars in revenue.

With this success also came legal pressure. Even though the site complied with takedown requests, copyright holders were not amused. In 2009 this eventually resulted in a lawsuit filed by local anti-piracy outfit BREIN, which Mininova lost.

As a result, the site had to remove all infringing torrents, a move which ended its reign. The site remained online but instead of allowing everyone to upload content, Mininova permitted only pre-approved publishers to submit files.

Now, more than seven years after “going legal” the site will shut down for good. A notice published on the website urges uploaders to back up their files before April 4th, when the plug will be pulled.

Mininova’s shutting down

The decision doesn’t mean that the legal contribution platform was a total failure. In fact, over 950 million ‘legal’ torrents were downloaded from Mininova in recent years. However, the site’s income couldn’t make up for the costs.

“All good things come to an end, and after more than 12 years we think it’s a good time to shut down the site which has been running at a loss for some years,” Mininova co-founder Niek tells TorrentFreak.

Looking back, Mininova’s founders have many great memories. The site’s users have always been very grateful, for example, and there were also several artists who thanked the site’s operators for offering them a great promotional tool.

“The support from our users was especially amazing to experience, millions of people used the site on a daily basis and we got many emails each day – ranging from a simple ‘thank you’ to some extensive story how a specific upload made their day,” Niek says.

“The feedback from artists was great to see as well, many thanked us for promoting their content, as some of them broke through and signed with labels as a result,” he adds.

The file-sharing and piracy ecosystem has changed quite a bit since Mininova’s dominance. File-hosting services became more popular first, and nowadays streaming sites and tools with slick user interfaces are the new standard.

Torrent sites, on the other hand, show little progress according to Mininova’s founder, who believes that the growth of legal services could make them less relevant in the future.

“We haven’t seen many changes in the last decade – the current torrent sites look very similar to what Mininova did twelve years ago,” Niek says.

“With content-specific distribution platforms such as Spotify and Netflix becoming more and more widespread and bandwidth becoming cheaper, there might be less of a need for torrent sites in the future.”

The original founders of Mininova have moved on as well. They’re no longer students and have parted ways, moving on to different projects and ventures. Now and then, however, they look back on their lives of ten years ago with a smile.

“Overall we’re happy that we have been a part of the history of the Internet,” Niek concludes.

“We want to thank everybody who has been around and supported us through the times! Without our users, there would have been no Mininova. So THANK YOU!”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Community Profile: Matt Reed

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-matt-reed/

This column is from The MagPi issue 51. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

Matt Reed’s background is in web design/development, extending to graphic design, in which he earned his BFA at the University of Tennessee, Knoxville. In his youth, his passion was car stereo systems, and he designed elaborate builds that his wallet couldn’t afford. This enriched his maker skill set by introducing woodwork, electronics, and fabrication into his creations.

Matt Reed Raspberry Pi redpepper MagPi Magazine

Matt hosts the redpepper ‘Touch of Tech’ online series, highlighting the latest in interesting and unusual tech releases

Having joined the integrated marketing agency redpepper eight years ago, Matt originally worked in the design and production of microsites. However, as his interests continued to grow, demand began to evolve, and products such as the Arduino and Raspberry Pi came into the mix. Matt soon found himself moving away from the screen toward physical builds.

“I’m interested in anything that uses tech in a clever way, whether it be AR, VR, front-end, back-end, app dev, servers, hardware, UI, UX, motion graphics, art, science, or human behaviour. I really enjoy coming up with ideas people can relate to.”

Matt’s passion is to make tech seem cool, creative, empowering, and approachable, and his projects reflect this. Away from the Raspberry Pi, Matt has built some amazing creations such as the Home Alone Holidaython, an app that lets you recreate the famous curtain shadow party in Kevin McCallister’s living room. Pick the shadow you want to appear, and projectors illuminate the design against a sheet across the redpepper office window. Christmas on Tweet Street LIVE! captures hilariously negative Christmas-themed tweets from Twitter, displaying them across a traditional festive painting, while DOOR8ELL lets office visitors Slack-message the staff member they need via an arcade interface, complete with 8-bit graphics. There’s also been a capacitive piano built with jelly keys, a phone app to simulate the destruction of cars as you sit in traffic, and a working QR code made entirely from Oreos.

Matt Reed Raspberry Pi redpepper MagPi Magazine

The BoomIlluminator, an interactive art installation for the Red Bull Creation Qualifier, used LEDs within empty Red Bull cans that reacted to the bass of any music played. A light show across the cans was then relayed to people’s phones, extending the experience.

Playing the ‘technology advocate’ role at redpepper, Matt continues to bridge the gap between the company’s day-to-day business and the fun, intuitive uses of tech. Not only do they offer technological marketing solutions via their rpLab, they have continued to grow, incorporating Google’s Sprint methodology into idea-building and brainstorming within days of receiving a request, “so having tools that are powerful, flexible, and cost-effective like the Pi is invaluable.”

Matt Reed Raspberry Pi redpepper MagPi Magazine

Walk into a room with Doorjam enabled, and suddenly your favourite tune is playing via boombox speakers. Simply select your favourite song from Spotify, walk within range of a Bluetooth iBeacon, and you’re ready to make your entrance in style.

“I just love the intersection of art and science,” Matt explains when discussing his passion for tech. “Having worked with Linux servers for most of my career, the Pi was the natural extension for my interest in hardware. Running Node.js on the Pi has become my go-to toolset.”

Matt Reed Raspberry Pi redpepper MagPi Magazine

Slackbot Bot: Users of the multi-channel messenger service Slack will appreciate this one. Beacons throughout the office allow users to locate Slackbot Bot, which features a tornado siren mounted on a Roomba, and send it to predetermined locations to deliver messages. “It was absolutely hilarious to test in the office.”

We’ve seen Matt’s Raspberry Pi-based portfolio grow over the last couple of years. A few of his builds have been featured in The MagPi, and his Raspberry Preserve was placed 13th in the Top 50 Raspberry Pi Builds in issue 50.

Matt Reed Raspberry Pi redpepper MagPi Magazine

Matt Reed’s ‘Raspberry Preserve’ build allows users to store their precious photos in a unique memory jar

There’s no denying that Matt will continue to be ‘one to watch’ in the world of quirky, original tech builds. You can follow his work at his website or via his Twitter account.

The post Community Profile: Matt Reed appeared first on Raspberry Pi.

Study: 70% of Young Swedish Men Are Video Pirates

Post Syndicated from Andy original https://torrentfreak.com/study-70-of-young-swedish-men-are-video-pirates-170217/

As illustrated by the ruling handed down against ISP Bredbandsbolaget by the country’s Patent and Market Court of Appeal this week, piracy is still considered a big problem in Sweden.

Despite better access to legal services such as Spotify and Netflix, some citizens still prefer to get their fix from pirate sites. Whether that’s via The Pirate Bay and torrents or newer streaming-based portals, piracy is still a popular route for obtaining media.

According to figures just released by media industry consultants Mediavision, in January 2017 almost a quarter of all Swedes aged between 15 and 74 admitted either streaming or downloading movies from ‘pirate’ sites during the past month.

Perhaps unsurprisingly, the tendency to do so is greater among the young. More than half of 15 to 24-year-olds said they’d used a torrent or streaming site during December. When narrowing that down to only young men in the same age group, the figure leaps to 70%.

Mediavision has been tracking the piracy habits of Scandinavians since 2010 and actually reports an overall increase in piracy over the past seven years. However, piracy levels have remained relatively static during the past three years, with roughly 25% of citizens admitting to engaging in the practice.

The company, which previously reported on the activities of Popcorn Time users, says that illegal consumption of media is far more prevalent in Sweden than in the neighboring Nordic countries of Norway, Denmark and Finland.

“The ruling against Bredbandsbolaget is a big thing in this context,” says Natalia Borelius, Project Manager at Mediavision.

“The measures taken to date, such as shutting down illegal websites, have been shown to have limited effect on illegal consumption. In Finland and Denmark, where site blocking has been in place for years, piracy is less than half as prevalent as in Sweden.”

As a result of this week’s decision, rightsholders now have the opportunity to obtain injunctions against all Swedish Internet service providers, barring them from providing customer access to not only The Pirate Bay, but other allegedly infringing sites worldwide.

While this may have some effect on the habits of casual pirates, it remains to be seen how the masses respond. Blocking a couple of sites via one ISP certainly won’t have the desired effect, a conclusion supported by various studies (1, 2). Expect more blocking then, sooner rather than later.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Piracy Notices? There Shouldn’t Be Many UK Torrent Users Left to Warn

Post Syndicated from Andy original https://torrentfreak.com/piracy-notices-there-shouldnt-be-many-uk-torrent-users-left-to-warn-170115/

Later this month in partnership with the Creative Content UK (CCUK) initiative, four major ISPs will begin sending warning notices to subscribers whose connections are being used to pirate content.

BT, Sky, TalkTalk and Virgin Media are all involved in the scheme, which will be educational in tone and designed to encourage users towards legitimate services. The BBC obtained a copy of the email due to be sent out, and it’s very inoffensive.

“Get it Right is a government-backed campaign acting for copyright owners who think their content’s been shared without their permission,” the notice reads.

“It looks like someone has been using your broadband to share copyrighted material (that means things like music, films, sport or books). And as your broadband provider, we have to let you know when this happens.”

The notice then recommends where people can obtain tips to ensure that the unlawful sharing doesn’t happen again. Since the scheme will target mainly BitTorrent users, it’s likely that one of the tips will be to stop using torrents to obtain content. However, that in itself should be an eyebrow-raising statement in the UK.

For the past several years, UK Internet service providers – including all of the ones due to send out piracy notices this month – have been blocking all of the major torrent sites on the orders of the High Court. The Pirate Bay, KickassTorrents (and all their variants), every site in the top 10 most-visited torrent list and hundreds more, are all blocked at the ISP level in the UK.

By any normal means, no significant public torrent sites can be accessed by any subscriber from any major UK ISP and it’s been that way for a long time. Yet here we are in 2017 preparing to send up to 2.5 million warning notices a year to UK BitTorrent users. Something doesn’t add up.

According to various industry reports, there are around six million Internet pirates in the UK, which give or take is around 10% of the population. If we presume that a few years ago the majority were using BitTorrent, they could have conceivably received a couple of notices each per year.

However, if site-blocking is as effective as the music and movie industries claim it to be, then these days we should be looking at a massive decrease in the number of UK BitTorrent users. After all, if users can’t access the sites then they can’t download the .torrent files or magnet links they offer. If users can’t get those, then no downloads can take place.

While this is probably true for some former torrent users, it is obvious that massive site blocking efforts are being evaded on an industrial scale. With that in mind, the warning notices will still go out in large numbers but only to people who are savvy enough to circumvent a blockade but don’t take any other precautions as far as torrent transfers are concerned.

For others, who already turned to VPNs to give them access to blocked torrent sites, the battle is already over. They will never see a warning notice from their ISP and sites will remain available for as long as they stay online.

There’s also another category of users who migrated away from torrents to streaming sites. Users began to notice web-based streaming platforms in their millions when The Pirate Bay was first blocked several years ago, and they have only gained in popularity since. Like VPN users, people who frequent these sites will never see an ISP piracy notice.

Finally, there are those users who don’t understand torrents or web-based streaming but still use the latter on a daily basis via modified Kodi setups. These boxes or sticks utilize online streaming platforms so their users’ activities cannot be tracked. They too will receive no warnings. The same can be said about users who download from online hosting sites, such as Uploaded and Rapidgator.

So, if we trim this down, we’re looking at an educational notice scheme that will mainly target UK pirates who are somehow able to circumvent High Court blockades but do not conceal their IP addresses. How many of these semi-determined pirates exist is unclear but many are likely to receive ‘educational’ notices in the coming months.

Interestingly, the majority of these users will already be well aware that file-sharing copyrighted content is illegal, since when they’ve tried to access torrent sites in recent years they’ve all received a “blocked” message which mentions copyright infringement and the High Court.

When it comes to the crunch, this notice scheme has come several years too late. Technology has again outrun the mitigation measures available, and notices are now only useful as part of a basket of measures.

That being said, no one in the UK will have their Internet disconnected or throttled for receiving a notice. That’s a marked improvement over what was being proposed six years ago as part of the Digital Economy Act. Furthermore, the notices appear to be both polite and considered. On that basis, consumers should have little to complain about.

And, if some people do migrate to services like Netflix and Spotify, that will only be a good thing. Just don’t expect them to give up pirating altogether since not only are pirates the industry’s best customers, site blockades clearly don’t work.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How Stack Overflow plans to survive the next DNS attack

Post Syndicated from Mark Henderson original http://blog.serverfault.com/2017/01/09/surviving-the-next-dns-attack/

Let’s talk about DNS. After all, what could go wrong? It’s just cache invalidation and naming things.

tl;dr

This blog post is about how Stack Overflow and the rest of the Stack Exchange network approaches DNS:

  • By benchmarking different DNS providers and choosing between them
  • By implementing multiple DNS providers
  • By deliberately breaking DNS to measure its impact
  • By validating our assumptions and testing implementations of the DNS standard

The good stuff in this post is in the middle, so feel free to scroll down to “The Dyn Attack” if you want to get straight into the meat and potatoes of this blog post.

The Domain Name System

DNS had its moment in the spotlight in October 2016, with a major Distributed Denial of Service (DDoS) attack launched against Dyn, which affected the ability of Internet users to connect to some of their favourite websites, such as Twitter, CNN, imgur, Spotify, and literally thousands of other sites.

But for most systems administrators or website operators, DNS is mostly kept in a little black box, outsourced to a 3rd party, and mostly forgotten about. And, for the most part, this is the way it should be. But as you start to grow to 1.3+ billion pageviews a month with a website where performance is a feature, every little bit matters.

In this post, I’m going to explain some of the decisions we’ve made around DNS in the past, and where we’re going with it in the future. I will eschew deep technical details and gloss over low-level DNS implementation in favour of the broad strokes.

In the beginning

So first, a bit of history: In the beginning, we ran our own DNS on-premises using artisanally crafted zone files with BIND. It was fast enough when we were doing only a few hundred million hits a month, but eventually hand-crafted zonefiles were too much hassle to maintain reliably. When we moved to Cloudflare as our CDN, whose service is intimately coupled with DNS, we demoted our BIND boxes out of production and handed off DNS to Cloudflare.

The search for a new provider

Fast forward to early 2016 and we moved our CDN to Fastly. Fastly doesn’t provide DNS service, so we were back on our own in that regard, and our search for a new DNS provider began. We made a list of every DNS provider we could think of, and ended up with a shortlist of 10:

  • Dyn
  • NS1
  • Amazon Route 53
  • Google Cloud DNS
  • Azure DNS (beta)
  • DNSimple
  • Godaddy
  • EdgeCast (Verizon)
  • Hurricane Electric
  • DNS Made Easy

From this list of 10 providers, we did our initial investigation into their service offerings and started eliminating services that were not suited to our needs, were outrageously expensive, had insufficient SLAs, or didn’t offer services that we required (such as a fully featured API). Then we started performance testing. We did this by embedding a hidden iFrame for 5% of the visitors to stackoverflow.com, which forced a DNS lookup through a different provider. We did this for each provider until we had some pretty solid performance numbers.

Using some basic analytics, we were able to measure the real-world performance, as seen by our real-world users, broken down by geographical area. We built some box plots based on these tests which allowed us to visualise the different impact each provider had.

If you don’t know how to interpret a boxplot, here’s a brief primer for you. For the data nerds, these were generated with R’s standard boxplot functions, which means the upper and lower whiskers are min(max(x), Q_3 + 1.5 * IQR) and max(min(x), Q_1 - 1.5 * IQR), where IQR = Q_3 - Q_1.
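
If you want to reproduce that kind of summary yourself, here is a minimal sketch in Python (the original analysis was done in R, and the timings below are made up for illustration) that computes the same quartile and whisker values described above:

    import numpy as np

    def boxplot_stats(samples_ms):
        """Median, quartiles and whisker bounds as defined in the text above."""
        x = np.asarray(samples_ms, dtype=float)
        q1, median, q3 = np.percentile(x, [25, 50, 75])
        iqr = q3 - q1
        lower_whisker = max(x.min(), q1 - 1.5 * iqr)
        upper_whisker = min(x.max(), q3 + 1.5 * iqr)
        return lower_whisker, q1, median, q3, upper_whisker

    # Hypothetical DNS lookup timings (milliseconds) for one provider/region.
    timings = [12, 15, 16, 18, 22, 25, 31, 32, 40, 45, 120, 300]
    print(boxplot_stats(timings))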

These are the results of our tests as seen by our users in the United States:

DNS Performance in the United States

You can see that Hurricane Electric had a quarter of requests return in < 16ms and a median of 32ms, with the three “cloud” providers (Azure, Google Cloud DNS and Route 53) being slightly slower (around a 24ms first quartile and a 45ms median), and DNS Made Easy coming in second place (20ms first quartile, 39ms median).

You might wonder why the scale on that chart goes all the way to 700ms when the whiskers go nowhere near that high. This is because we have a worldwide audience, so just looking at data from the United States is not sufficient. If we look at data from New Zealand, we see a very different story:

DNS Performance in New Zealand

Here you can see that Route 53, DNS Made Easy and Azure all have healthy first quartiles, but Hurricane Electric and Google have very poor ones. Try to remember this, as it becomes important later on.

We also have Stack Overflow in Portuguese, so let’s check the performance from Brazil:

DNS Performance in Brazil

Here we can see Hurricane Electric, Route 53 and Azure being favoured, with Google and DNS Made Easy being slower.

So how do you reach a decision about which DNS provider to choose, when your main goal is performance? It’s difficult, because regardless of which provider you end up with, you are going to be choosing a provider that is sub-optimal for part of your audience.

You know what would be awesome? If we could have two DNS providers, each one servicing the areas that they do best! Thankfully this is something that is possible to implement with DNS. However, time was short, so we had to put our dual-provider design on the back-burner and just go with a single provider for the time being.

Our initial rollout used Amazon Route 53 as our provider: they had acceptable performance figures over a large number of regions and had very effective pricing (on that note, Route 53, Azure DNS, and Google Cloud DNS are all priced identically for basic DNS services).

The Dyn Attack

Roll forwards to October 2016. Route 53 had proven to be a stable, fast, and cost-effective DNS provider. We still had dual DNS providers on our backlog of projects, but like a lot of good ideas it got put on the back-burner until we had more time.

Then the Internet ground to a halt. The DNS provider Dyn had come under attack, knocking a large number of authoritative DNS servers off the Internet, and causing widespread issues with connecting to major websites. All of a sudden DNS had our attention again. Stack Overflow and Stack Exchange were not affected by the Dyn outage, but this was pure luck.

We knew if a DDoS of this scale happened to our DNS provider, the solution would be to have two completely separate DNS providers. That way, if one provider gets knocked off the Internet, we still have a fully functioning second provider who can pick up the slack. But there were still questions to be answered and assumptions to be validated:

  • What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?
  • What is the performance impact for our users if one of the providers is offline?
  • What is the best number of nameservers to be using?
  • How are we going to keep our DNS providers in sync?

These were pretty serious questions – for some we had hypotheses that needed to be checked, while others were answered in the DNS standards, but we know from experience that DNS providers in the wild do not always obey the DNS standards.

What is the performance impact for our users in having multiple DNS providers, when both providers are working properly?

This one should be fairly easy to test. We’ve already done it once, so let’s just do it again. We fired up our tests, as we did in early 2016, but this time we specified two DNS providers:

  • Route 53 & Google Cloud
  • Route 53 & Azure DNS
  • Route 53 & Our internal DNS

We did this simply by listing Name Servers from both providers in our domain registration (and obviously we set up the same records in the zones for both providers).
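
One easy way to sanity-check a dual-provider delegation like this is simply to query the NS records for the zone and confirm that name servers from both providers come back. Here is a minimal sketch using the third-party dnspython library (the domain is just a placeholder, and the records you see will of course depend on the zone):

    import dns.resolver  # pip install dnspython

    # Ask the public DNS tree which name servers are delegated for the zone.
    answers = dns.resolver.resolve("example.com", "NS")
    for ns in sorted(str(record.target) for record in answers):
        print(ns)
    # With a dual-provider setup you would expect to see a mix of
    # name servers from both providers in this list.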

Running with Route 53 and Google or Azure was fairly common sense – Google and Azure had good coverage of the regions that Route 53 performed poorly in. Their pricing is identical to Route 53, which would make forecasting for the budget easy. As a third option, we decided to see what would happen if we took our formerly demoted, on-premises BIND servers and put them back into production as one of the providers. Let’s look at the data for the three regions from before: United States, New Zealand and Brazil:

United States
DNS Performance for dual providers in the United States

New Zealand
DNS Performance for dual providers in New Zealand

Brazil

DNS Performance for dual providers in Brazil

There is probably one thing you’ll notice immediately from these boxplots, but there’s also another not-so-obvious change:

  1. Azure is not in there (the obvious one)
  2. Our third quartiles are measurably slower (the not-so-obvious one).

Azure

Azure has a fatal flaw in their DNS offering, as of the writing of this blog post. They do not permit the modification of the NS records in the apex of your zone:

You cannot add to, remove, or modify the records in the automatically created NS record set at the zone apex (name = “@”). The only change that’s permitted is to modify the record set TTL.

These NS records are what your DNS provider says are authoritative DNS servers for a given domain. It’s very important that they are accurate and correct, because they will be cached by clients and DNS resolvers and are more authoritative than the records provided by your registrar.

Without going too much into the actual specifics of how DNS caching and NS records work (it would take me another 2,500 words to describe this in detail), what would happen is this: whichever DNS provider you contact first would be the only DNS provider you could contact for that domain until your DNS cache expires. If Azure is contacted first, then only Azure’s nameservers will be cached and used. This defeats the purpose of having multiple DNS providers: if the provider you happened to land on (a roughly 50:50 chance) goes offline, you will have no other DNS provider to fall back to.

So until Azure adds the ability to modify the NS records in the apex of a zone, they’re off the table for a dual-provider setup.

The third quartile

What the third quartile represents here is the impact of latency on DNS. You’ll notice that in the results for ExDNS (which is the internal name for our on-premises BIND servers) the box plot is much taller than the others. This is because those servers are located in New Jersey and Colorado – far, far away from where most of our visitors come from. So as expected, a service with only two points of presence in a single country (as opposed to dozens worldwide) performs very poorly for a lot of users.

Performance conclusions

So our choices were narrowed for us to Route 53 and Google Cloud, thanks to Azure’s lack of ability to modify critical NS records. Thankfully, we have the data to back up the fact that Route 53 combined with Google is a very acceptable combination.

Remember earlier, when I said that performance in New Zealand was important? This is because Route 53 performed well, but Google Cloud performed poorly in that region. But look at the chart again. Don’t scroll up, I’ll show you another chart here:

Comparison for DNS performance data in New Zealand between single and dual providers

See how Google on its own performed very poorly in NZ (its first quartile is 164ms versus 27ms for Route 53)? However, when you combine Google and Route 53, the performance basically stays the same as when there was just Route 53.

Why is this? Well, it’s due to a technique called Smoothed Round Trip Time (SRTT). Basically, DNS resolvers (namely certain versions of BIND and PowerDNS) keep track of which DNS servers respond faster, and weight queries towards those DNS servers. This means that queries should be skewed towards the faster provider more often than the slower providers. There’s a nice presentation over here if you want to learn more about this. The short version is that if you have many DNS servers, DNS cache servers will favour the fastest ones. As a result, if one provider is fast in Auckland but slow in London, and another provider is the reverse, DNS cache servers in Auckland will favour the first provider and DNS cache servers in London will favour the other. This is a very little-known feature of modern DNS servers, but our testing shows that enough ISPs support it that we are confident we can rely on it.
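
To make the idea concrete, here is a toy simulation of an SRTT-style resolver in Python. It is only a sketch of the general principle, not the actual BIND or PowerDNS implementation, and the latencies are invented: the resolver keeps a smoothed latency estimate per name server, mostly queries whichever currently looks fastest, and occasionally probes the others so its estimates stay fresh.

    import random

    # Hypothetical mean latencies (ms) as seen by one resolver location.
    true_latency = {"provider-a": 25.0, "provider-b": 160.0}
    srtt = {name: 100.0 for name in true_latency}  # neutral starting estimate
    counts = {name: 0 for name in true_latency}
    alpha = 0.3  # smoothing factor for the moving estimate

    for _ in range(10_000):
        if random.random() < 0.05:
            # Occasionally probe a random server to keep estimates current.
            choice = random.choice(list(true_latency))
        else:
            # Otherwise prefer the server with the lowest smoothed RTT.
            choice = min(srtt, key=srtt.get)
        observed = random.gauss(true_latency[choice], 5.0)
        srtt[choice] = (1 - alpha) * srtt[choice] + alpha * observed
        counts[choice] += 1

    print(counts)  # the faster provider ends up answering the vast majority

Run it a few times and the split lands heavily in favour of the faster server, which is exactly the behaviour that lets a mixed set of name servers perform like the best one in each region.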

What is the performance impact for our users if one of the providers is offline?

This is where having some on-premises DNS servers comes in very handy. What we can essentially do here is send a sample of our users to our on-premises servers, get a baseline performance measurement, then break one of the servers and run the performance measurements again. We can also measure in multiple places: We have our measurements as reported by our clients (what the end user actually experienced), and we can look at data from within our network to see what actually happened. For network analysis, we turned to our trusted network analysis tool, ExtraHop. This would allow us to look at the data on the wire, and get measurements from a broken DNS server (something you can’t do easily with a pcap on that server, because, you know. It’s broken).

Here’s what healthy performance looked like on the wire (as measured by ExtraHop), with two DNS servers, both of them fully operational, over a 24-hour period (this chart is additive for the two series):

DNS performance with two healthy name servers

Blue and brown are the two different, healthy DNS servers. As you can see, there’s a very even 50:50 split in request volume. Because both of the servers are located in the same datacenter, Smoothed Round Trip Time had no effect, and we had a nice even distribution – as we would expect.

Now, what happens when we take one of those DNS servers offline, to simulate a provider outage?

DNS performance with a broken nameserver

In this case, the blue DNS server was offline, and the brown DNS server was healthy. What we see here is that the blue, broken, DNS server received the same number of requests as it did when the DNS server was healthy, but the brown, healthy, DNS server saw twice as many requests. This is because those users who were hitting the broken server eventually retried their requests to the healthy server and started to favor it. So what does this look like in terms of actual client performance?

I’m only going to share one chart with you this time, because they were all essentially the same:

Comparison of healthy vs unhealthy DNS performance

What we see here is that a substantial number of our visitors saw a performance decrease. For some it was minor, for others quite major. This is because the 50% of visitors who hit the faulty server needed to retry their request, and the amount of time it takes to retry that request seems to vary. You can see again a large increase in the long tail, which indicates clients who took over 300 milliseconds to retry their request.

What does this tell us?

What this means is that in the event of a DNS provider going offline, we need to pull that DNS provider out of rotation to provide best performance, but until we do our users will still receive service. A non-trivial number of users will be seeing a large performance impact.

What is the best number of nameservers to be using?

Based on the previous performance testing, we can assume that the number of requests a client may have to make is N/2 + 1, where N is the number of nameservers listed. So if we list eight nameservers, with four from each provider, the client may potentially have to make 5 DNS requests before they finally get a successful answer (the four failed requests, plus a final successful one). A statistician better than I would be able to tell you the exact probabilities of each scenario you would face, but the short answer here is:

Four.

We felt that based on our use case, and the performance penalty we were willing to take, we would be listing a total of four nameservers – two from each provider. This may not be the right decision for those who have a web presence orders of magnitude larger than ours, but for comparison: Facebook provides two nameservers on IPv4 and two on IPv6, Twitter provides eight (four from Dyn and four from Route 53), and Google provides four.
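
To put a rough number on that trade-off, here is a small simulation of the worst case sketched above: a naive client trying name servers in random order while half of them are unreachable. It is only an illustration of the reasoning, not a model of any real resolver’s retry logic.

    import random
    from statistics import mean

    def attempts_until_success(total_servers: int) -> int:
        """Try servers in random order; half are down. Count attempts until one answers."""
        servers = [True] * (total_servers // 2) + [False] * (total_servers // 2)
        random.shuffle(servers)
        for attempt, healthy in enumerate(servers, start=1):
            if healthy:
                return attempt
        return total_servers  # not reached while at least one server is healthy

    for n in (4, 8):
        trials = [attempts_until_success(n) for _ in range(100_000)]
        print(f"{n} name servers: worst case {max(trials)} attempts, "
              f"average {mean(trials):.2f}")

With four name servers the worst case is three attempts; with eight it is five, matching the N/2 + 1 figure above, while the average stays under two in both cases.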

How are we going to keep our DNS providers in sync?

DNS has built-in ways of keeping multiple servers in sync. You have domain transfers (IXFR, AXFR), which are usually triggered by a NOTIFY packet sent to all the servers listed as NS records in the zone. But these are not used in the wild very often, and have limited support from DNS providers. They also come with their own headaches, like maintaining an ACL IP whitelist covering hundreds of potential servers (all the different points of presence from multiple providers), none of which you control. You also lose the ability to audit who changed which record, as records could be changed on any given server.

So we built a tool to keep our DNS in sync. We actually built this tool years ago, once our artisanally crafted zone files became too troublesome to edit by hand. The details of this tool are out of scope for this blog post though. If you want to learn about it, keep an eye out around March 2017 as we plan to open-source it. The tool lets us describe the DNS zone data in one place and push it to many different DNS providers.

So what did we learn?

The biggest takeaway from all of this is that even if you have multiple DNS servers, DNS is still a single point of failure if they are all with the same provider and that provider goes offline. Until the Dyn attack this was pretty much “in theory” if you were using a large DNS provider, because until the first successful attack no large DNS provider had ever had an extended outage on all of its points of presence.

However, implementing multiple DNS providers is not entirely straightforward. There are performance considerations. You need to ensure that both of your zones are serving the same data. There can be such a thing as too many nameservers.

Lastly, we did all of this whilst following DNS best practices. We didn’t have to do any weird DNS trickery, or write our own DNS server to do non-standard things. When DNS was designed in 1987, I wonder if the authors knew the importance of what they were creating. I don’t know, but their design still stands strong and resilient today.

Attributions

  • Thanks to Camelia Nicollet for her work in R to produce the graphs in this blog post

Top Spotify Lawyer: Attracting Pirates is in Our DNA

Post Syndicated from Andy original https://torrentfreak.com/top-spotify-lawyer-attracting-pirates-is-in-our-dna-161226/

Almost eight years ago and just months after its release, TF published an article which pondered whether the fledgling Spotify service could become a true alternative to Internet piracy.

From the beginning, one of the key software engineers at Spotify has been Ludvig Strigeus, the creator of uTorrent, so clearly the company already knew a lot about file-sharers. In the early days the company was fairly open about its aim to provide an alternative to piracy, but perhaps one of the earliest indications of growing success came when early invites were shared among users of private torrent sites.

Today Spotify is indeed huge. The service has an estimated 100 million users, many of them taking advantage of its ad-supported free tier. This is the gateway for many subscribers, including millions of former and even current pirates who augment their sharing with the desirable service.

Over the years, Spotify has made no secret of its desire to recruit more pirates to its service. In 2014, Spotify Australia managing director Kate Vale said it was one of their key aims.

“People that are pirating music and not paying for it, they are the ones we want on our platform. It’s important for us to be reaching these individuals that have never paid for music before in their life, and get them onto a service that’s legal and gives money back to the rights holders,” Vale said.

Now, in a new interview with The Journal on Sports and Entertainment Law, General Counsel of Spotify Horacio Gutierrez reveals just how deeply this philosophy runs in the company. It’s absolutely fundamental to its being, he explains.

“One of the things that inspired the creation of Spotify and is part of the DNA of the company from the day it launched (and remember the service was launched for the first time around 8 years ago) was addressing one of the biggest questions that everyone in the music industry had at the time — how would one tackle and combat online piracy in music?” Gutierrez says.

“Spotify was determined from the very beginning to provide a fully licensed, legal alternative for online music consumption that people would prefer over piracy.”

The signs that it just might be possible came very early on. Just months after Spotify’s initial launch the quality of its service was celebrated on what was to become the world’s best music torrent site, What.cd.

“Honestly it’s going to be huge,” a What.cd user predicted in 2008.

“I’ve been browsing and playing from its seemingly endless music catalogue all afternoon, it loads as if it’s playing from local files, so fast, so easy. If it’s this great in such early beta stages then I can’t imagine where it’s going. I feel like buying another laptop to have permanently rigged.”

Of course, hardcore pirates aren’t always easily encouraged to part with their cash, so Spotify needed an equivalent to the no-cost approach of many torrent sites. That is still being achieved today via its ad-supported entry level, Gutierrez says.

“I think one just has to look at data to recognize that the freemium model for online music consumption works. Our free tier is a key to attracting users away from online piracy, and Spotify’s success is proof that the model works.

“We have data around the world that shows that it works, that in fact we are making inroads against piracy because we offer an ability for those users to have a better experience with higher quality content, a richer and more varied catalogue, and a number of other user-minded features that make the experience much better for the user.”

Spotify’s general counsel says that the company is enjoying success, not only by bringing pirates onboard, but also by converting them to premium customers via a formula that benefits everyone in the industry.

“If you look at what has happened since the launch of the Spotify service, we have been incredibly successful on that score. Figures coming out of the music industry show that after 15 years of revenue losses, the music industry is once again growing thanks to music streaming,” he concludes.

With the shutdown of What.cd in recent weeks, it’s likely that former users will be considering the Spotify option again this Christmas, if they aren’t customers already.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.