
FACT Threatens Users of ‘Pirate’ Kodi Add-Ons

Post Syndicated from Ernesto original https://torrentfreak.com/fact-threatens-users-of-pirate-kodi-add-ons-170628/

In the UK there’s a war going on against streaming pirates. At least, that’s what the local anti-piracy body FACT would like the public to know.

The popular media streaming platform Kodi is at the center of the controversy. While Kodi is perfectly legal, many people use it in conjunction with third party add-ons that offer pirated content.

FACT hopes to curb this trend. The group has already taken action against sellers of Kodi devices pre-loaded with these add-ons and they’re keeping a keen eye on developers of illicit add-ons too.

However, according to FACT, the ‘crackdown’ doesn’t stop there. Users of pirate add-ons are also at risk, they claim.

“And then we’ll also be looking at, at some point, the end user. The reason for end users to come into this is that they are committing criminal offences,” FACT’s chief executive Kieron Sharp told the Independent.

While people who stream pirated content are generally hard to track, since they don’t broadcast their IP-address to the public, FACT says that customer data could be obtained directly from sellers of fully-loaded Kodi boxes.

“When we’re working with the police against a company that’s selling IPTV boxes or illicit streaming devices on a large scale, they have records of who they’ve sold them to,” Sharp noted.

While the current legal efforts are focused on the supply side, including these sellers, the end users may also be targeted in the future.

“We have a number of cases coming before the courts in terms of those people who have been providing, selling and distributing illicit streaming devices. It’s something for the very near future, when we’ll consider whether we go any further than that, in terms of customers.”

The comments above make it clear that FACT wants users of these pirate devices to feel vulnerable and exposed. But threatening talk is much easier than action.

It will be very hard to get someone convicted, simply because they bought a device that can access both legal and illegal content. A receipt doesn’t prove intent, and even if it did, it’s pretty much impossible to prove that a person streamed specific pirated content.

But let’s say FACT was able to prove that someone bought a fully-loaded Kodi box and streamed content without permission. How would that result in a conviction? Contrary to claims in the mainstream press, watching a pirated stream isn’t an offence covered by the new Digital Economy Act.

In theory, there could be other ways, but given the complexity of the situation, one would think that FACT would be better off spending its efforts elsewhere.

If FACT was indeed interested in going after individuals then they could easily target people who use torrents. These people broadcast their IP-addresses to the public, which makes them easy to identify. In addition, you can see what they are uploading, and they would also be liable under the Digital Economy Act.

However, after FACT’s decades-long association with the MPAA ended, its main partner in the demonization of Kodi-enabled devices is now the Premier League, who are far more concerned about piracy of live broadcasts (streaming) than content made available after the fact via torrents.

So, given the challenges of having a meaningful criminal prosecution of an end-user as suggested, that leaves us with the probability of FACT sowing fear, uncertainty, and doubt. In other words, scaring the public to deter them from buying or using a fully-loaded Kodi box.

This would also fit in with FACT’s recent claims that some pirate devices are a fire hazard. While it’s kind of FACT to be concerned about the well-being of pirates, as an anti-piracy organization their warnings also serve as a deterrent.

This strategy could pay off to a degree but there’s also some risk involved. Every day new “Kodi” related articles appear in the UK tabloid press, many of them with comments from FACT. Some of these may scare prospective users, but the same headlines also make these boxes known to a much wider public.

In fact, in what is quite a serious backfire, some recent pieces published by the popular Trinity Mirror group (which include FACT comments) actually provide a nice list of pirate add-ons that are still operational following recent crackdowns.

So is FACT just sowing fear, or educating a whole new audience?


The Pirate Bay Isn’t Affected By Adverse Court Rulings – Everyone Else Is

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-isnt-affected-by-adverse-court-rulings-everyone-else-is-170618/

For more than a decade The Pirate Bay has been the world’s most controversial site. Delivering huge quantities of copyrighted content to the masses, the platform is revered and reviled across the copyright spectrum.

Its reputation is one of a defiant Internet swashbuckler, but due to changes in how the site has been run in more recent times, its current philosophy is more difficult to gauge. What has never been in doubt, however, is the site’s original intent to be as provocative as possible.

Through endless publicity stunts, some real, some just for the ‘lulz’, The Pirate Bay managed to attract a massive audience, all while incurring the wrath of every major copyright holder in the world.

Make no mistake, they all queued up to strike back, but every subsequent rightsholder action was met by a Pirate Bay middle finger, two fingers, or chin flick, depending on the mood of the day. This only served to further delight the masses, who happily spread the word while keeping their torrents flowing.

This vicious circle of being targeted by the entertainment industries, mocking them, and then reaping the traffic benefits, developed into the cheapest long-term marketing campaign the Internet had ever seen. But nothing is ever truly for free and there have been consequences.

After taunting Hollywood and the music industry with its refusals to capitulate, endless legal action that the site would have ordinarily been forced to participate in largely took place without The Pirate Bay being present. It doesn’t take a law degree to work out what happened in each and every one of those cases, whatever complex route they took through the legal system. No defense, no win.

For example, the web-blocking phenomenon across the UK, Europe, Asia and Australia was driven by the site’s absolute resilience and although there would clearly have been other scapegoats had The Pirate Bay disappeared, the site was the ideal bogeyman the copyright lobby required to move forward.

Filing blocking lawsuits while bringing hosts, advertisers, and ISPs on board for anti-piracy initiatives was also made easier with the ‘evil’ Pirate Bay still online. Immune from every anti-piracy technique under the sun, the existence of the platform in the face of all onslaughts only strengthened the cases of those arguing for even more drastic measures.

Over a decade, this has meant a significant tightening of the sharing and streaming climate. Without any big legislative changes but plenty of case law against The Pirate Bay, web-blocking is now a walk in the park, ad hoc domain seizures are a fairly regular occurrence, and few companies want to host sharing sites. Advertisers and brands are also hesitant over where they place their ads. It’s a very different world to the one of 10 years ago.

While it would be wrong to attribute every tightening of the noose to the actions of The Pirate Bay, there’s little doubt that the site and its chaotic image played a huge role in where copyright enforcement is today. The platform set out to provoke and succeeded in every way possible, gaining supporters in their millions. It could also be argued it kicked a hole in a hornets’ nest, releasing the hell inside.

But perhaps the site’s most amazing achievement is the way it has managed to stay online, despite all the turmoil.

This week yet another ruling, this time from the powerful European Court of Justice, found that by offering links in the manner it does, The Pirate Bay and other sites are liable for communicating copyright works to the public. Of course, this prompted the usual swathe of articles claiming that this could be the final nail in the site’s coffin.

Wrong.

In common with every ruling, legal defeat, and legislative restriction put in place due to the site’s activities, this week’s decision from the ECJ will have zero effect on the Pirate Bay’s availability. Rightly or wrongly, the site was breaking the law long before this ruling and will continue to do so until it decides otherwise.

What we have instead is a further tightened legal landscape that will have a lasting effect on everything BUT the site, including weaker torrent sites, Internet users, and user-uploaded content sites such as YouTube.

With The Pirate Bay carrying on regardless, that is nothing short of remarkable.


Nintendo & BREIN Target Seller of ‘Pirate’ Retro Gaming System

Post Syndicated from Andy original https://torrentfreak.com/nintendo-brein-target-seller-of-pirate-retro-gaming-system-170610/

As millions of often younger gamers immerse themselves in the latest 3D romp-fests from the world’s leading games developers, huge numbers of people are reliving their youth through the wonders of emulation.

The majority of old gaming systems can be emulated on a decent PC these days, opening up the possibility of reanimating thousands of the greatest games to ever grace the planet. While that’s a great prospect, the good news doesn’t stop there. The games are all free – if you don’t mind pirating them.

While many people go the do-it-yourself route by downloading emulators and ROMs (the games) from the Internet, increasingly people are saving time by buying systems ready-made online. Some of these are hugely impressive, housed in full-size arcade machine cabinets and packing many thousands of games. They also have sizeable price tags to match, running in some cases to thousands of dollars. But there are other options.

The rise of affordable compact computers has opened up emulation and retro gaming to a whole new audience, and inevitably some people have taken to selling these devices online with the games pre-bundled on SD cards. These systems can be obtained relatively cheaply but, despite the games being old, companies like Nintendo still take a dim view of their sale.

That’s also the case in the Netherlands, where Nintendo and other companies are taking action against people involved in the sale of what are effectively pirate gaming systems. In a recent case, Dutch anti-piracy outfit BREIN took action against the operator of the Retrospeler (Retro Player) site, an outlet selling a ready-made retro gaming system.

Retro Player site (translated from Dutch)

As seen from the image above, for a little under 110 euros the player can buy a games machine with classics like Super Mario, Street Fighter, and Final Fantasy pre-installed. Add a TV via an HDMI lead and a joypad or two, and yesteryear gaming becomes reality today. Unfortunately, the fun didn’t last long and it was soon “Game Over” for Retro Player.

Speaking with TorrentFreak, BREIN chief Tim Kuik says that the system sold by Retro Player was based on the popular Raspberry Pi single-board computer. Although small and relatively cheap, the Pi is easily capable of running retro games via software such as RetroPie, but it’s unclear which product was installed on the version sold by Retro Player.

What is clear is that the device came pre-installed with a lot of games. The now-defunct Retro Player site listed 6,500 titles for a wide range of classic gaming systems, including Game Boy, Super Nintendo, Nintendo 64, Mega Drive and PlayStation. Kuik didn’t provide precise numbers but said that the machine came packaged with “a couple of thousand” titles.

BREIN says in this particular case it was acting on behalf of Nintendo, among others. However, it doesn’t appear that the case will be going to court. Like many other cases handled by the anti-piracy group, BREIN says it has reached a settlement with the operator of the Retro Player site for an unspecified amount.

The debate and controversy surrounding retro gaming and emulation is one that has been running for years. The thriving community sees little wrong with reanimating games for long-dead systems and giving them new life among a new audience. On the other hand, copyright holders such as Nintendo view their titles as their property, to be exploited in a time, place and manner of their choosing.

While that friction will continue for a long time to come, there will be few if any legal problems for those choosing to pursue their emulation fantasies in the privacy of their own home. Retro gaming is here to stay and as long as computing power continues to increase, the experience is only likely to improve.


Building High-Throughput Genomics Batch Workflows on AWS: Workflow Layer (Part 4 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomics-batch-workflows-on-aws-workflow-layer-part-4-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the fourth in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackled the batch layer and built a scalable, elastic, and easily maintainable batch engine using AWS Batch. This solution took care of dynamically scaling your compute resources in response to the number of runnable jobs in your job queue, as well as managing job placement.

In Part 4, you build out the workflow layer of your solution using AWS Step Functions and AWS Lambda. You then run an end-to-end genomic analysis, known as exome secondary analysis, many times over at a cost of less than $1 per exome.

Step Functions makes it easy to coordinate the components of your applications using visual workflows. Building applications from individual components that each perform a single function lets you scale and change your workflow quickly. You can use the graphical console to arrange and visualize the components of your application as a series of steps, which simplifies building and running multi-step applications. You can change and add steps without writing code, so you can easily evolve your application and innovate faster.

An added benefit of using Step Functions to define your workflows is that the state machines you create are immutable. While you can delete a state machine, you cannot alter it after it is created. For regulated workloads where auditing is important, you can be assured that state machines you used in production cannot be altered.

In this blog post, you will create a Lambda state machine to orchestrate your batch workflow. For more information on how to create a basic state machine, please see this Step Functions tutorial.

All code related to this blog series can be found in the associated GitHub repository here.

Build a state machine building block

To skip the following steps, we have provided an AWS CloudFormation template that can deploy your Step Functions state machine. You can use this in combination with the setup you did in part 3 to quickly set up the environment in which to run your analysis.

The state machine is composed of smaller state machines that submit a job to AWS Batch, and then poll and check its execution.

The steps in this building block state machine are as follows:

  1. A job is submitted.
    Each analytical module/job has its own Lambda function for submission and calls the batchSubmitJob Lambda function that you built in the previous blog post. You will build these specialized Lambda functions in the following section.
  2. The state machine queries the AWS Batch API for the job status.
    This is also a Lambda function.
  3. The job status is checked to see if the job has completed.
    If the job status equals SUCCEEDED, proceed to log the final job status. If the job status equals FAILED, end the execution of the state machine. In all other cases, wait 30 seconds and go back to Step 2.

Here is the JSON representing this state machine.

{
  "Comment": "A simple example that submits a Job to AWS Batch",
  "StartAt": "SubmitJob",
  "States": {
    "SubmitJob": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>::function:batchSubmitJob",
      "Next": "GetJobStatus"
    },
    "GetJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "Next": "CheckJobStatus",
      "InputPath": "$",
      "ResultPath": "$.status"
    },
    "CheckJobStatus": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.status",
          "StringEquals": "FAILED",
          "End": true
        },
        {
          "Variable": "$.status",
          "StringEquals": "SUCCEEDED",
          "Next": "GetFinalJobStatus"
        }
      ],
      "Default": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "GetJobStatus"
    },
    "GetFinalJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "End": true
    }
  }
}

Building the Lambda functions for the state machine

You need two basic Lambda functions for this state machine. The first one submits a job to AWS Batch and the second checks the status of the AWS Batch job that was submitted.

In AWS Step Functions, you specify an input as JSON that is read into your state machine. Each state receives the aggregate of the steps immediately preceding it, and you can specify which components a state passes on to its children. Because you are using Lambda functions to execute tasks, one of the easiest routes to take is to modify the input JSON, represented as a Python dictionary, within the Lambda function and return the entire dictionary back for the next state to consume.
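To make the pattern concrete, here is a minimal, hypothetical handler written in that style; the exampleOutputPath key is purely illustrative, while resultsS3Path is one of the top-level fields used throughout this series.

def lambda_handler(event, context):
    # The event is the JSON document the state machine passes between states.
    # Add or overwrite only the keys this state is responsible for...
    event['exampleOutputPath'] = '/'.join([event['resultsS3Path'], 'example/'])

    # ...then return the whole dictionary so the next state sees everything.
    return event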

Building the batchSubmitIsaacJob Lambda function

For Step 1 above, you need a Lambda function for each of the steps in your analysis workflow. As you created a generic Lambda function in the previous post to submit a batch job (batchSubmitJob), you can use that function as the basis for the specialized functions you’ll include in this state machine. Here is such a Lambda function for the Isaac aligner.

from __future__ import print_function

import boto3
import json
import traceback

lambda_client = boto3.client('lambda')



def lambda_handler(event, context):
    try:
        # Generate output path
        bam_s3_path = '/'.join([event['resultsS3Path'], event['sampleId'], 'bam/'])

        depends_on = event['dependsOn'] if 'dependsOn' in event else []

        # Generate run command
        command = [
            '--bam_s3_folder_path', bam_s3_path,
            '--fastq1_s3_path', event['fastq1S3Path'],
            '--fastq2_s3_path', event['fastq2S3Path'],
            '--reference_s3_path', event['isaac']['referenceS3Path'],
            '--working_dir', event['workingDir']
        ]

        if 'cmdArgs' in event['isaac']:
            command.extend(['--cmd_args', event['isaac']['cmdArgs']])
        if 'memory' in event['isaac']:
            command.extend(['--memory', event['isaac']['memory']])

        # Submit Payload
        response = lambda_client.invoke(
            FunctionName='batchSubmitJob',
            InvocationType='RequestResponse',
            LogType='Tail',
            Payload=json.dumps(dict(
                dependsOn=depends_on,
                containerOverrides={
                    'command': command,
                },
                jobDefinition=event['isaac']['jobDefinition'],
                jobName='-'.join(['isaac', event['sampleId']]),
                jobQueue=event['isaac']['jobQueue']
            )))

        response_payload = response['Payload'].read()

        # Update event
        event['bamS3Path'] = bam_s3_path
        event['jobId'] = json.loads(response_payload)['jobId']
        
        return event
    except Exception as e:
        traceback.print_exc()
        raise e

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitIsaacJob and paste in the above code. Use the LambdaBatchExecutionRole that you created in the previous post. For more information, see Step 2.1: Create a Hello World Lambda Function.

This Lambda function reads in the inputs passed to the state machine it is part of, formats the data for the batchSubmitJob Lambda function, invokes that Lambda function, and then modifies the event dictionary to pass on to the subsequent states. You can repeat this process for each of the other tools, whose Lambda functions can be found in the tools/<tool-name>/lambda/lambda_function.py scripts in the GitHub repo.
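If you would rather not copy the Isaac function wholesale for every tool, one option is a small shared helper that builds the payload from the tool-specific block of the event. The following is a sketch of that idea under the same event layout as above, not code from the repository; the submit_tool_job name and its arguments are assumptions.

import json

import boto3

lambda_client = boto3.client('lambda')


def submit_tool_job(event, tool, command):
    # 'tool' is the key of the tool-specific block in the event (for example
    # 'strelka'), and 'command' is the container command list for that tool.
    payload = {
        'dependsOn': event.get('dependsOn', []),
        'containerOverrides': {'command': command},
        'jobDefinition': event[tool]['jobDefinition'],
        'jobName': '-'.join([tool, event['sampleId']]),
        'jobQueue': event[tool]['jobQueue']
    }
    response = lambda_client.invoke(
        FunctionName='batchSubmitJob',
        InvocationType='RequestResponse',
        Payload=json.dumps(payload)
    )
    return json.loads(response['Payload'].read())['jobId']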

Building the batchGetJobStatus Lambda function

For Step 2 above, the process queries the AWS Batch DescribeJobs API action with jobId to identify the state that the job is in. You can put this into a Lambda function to integrate it with Step Functions.

In the Lambda console, create a new Python 2.7 function with the LambdaBatchExecutionRole IAM role. Name your function batchGetJobStatus and paste in the following code. This is similar to the batch-get-job-python27 Lambda blueprint.

from __future__ import print_function

import boto3
import json

print('Loading function')

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get jobId from the event
    job_id = event['jobId']

    try:
        response = batch_client.describe_jobs(
            jobs=[job_id]
        )
        job_status = response['jobs'][0]['status']
        return job_status
    except Exception as e:
        print(e)
        message = 'Error getting Batch Job status'
        print(message)
        raise Exception(message)

Structuring state machine input

You have structured the state machine input so that general file references are included at the top-level of the JSON object, and any job-specific items are contained within a nested JSON object. At a high level, this is what the input structure looks like:

{
        "general_field_1": "value1",
        "general_field_2": "value2",
        "general_field_3": "value3",
        "job1": {},
        "job2": {},
        "job3": {}
}

Building the full state machine

By chaining these state machine components together, you can quickly build flexible workflows that can process genomes in multiple ways. The development of the larger state machine that defines the entire workflow uses four of the above building blocks. You use the Lambda functions that you built in the previous section. Rename each building block submission to match the tool name.

We have provided a CloudFormation template to deploy your state machine and the associated IAM roles. In the CloudFormation console, select Create Stack, choose your template (deploy_state_machine.yaml), and enter in the ARNs for the Lambda functions you created.

Continue through the rest of the steps and deploy your stack. Be sure to check the box next to "I acknowledge that AWS CloudFormation might create IAM resources."

Once the CloudFormation stack is finished deploying, you should see your state machine rendered in the Step Functions console.

In short, you first submit a job for Isaac, which is the aligner you are using for the analysis. Next, you use a Parallel state to split the output from "GetFinalIsaacJobStatus" and send it to both your variant calling step, Strelka, and your QC step, Samtools Stats. These are then run in parallel, and you annotate the results from your Strelka step with snpEff.

Putting it all together

Now that you have built all of the components for a genomics secondary analysis workflow, test the entire process.

We have provided sequences from an Illumina sequencer that cover a region of the genome known as the exome. Most of the positions in the genome that we have currently associated with disease or human traits reside in this region, which is 1–2% of the entire genome. The workflow that you have built works for analyzing both an exome and an entire genome.

Additionally, we have provided prebuilt reference genomes for Isaac, located at:

s3://aws-batch-genomics-resources/reference/

If you are interested, we have provided a script that sets up all of that data. To execute that script, run the following command on a large EC2 instance:

make reference REGISTRY=<your-ecr-registry>

Indexing and preparing this reference takes many hours on a large-memory EC2 instance. Be careful about the costs involved and note that the data is available through the prebuilt reference genomes.

Starting the execution

In a previous section, you established the structure of the JSON that is fed into your state machine. For convenience, we have pre-populated this input JSON for you. You can also find it in the GitHub repo under workflow/test.input.json:

{
  "fastq1S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
  "fastq2S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
  "referenceS3Path": "s3://aws-batch-genomics-resources/reference/hg38.fa",
  "resultsS3Path": "s3://<bucket>/genomic-workflow/results",
  "sampleId": "NA12878_states_1",
  "workingDir": "/scratch",
  "isaac": {
    "jobDefinition": "isaac-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "referenceS3Path": "s3://aws-batch-genomics-resources/reference/isaac/"
  },
  "samtoolsStats": {
    "jobDefinition": "samtools_stats-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv"
  },
  "strelka": {
    "jobDefinition": "strelka-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "cmdArgs": " --exome "
  },
  "snpEff": {
    "jobDefinition": "snpeff-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv",
    "cmdArgs": " -t hg38 "
  }
}

You are now at the stage to run your full genomic analysis. Copy the above to a new text file, change paths and ARNs to the ones that you created previously, and save your JSON input as input.states.json.

In the CLI, execute the following command. You need the ARN of the state machine that you created in the previous post:

aws stepfunctions start-execution --state-machine-arn <your-state-machine-arn> --input file://input.states.json
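If you prefer to start the execution from Python instead of the CLI, the equivalent boto3 call looks roughly like the following; the state machine ARN is a placeholder and input.states.json is the file you just saved.

import boto3

sfn_client = boto3.client('stepfunctions')

with open('input.states.json') as f:
    workflow_input = f.read()

response = sfn_client.start_execution(
    stateMachineArn='<your-state-machine-arn>',
    input=workflow_input
)

# The execution ARN is what you use to monitor this particular run.
print(response['executionArn'])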

Your analysis has now started. By using Spot Instances with AWS Batch, you can quickly scale out your workflows while concurrently optimizing for cost. While this is not guaranteed, most executions of the workflows presented here should cost under $1 for a full analysis.

Monitoring the execution

The output from the above CLI command gives you the ARN that describes the specific execution. Copy that and navigate to the Step Functions console. Select the state machine that you created previously and paste the ARN into the search bar.

The screen shows information about your specific execution. On the left, you see where your execution currently is in the workflow.

In the following screenshot, you can see that your workflow has successfully completed the alignment job and moved on to the subsequent steps, which are variant calling and generating quality information about your sample.

You can also navigate to the AWS Batch console and see the progress of all of your jobs reflected there as well.

Finally, after your workflow has completed successfully, check out the S3 path to which you wrote all of your files. If you run an aws s3 ls --recursive command on the S3 results path, specified in the input to your state machine execution, you should see something similar to the following:

2017-05-02 13:46:32 6475144340 genomic-workflow/results/NA12878_run1/bam/sorted.bam
2017-05-02 13:46:34    7552576 genomic-workflow/results/NA12878_run1/bam/sorted.bam.bai
2017-05-02 13:46:32         45 genomic-workflow/results/NA12878_run1/bam/sorted.bam.md5
2017-05-02 13:53:20      68769 genomic-workflow/results/NA12878_run1/stats/bam_stats.dat
2017-05-02 14:05:12        100 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.tsv
2017-05-02 14:05:12        359 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.xml
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz.tbi
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz.tbi
2017-05-02 14:05:12   30783484 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz
2017-05-02 14:05:12    1566596 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz.tbi
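If you would rather verify the results programmatically, a short boto3 listing of the same prefix might look like the following sketch; the bucket name and prefix are placeholders that should match your resultsS3Path.

from __future__ import print_function

import boto3

s3_client = boto3.client('s3')

# Page through every object under the results prefix and print its size and key.
paginator = s3_client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='<bucket>', Prefix='genomic-workflow/results/'):
    for obj in page.get('Contents', []):
        print('%12d %s' % (obj['Size'], obj['Key']))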

Modifications to the workflow

You have now built and run your genomics workflow. While diving deep into modifications to this architecture is beyond the scope of these posts, we wanted to leave you with several suggestions of how you might modify this workflow to satisfy additional business requirements.

  • Job tracking with Amazon DynamoDB
    In many cases, such as if you are offering Genomics-as-a-Service, you might want to track the state of your jobs with DynamoDB to get fine-grained records of how your jobs are running. This way, you can easily identify the cost of individual jobs and workflows that you run. A minimal sketch of this idea follows this list.
  • Resuming from failure
    Both AWS Batch and Step Functions natively support job retries and can cover many of the standard cases where a job might be interrupted. There may be cases, however, where your workflow might fail in a way that is unpredictable. In this case, you can use custom error handling with AWS Step Functions to build out a workflow that is even more resilient. Also, you can build in fail states into your state machine to fail at any point, such as if a batch job fails after a certain number of retries.
  • Invoking Step Functions from Amazon API Gateway
    You can use API Gateway to build an API that acts as a "front door" to Step Functions. You can create a POST method that contains the input JSON to feed into the state machine you built. For more information, see the Implementing Serverless Manual Approval Steps in AWS Step Functions and Amazon API Gateway blog post.
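As a starting point for the DynamoDB suggestion above, a minimal, hypothetical sketch of job tracking might look like this; the table name and attribute names are assumptions, not part of this series.

import datetime

import boto3

dynamodb = boto3.resource('dynamodb')
job_table = dynamodb.Table('genomics-job-tracking')  # hypothetical table name


def record_job_status(job_id, sample_id, status):
    # One item per AWS Batch job, keyed on jobId and updated as the state changes.
    job_table.put_item(Item={
        'jobId': job_id,
        'sampleId': sample_id,
        'status': status,
        'updatedAt': datetime.datetime.utcnow().isoformat()
    })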

Conclusion

While the approach we have demonstrated in this series has been focused on genomics, it is important to note that this can be generalized to nearly any high-throughput batch workload. We hope that you have found the information useful and that it can serve as a jump-start to building your own batch workloads on AWS with native AWS services.

For more information about how AWS can enable your genomics workloads, be sure to check out the AWS Genomics page.


Please leave any questions and comments below.

Building High-Throughput Genomic Batch Workflows on AWS: Batch Layer (Part 3 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomic-batch-workflows-on-aws-batch-layer-part-3-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the third in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackle the batch layer and build a scalable, elastic, and easily maintainable batch engine using AWS Batch.

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs that you submit. With AWS Batch, you do not need to install and manage your own batch computing software or server clusters, which allows you to focus on analyzing results, such as those of your genomic analysis.

Integrating applications into AWS Batch

If you are new to AWS Batch, we recommend reading Setting Up AWS Batch to ensure that you have the proper permissions and AWS environment.

After you have a working environment, you define several types of resources:

  • IAM roles that provide service permissions
  • A compute environment that launches and terminates compute resources for jobs
  • A custom Amazon Machine Image (AMI)
  • A job queue to submit the units of work and to schedule the appropriate resources within the compute environment to execute those jobs
  • Job definitions that define how to execute an application

After the resources are created, you’ll test the environment and create an AWS Lambda function to send generic jobs to the queue.

This genomics workflow covers the basic steps. For more information, see Getting Started with AWS Batch.

Creating the necessary IAM roles

AWS Batch simplifies batch processing by managing a number of underlying AWS services so that you can focus on your applications. As a result, you create IAM roles that give the service permissions to act on your behalf. In this section, deploy the AWS CloudFormation template included in the GitHub repository and extract the ARNs for later use.

To deploy the stack, run the following command from the top level of the repo:

aws cloudformation create-stack --template-body file://batch/setup/iam.template.yaml --stack-name iam --capabilities CAPABILITY_NAMED_IAM

You can capture the output from this stack in the Outputs tab of the CloudFormation console.

Creating the compute environment

In AWS Batch, you will set up a managed compute environment. Managed compute environments automatically launch and terminate compute resources on your behalf, based on the aggregate resources needed by your jobs, such as vCPU and memory, and simple boundaries that you define.

When defining your compute environment, specify the following:

  • Desired instance types in your environment
  • Min and max vCPUs in the environment
  • The Amazon Machine Image (AMI) to use
  • Percentage value for bids on the Spot Market
  • VPC subnets that can be used

AWS Batch then provisions an elastic and heterogeneous pool of Amazon EC2 instances based on the aggregate resource requirements of jobs sitting in the RUNNABLE state. If a mix of CPU and memory-intensive jobs are ready to run, AWS Batch provisions the appropriate ratio and size of CPU and memory-optimized instances within your environment. For this post, you will use the simplest configuration, in which instance types are set to "optimal" allowing AWS Batch to choose from the latest C, M, and R EC2 instance families.

While you could create this compute environment in the console, we provide the following CLI commands. Replace the subnet IDs and key name with your own private subnets and key, and the image-id with the image you will build in the next section.

ACCOUNTID=<your account id>
SERVICEROLE=<from output in CloudFormation template>
IAMFLEETROLE=<from output in CloudFormation template>
JOBROLEARN=<from output in CloudFormation template>
SUBNETS=<comma delimited list of subnets>
SECGROUPS=<your security groups>
SPOTPER=50 # percentage of on demand
IMAGEID=<ami-id corresponding to the one you created>
INSTANCEROLE=<from output in CloudFormation template>
REGISTRY=${ACCOUNTID}.dkr.ecr.us-east-1.amazonaws.com
KEYNAME=<your key name>
MAXCPU=1024 # max vCPUs in compute environment
ENV=myenv

# Creates the compute environment
aws batch create-compute-environment --compute-environment-name genomicsEnv-$ENV --type MANAGED --state ENABLED --service-role ${SERVICEROLE} --compute-resources type=SPOT,minvCpus=0,maxvCpus=$MAXCPU,desiredvCpus=0,instanceTypes=optimal,imageId=$IMAGEID,subnets=$SUBNETS,securityGroupIds=$SECGROUPS,ec2KeyPair=$KEYNAME,instanceRole=$INSTANCEROLE,bidPercentage=$SPOTPER,spotIamFleetRole=$IAMFLEETROLE

Creating the custom AMI for AWS Batch

While you can use default Amazon ECS-optimized AMIs with AWS Batch, you can also provide your own image in managed compute environments. We will use this feature to provision additional scratch EBS storage on each of the instances that AWS Batch launches and also to encrypt both the Docker and scratch EBS volumes.

AWS Batch has the same requirements for your AMI as Amazon ECS. To build the custom image, modify the default Amazon ECS-Optimized Amazon Linux AMI in the following ways:

  • Attach a 1 TB scratch volume to /dev/sdb
  • Encrypt the Docker and new scratch volumes
  • Mount the scratch volume to /docker_scratch by modifying /etc/fstab

The first two tasks can be addressed when you create the custom AMI in the console. Spin up a small t2.micro instance, and proceed through the standard EC2 instance launch.

After your instance has launched, record the IP address and then SSH into the instance. Copy and paste the following code:

sudo yum -y update
sudo parted /dev/xvdb mklabel gpt
sudo parted /dev/xvdb mkpart primary 0% 100%
sudo mkfs -t ext4 /dev/xvdb1
sudo mkdir /docker_scratch
echo -e '/dev/xvdb1\t/docker_scratch\text4\tdefaults\t0\t0' | sudo tee -a /etc/fstab
sudo mount -a

This auto-mounts your scratch volume to /docker_scratch, which is your scratch directory for batch processing. Next, create your new AMI and record the image ID.
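You can create the image from the console, or with a boto3 call along the following lines; the instance ID is a placeholder and the image name is only a suggestion.

import boto3

ec2_client = boto3.client('ec2')

# Create an AMI from the instance you just configured.
response = ec2_client.create_image(
    InstanceId='<your-instance-id>',
    Name='ecs-optimized-genomics-scratch',
    Description='ECS-optimized AMI with a 1 TB encrypted scratch volume'
)

# Record this image ID; it becomes the IMAGEID for your compute environment.
print(response['ImageId'])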

Creating the job queues

AWS Batch job queues are used to coordinate the submission of batch jobs. Your jobs are submitted to job queues, which can be mapped to one or more compute environments. Job queues have priority relative to each other. You can also specify the order in which they consume resources from your compute environments.

In this solution, use two job queues. The first is for high-priority jobs, such as alignment or variant calling. Set this with a high priority (1000) and map back to the previously created compute environment. Next, set a second job queue for low-priority jobs, such as quality statistics generation. To create these job queues, enter the following CLI commands:

aws batch create-job-queue --job-queue-name highPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1000 --state ENABLED
aws batch create-job-queue --job-queue-name lowPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1 --state ENABLED

Creating the job definitions

To run the Isaac aligner container image locally, supply the Amazon S3 locations for the FASTQ input sequences, the reference genome to align to, and the output BAM file. For more information, see tools/isaac/README.md.

The Docker container itself also requires a suitable mountable volume so that it can read and write temporary files without running out of space.

Note: In the following example, the FASTQ files as well as the reference files to run are in a publicly available bucket.

FASTQ1=s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz
FASTQ2=s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz
REF=s3://aws-batch-genomics-resources/reference/isaac/
BAM=s3://mybucket/genomic-workflow/test_results/bam/

mkdir ~/scratch

docker run --rm -ti -v ${HOME}/scratch:/scratch $REPO_URI --bam_s3_folder_path $BAM \
--fastq1_s3_path $FASTQ1 \
--fastq2_s3_path $FASTQ2 \
--reference_s3_path $REF \
--working_dir /scratch 

Containers running locally can typically use as much CPU and memory as the host has available. In AWS Batch, the CPU and memory requirements are hard limits and are allocated to the container image at runtime.

Isaac is a fairly resource-intensive algorithm, as it creates an uncompressed index of the reference genome in memory to match the query DNA sequences. The large memory space is shared across multiple CPU threads, and Isaac can scale almost linearly with the number of CPU threads given to it as a parameter.

To fit these characteristics, choose an optimal instance size to maximize the number of CPU threads based on a given large memory footprint, and deploy a Docker container that uses all of the instance resources. In this case, we chose a host instance with 80+ GB of memory and 32+ vCPUs. The following code is example JSON that you can pass to the AWS CLI to create a job definition for Isaac.

aws batch register-job-definition --job-definition-name isaac-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/isaac",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":80000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

You can copy and paste the following code for the other three job definitions:

aws batch register-job-definition --job-definition-name strelka-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/strelka",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":32000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name snpeff-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/snpeff",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name samtoolsStats-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/samtools_stats",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

The value for "image" comes from the previous post on creating a Docker image and publishing to ECR. The value for jobRoleArn you can find from the output of the CloudFormation template that you deployed earlier. In addition to providing the number of CPU cores and memory required by Isaac, you also give it a storage volume for scratch and staging. The volume comes from the previously defined custom AMI.

Testing the environment

After you have created the Isaac job definition, you can submit the job using the AWS Batch submitJob API action. While the base mappings for Docker run are taken care of in the job definition that you just built, the specific job parameters should be specified in the container overrides section of the API call. Here’s what this would look like in the CLI, using the same parameters as in the bash commands shown earlier:

aws batch submit-job --job-name testisaac --job-queue highPriority-${ENV} --job-definition isaac-${ENV}:1 --container-overrides '{
"command": [
            "--bam_s3_folder_path", "s3://mybucket/genomic-workflow/test_batch/bam/",
            "--fastq1_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
            "--fastq2_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
            "--reference_s3_path", "s3://aws-batch-genomics-resources/reference/isaac/",
            "--working_dir", "/scratch",
            "--cmd_args", " --exome "]
}'

When you execute a submitJob call, a jobId is returned. You can then track the progress of your job using the describeJobs API action:

aws batch describe-jobs --jobs <jobId returned from submitJob>
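The same check can also be scripted. A rough boto3 polling loop, using the jobId returned by submitJob, might look like this:

from __future__ import print_function

import time

import boto3

batch_client = boto3.client('batch')


def wait_for_job(job_id, poll_seconds=30):
    # Poll an AWS Batch job until it reaches a terminal state.
    while True:
        response = batch_client.describe_jobs(jobs=[job_id])
        status = response['jobs'][0]['status']
        print('Job %s is %s' % (job_id, status))
        if status in ('SUCCEEDED', 'FAILED'):
            return status
        time.sleep(poll_seconds)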

You can also track the progress of all of your jobs in the AWS Batch console dashboard.

To see exactly where a RUNNING job is in its execution, follow the link in the AWS Batch console to the appropriate location in CloudWatch Logs.

Completing the batch environment setup

To finish, create a Lambda function to submit a generic AWS Batch job.

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitJob. Copy and paste the following code. This is similar to the batch-submit-job-python27 Lambda blueprint. Use the LambdaBatchExecutionRole that you created earlier. For more information about creating functions, see Step 2.1: Create a Hello World Lambda Function.

from __future__ import print_function

import json
import boto3

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get parameters for the SubmitJob call
    # http://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html
    job_name = event['jobName']
    job_queue = event['jobQueue']
    job_definition = event['jobDefinition']
    
    # containerOverrides, dependsOn, and parameters are optional
    container_overrides = event['containerOverrides'] if event.get('containerOverrides') else {}
    parameters = event['parameters'] if event.get('parameters') else {}
    depends_on = event['dependsOn'] if event.get('dependsOn') else []
    
    try:
        response = batch_client.submit_job(
            dependsOn=depends_on,
            containerOverrides=container_overrides,
            jobDefinition=job_definition,
            jobName=job_name,
            jobQueue=job_queue,
            parameters=parameters
        )
        
        # Log response from AWS Batch
        print("Response: " + json.dumps(response, indent=2))
        
        # Return the jobId
        event['jobId'] = response['jobId']
        return event
    
    except Exception as e:
        print(e)
        message = 'Error submitting Batch job'
        print(message)
        raise Exception(message)
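Once the function is saved, you can exercise it with a test event before wiring it into Step Functions. A hypothetical invocation using the Isaac job queue and definition created earlier could look like this; the command shown is deliberately abbreviated and illustrative only.

import json

import boto3

lambda_client = boto3.client('lambda')

# Illustrative test event; a real submission would carry the full Isaac command.
test_event = {
    'jobName': 'batch-submit-test',
    'jobQueue': 'highPriority-myenv',
    'jobDefinition': 'isaac-myenv:1',
    'containerOverrides': {'command': ['--working_dir', '/scratch']}
}

response = lambda_client.invoke(
    FunctionName='batchSubmitJob',
    InvocationType='RequestResponse',
    Payload=json.dumps(test_event)
)

# The returned event now carries the AWS Batch jobId.
print(json.loads(response['Payload'].read())['jobId'])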

Conclusion

In part 3 of this series, you successfully set up your data processing, or batch, environment in AWS Batch. We also provided a Python script in the corresponding GitHub repo that takes care of all of the above CLI arguments for you, as well as building out the job definitions for all of the jobs in the workflow: Isaac, Strelka, SAMtools, and snpEff. You can check the script’s README for additional documentation.

In Part 4, you’ll cover the workflow layer using AWS Step Functions and AWS Lambda.

Please leave any questions and comments below.

Pornhub Piracy Stopped Me Producing Porn, Jenna Haze Says

Post Syndicated from Andy original https://torrentfreak.com/pornhub-piracy-stopped-me-producing-porn-jenna-haze-says-170531/

Last week, adult ‘tube’ site Pornhub celebrated its 10th anniversary, and what a decade it was.

Six months after its May 2007 launch, the site was getting a million visitors every day. Six months after that, traffic had exploded five-fold. Such was the site’s success, by November 2008 Pornhub entered the ranks of the top 100 most-visited sites on the Internet.

As a YouTube-like platform, Pornhub traditionally relied on users to upload content to the site. Uploaders have to declare that they have the rights to do so but it’s clear that amid large quantities of fully licensed material, content exists on Pornhub that is infringing copyright.

Like YouTube, however, the site says it takes its legal responsibilities seriously by removing content whenever a valid DMCA notice is received. Furthermore, it also has a Content Partner Program which allows content owners to monetize their material on the platform.

But despite these overtures, Pornhub has remained a divisive operation. While some partners happily generate revenue from the platform and use it to drive valuable traffic to their own sites, others view it as a parasite living off their hard work. Today those critics were joined by one of the biggest stars the adult industry has ever known.

After ten years as an adult performer, starring in more than 600 movies (including one that marked her as the first adult performer to appear on Blu-ray format), in 2012 Jenna Haze decided on a change of pace. No longer interested in performing, she headed to the other side of the camera as a producer and director.

“Directing is where my heart is now. It’s allowed me to explore a creative side that is different from what performing has offered me,” she said in a statement.

“I am very satisfied with what I was able to accomplish in 10 years of performing, and now I’m enjoying the challenges of being on the other side of the camera and running my studio.”

But while Haze enjoyed success with 15 movies, it wasn’t to last. The former performer eventually backed away from both directing and producing adult content. This morning she laid the blame for that on Pornhub and similar sites.

It all began with a tweet from Conan O’Brien, who belatedly wished Pornhub a happy 10th anniversary.

In response to O’Brien apparently coming to the party late, a Twitter user informed him how he’d been missing out on Jenna Haze. That drew a response from Haze herself, who accused Pornhub of pirating her content.

“Please don’t support sites like porn hub,” she wrote. “They are a tube site that pirates content that other adult companies produce. It’s like Napster!”

In a follow-up, Haze went on to accuse Pornhub of theft and blamed the site for her exit from the business.

“Well they steal my content from my company, as do many other tube sites. It’s why I don’t produce or direct anymore,” Haze wrote.

“Maybe not all of their content is stolen, but I have definitely seen my content up there, as well as other people’s content.”

Of course, just like record companies can do with YouTube, there’s always the option for Haze to file a DMCA notice with Pornhub to have offending content taken down. However, it’s a route she claims to have taken already, but without much success.

“They take the videos down and put [them] back up. I’m not saying they don’t do legitimate business as well,” she said.

While Pornhub has its critics, the site does indeed do masses of legitimate business. The platform is owned by Mindgeek, whose websites receive a combined 115 million visitors per day, fueled in part by content supplied by Brazzers and Digital Playground, which Mindgeek owns. That being said, Mindgeek’s position in the market has always been controversial.

Three years ago, it became evident that Mindgeek had become so powerful in the adult industry that performers (some of whom felt their content was being exploited by the company) indicated they were scared to criticize it.

Adult actress and outspoken piracy critic Tasha Reign, who also had her videos uploaded to Pornhub without her permission, revealed she was in a particularly tight spot.

“It’s like we’re stuck between a rock and a hard place in a way, because if I want to shoot content then I kinda have to shoot for [Mindgeek] because that’s the company that books me because they own…almost…everything,” Reign said.

In 2017, Mindgeek’s dominance is clearly less of a problem for Haze, who is now concentrating on other things. But for those who remain in the industry, Mindgeek is a force to be reckoned with, so criticism will probably remain somewhat muted.


Who Are the Shadow Brokers?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/who_are_the_sha.html

In 2013, a mysterious group of hackers that calls itself the Shadow Brokers stole a few disks full of NSA secrets. Since last summer, they’ve been dumping these secrets on the Internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while at the same time have put sophisticated cyberweapons in the hands of anyone who wants them. They have exposed major vulnerabilities in Cisco routers, Microsoft Windows, and Linux mail servers, forcing those companies and their customers to scramble. And they gave the authors of the WannaCry ransomware the exploit they needed to infect hundreds of thousands of computers worldwide this month.

After the WannaCry outbreak, the Shadow Brokers threatened to release more NSA secrets every month, giving cybercriminals and other governments worldwide even more exploits and hacking tools.

Who are these guys? And how did they steal this information? The short answer is: we don’t know. But we can make some educated guesses based on the material they’ve published.

The Shadow Brokers suddenly appeared last August, when they published a series of hacking tools and computer exploits — vulnerabilities in common software — from the NSA. The material was from autumn 2013, and seems to have been collected from an external NSA staging server, a machine that is owned, leased, or otherwise controlled by the US, but with no connection to the agency. NSA hackers find obscure corners of the Internet to hide the tools they need as they go about their work, and it seems the Shadow Brokers successfully hacked one of those caches.

In total, the group has published four sets of NSA material: a set of exploits and hacking tools against routers, the devices that direct data throughout computer networks; a similar collection against mail servers; another collection against Microsoft Windows; and a working directory of an NSA analyst breaking into the SWIFT banking network. Looking at the time stamps on the files and other material, they all come from around 2013. The Windows attack tools, published last month, might be a year or so older, based on which versions of Windows the tools support.

The releases are so different that they’re almost certainly from multiple sources at the NSA. The SWIFT files seem to come from an internal NSA computer, albeit one connected to the Internet. The Microsoft files seem different, too; they don’t have the same identifying information that the router and mail server files do. The Shadow Brokers have released all the material unredacted, without the care journalists took with the Snowden documents or even the care WikiLeaks has taken with the CIA secrets it’s publishing. They also posted anonymous messages in bad English but with American cultural references.

Given all of this, I don’t think the agent responsible is a whistleblower. While possible, it seems like a whistleblower wouldn’t sit on attack tools for three years before publishing. They would act more like Edward Snowden or Chelsea Manning, collecting for a time and then publishing immediately — and publishing documents that discuss what the US is doing to whom. That’s not what we’re seeing here; it’s simply a bunch of exploit code, which doesn’t have the political or ethical implications that a whistleblower would want to highlight. The SWIFT documents are records of an NSA operation, and the other posted files demonstrate that the NSA is hoarding vulnerabilities for attack rather than helping fix them and improve all of our security.

I also don’t think that it’s random hackers who stumbled on these tools and are just trying to harm the NSA or the US. Again, the three-year wait makes no sense. These documents and tools are cyber-Kryptonite; anyone who is secretly hoarding them is in danger from half the intelligence agencies in the world. Additionally, the publication schedule doesn’t make sense for the leakers to be cybercriminals. Criminals would use the hacking tools for themselves, incorporating the exploits into worms and viruses, and generally profiting from the theft.

That leaves a nation state. Whoever got this information years before and is leaking it now has to be both capable of hacking the NSA and willing to publish it all. Countries like Israel and France are capable, but would never publish, because they wouldn’t want to incur the wrath of the US. Countries like North Korea or Iran probably aren’t capable. (Additionally, North Korea is suspected of being behind WannaCry, which was written after the Shadow Brokers released that vulnerability to the public.) As I’ve written previously, the obvious list of countries who fit my two criteria is small: Russia, China, and — I’m out of ideas. And China is currently trying to make nice with the US.

It was generally believed last August, when the first documents were released and before it became politically controversial to say so, that the Russians were behind the leak, and that it was a warning message to President Barack Obama not to retaliate for the Democratic National Committee hacks. Edward Snowden guessed Russia, too. But the problem with the Russia theory is, why? These leaked tools are much more valuable if kept secret. Russia could use the knowledge to detect NSA hacking in its own country and to attack other countries. By publishing the tools, the Shadow Brokers are signaling that they don’t care if the US knows the tools were stolen.

Sure, there’s a chance the attackers knew that the US knew that the attackers knew — and round and round we go. But the “we don’t give a damn” nature of the releases points to an attacker who isn’t thinking strategically: a lone hacker or hacking group, which clashes with the nation-state theory.

This is all speculation on my part, based on discussion with others who don’t have access to the classified forensic and intelligence analysis. Inside the NSA, they have a lot more information. Many of the files published include operational notes and identifying information. NSA researchers know exactly which servers were compromised, and through that know what other information the attackers would have access to. As with the Snowden documents, though, they only know what the attackers could have taken and not what they did take. But they did alert Microsoft about the Windows vulnerability months before the Shadow Brokers released it. Did they have eavesdropping capability inside whichever group stole the files, as they claimed to when the Russians attacked the State Department? We have no idea.

So, how did the Shadow Brokers do it? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but it seems very unlikely that the organization would make that kind of rookie mistake. Did someone hack the NSA itself? Could there be a mole inside the NSA?

If it is a mole, my guess is that the person was arrested before the Shadow Brokers released anything. No country would burn a mole working for it by publishing what that person delivered while he or she was still in danger. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two possibilities. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible. There’s nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, but that’s just the sort of thing that would be left out. It’s not needed for a conviction.

If the source of the documents is Hal Martin, then we can speculate that a random hacker did in fact stumble on it — ­no need for nation-state cyberattack skills.

The other option is a mysterious second NSA leaker of cyberattack tools. Could this be the person who stole the NSA documents and passed them on to someone else? The only time I have ever heard about this was from a Washington Post story about Martin:

There was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee [a worker in the Office of Tailored Access Operations], one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.

Of course, “not thought to have” is not the same as not having done so.

It is interesting that there have been no public arrests of anyone in connection with these hacks. If the NSA knows where the files came from, it knows who had access to them — ­and it’s long since questioned everyone involved and should know if someone deliberately or accidentally lost control of them. I know that many people, both inside the government and out, think there is some sort of domestic involvement; things may be more complicated than I realize.

It’s also not over. Last week, the Shadow Brokers were back, with a rambling and taunting message announcing a “Data Dump of the Month” service. They’re offering to sell unreleased NSA attack tools — something they also tried last August — with the threat to publish them if no one pays. The group has made good on their previous boasts: In the coming months, we might see new exploits against web browsers, networking equipment, smartphones, and operating systems — Windows in particular. Even scarier, they’re threatening to release raw NSA intercepts: data from the SWIFT network and banks, and “compromised data from Russian, Chinese, Iranian, or North Korean nukes and missile programs.”

Whoever the Shadow Brokers are, however they stole these disks full of NSA secrets, and for whatever reason they’re releasing them, it’s going to be a long summer inside of Fort Meade — as it will be for the rest of us.

This essay previously appeared in The Atlantic, and is an update of this essay from Lawfare.

Huge Coalition Protests EU Mandatory Piracy Filter Proposals

Post Syndicated from Andy original https://torrentfreak.com/huge-coalition-protests-eu-mandatory-piracy-filter-proposals-170530/

Last September, EU Commission President Jean-Claude Juncker announced plans to modernize copyright law in Europe.

The proposals (pdf) are part of the Digital Single Market reforms, which have been under development for the past several years.

The proposals cover a broad range of copyright-related issues, but one stands out as being particularly controversial. Article 13 requires certain online service providers to become deeply involved in the detection and policing of allegedly infringing copyright works, uploaded to their platforms by users.

Although its effects will likely be broader, the proposal is targeted at the so-called “value gap” (1,2,3), i.e. the notion that platforms like YouTube are able to avoid paying expensive licensing fees (for music in particular) by exploiting the safe harbor protections of the DMCA and similar legislation.

To close this loophole using Article 13, services that provide access to “large amounts” of user-uploaded content would be required to cooperate with rightsholders to prevent infringing works being communicated to the public.

This means that platforms like YouTube would be forced to take measures to ensure that their deals with content providers to distribute official content are protected by aggressive anti-piracy mechanisms.

The legislation would see platforms forced to deploy content-recognition, filtering and blocking mechanisms, to ensure that only non-infringing content is uploaded in the first place, thus limiting the chances that unauthorized copyrighted content will be made available to end users.
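
At its crudest, such a filter is just a lookup of each upload against a list of known protected works. The sketch below is purely illustrative (the digest list is invented); real deployments rely on perceptual audio and video fingerprinting rather than exact file hashes, since a re-encode or a trim defeats hash matching.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests supplied by rightsholders.
BLOCKED_DIGESTS = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # example entry only
}

def sha256_of(path):
    """Hash an uploaded file in chunks so large uploads don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def upload_allowed(path):
    """Reject an upload whose digest matches a known protected work."""
    return sha256_of(path) not in BLOCKED_DIGESTS
```

Even this toy version illustrates the critics’ concern about fair use: a match says nothing about whether an upload is licensed, quoted, or a parody.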

Supporters argue that the resulting decrease in availability of infringing content will effectively close the “value gap” but critics see the measures as disproportionate, likely to result in censorship (no provision for fair use), and a restriction of fundamental freedoms. Indeed, there are already warnings that such a system would severely “restrict the way Europeans create, share, and communicate online.”

The proposals have predictably received widespread support from entertainment industry companies across the EU and the United States, but there are now clear signs that the battle lines are being drawn.

On one side are the major recording labels, movie studios, and other producers. On the other, companies and platforms that will suddenly become more liable for infringing content, accompanied by citizens and scholars who feel that freedoms will be restricted.

The latest sign of the scale of opposition to Article 13 manifests itself in an open letter to the European Parliament. Under the Copyright for Creativity (C4C) banner and signed by the EFF, Creative Commons, Wikimedia, Mozilla, EDRi, Open Rights Group plus sixty other organizations, the letter warns that the proposals will cause more problems than they solve.

“The European Commission’s proposal on copyright in the Digital Single Market failed to meet the expectations of European citizens and businesses. Instead of supporting Europeans in the digital economy, it is backward looking,” the groups say.

“We need European lawmakers to oppose the most damaging aspects of the proposal, but also to embrace a more ambitious agenda for positive reform.”

In addition to opposing Article 11 (the proposed Press Publishers’ Right), the groups ask the EU Parliament not to impose private censorship on EU citizens via Article 13.

“The provision on the so-called ‘value gap’ is designed to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they want to have any chance of staying in business,” the groups write.

“The Commission’s proposal misrepresents some European Court rulings and seeks to impose contradictory obligations on Member States. This is simply bad regulation.”

Calling for the wholesale removal of Article 13 from the copyright negotiations, the groups argue that the reforms should be handled in the appropriate contexts.

“We strenuously oppose such ill thought through experimentation with intermediary liability, which will hinder innovation and competition and will reduce the opportunities available to all European businesses and citizens,” they add.

C4C concludes by calling on lawmakers to oppose Article 13 while seeking avenues for positive reform.

The full letter can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Sean Parker’s ‘Screening Room’ Patents Anti-Piracy Technologies

Post Syndicated from Ernesto original https://torrentfreak.com/sean-parkers-screening-room-patents-anti-piracy-technologies-170526/

Sean Parker is no stranger when it comes to online piracy.

The American entrepreneur, who co-founded the file-sharing application Napster, brought copyright infringement to the masses at the turn of the century.

Fast forward two decades, during which he also served as Facebook’s first president, and Parker is back with another controversial idea.

With his latest project, known as the Screening Room, he wants to pipe the latest blockbusters into homes on the day they’re released. For $50 per movie, people should be able to watch new films on their own screens, instead of going to a movie theater.

The project has been praised by some and criticized by others. Several movie industry insiders are skeptical because they believe movies should be seen on the big screen. Others fear that Screening Room will provide quick, quality content for pirate sites.

Given the Napster connection, Parker and his colleagues are particularly aware of these piracy fears. This is likely one of the reasons why they plan to ship their system with advanced anti-piracy technology.

Over the past several weeks, Screening Room Media, Inc. has submitted no fewer than eight patent applications related to its plans, all with some sort of anti-piracy angle.

For example, a patent titled “Presenting Sonic Signals to Prevent Digital Content Misuse” describes a technology where acoustic signals are regularly sent to mobile devices, to confirm that the user is near the set-top box and is authorized to play the content.

Similarly, the “Monitoring Nearby Mobile Computing Devices to Prevent Digital Content Misuse” patent, describes a system that detects the number of mobile devices near the client-side device, to make sure that too many people aren’t tuning in.
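
The patent language is abstract, but the underlying check is easy to picture. Here is a minimal sketch, with a hypothetical scan function standing in for the set-top box’s real Bluetooth/Wi-Fi enumeration and a viewer limit picked purely for illustration:

```python
MAX_NEARBY_DEVICES = 6  # illustrative threshold, not a number taken from the patent

def playback_allowed(scan_nearby_devices):
    """Allow playback only while the count of distinct nearby devices stays under the limit."""
    nearby = set(scan_nearby_devices())  # e.g. Bluetooth MAC addresses seen in range
    return len(nearby) <= MAX_NEARBY_DEVICES

def fake_scan():
    # Stand-in for a real Bluetooth/Wi-Fi scan: two phones detected near the set-top box.
    return ["aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:02"]

print(playback_allowed(fake_scan))  # True
```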

Screening Room patents

The patents are rather technical and can be applied to a wide variety of systems. It’s clear, however, that the setup Screening Room has in mind will have advanced anti-piracy capabilities.

The general technology outlined in the patents also includes forensic watermarking and a “P2P polluter.” The watermarking technology can be used to detect when pirated content spreads outside of the protected network onto the public Internet.
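
The point of a forensic watermark is that every member’s copy carries an identifier that survives into whatever file later shows up in the wild, so a leak can be traced back to the account it was delivered to. Production marks are typically embedded imperceptibly and robustly in the audio or video signal itself; the least-significant-bit toy below only demonstrates the embed-and-recover idea:

```python
def embed_member_id(payload, member_id):
    """Hide a 32-bit member ID in the low bit of the first 32 bytes of a payload."""
    bits = [(member_id >> i) & 1 for i in range(32)]
    data = bytearray(payload)
    for i, bit in enumerate(bits):
        data[i] = (data[i] & 0xFE) | bit
    return bytes(data)

def extract_member_id(payload):
    """Recover the 32-bit ID from a (possibly leaked) copy."""
    return sum((payload[i] & 1) << i for i in range(32))

marked = embed_member_id(b"\x00" * 64, member_id=123456)
assert extract_member_id(marked) == 123456
```

A real mark also has to survive re-encoding, scaling, and even camcording, which this toy obviously does not; it is only the shape of the idea.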

“At this point, the member’s movie accessing system will be shut off and quarantined. If the abuse or illicit activity is confirmed, the member and the household will be banned from the content distribution network,” the patent reads.

P2P polluter, and more

The P2P polluter will then begin to flood file-sharing networks with corrupted content if a movie leaks to the public.

“Therefore, immediately ‘diluting’ the infringement to a rate that would be extraordinarily frustrating, if not impossible, for further piracy of that copy to take place.”

As if that wasn’t enough, Screening Room’s system also comes with a wide range of other anti-piracy scans built in. Among other things, it regularly scans the Wi-Fi network to see which devices are connected, and Bluetooth is used to check what other devices are near.

All in all, it’s clear that Parker and co. are trying to do whatever they can to prevent content from leaking online.

Whether that’s good enough to convince the movie studios to offer their content alongside a simultaneous theatrical release has yet to be seen. But, with prominent shareholders such as J.J. Abrams, Martin Scorsese, and Steven Spielberg, there is plenty of support on board already.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Copyright Troll Attorney John Steele Disbarred by Illinois Supreme Court

Post Syndicated from Ernesto original https://torrentfreak.com/copyright-troll-attorney-john-steele-disbarred-by-illinois-supreme-court-170522/

Over the years, copyright trolls have been accused of involvement in various dubious schemes and actions, but there’s one group that has gone above and beyond.

Prenda Law grabbed dozens of headlines, mostly surrounding negative court rulings over identity theft, misrepresentation and even deception.

Most controversial was the shocking revelation that Prenda uploaded their own torrents to The Pirate Bay, creating a honeypot for the people they later sued over pirated downloads.

The allegations also raised the interest of the US Department of Justice, which indicted Prenda principals John Steele and Paul Hansmeier late last year. The two stand accused of running a multi-million dollar fraud and extortion operation.

A few weeks ago Steele pleaded guilty, admitting among other things that they did indeed use The Pirate Bay to operate a honeypot for online pirates.

Following the guilty plea the Illinois Supreme Court, which started looking into the case long before the indictment, has now decided to disbar the attorney. This means that Steele no longer has the right to practice law.

The decision doesn’t really come as a surprise. Steele has admitted to two of the 18 counts listed in the indictment, including some of the allegations that were also listed by the Supreme Court.

In its conclusion, the Court lists a variety of misconduct including “conduct involving dishonesty, fraud, deceit, or misrepresentation, by conduct including filing lawsuits without supporting facts, under the names of entities like Ingenuity 13 and AF Holdings, which were created by Movant for purposes of exacting settlements.”

Also, Steele’s trolling operation was “using means that had no substantial purpose other than to embarrass or burden a third person, or using methods of obtaining evidence that violates the legal rights of such a person…,” the Supreme Court writes.

Steele was disbarred “on consent,” according to Cook County Record, which means that he agreed to have his Illinois law practice license revoked.

The disbarment is not unexpected considering Steele’s guilty plea. However, victims of the Prenda trolling scheme may still welcome it as a form of justice. Meanwhile, Steele has bigger problems to worry about.

The former Prenda attorney is still awaiting his sentencing in the criminal case. In theory, he faces a statutory maximum sentence of 40 years in prison as well as a criminal fine of hundreds of thousands of dollars. However, by signing a plea agreement, he likely gets a reduced sentence.

The Illinois Supreme Court conclusions are available here (pdf), courtesy of Fight Copyright Trolls.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Kim Dotcom Says Family Trust Could Sue Mega Investor

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-says-family-trust-could-sue-mega-investor-170511/

One year after the raid on Megaupload and his sprawling mansion, Kim Dotcom fought back in grand fashion by launching new file-hosting site Mega.

It was a roaring success, signing up hundreds of thousands of users in the first few hours alone. Mega, it seemed, might soon be snapping at the heels of Megaupload’s unprecedented traction.

While Mega continued to grow, in July 2015 Dotcom indicated that his previously warm connections with the site may have soured.

“I’m not involved in Mega anymore. Neither in a managing nor in a shareholder capacity,” he said.

Dotcom went on to claim that a then-unnamed Chinese investor (wanted in China for fraud) had used straw-men and businesses to accumulate more and more Mega shares, shares that were later seized as part of an investigation by the New Zealand government.

Mega bosses angrily denied that there had been any hostile takeover, noting that “those shareholders” who had decided not to subscribe to recent issues had “…been diluted accordingly. That has been their choice.”

But a year later and the war of words between Dotcom and Mega was still simmering, with the Chinese investor now being openly named as Bill Liu.

A notorious high-roller who allegedly gambled $293m at New Zealand’s SkyCity casino, Liu was soon being described by Dotcom as China’s “fifth most-wanted criminal” due to a huge investigation into the businessman’s dealings taking place back home.

Mega saw things a little differently, however.

“Mr Liu has a shareholding interest but has no management or board position so he certainly doesn’t control Mega,” the company insisted at the time.

Dotcom disagreed strongly with that assertion and this week, more than a year later, the topic has raised its head yet again.

“In a nutshell, Bill Liu has taken control of Mega by using straw men to buy shares for him, ultimately giving him the majority on the board,” Dotcom informs TF.

In common with the raid on Megaupload, the Mega/Liu backstory is like something out of a Hollywood movie.

This week the NZ Herald published an amazing report detailing Liu’s life since he first entered New Zealand in 2001. A section explains how he first got involved with Mega.

Tony Lentino, who was the founder of domain name registrar Instra, was also Mega’s first CEO. It’s reported that he later fell out with Dotcom and wanted to sell his shares in the company.

Bill Liu wanted to invest so Lentino went to meet him at his penthouse apartment on the 35th floor of the Metropolis tower in central Auckland.

Lentino later told police that Liu opened a bottle of Penfolds Grange wine during the meeting – no joke at $800 per bottle. That developed into a discussion about Liu buying Lentino’s stake in Mega and a somewhat interesting trip back home for Lentino.

“You want one of my cars to take home?” Liu allegedly asked Lentino.

The basement contained a Porsche, a Bentley and a Rolls-Royce – and Lentino was invited to take his pick. He took the NZ$400,000 Rolls as part of the NZ$4.2 million share in Mega he transferred to Liu.

Well, not quite to Liu, directly at least.

“When it came time to sign the deal, the shares were to be split into two parcels: one in the name of Zhao Wu Shen, a close friend of [Liu], and a trust company,” NZ Herald reports.

“It was the third transaction where Yan [a name also used by Liu] had been quietly buying into Mega – nothing was in his name, but he now controlled 18.8 per cent.”

It is not clear how much Liu currently owns but Lentino later told police (who believed that Liu was hiding his assets) that the Chinese businessman was the “invisible CEO” of Mega.

Speaking with TF this week, Dotcom says that Liu achieved his status by holding Mega back.

“Liu used his power to prevent Mega from monetizing its traffic via advertising sales or premium account sales and by doing so he created an artificial situation in which Mega had to raise more money to survive,” Dotcom says.

“He then pumped double-digit millions of dollars into the business via his straw men in order to dilute all other shareholders to almost zero.”
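
The mechanism Dotcom describes is ordinary share dilution: if one party takes up a large new share issue and nobody else subscribes, everyone else’s percentage collapses even though their share count never changes. A worked example with made-up numbers (not Mega’s actual cap table):

```python
def ownership_after_issue(holder_shares, total_before, new_shares_issued):
    """Percentage held after a share issue the holder does not take part in."""
    return 100 * holder_shares / (total_before + new_shares_issued)

# Hypothetical: a trust holds 600,000 of 1,000,000 shares (60%).
# One investor then takes up 9,000,000 newly issued shares; nobody else subscribes.
print(ownership_after_issue(600000, 1000000, 9000000))  # 6.0 -> the 60% stake becomes 6%
```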

Dotcom says that Mega could’ve been “instantly profitable,” but instead Liu intentionally forced the company into a loss-making situation, safe in the knowledge he could “turn on profitability at the push of a button.”

Dotcom says Liu chose not to do that until he directly or indirectly owned “almost all” of the shares in Mega. That, he says, came at the expense of his family, who had invested in Mega.

“The family trust that was setup for the benefit of my children owned the majority of Mega until Bill Liu entered the stage with his unlawful actions to take control of the company,” Dotcom says.

“He ran it at a loss when it could have been profitable, and then diluted other shareholders.”

According to Dotcom, the people behind his family trust are now considering their options, including legal action against Liu and others.

“The trustees of the family trust are now considering legal action against all parties involved in this dilution scam in light of the new information that has become public today from other court proceedings against Bill Liu,” Dotcom concludes.

It’s difficult to find a more colorful character than Dotcom, but Bill Liu certainly gives Dotcom a run for his money. His story can be found here; it’s almost unbelievable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

10 Years in Jail For Internet Pirates Now Reality in the UK

Post Syndicated from Andy original https://torrentfreak.com/10-years-in-jail-for-internet-pirates-now-reality-in-the-uk-170501/

In 2015, the UK Government announced a controversial plan to increase the maximum prison sentence for online copyright infringement from two to ten years.

The proposal followed a suggestion put forward in a study commissioned by the UK Intellectual Property Office (IPO). The study concluded that criminal sanctions for online copyright infringement available under the Copyright, Designs and Patents Act 1988 (CDPA 1988) should be harmonized with ‘offline’ penalties, such as those available for counterfeiting.

“By toughening penalties for commercial-scale online offending we are offering greater protections to businesses and sending a clear message to deter criminals,” then Intellectual Property Minister Baroness Neville-Rolfe said at the time.

In July 2016, the government published a new draft of its Digital Economy Bill which duly proposed an extension of the current prison term of two years to a maximum of ten.

Throughout the entire process of passing the legislation, the government has insisted that ‘regular’ members of the public would not be subjected to harsh punishments. However, that is not how the legislation reads.

As detailed in our earlier article, anyone who makes infringing content available to the public, even if that merely puts a copyright holder at risk of loss, is now committing a criminal offense.

There are a number of variables, but this is the relevant part distilled down for the average file-sharer who downloads as well as uploads, using BitTorrent, for example.

A person…who infringes copyright in a work by communicating the work to the public commits an offense if [the person] knows or has reason to believe that [they are] infringing copyright in the work, and…knows or has reason to believe that communicating the work to the public will cause loss to the owner of the copyright, or will expose the owner of the copyright to a risk of loss.

Earlier this year, the Open Rights Group launched a campaign to try and make the government see sense. ORG did not dispute that there need to be penalties for online infringement but asked the government to make amendments that target large-scale infringers while protecting the public.

“Our proposal is to set a threshold of ‘commercial scale loss’, and revising ‘risk of loss’ to ‘serious risk of commercial scale loss’. These are flexible rather than ‘specific’,” ORG said.

But the group’s appeals fell on deaf ears. No one in the law-making process was prepared to make this minor change to the Digital Economy Bill, even though legislation already exists for punishing even the smallest of copyright infringements through the civil courts.

As a result, the bill received royal assent last week which means that the country’s millions of small-time copyright infringers are now criminals in the eyes of the law.

Worse still, depending on the whims of copyright holders, anyone could now be reported to the police for sharing even a single movie, an offense (as painted in our hypothetical piece in March) that could result in years in jail.

The government says that won’t be allowed. We’ll see.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Community Profile: Jillian Ogle

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/community-profile-jillian-ogle/

This column is from The MagPi issue 53. You can download a PDF of the full issue for free, or subscribe to receive the print edition in your mailbox or the digital edition on your tablet. All proceeds from the print and digital editions help the Raspberry Pi Foundation achieve its charitable goals.

Let’s Robot streams twice a week, Tuesdays and Thursdays, and allows the general public to control a team of robots within an interactive set, often consisting of mazes, clues, challenges, and even the occasional foe. Users work together via the Twitch.tv platform, sending instructions to the robots in order to navigate their terrain and complete the set objectives.
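
Let’s Robot hasn’t published its control stack, so the snippet below is only a sketch of the generic “Twitch Plays” pattern the show describes: gather the commands typed into chat over a short voting window, then let the robot execute whichever valid command won. The command names are assumptions:

```python
from collections import Counter

VALID_COMMANDS = {"forward", "back", "left", "right", "grip"}  # assumed command set

def winning_command(chat_lines):
    """Return the most-voted valid command from one voting window, or None."""
    votes = Counter(
        line.strip().lower() for line in chat_lines
        if line.strip().lower() in VALID_COMMANDS
    )
    return votes.most_common(1)[0][0] if votes else None

# One simulated two-second window of Twitch chat:
window = ["forward", "FORWARD", "left", "lol", "forward"]
print(winning_command(window))  # forward
```

On the robot itself, the winning command would then be turned into motor movements via the Pi’s GPIO pins, a part that differs from robot to robot.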


Let’s Robot aims to change the way we interact with television, putting the viewer in the driving seat.

Aylobot, the first robot of the project, boasts a LEGO body, while Ninabot, the somewhat 2.0 upgrade of the two, has a gripper, allowing more interaction from users. Both robots have their own cameras that stream to Twitch, so that those in control can see what they’re up to on a more personal level; several new additions have joined the robot team since then, each with their own unique skill.


Twice a week, the robots are controlled by the viewers, allowing them the chance to complete tasks such as force-feeding the intern, attempting to write party invitations, and battling in boss fights.

Jillian Ogle

Let’s Robot is the brainchild of Jillian Ogle, who originally set out to make “the world’s first interactive live show using telepresence robots collaboratively controlled by the audience”. However, Jill discovered quite quickly that the robots needed to complete the project simply didn’t exist to the standard required… and so Let’s Robot was born.

After researching various components for the task, Jill decided upon the Raspberry Pi, and it’s this small SBC that now exists within the bodies of Aylobot, Ninabot, and the rest of the Let’s Robot family.


“Post-Its I drew for our #LetsRobot subscribers. We put these in the physical sets made for the robots. I still have a lot more to draw…”

In her previous life, Jill worked in art and game design, including a role as art director for Playdom, a subsidiary of Disney Interactive; she moved on to found Aylo Games in 2013 and Let’s Robot in 2015. The hardware side of the builds has been something of a recently discovered skill, with Jill admitting, “Anything I know about hardware I’ve picked up in the last two years while developing this project.”

This was my first ever drone flight, live on #twitch. I think it went well. #letsrobot #robot #robotics #robots #drone #drones #twitchtv #twitchcreative #twitchplays #fail #livestream #raspberrypi #arduino #hardware #mechatronics #mechanicalengineering #makersgonnamake #nailedit #make #electronics


Social media funtimes

More recently, as Let’s Robot continues to grow, Jill can be found sharing the antics of the robots across social media, documenting their quests – such as the hilarious attempt to create party invites and the more recent Hillarybot vs Trumpbot balloon head battle, where robots with extendable pin-mounted arms fight to pop each other’s head.

Last night was the robot presidential debate, and here is an early version of candidate #Trump bot. #letsrobot #robotics #robot #raspberrypi #twitch #twitchtv #twitchplays #3dprinting #mechatronics #arduino #iot #robots #crafting #make #battlebots #hardware #twitchcreative #presidentialdebate2016 #donaldtrump #electronics #omgrobots #adafruit #silly


Gotta catch ’em all

Alongside the robots, Jill has created several other projects that both add to the interactive experience of Let’s Robot and comment on other elements of social trends out in the world. Most notably, there is the Pokémon Go Robot, originally a robot arm that would simulate the throw of an on-screen Poké Ball. It later grew wheels and took to the outside world, hunting down its pocket monster prey.


Originally sitting on a desk, the Pokémon Go Robot earned itself a new upgrade, gaining the body of a rover to allow it to handle the terrain of the outside world. Paired with the Livestream Goggles, viewers can join in the fun.

It’s also worth noting other builds, such as the WiFi Livestream Goggles that Jill can be seen sporting across several social media posts. The goggles, with a Pi camera fitted between the wearer’s eyes, allow viewers to witness Jill’s work from her perspective. It’s a great build, especially given how open the Let’s Robot team are about their continued work and progression.


The WiFi-enabled helmet allows viewers the ability to see what Jill sees, offering a new perspective alongside the Let’s Robot bots. The Raspberry Pi camera fits perfectly between the eyes, bringing a true eye level to the viewer. She also created internet-controlled LED eyebrows… see the video!
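
The goggles amount to a Pi camera shipping live video off the headset. Jill’s actual pipeline (which ends up on Twitch) isn’t documented here, but the standard picamera recipe for pushing H.264 over a plain TCP socket gives the general shape:

```python
import socket
import picamera  # Raspberry Pi camera library

with picamera.PiCamera(resolution=(1280, 720), framerate=30) as camera:
    server = socket.socket()
    server.bind(("0.0.0.0", 8000))
    server.listen(1)
    conn, _ = server.accept()            # wait for a viewer to connect
    stream = conn.makefile("wb")
    try:
        camera.start_recording(stream, format="h264")
        camera.wait_recording(600)       # stream for ten minutes
    finally:
        camera.stop_recording()
        stream.close()
        conn.close()
```

On the other end, the raw stream can be piped into any player that understands bare H.264; the picamera documentation demonstrates this with netcat and mplayer.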

And finally, one project we are eager to see completed is the ‘in production’ Pi-powered transparent HUD. By incorporating refractive acrylic, Jill aims to create a see-through display that allows her to read user comments via the Twitch live-stream chat, without having to turn her eyes to a separate monitor.
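
Getting chat text onto such a display is the easy half of the problem: Twitch chat is exposed over plain IRC, and read-only access has traditionally been possible with an anonymous “justinfan” nickname, so no OAuth token is needed just to read. A minimal reader (the channel name is a placeholder):

```python
import socket

HOST, PORT = "irc.chat.twitch.tv", 6667
CHANNEL = "#example_channel"  # placeholder channel name

sock = socket.create_connection((HOST, PORT))
sock.sendall(b"NICK justinfan12345\r\n")          # anonymous, read-only login
sock.sendall(("JOIN %s\r\n" % CHANNEL).encode())

buffer = ""
while True:
    data = sock.recv(4096)
    if not data:
        break                                      # server closed the connection
    buffer += data.decode("utf-8", errors="ignore")
    *lines, buffer = buffer.split("\r\n")
    for line in lines:
        if line.startswith("PING"):
            sock.sendall(b"PONG :tmi.twitch.tv\r\n")   # keep-alive required by the server
        elif "PRIVMSG" in line:
            user = line.split("!", 1)[0].lstrip(":")
            message = line.split(":", 2)[-1]
            print("%s: %s" % (user, message))          # the text a HUD would render
```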

Since the publication of this article in The MagPi magazine, Jill and the Let’s Robot team have continued to grow their project. There are some interesting and exciting developments ahead – we’ll cover their progress in a future blog.

The post Community Profile: Jillian Ogle appeared first on Raspberry Pi.

[$] The great leap backward

Post Syndicated from corbet original https://lwn.net/Articles/720924/rss

Sayre’s law states: “In any dispute the intensity of feeling is inversely proportional to the value of the issues at stake”. In that context, it is perhaps easy to understand why the discussion around the version number for the next major openSUSE Leap release has gone on for hundreds of sometimes vitriolic messages. While this change is controversial, the openSUSE board hopes that it will lead to more rational versioning in the long term — but the world has a way of interfering with such plans.

The RIAA is Now Copyright Troll Rightscorp’s Biggest Customer

Post Syndicated from Andy original https://torrentfreak.com/the-riaa-is-now-copyright-troll-rightscorps-biggest-customer-170424/

Nurturing what appears to be a failing business model, anti-piracy outfit Rightscorp has been on life-support for a number of years, never turning a profit while losing millions of dollars.

As a result, every annual report filed by the company is expected to reveal yet more miserable numbers. This year’s, filed two weeks late a few days ago, doesn’t break the trend. It is, however, a particularly interesting read.

For those out of the loop, Rightscorp generates revenue from monitoring BitTorrent networks, logging infringements, and sending warning notices to ISPs. It hopes those ISPs will forward notices to customers who are asked to pay $20 or $30 per offense. Once paid, Rightscorp splits this revenue with its copyright holder customers.

The company’s headline sales figures for 2016 are somewhat similar to those of the previous year. In 2015 the company generated $832,215 in revenue but in 2016 that had dropped to $778,215. While yet another reduction in revenue won’t be welcome, the company excelled in trimming its costs.

In 2015, Rightscorp’s total operating costs were almost $5.47m, something which led the company to file an eye-watering $4.63 million operational loss.

In 2016, the company somehow managed to reduce its costs to ‘just’ $2.73m, a vast improvement over the previous year. But, despite the effort, Rightscorp still couldn’t make money in 2016. In its latest accounts, the company reveals an operational loss of $1.95m and little salvation on the bottom line.

“During the year ended December 31, 2016, the Company incurred a net loss of $1,355,747 and used cash in operations of $807,530, and at December 31, 2016, the Company had a stockholders’ deficit of $2,092,060,” the company reveals.

While a nose-diving Rightscorp has been a familiar story in recent years, there are some nuggets of information in 2016’s report that make it stand out.

According to Rightscorp, in 2014 BMG Rights Management accounted for 76% of the company’s sales, with Warner Bros. Entertainment making up a token 13%. In 2015 it was a similar story, but during 2016, big developments took place with a brand new and extremely important customer.

“For the year ended December 31, 2016, our contract with Recording Industry Association of America accounted for approximately 44% of our sales, and our contract with BMG Rights Management accounted for 23% of our sales,” the company’s report reveals.

The fact that the RIAA is now Rightscorp’s biggest customer to the tune of $342,000 in business during 2016 is a pretty big reveal, not only for the future of the anti-piracy company but also for the interests of millions of BitTorrent users around the United States.

While it’s certainly possible that the RIAA plans to start sending settlement demands to torrent users (Warner has already done so), there are very clear signs that the RIAA sees value in Rightscorp elsewhere. As shown in the table below, between 2015 and 2016 there has been a notable shift in how Rightscorp reports its revenue.

In 2015, all of Rightscorp’s revenue came from copyright settlements. In 2016, roughly 50% of its revenue (a little over the amount accounted for by the RIAA’s business) is listed as ‘consulting revenue’. It seems more than likely that the lion’s share of this revenue came from the RIAA, but why?

On Friday the RIAA filed a big lawsuit against Texas-based ISP Grande Communications. Detailed here, the multi-million suit accuses the ISP of failing to disconnect subscribers accused of infringement multiple times.

The data being used to prosecute that case was obtained by the RIAA from Rightscorp, who in turn collected that data from BitTorrent networks. The company obtained a patent under its previous Digital Rights Corp. guise which specifically covers repeat infringer identification. It has been used successfully in the ongoing case against another ISP, Cox Communications.
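
The patented details aside, the core of repeat-infringer identification is unglamorous log aggregation: group the notices by subscriber (or, before the subscriber is known, by IP address and ISP) and flag whoever crosses a threshold. A toy sketch with invented notice data and an assumed cutoff:

```python
from collections import defaultdict
from datetime import date

# (ip_address, isp, work, date_observed) tuples as a stand-in for notice data.
notices = [
    ("203.0.113.7", "ExampleISP", "Song A", date(2016, 3, 1)),
    ("203.0.113.7", "ExampleISP", "Song B", date(2016, 4, 9)),
    ("203.0.113.7", "ExampleISP", "Song C", date(2016, 6, 2)),
    ("198.51.100.4", "ExampleISP", "Song A", date(2016, 5, 5)),
]

REPEAT_THRESHOLD = 3  # assumed cutoff for illustration; the real number isn't public

def repeat_infringers(notices, threshold=REPEAT_THRESHOLD):
    """Return the (ip, isp) pairs with at least `threshold` distinct noticed works."""
    works_seen = defaultdict(set)
    for ip, isp, work, _ in notices:
        works_seen[(ip, isp)].add(work)
    return [key for key, works in works_seen.items() if len(works) >= threshold]

print(repeat_infringers(notices))  # [('203.0.113.7', 'ExampleISP')]
```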

In short, the RIAA seems to be planning to do to Grande Communications what BMG and Rightscorp have already done to Cox. They will be seeking to show that Grande knew that its subscribers were multiple infringers yet failed to disconnect them from the Internet. This inaction, they will argue, means that Grande loses its protection from liability under the safe harbor provisions of the DMCA.

Winning the case against Grande Communications is extremely important for the RIAA and for reasons best understood by the parties involved, it clearly places value on the data held by Rightscorp. Whether the RIAA will pay another few hundred thousand dollars to the anti-piracy outfit in 2017 remains to be seen, but Rightscorp will be hoping so as it’s desperate for the cash.

The company’s year-end filing raises “substantial doubt about the Company’s ability to continue as a going concern” while noting that its management believes that the company will need at least another $500,000 to $1,000,000 to fund operations in 2017.

This new relationship between the RIAA and Rightscorp is an interesting one and one that’s likely to prove controversial. Grande Communications is being sued today, but the big question is which other ISPs will follow in the months and years to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

ISP Can’t Have Blanket Immunity From Pirating Subscribers, Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/isp-cant-have-blanket-immunity-from-pirating-subscribers-court-rules-170420/

Internet provider Windstream is among the companies that are gravely concerned about the verdict against fellow ISP Cox, which was held liable in 2015 for the actions of its pirating subscribers.

With more than a million subscribers nationwide, it is one of the larger Internet providers in the United States, and as such it regularly receives takedown notices targeting its subscribers.

Many of these notices come from music rights group BMG and its anti-piracy partner Rightscorp, which accused the ISP of being liable for the actions of its customers.

Windstream wasn’t happy with these accusations and the associated risk, filing a request for declaratory judgment at a New York District Court last year. It asked the court to rule that it’s not liable for the infringing actions of its subscribers under the DMCA’s safe harbor provisions.

For their part, BMG and Rightscorp protested the request and told the court that a lawsuit is premature, as the copyright holder hasn’t even officially filed an infringement complaint. Instead, they accused the ISP of trying to get broad immunity without going into specifics, such as its repeat infringer policies.

In a motion to dismiss the case, the music rights group told the court that concrete actions and policies play a crucial role in determining liability, accusing Windstream of trying to escape this responsibility.

This week the court issued its final verdict in the case, which brings bad news for the Internet provider.

The court ruled that there is indeed no actual controversy and that it can’t issue a hypothetical and advisory opinion without concrete facts. As such, the case is dismissed for lack of jurisdiction.

“The amended complaint does not present such a controversy. Instead, Windstream seeks a blanket approval of its business model, without reference to any specific copyright held by BMG or any specific act of direct infringement by any Windstream subscriber,” the court writes.

“Windstream seeks the kind of hypothetical and advisory opinion, isolated from concrete facts, that cannot confer jurisdiction upon this Court,” the order adds (pdf).

The ISP hoped to get clarity on how to respond to the copyright infringement notices BMG sends, but the court says that it can’t decide on this without concrete examples.

This doesn’t mean that Windstream is liable, of course. The ISP may very well be protected by the DMCA’s safe harbor provisions, but this has to be decided on a case-by-case basis.

“Because Windstream seeks declarations untethered from any actual instances of copyright infringement or any mention of a specific copyrighted work, the complaint fails to identify an actual case or controversy and the declaratory judgment claims must be dismissed,” the court writes.

The order is a major disappointment for Windstream, which can still only guess whether it’s doing the right thing or not.

BMG and Rightscorp previously said that the ISP was liable for damages as high as $150,000 per infringed work, and with the current order this threat is still hanging over its head.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Police Say “Criminal Gangs” Are Selling Pirate Media Players

Post Syndicated from Andy original https://torrentfreak.com/police-say-criminal-gangs-selling-pirate-media-players-170419/

For the millions of purist ‘pirates’ out there, obtaining free content online is a puzzle to be solved at home. Discovering the best sites, services, and tools is all part of the challenge and in order to keep things tidy, these should come at no cost too.

But for every self-sufficient pirate, there are dozens of other individuals who prefer not to get into the nuts and bolts of the activity but still want to enjoy the content on offer. It is these people that are reportedly fueling a new crime wave sweeping the streets, from the United States, through Europe, and beyond.

IPTV – whether that’s a modified Kodi setup or a subscription service – is now considered by stakeholders to be a major piracy threat and when people choose to buy ready-built devices, they are increasingly enriching “criminal gangs” who have moved in to make money from the phenomenon.

That’s the claim from Police Scotland, who yesterday held a seminar at Scottish Police College to discuss emerging threats in intellectual property crime. The event was attended by experts from across Europe, including stakeholders, Trading Standards, HM Revenue & Customs, and the UK Intellectual Property Office.

“The illegal use of Internet protocol television has risen by 143% in the past year and is predominantly being carried out online. This involves the uploading of streams, server hosting and sales of pre-configured devices,” Scottish Police said in a statement.

The conference was billed as an “opportunity to share ideas, knowledge and investigative techniques” that address this booming area of intellectual property infringement, increasingly being exploited by people looking to make a quick buck. The organized sale of Android-style set-top boxes pre-configured for piracy is being seen as a prime example.

In addition to eBay and Amazon sales, hundreds of adverts are being placed both online and in traditional papers by people selling devices already setup with Kodi and the necessary addons.

“Crime groups and criminals around Scotland are diversifying into what’s seen as less risk areas,” Chief Inspector Mark Leonard explains.

It goes without saying that both police and copyright holders are alarmed by the rise in sales of these devices. However, even the people who help to keep the ‘pirate’ addons maintained and circulated have a problem with it too.

“In my opinion, the type of people attracted to selling something like a preloaded Kodi box aren’t very educated and generally lean towards crooked or criminal activity,” Eleazar of the hugely popular TVAddons repository informs TorrentFreak.

“These box sellers bring people to our community who should never have used Kodi in the first place, people who feel they are owed something, people who see Kodi only as a piracy tool, and people who don’t have the technical aptitude to maintain their Kodi device themselves.”

But for sellers of these devices, that’s exactly why they exist – to help out people who would otherwise struggle to get a Kodi-enabled box up and running. However, there are clear signs that these sellers are feeling the heat and slowly getting the message that their activities could attract police attention.

On several occasions TorrentFreak has contacted major sellers of these devices for comment but none wish to go on the record. Smaller operators, such as those selling a few boxes on eBay, are equally cautious. One individual, who is already on police radar, insists that it’s not his fault that business is booming.

“Sky and the Premier League charge too much. It’s that simple,” he told TF.

“Your average John gives you a few quid and takes [the device] and plugs it in. Job done. How is that different from getting a mate to do it for you, apart from the drink?”

To some extent, Internet piracy has traditionally been viewed as a somewhat ‘geeky’ activity, carried out by the tech-savvy individual with a little know-how. However, the shift from the bedroom to the living room – fueled by box suppliers – has introduced a whole new audience to the activity.

“This is now seen as being normalized,” says Chief Inspector Mark Leonard.

“A family will sit and watch one of these IPTV devices. There’s also a public perception that this is a commodity which is victimless. Prevention is a big part of this so we need to change attitudes and behaviours of people that this damages the creative industries in Scotland as well.”

As things stand, everything points to the controversy over these devices being set to continue. Despite being under attack from all sides, their convenience and bargain-basement pricing means they will remain a hit with fans. This is one piracy battle set to rage for some time.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tor exit node operator arrested in Russia (TorServers.net blog)

Post Syndicated from ris original https://lwn.net/Articles/720231/rss

On April 12 Dmitry Bogatov, a mathematician and Debian maintainer, was arrested in Russia for “incitation to terrorism” because of some messages that went through his Tor exit node. “Though, the very nature of Bogatov case is a controversial one, as it mixes technical and legal arguments, and makes necessary both strong legal and technical expertise involved. Indeed, as a Tor exit node operator, Dmitry does not have control and responsibility on the content and traffic that passes through his node: it would be the same as accusing someone who has a knife stolen from her house for the murder committed with this knife by a stranger.” The Debian Project made a brief statement.

Shadow Brokers Releases the Rest of Their NSA Hacking Tools

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/shadow_brokers_.html

Last August, an unknown group called the Shadow Brokers released a bunch of NSA tools to the public. The common guesses were that the tools were discovered on an external staging server, and that the hack and release was the work of the Russians (back then, that wasn’t controversial). This was me:

Okay, so let’s think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it’s a signal to the Obama Administration: “Before you even think of sanctioning us for the DNC hack, know where we’ve been and what we can do to you.”

They published a second, encrypted, file. My speculation:

They claim to be auctioning off the rest of the data to the highest bidder. I think that’s PR nonsense. More likely, that second file is random nonsense, and this is all we’re going to get. It’s a lot, though.

I was wrong. On November 1, the Shadow Brokers released some more documents, and two days ago they released the key to that original encrypted archive:

EQGRP-Auction-Files is CrDj”(;[email protected])#>deB7mN

I don’t think their statement is worth reading for content. I still believe the Russians are more likely to be the perpetrator than China.

There’s not much yet on the contents of this dump of Top Secret NSA hacking tools, but it can’t be a fun weekend at Ft. Meade. I’m sure that by now they have enough information to know exactly where and when the data got stolen, and maybe even detailed information on who did it. My guess is that we’ll never see that information, though.

EDITED TO ADD (4/11): Seems like there’s not a lot here.

Blocking Pirate Sites Without a Trial is Allowed, Italian Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/blocking-pirate-sites-without-a-trial-is-allowed-italian-court-rules-170403/

Website blockades are becoming more common throughout Europe, with Italy taking a particularly active approach.

In recent months hundreds of domain names have been added to the nation’s pirate blocklist, based on complaints from a wide range of copyright holders.

It is not just the numbers that set Italy apart, the blocking mechanism itself is unique as well. To have a website blocked, rightsholders can ask the local telecoms watchdog AGCOM to issue an order, without need for a trial.

Instead of dealing with blockades in court, AGCOM has the power to grant injunctions without judicial oversight, which it does on a regular basis.

The regulation hasn’t been without controversy. Soon after it was introduced several consumer rights groups and other organizations challenged it in court, arguing that it’s unconstitutional.

The case was initially rejected by the Constitutional Court in 2015, which referred it back to the administrative court of Lazio. Last week this court decided that the site blocking procedure is in line with both European and Italian law.

According to the court, the site-blocking regulation is compatible with the European Union’s E-Commerce Directive as well as the Italian Copyright Act. In addition, the procedure doesn’t violate the Italian constitution or fundamental rights in general, as opponents had argued.

Overall the case is seen as a significant victory for copyright holders. Not only can they continue with their site-blocking requests, but the court also clarified that all the blocking costs must be paid by Internet providers.

“This is a big win for rightsholders,” says Enzo Mazza, chief of the Italian music group FIMI, who adds that there are plans to expand the current scope of the blocking efforts.

“Our future goal is now to increase the enforcement of AGCOM to also cover new forms of piracy such as live streaming, stream ripping and similar issues. In addition, we hope AGCOM will extend the blockades to the IP-address level as the Criminal Courts are using now,” Mazza tells TorrentFreak.

The consumer groups are disappointed, but lawyer Fulvio Sarzana tells TorrentFreak that this outcome was expected considering the previous stance of the judges. However, he also notes that the battle has only just begun and that the case will be appealed.

“It is important to know that there will be an appeal represented by the State Council and that, should it be confirmed in that case, there is always the possibility of acting in front of the greatest judicial order in Italy, the Court of Cassation,” Sarzana says.

For his part, FIMI’s boss is positive that the current verdict will be upheld in future cases. Meanwhile, Mazza and his organization will continue to push for more and broader blockades.

A copy of the verdict, in Italian, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.