Tag Archives: science

Sci-Hub Ordered to Pay $15 Million in Piracy Damages

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-ordered-to-pay-15-million-in-piracy-damages-170623/

Two years ago, academic publisher Elsevier filed a complaint against Sci-Hub and several related “pirate” sites.

It accused the websites of making academic papers widely available to the public, without permission.

While Sci-Hub is nothing like the average pirate site, it is just as illegal according to Elsevier’s legal team, who obtained a preliminary injunction from a New York District Court last fall.

The injunction ordered Sci-Hub’s founder Alexandra Elbakyan to quit offering access to any Elsevier content. However, this didn’t happen.

Instead of taking Sci-Hub down, the lawsuit achieved the opposite. Sci-Hub grew bigger and bigger, to the point where its users were downloading hundreds of thousands of papers per day.

Although Elbakyan sent a letter to the court earlier, she opted not to engage in the US lawsuit any further. The same is true for her fellow defendants, associated with Libgen. As a result, Elsevier asked the court for a default judgment and a permanent injunction, which were issued this week.

Following a hearing on Wednesday, the Court awarded Elsevier $15,000,000 in damages, the maximum statutory amount for the 100 copyrighted works that were listed in the complaint. In addition, the injunction, through which Sci-Hub and LibGen lost several domain names, was made permanent.

Sci-Hub founder Alexandra Elbakyan says that even if she wanted to pay the millions of dollars in damages, she doesn’t have the money to do so.

“The money project received and spent in about six years of its operation do not add up to 15 million,” Elbakyan tells TorrentFreak.

“More interesting, Elsevier says: the Sci-Hub activity ’causes irreparable injury to Elsevier, its customers and the public’ and US court agreed. That feels like a perfect crime. If you want to cause an irreparable injury to American public, what do you have to do? Now we know the answer: establish a website where they can read research articles for free,” she adds.

Previously, Elbakyan already confirmed to us that, lawsuit or not, the site is not going anywhere.

“The Sci-Hub will continue as usual. In case of problems with the domain names, users can rely on TOR scihub22266oqcxt.onion,” Elbakyan added.

Sci-Hub is regularly referred to as the “Pirate Bay for science,” and based on the site’s resilience and its response to legal threats, it can certainly live up to this claim.

The Association of American Publishers (AAP) is happy with the outcome of the case.

“As the final judgment shows, the Court has not mistaken illegal activity for a public good,” AAP President and CEO Maria A. Pallante says.

“On the contrary, it has recognized the defendants’ operation for the flagrant and sweeping infringement that it really is and affirmed the critical role of copyright law in furthering scientific research and the public interest.”

Matt McKay, a spokesperson for the International Association of Scientific, Technical and Medical Publishers (STM) in Oxford went even further, telling Nature that the site doesn’t offer any value to the scientific community.

“Sci-Hub does not add any value to the scholarly community. It neither fosters scientific advancement nor does it value researchers’ achievements. It is simply a place for someone to go to download stolen content and then leave.”

Hundreds of thousands of academics, who regularly use the site to download papers, might contest this though.

With no real prospect of recouping the damages and an ever-resilient Elbakyan, Elsevier’s legal battle could just be a win on paper. Sci-Hub and Libgen are not going anywhere, it seems, and the lawsuit has made them more popular than ever before.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

US Embassy Threatens to Close Domain Registry Over ‘Pirate Bay’ Domain

Post Syndicated from Andy original https://torrentfreak.com/us-embassy-threatens-to-close-domain-registry-over-pirate-bay-domain-170620/

Domains have become an integral part of the piracy wars and no one knows this better than The Pirate Bay.

The site has burned through numerous domains over the years, with copyright holders and authorities successfully pressurizing registries to destabilize the site.

The latest news on this front comes from the Central American country of Costa Rica, where the local domain registry is having problems with the United States government.

The drama is detailed in a letter to ICANN penned by Dr. Pedro León Azofeifa, President of the Costa Rican Academy of Science, which operates NIC Costa Rica, the registry in charge of local .CR domain names.

Azofeifa’s letter is addressed to ICANN board member Thomas Schneider and pulls no punches. It claims that for the past two years the United States Embassy in Costa Rica has been pressuring NIC Costa Rica to take action against a particular domain.

“Since 2015, the United Estates Embassy in Costa Rica, who represents the interests of the United States Department of Commerce, has frequently contacted our organization regarding the domain name thepiratebay.cr,” the letter to ICANN reads.

“These interactions with the United States Embassy have escalated with time and include great pressure since 2016 that is exemplified by several phone calls, emails, and meetings urging our ccTLD to take down the domain, even though this would go against our domain name policies.”

The letter states that following pressure from the US, the Costa Rican Ministry of Commerce carried out an investigation which concluded that not taking down the domain was in line with best practices that only require suspensions following a local court order. That didn’t satisfy the United States though, far from it.

“The representative of the United States Embassy, Mr. Kevin Ludeke, Economic Specialist, who claims to represent the interests of the US Department of Commerce, has mentioned threats to close our registry, with repeated harassment regarding our practices and operation policies,” the letter to ICANN reads.

Ludeke is indeed listed on the US Embassy site for Costa Rica. He’s also referenced in a 2008 diplomatic cable leaked previously by Wikileaks. Contacted via email, Ludeke did not immediately respond to TorrentFreak’s request for comment.

Extract from the letter to ICANN

Surprisingly, Azofeifa says the US representative then got personal, making negative comments towards his Executive Director, “based on no clear evidence or statistical data to support his claims, as a way to pressure our organization to take down the domain name without following our current policies.”

Citing the Tunis Agenda for the Information Society of 2005, Azofeifa asserts that “policy authority for Internet-related public policy issues is the sovereign right of the States,” which in Costa Rica’s case means that there must be “a final judgment from the Courts of Justice of the Republic of Costa Rica” before the registry will suspend a domain.

But it seems legal action was not the preferred route of the US Embassy. Demanding that NIC Costa Rica take unilateral action, Mr. Ludeke continued with “pressure and harassment to take down the domain name without its proper process and local court order.”

Azofeifa’s letter to ICANN, which is cc’d to Stafford Fitzgerald Haney, United States Ambassador to Costa Rica and various people in the Costa Rican Ministry of Commerce, concludes with a request for suggestions on how to deal with the matter.

While the response should prove very interesting, none of the parties involved appear to have noticed that ThePirateBay.cr isn’t officially connected to The Pirate Bay.

The domain and associated site appeared in the wake of the December 2014 shutdown of The Pirate Bay, claiming to be the real deal and even going as far as making fake accounts in the names of famous ‘pirate’ groups including ettv and YIFY.

Today it acts as an unofficial and unaffiliated reverse proxy to The Pirate Bay while presenting the site’s content as its own. It’s also affiliated with a fake KickassTorrents site, Kickass.cd, which to this day claims that it’s a reincarnation of the defunct torrent giant.

But perhaps the most glaring issue in this worrying case is the apparent willingness of the United States to call out Costa Rica for not doing anything about a .CR domain run by third parties, when the real Pirate Bay’s .org domain is under United States’ jurisdiction.

Registered by the Public Interest Registry in Reston, Virginia, ThePirateBay.org is the famous site’s main domain. TorrentFreak asked PIR if anyone from the US government had ever requested action against the domain but at the time of publication, we had received no response.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AIMS Desktop 2017.1 released

Post Syndicated from corbet original https://lwn.net/Articles/725712/rss

The AIMS desktop is a Debian-derived distribution aimed at mathematical and scientific use. This project’s first public release, based on Debian 9, is now available. It is a GNOME-based distribution with a bunch of add-on software. It is maintained by AIMS (the African Institute for Mathematical Sciences), a pan-African network of centres of excellence enabling Africa’s talented students to become innovators driving the continent’s scientific, educational and economic self-sufficiency.

[$] Assembling the history of Unix

Post Syndicated from corbet original https://lwn.net/Articles/725297/rss

The moment when an antique operating system that has not run in decades boots and presents a command prompt is thrilling for Warren Toomey, who founded the Unix Heritage Society to reconstruct the early history of the Unix operating system. Recently this historical code has become much more accessible: we can now browse it in an instant on GitHub, thanks to the efforts of a computer science professor at the Athens University of Economics and Business named Diomidis Spinellis.

Click below (subscribers only) for a look at the Unix Heritage Society and what it has accomplished.

Making Waves: print out sound waves with the Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/printed-sound-wave/

For fun, Eunice Lee, Matthew Zhang, and Bomani McClendon have worked together to create Waves, an audiovisual project that records people’s spoken responses to personal questions and prints them in the form of a sound wave as a gift for being truthful.

Waves

Waves is a Raspberry Pi project centered around transforming the transience of the spoken word into something concrete and physical. In our setup, a user presses a button corresponding to an intimate question (ex: what’s your motto?) and answers it into a microphone while pressing down on the button.

What are you grateful for?

“I’m grateful for finishing this project,” admits maker Eunice Lee as she presses a button and speaks into the microphone that is part of the Waves project build. After a brief moment, her confession appears on receipt paper as a waveform, and she grins toward the camera, happy with the final piece.

Eunice testing Waves

Sound wave machine

Alongside a Raspberry Pi 3, the Waves device comprises four tactile buttons, a standard USB microphone, and a thermal receipt printer. This type of printer has become readily available to makers from suppliers such as Adafruit and Pimoroni.

Eunice Lee, Matthew Zhang, Bomani McClendon - Sound Wave Raspberry Pi

Definitely more fun than a polygraph test

The trio designed four colour-coded cards that represent four questions, each of which has a matching button on the breadboard. Press the button that belongs to the question to be answered, and Python code directs the Pi to record audio via the microphone. Releasing the button stops the audio recording. “Once the recording has been saved, the script viz.py is launched,” explains Lee. “This script takes the audio file and, using Python matplotlib magic, turns it into a nice little waveform image.”
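The project’s own viz.py isn’t reproduced in the post, but a minimal sketch of the same idea, reading a mono WAV recording and saving it as a bare waveform image with matplotlib, might look like the following (the file names, figure size, and 16-bit mono assumption are ours, not the makers’):

# waveform_sketch.py -- approximate the viz.py step: WAV in, waveform image out
import wave

import numpy as np
import matplotlib
matplotlib.use('Agg')  # render without a display, as on a headless Pi
import matplotlib.pyplot as plt


def save_waveform(wav_path='answer.wav', png_path='waveform.png'):
    # Read raw 16-bit PCM samples from the recording
    wav = wave.open(wav_path, 'rb')
    frames = wav.readframes(wav.getnframes())
    wav.close()
    samples = np.frombuffer(frames, dtype=np.int16)

    # Plot the samples as a bare waveform, sized as a narrow strip for receipt paper
    fig, ax = plt.subplots(figsize=(4, 1.5))
    ax.plot(samples, linewidth=0.3, color='black')
    ax.axis('off')
    fig.savefig(png_path, dpi=200, bbox_inches='tight')
    plt.close(fig)


if __name__ == '__main__':
    save_waveform()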

From there, the Raspberry Pi instructs the thermal printer to produce a printout of the sound wave image along with the question.

Making for fun

Eunice, Bomani, and Matt, students of design and computer science at Northwestern University in Illinois, built Waves as a side project. They wanted to make something at the intersection of art and technology and were motivated by the pure joy of creating.

Eunice Lee, Matthew Zhang, Bomani McClendon - Sound Wave Raspberry Pi

Making makes people happy

They have noted improvements that can be made to increase the scope of their sound wave project. We hope to see many more interesting builds from these three, and in the meantime we invite you all to look up their code on Eunice’s GitHub to create your own Waves at home.

The post Making Waves: print out sound waves with the Raspberry Pi appeared first on Raspberry Pi.

Torrents Help Researchers Worldwide to Study Babies’ Brains

Post Syndicated from Ernesto original https://torrentfreak.com/torrents-help-researchers-worldwide-to-study-babies-brains-170603/

One of the core pillars of academic research is sharing.

By letting other researchers know what you do, ideas are criticized, improved upon and extended. In today’s digital age, sharing is easier than ever before, especially with help from torrents.

One of the leading scientific projects that has adopted BitTorrent is the developing Human Connectome Project, or dHCP for short. The goal of the project is to map the brain wiring of developing babies in the wombs of their mothers.

To do so, a consortium of researchers with expertise ranging from computer science, to MRI physics and clinical medicine, has teamed up across three British institutions: Imperial College London, King’s College London and the University of Oxford.

The collected data is extremely valuable for the neuroscience community and the project has received mainstream press coverage and financial backing from the European Union Research Council. Not only to build the dataset, but also to share it with researchers around the globe. This is where BitTorrent comes in.

Sharing more than 150 GB of data with researchers all over the world can be quite a challenge. Regular HTTP downloads are not really up to the task, and many other transfer options have a high failure rate.

Baby brain scan (Credit: Developing Human Connectome Project)

This is why Jonathan Passerat-Palmbach, a Research Associate in the Department of Computing at Imperial College London, came up with the idea to embrace BitTorrent instead.

“For me, it was a no-brainer from day one that we couldn’t rely on plain old HTTP to make this dataset available. Our first pilot release is 150GB, and I expect the next ones to reach a couple of TB. Torrents seemed like the de facto solution to share this data with the world’s scientific community.” Passerat-Palmbach says.

The researchers opted to go for the Academic Torrents tracker, which specializes in sharing research data. A torrent with the first batch of images was made available there a few weeks ago.

“This initial release contains 3,629 files accounting for 167.20GB of data. While this figure might not appear extremely large at the moment, it will significantly grow as the project aims to make the data of 1,000 subjects available by the time it has completed.”

Torrent of the first dataset

The download numbers are nowhere near those of an average Hollywood blockbuster, of course. Thus far the tracker has registered just 28 downloads. That said, as a superior and open file-transfer protocol, BitTorrent does aid critical research that helps researchers discover more about the development of conditions such as ADHD and autism.

Interestingly, the biggest challenges of implementing the torrent solution were not of a technical nature. Most time and effort went into assuring other team members that this was the right solution.

“I had to push for more than a year for the adoption of torrents within the consortium. While my colleagues could understand the potential of the approach and its technical inputs, they remained skeptical as to the feasibility to implement such a solution within an academic context and its reception by the world community.

“However, when the first dataset was put together, amounting to 150GB, it became obvious all the HTTP and FTP fallback plans would not fit our needs,” Passerat-Palmbach adds.

Baby brain scans (Credit: Developing Human Connectome Project)

When the consortium finally agreed that BitTorrent was an acceptable way to share the data, local IT staff at the university had to give their seal of approval. Imperial College London doesn’t allow torrent traffic to flow freely across the network, so an exception had to be made.

“Torrents are blocked across the wireless and VPN networks at Imperial. Getting an explicit firewall exception created for our seeding machine was not a walk in the park. It was the first time they were faced with such a situation and we were clearly told that it was not to become the rule.”

Then, finally, the data could be shared around the world.

While BitTorrent is probably the most efficient way to share large files, there were other proprietary solutions that could do the same. However, Passerat-Palmbach preferred not to force other researchers to install “proprietary black boxes” on their machines.

Torrents are free and open, which is more in line with the Open Access approach many academics take today.

Looking back, it certainly wasn’t a walk in the park to share the data via BitTorrent. Passerat-Palmbach was frequently confronted with the piracy stigma torrents carry among many of his peers, even among younger generations.

“Considering how hard it was to convince my colleagues within the project to actually share this dataset using torrents (‘isn’t it illegal?’ and other kinds of misconceptions…), I think there’s still a lot of work to do to demystify the use of torrents with the public.

“I was even surprised to see that these misconceptions spread out not only to more senior scientists but also to junior researchers who I was expecting to be more tech-aware,” Passerat-Palmbach adds.

That said, the hard work is done now and in the months and years ahead the neuroscience community will have access to Petabytes of important data, with help from BitTorrent. That is definitely worth the effort.

Finally, we thought it was fitting to end with Passerat-Palmbach’s “pledge to seed,” which he shared with his peers. Keep on sharing!


On the importance of seeding

Dear fellow scientist,

Thank you very much for the interest you are showing in the dHCP dataset!

Once you start downloading the dataset, you’ll notice that your torrent client mentions a sharing / seeding ratio. It means that as soon as you start downloading the dataset, you become part of our community of sharers and contribute to making the dataset available to other researchers all around the world!

There’s no reason to be scared! It’s perfectly legal as long as you’re allowed to have a copy of the dataset (that’s the bit you need to forward to your lab’s IT staff if they’re blocking your ports).

You’re actually providing a tremendous contribution to dHCP by spreading the data, so thank you again for that!

With your help, we can make sure this data remains available and can be downloaded relatively fast in the future. Over time, the dataset will grow and your contribution will be more and more important so that each and everyone of you can still obtain the data in the smoothest possible way.

We cannot do it without you. By seeding, you’re actually saying “cheers!” to your peers whom you downloaded your data from. So leave your client open and stay tuned!

All this is made possible thanks to the amazing folks at academictorrents and their infrastructure, so kudos academictorrents!

You can learn more about their project here and get some help to get started with torrent downloading here.

Jonathan Passerat-Palmbach

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Building High-Throughput Genomics Batch Workflows on AWS: Workflow Layer (Part 4 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomics-batch-workflows-on-aws-workflow-layer-part-4-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the fourth in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackled the batch layer and built a scalable, elastic, and easily maintainable batch engine using AWS Batch. This solution took care of dynamically scaling your compute resources in response to the number of runnable jobs in your job queue, as well as managing job placement.

In Part 4, you build out the workflow layer of your solution using AWS Step Functions and AWS Lambda. You then run an end-to-end genomic analysis, specifically exome secondary analysis, many times over at a cost of less than $1 per exome.

Step Functions makes it easy to coordinate the components of your applications using visual workflows. Building applications from individual components that each perform a single function lets you scale and change your workflow quickly. You can use the graphical console to arrange and visualize the components of your application as a series of steps, which simplifies building and running multi-step applications. You can change and add steps without writing code, so you can easily evolve your application and innovate faster.

An added benefit of using Step Functions to define your workflows is that the state machines you create are immutable. While you can delete a state machine, you cannot alter it after it is created. For regulated workloads where auditing is important, you can be assured that state machines you used in production cannot be altered.

In this blog post, you will create a Lambda state machine to orchestrate your batch workflow. For more information on how to create a basic state machine, please see this Step Functions tutorial.

All code related to this blog series can be found in the associated GitHub repository here.

Build a state machine building block

To skip the following steps, we have provided an AWS CloudFormation template that can deploy your Step Functions state machine. You can use this in combination with the setup you did in part 3 to quickly set up the environment in which to run your analysis.

The state machine is composed of smaller state machines that submit a job to AWS Batch, and then poll and check its execution.

The steps in this building block state machine are as follows:

  1. A job is submitted.
    Each analytical module/job has its own Lambda function for submission and calls the batchSubmitJob Lambda function that you built in the previous blog post. You will build these specialized Lambda functions in the following section.
  2. The state machine queries the AWS Batch API for the job status.
    This is also a Lambda function.
  3. The job status is checked to see if the job has completed.
    If the job status equals SUCCEEDED, proceed to log the final job status. If the job status equals FAILED, end the execution of the state machine. In all other cases, wait 30 seconds and go back to Step 2.

Here is the JSON representing this state machine.

{
  "Comment": "A simple example that submits a Job to AWS Batch",
  "StartAt": "SubmitJob",
  "States": {
    "SubmitJob": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>::function:batchSubmitJob",
      "Next": "GetJobStatus"
    },
    "GetJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "Next": "CheckJobStatus",
      "InputPath": "$",
      "ResultPath": "$.status"
    },
    "CheckJobStatus": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.status",
          "StringEquals": "FAILED",
          "End": true
        },
        {
          "Variable": "$.status",
          "StringEquals": "SUCCEEDED",
          "Next": "GetFinalJobStatus"
        }
      ],
      "Default": "Wait30Seconds"
    },
    "Wait30Seconds": {
      "Type": "Wait",
      "Seconds": 30,
      "Next": "GetJobStatus"
    },
    "GetFinalJobStatus": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:<account-id>:function:batchGetJobStatus",
      "End": true
    }
  }
}
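The post deploys this state machine with CloudFormation in a later section, but if you want to experiment with the building block on its own, you can create it directly from the JSON above. A hedged sketch with the AWS CLI follows; the state machine name, the file name, and the IAM role ARN (which must allow Step Functions to invoke your Lambda functions) are placeholders to substitute with your own values.

# Save the JSON above as submit_and_poll.json, then create the state machine
aws stepfunctions create-state-machine \
    --name batchSubmitAndPoll \
    --definition file://submit_and_poll.json \
    --role-arn arn:aws:iam::<account-id>:role/<your-step-functions-role>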

Building the Lambda functions for the state machine

You need two basic Lambda functions for this state machine. The first one submits a job to AWS Batch and the second checks the status of the AWS Batch job that was submitted.

In AWS Step Functions, you specify an input as JSON that is read into your state machine. Each state receives the aggregate of the steps immediately preceding it, and you can specify which components a state passes on to its children. Because you are using Lambda functions to execute tasks, one of the easiest routes to take is to modify the input JSON, represented as a Python dictionary, within the Lambda function and return the entire dictionary back for the next state to consume.

Building the batchSubmitIsaacJob Lambda function

For Step 1 above, you need a Lambda function for each of the steps in your analysis workflow. As you created a generic Lambda function in the previous post to submit a batch job (batchSubmitJob), you can use that function as the basis for the specialized functions you’ll include in this state machine. Here is such a Lambda function for the Isaac aligner.

from __future__ import print_function

import boto3
import json
import traceback

lambda_client = boto3.client('lambda')



def lambda_handler(event, context):
    try:
        # Generate output path
        bam_s3_path = '/'.join([event['resultsS3Path'], event['sampleId'], 'bam/'])

        depends_on = event['dependsOn'] if 'dependsOn' in event else []

        # Generate run command
        command = [
            '--bam_s3_folder_path', bam_s3_path,
            '--fastq1_s3_path', event['fastq1S3Path'],
            '--fastq2_s3_path', event['fastq2S3Path'],
            '--reference_s3_path', event['isaac']['referenceS3Path'],
            '--working_dir', event['workingDir']
        ]

        if 'cmdArgs' in event['isaac']:
            command.extend(['--cmd_args', event['isaac']['cmdArgs']])
        if 'memory' in event['isaac']:
            command.extend(['--memory', event['isaac']['memory']])

        # Submit Payload
        response = lambda_client.invoke(
            FunctionName='batchSubmitJob',
            InvocationType='RequestResponse',
            LogType='Tail',
            Payload=json.dumps(dict(
                dependsOn=depends_on,
                containerOverrides={
                    'command': command,
                },
                jobDefinition=event['isaac']['jobDefinition'],
                jobName='-'.join(['isaac', event['sampleId']]),
                jobQueue=event['isaac']['jobQueue']
            )))

        response_payload = response['Payload'].read()

        # Update event
        event['bamS3Path'] = bam_s3_path
        event['jobId'] = json.loads(response_payload)['jobId']
        
        return event
    except Exception as e:
        traceback.print_exc()
        raise e

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitIsaacJob and paste in the above code. Use the LambdaBatchExecutionRole that you created in the previous post. For more information, see Step 2.1: Create a Hello World Lambda Function.

This Lambda function reads in the inputs passed to the state machine it is part of, formats the data for the batchSubmitJob Lambda function, invokes that Lambda function, and then modifies the event dictionary to pass on to the subsequent states. You can repeat this process for each of the other tools, which can be found in the tools//lambda/lambda_function.py script in the GitHub repo.

Building the batchGetJobStatus Lambda function

For Step 2 above, the process queries the AWS Batch DescribeJobs API action with jobId to identify the state that the job is in. You can put this into a Lambda function to integrate it with Step Functions.

In the Lambda console, create a new Python 2.7 function with the LambdaBatchExecutionRole IAM role. Name your function batchGetJobStatus and paste in the following code. This is similar to the batch-get-job-python27 Lambda blueprint.

from __future__ import print_function

import boto3
import json

print('Loading function')

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get jobId from the event
    job_id = event['jobId']

    try:
        response = batch_client.describe_jobs(
            jobs=[job_id]
        )
        job_status = response['jobs'][0]['status']
        return job_status
    except Exception as e:
        print(e)
        message = 'Error getting Batch Job status'
        print(message)
        raise Exception(message)
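Before wiring this function into the state machine, you can sanity-check it with a direct invocation. A sketch using the AWS CLI is shown below; the job ID is whatever a previous submitJob call returned for you, and the output file name is arbitrary.

aws lambda invoke \
    --function-name batchGetJobStatus \
    --payload '{"jobId": "<jobId returned from submitJob>"}' \
    response.json

# response.json should contain the job status, e.g. "RUNNABLE" or "SUCCEEDED"
cat response.json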

Structuring state machine input

You have structured the state machine input so that general file references are included at the top-level of the JSON object, and any job-specific items are contained within a nested JSON object. At a high level, this is what the input structure looks like:

{
        "general_field_1": "value1",
        "general_field_2": "value2",
        "general_field_3": "value3",
        "job1": {},
        "job2": {},
        "job3": {}
}

Building the full state machine

By chaining these state machine components together, you can quickly build flexible workflows that can process genomes in multiple ways. The development of the larger state machine that defines the entire workflow uses four of the above building blocks. You use the Lambda functions that you built in the previous section. Rename each building block submission to match the tool name.

We have provided a CloudFormation template to deploy your state machine and the associated IAM roles. In the CloudFormation console, select Create Stack, choose your template (deploy_state_machine.yaml), and enter in the ARNs for the Lambda functions you created.

Continue through the rest of the steps and deploy your stack. Be sure to check the box next to "I acknowledge that AWS CloudFormation might create IAM resources."

Once the CloudFormation stack is finished deploying, you should see the following image of your state machine.

In short, you first submit a job for Isaac, which is the aligner you are using for the analysis. Next, you use a Parallel state to split your output from "GetFinalIsaacJobStatus" and send it to both your variant calling step, Strelka, and your QC step, Samtools Stats. These then run in parallel, and you annotate the results from your Strelka step with snpEff.

Putting it all together

Now that you have built all of the components for a genomics secondary analysis workflow, test the entire process.

We have provided sequences from an Illumina sequencer that cover a region of the genome known as the exome. Most of the positions in the genome that we have currently associated with disease or human traits reside in this region, which is 1–2% of the entire genome. The workflow that you have built works for both analyzing an exome, as well as an entire genome.

Additionally, we have provided prebuilt reference genomes for Isaac, located at:

s3://aws-batch-genomics-resources/reference/

If you are interested, we have provided a script that sets up all of that data. To execute that script, run the following command on a large EC2 instance:

make reference REGISTRY=<your-ecr-registry>

Indexing and preparing this reference takes many hours on a large-memory EC2 instance. Be careful about the costs involved and note that the data is available through the prebuilt reference genomes.

Starting the execution

In a previous section, you established the structure of the JSON that is fed into your state machine. For ease, we have pre-populated the input JSON for you. You can also find it in the GitHub repo under workflow/test.input.json:

{
  "fastq1S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
  "fastq2S3Path": "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
  "referenceS3Path": "s3://aws-batch-genomics-resources/reference/hg38.fa",
  "resultsS3Path": "s3://<bucket>/genomic-workflow/results",
  "sampleId": "NA12878_states_1",
  "workingDir": "/scratch",
  "isaac": {
    "jobDefinition": "isaac-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "referenceS3Path": "s3://aws-batch-genomics-resources/reference/isaac/"
  },
  "samtoolsStats": {
    "jobDefinition": "samtools_stats-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv"
  },
  "strelka": {
    "jobDefinition": "strelka-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/highPriority-myenv",
    "cmdArgs": " --exome "
  },
  "snpEff": {
    "jobDefinition": "snpeff-myenv:1",
    "jobQueue": "arn:aws:batch:us-east-1:<account-id>:job-queue/lowPriority-myenv",
    "cmdArgs": " -t hg38 "
  }
}

You are now at the stage to run your full genomic analysis. Copy the above to a new text file, change paths and ARNs to the ones that you created previously, and save your JSON input as input.states.json.

In the CLI, execute the following command. You need the ARN of the state machine that you created in the previous post:

aws stepfunctions start-execution --state-machine-arn <your-state-machine-arn> --input file://input.states.json

Your analysis has now started. By using Spot Instances with AWS Batch, you can quickly scale out your workflows while concurrently optimizing for cost. While this is not guaranteed, most executions of the workflows presented here should cost under $1 for a full analysis.

Monitoring the execution

The output from the above CLI command gives you the ARN that describes the specific execution. Copy that and navigate to the Step Functions console. Select the state machine that you created previously and paste the ARN into the search bar.

The screen shows information about your specific execution. On the left, you see where your execution currently is in the workflow.

In the following screenshot, you can see that your workflow has successfully completed the alignment job and moved onto the subsequent steps, which are variant calling and generating quality information about your sample.

You can also navigate to the AWS Batch console and see the progress of all of your jobs reflected there as well.
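If you prefer the command line to the console, the same information is available from the Step Functions API. A quick sketch; the execution ARN is the one returned by the start-execution call above.

# Overall execution status
aws stepfunctions describe-execution --execution-arn <your-execution-arn>

# Step-by-step event history, useful for pinpointing a failed state
aws stepfunctions get-execution-history --execution-arn <your-execution-arn> --max-items 20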

Finally, after your workflow has completed successfully, check out the S3 path to which you wrote all of your files. If you run an aws s3 ls --recursive command on the S3 results path specified in the input to your state machine execution, you should see something similar to the following:

2017-05-02 13:46:32 6475144340 genomic-workflow/results/NA12878_run1/bam/sorted.bam
2017-05-02 13:46:34    7552576 genomic-workflow/results/NA12878_run1/bam/sorted.bam.bai
2017-05-02 13:46:32         45 genomic-workflow/results/NA12878_run1/bam/sorted.bam.md5
2017-05-02 13:53:20      68769 genomic-workflow/results/NA12878_run1/stats/bam_stats.dat
2017-05-02 14:05:12        100 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.tsv
2017-05-02 14:05:12        359 genomic-workflow/results/NA12878_run1/vcf/stats/runStats.xml
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.S1.vcf.gz.tbi
2017-05-02 14:05:12  507577928 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz
2017-05-02 14:05:12     723144 genomic-workflow/results/NA12878_run1/vcf/variants/genome.vcf.gz.tbi
2017-05-02 14:05:12   30783484 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz
2017-05-02 14:05:12    1566596 genomic-workflow/results/NA12878_run1/vcf/variants/variants.vcf.gz.tbi

Modifications to the workflow

You have now built and run your genomics workflow. While diving deep into modifications to this architecture are beyond the scope of these posts, we wanted to leave you with several suggestions of how you might modify this workflow to satisfy additional business requirements.

  • Job tracking with Amazon DynamoDB
    In many cases, such as if you are offering Genomics-as-a-Service, you might want to track the state of your jobs with DynamoDB to get fine-grained records of how your jobs are running. This way, you can easily identify the cost of individual jobs and workflows that you run.
  • Resuming from failure
    Both AWS Batch and Step Functions natively support job retries and can cover many of the standard cases where a job might be interrupted. There may be cases, however, where your workflow might fail in a way that is unpredictable. In this case, you can use custom error handling with AWS Step Functions to build out a workflow that is even more resilient. Also, you can build fail states into your state machine to fail at any point, such as if a batch job fails after a certain number of retries.
  • Invoking Step Functions from Amazon API Gateway
    You can use API Gateway to build an API that acts as a "front door" to Step Functions. You can create a POST method that contains the input JSON to feed into the state machine you built. For more information, see the Implementing Serverless Manual Approval Steps in AWS Step Functions and Amazon API Gateway blog post.

Conclusion

While the approach we have demonstrated in this series has been focused on genomics, it is important to note that this can be generalized to nearly any high-throughput batch workload. We hope that you have found the information useful and that it can serve as a jump-start to building your own batch workloads on AWS with native AWS services.

For more information about how AWS can enable your genomics workloads, be sure to check out the AWS Genomics page.

Please leave any questions and comments below.

[$] The unexpected effectiveness of Python in science

Post Syndicated from jake original https://lwn.net/Articles/724255/rss

In a keynote on the first day of PyCon 2017, Jake VanderPlas looked at the relationship between Python and science. Over the last ten years or so, there has been a large rise in the amount of Python code being used—and released—by scientists. There are reasons for that, which VanderPlas described, but, perhaps more importantly, the growing practice of releasing all of this code can help solve one of the major problems facing science today: reproducibility.

Building High-Throughput Genomic Batch Workflows on AWS: Batch Layer (Part 3 of 4)

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/building-high-throughput-genomic-batch-workflows-on-aws-batch-layer-part-3-of-4/

Aaron Friedman is a Healthcare and Life Sciences Partner Solutions Architect at AWS

Angel Pizarro is a Scientific Computing Technical Business Development Manager at AWS

This post is the third in a series on how to build a genomics workflow on AWS. In Part 1, we introduced a general architecture, shown below, and highlighted the three common layers in a batch workflow:

  • Job
  • Batch
  • Workflow

In Part 2, you built a Docker container for each job that needed to run as part of your workflow, and stored them in Amazon ECR.

In Part 3, you tackle the batch layer and build a scalable, elastic, and easily maintainable batch engine using AWS Batch.

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It dynamically provisions the optimal quantity and type of compute resources (for example, CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs that you submit. With AWS Batch, you do not need to install and manage your own batch computing software or server clusters, which allows you to focus on analyzing results, such as those of your genomic analysis.

Integrating applications into AWS Batch

If you are new to AWS Batch, we recommend reading Setting Up AWS Batch to ensure that you have the proper permissions and AWS environment.

After you have a working environment, you define several types of resources:

  • IAM roles that provide service permissions
  • A compute environment that launches and terminates compute resources for jobs
  • A custom Amazon Machine Image (AMI)
  • A job queue to submit the units of work and to schedule the appropriate resources within the compute environment to execute those jobs
  • Job definitions that define how to execute an application

After the resources are created, you’ll test the environment and create an AWS Lambda function to send generic jobs to the queue.

This genomics workflow covers the basic steps. For more information, see Getting Started with AWS Batch.

Creating the necessary IAM roles

AWS Batch simplifies batch processing by managing a number of underlying AWS services so that you can focus on your applications. As a result, you create IAM roles that give the service permissions to act on your behalf. In this section, deploy the AWS CloudFormation template included in the GitHub repository and extract the ARNs for later use.

To deploy the stack, go to the top level in the repo with the following command:

aws cloudformation create-stack --template-body file://batch/setup/iam.template.yaml --stack-name iam --capabilities CAPABILITY_NAMED_IAM

You can capture the output from this stack in the Outputs tab in the CloudFormation console:

Creating the compute environment

In AWS Batch, you will set up a managed compute environment. Managed compute environments automatically launch and terminate compute resources on your behalf based on the aggregate resources needed by your jobs, such as vCPU and memory, and simple boundaries that you define.

When defining your compute environment, specify the following:

  • Desired instance types in your environment
  • Min and max vCPUs in the environment
  • The Amazon Machine Image (AMI) to use
  • Percentage value for bids on the Spot Market
  • VPC subnets that can be used

AWS Batch then provisions an elastic and heterogeneous pool of Amazon EC2 instances based on the aggregate resource requirements of jobs sitting in the RUNNABLE state. If a mix of CPU and memory-intensive jobs are ready to run, AWS Batch provisions the appropriate ratio and size of CPU and memory-optimized instances within your environment. For this post, you will use the simplest configuration, in which instance types are set to "optimal" allowing AWS Batch to choose from the latest C, M, and R EC2 instance families.

While you could create this compute environment in the console, we provide the following CLI commands. Replace the subnet IDs and key name with your own private subnets and key, and the image-id with the image you will build in the next section.

ACCOUNTID=<your account id>
SERVICEROLE=<from output in CloudFormation template>
IAMFLEETROLE=<from output in CloudFormation template>
JOBROLEARN=<from output in CloudFormation template>
SUBNETS=<comma delimited list of subnets>
SECGROUPS=<your security groups>
SPOTPER=50 # percentage of on demand
IMAGEID=<ami-id corresponding to the one you created>
INSTANCEROLE=<from output in CloudFormation template>
REGISTRY=${ACCOUNTID}.dkr.ecr.us-east-1.amazonaws.com
KEYNAME=<your key name>
MAXCPU=1024 # max vCPUs in compute environment
ENV=myenv

# Creates the compute environment
aws batch create-compute-environment --compute-environment-name genomicsEnv-$ENV --type MANAGED --state ENABLED --service-role ${SERVICEROLE} --compute-resources type=SPOT,minvCpus=0,maxvCpus=$MAXCPU,desiredvCpus=0,instanceTypes=optimal,imageId=$IMAGEID,subnets=$SUBNETS,securityGroupIds=$SECGROUPS,ec2KeyPair=$KEYNAME,instanceRole=$INSTANCEROLE,bidPercentage=$SPOTPER,spotIamFleetRole=$IAMFLEETROLE
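Compute environment creation is asynchronous, so before creating job queues it can be worth confirming that the environment has reached a usable state. A sketch, reusing the $ENV variable from above:

# Wait until "status" is VALID and "state" is ENABLED before moving on
aws batch describe-compute-environments \
    --compute-environments genomicsEnv-$ENV \
    --query 'computeEnvironments[0].[status,state]'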

Creating the custom AMI for AWS Batch

While you can use default Amazon ECS-optimized AMIs with AWS Batch, you can also provide your own image in managed compute environments. We will use this feature to provision additional scratch EBS storage on each of the instances that AWS Batch launches and also to encrypt both the Docker and scratch EBS volumes.

AWS Batch has the same requirements for your AMI as Amazon ECS. To build the custom image, modify the default Amazon ECS-Optimized Amazon Linux AMI in the following ways:

  • Attach a 1 TB scratch volume to /dev/sdb
  • Encrypt the Docker and new scratch volumes
  • Mount the scratch volume to /docker_scratch by modifying /etc/fstab

The first two tasks can be addressed when you create the custom AMI in the console. Spin up a small t2.micro instance, and proceed through the standard EC2 instance launch.

After your instance has launched, record the IP address and then SSH into the instance. Copy and paste the following code:

sudo yum -y update
sudo parted /dev/xvdb mklabel gpt
sudo parted /dev/xvdb mkpart primary 0% 100%
sudo mkfs -t ext4 /dev/xvdb1
sudo mkdir /docker_scratch
echo -e '/dev/xvdb1\t/docker_scratch\text4\tdefaults\t0\t0' | sudo tee -a /etc/fstab
sudo mount -a

This auto-mounts your scratch volume to /docker_scratch, which is your scratch directory for batch processing. Next, create your new AMI and record the image ID.
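If you would rather script this last step than use the console, a hedged sketch with the CLI follows; the instance ID is the one you just configured, and the AMI name and description are placeholders of our choosing. Record the ImageId that the call returns for use in the compute environment.

aws ec2 create-image \
    --instance-id <your-instance-id> \
    --name "batch-genomics-scratch-ami" \
    --description "ECS-optimized AMI with encrypted 1 TB scratch volume"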

Creating the job queues

AWS Batch job queues are used to coordinate the submission of batch jobs. Your jobs are submitted to job queues, which can be mapped to one or more compute environments. Job queues have priority relative to each other. You can also specify the order in which they consume resources from your compute environments.

In this solution, use two job queues. The first is for high priority jobs, such as alignment or variant calling. Set this with a high priority (1000) and map back to the previously created compute environment. Next, set a second job queue for low priority jobs, such as quality statistics generation. To create these compute environments, enter the following CLI commands:

aws batch create-job-queue --job-queue-name highPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1000 --state ENABLED
aws batch create-job-queue --job-queue-name lowPriority-${ENV} --compute-environment-order order=0,computeEnvironment=genomicsEnv-${ENV}  --priority 1 --state ENABLED

Creating the job definitions

To run the Isaac aligner container image locally, supply the Amazon S3 locations for the FASTQ input sequences, the reference genome to align to, and the output BAM file. For more information, see tools/isaac/README.md.

The Docker container itself also requires some information on a suitable mountable volume so that it can read and write temporary files without running out of space.

Note: In the following example, the FASTQ files as well as the reference files to run are in a publicly available bucket.

FASTQ1=s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz
FASTQ2=s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz
REF=s3://aws-batch-genomics-resources/reference/isaac/
BAM=s3://mybucket/genomic-workflow/test_results/bam/

mkdir ~/scratch

docker run --rm -ti -v ${HOME}/scratch:/scratch $REPO_URI --bam_s3_folder_path $BAM \
--fastq1_s3_path $FASTQ1 \
--fastq2_s3_path $FASTQ2 \
--reference_s3_path $REF \
--working_dir /scratch 

Locally running containers can typically expand their CPU and memory resource headroom. In AWS Batch, the CPU and memory requirements are hard limits and are allocated to the container image at runtime.

Isaac is a fairly resource-intensive algorithm, as it creates an uncompressed index of the reference genome in memory to match the query DNA sequences. The large memory space is shared across multiple CPU threads, and Isaac can scale almost linearly with the number of CPU threads given to it as a parameter.

To fit these characteristics, choose an optimal instance size to maximize the number of CPU threads based on a given large memory footprint, and deploy a Docker container that uses all of the instance resources. In this case, we chose a host instance with 80+ GB of memory and 32+ vCPUs. The following code is example JSON that you can pass to the AWS CLI to create a job definition for Isaac.

aws batch register-job-definition --job-definition-name isaac-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/isaac",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":80000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

You can copy and paste the following code for the other three job definitions:

aws batch register-job-definition --job-definition-name strelka-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/strelka",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":32000,
"vcpus":32,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name snpeff-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/snpeff",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

aws batch register-job-definition --job-definition-name samtoolsStats-${ENV} --type container --retry-strategy attempts=3 --container-properties '
{"image": "'${REGISTRY}'/samtools_stats",
"jobRoleArn":"'${JOBROLEARN}'",
"memory":10000,
"vcpus":4,
"mountPoints": [{"containerPath": "/scratch", "readOnly": false, "sourceVolume": "docker_scratch"}],
"volumes": [{"name": "docker_scratch", "host": {"sourcePath": "/docker_scratch"}}]
}'

The value for "image" comes from the previous post on creating a Docker image and publishing to ECR. The value for jobRoleArn you can find from the output of the CloudFormation template that you deployed earlier. In addition to providing the number of CPU cores and memory required by Isaac, you also give it a storage volume for scratch and staging. The volume comes from the previously defined custom AMI.

Testing the environment

After you have created the Isaac job definition, you can submit the job using the AWS Batch submitJob API action. While the base mappings for Docker run are taken care of in the job definition that you just built, the specific job parameters should be specified in the container overrides section of the API call. Here’s what this would look like in the CLI, using the same parameters as in the bash commands shown earlier:

aws batch submit-job --job-name testisaac --job-queue highPriority-${ENV} --job-definition isaac-${ENV}:1 --container-overrides '{
"command": [
            "--bam_s3_folder_path", "s3://mybucket/genomic-workflow/test_batch/bam/",
            "--fastq1_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_1.fastq.gz",
            "--fastq2_s3_path", "s3://aws-batch-genomics-resources/fastq/SRR1919605_2.fastq.gz",
            "--reference_s3_path", "s3://aws-batch-genomics-resources/reference/isaac/",
            "--working_dir", "/scratch",
            "--cmd_args", " --exome "]
}'

When you execute a submitJob call, jobId is returned. You can then track the progress of your job using the describeJobs API action:

aws batch describe-jobs --jobs <jobId returned from submitJob>

You can also track the progress of all of your jobs in the AWS Batch console dashboard.

To see exactly where a RUNNING job is, use the link in the AWS Batch console to direct you to the appropriate location in CloudWatch Logs.
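The same output can be pulled from the command line. A sketch, assuming the default /aws/batch/job log group and the log stream name reported by describe-jobs:

# Look up the job's log stream, then fetch the most recent output
LOGSTREAM=$(aws batch describe-jobs --jobs <jobId> \
    --query 'jobs[0].container.logStreamName' --output text)
aws logs get-log-events \
    --log-group-name /aws/batch/job \
    --log-stream-name "$LOGSTREAM" \
    --limit 50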

Completing the batch environment setup

To finish, create a Lambda function to submit a generic AWS Batch job.

In the Lambda console, create a Python 2.7 Lambda function named batchSubmitJob. Copy and paste the following code. This is similar to the batch-submit-job-python27 Lambda blueprint. Use the LambdaBatchExecutionRole that you created earlier. For more information about creating functions, see Step 2.1: Create a Hello World Lambda Function.

from __future__ import print_function

import json
import boto3

batch_client = boto3.client('batch')

def lambda_handler(event, context):
    # Log the received event
    print("Received event: " + json.dumps(event, indent=2))
    # Get parameters for the SubmitJob call
    # http://docs.aws.amazon.com/batch/latest/APIReference/API_SubmitJob.html
    job_name = event['jobName']
    job_queue = event['jobQueue']
    job_definition = event['jobDefinition']
    
    # containerOverrides, dependsOn, and parameters are optional
    container_overrides = event['containerOverrides'] if event.get('containerOverrides') else {}
    parameters = event['parameters'] if event.get('parameters') else {}
    depends_on = event['dependsOn'] if event.get('dependsOn') else []
    
    try:
        response = batch_client.submit_job(
            dependsOn=depends_on,
            containerOverrides=container_overrides,
            jobDefinition=job_definition,
            jobName=job_name,
            jobQueue=job_queue,
            parameters=parameters
        )
        
        # Log response from AWS Batch
        print("Response: " + json.dumps(response, indent=2))
        
        # Return the jobId
        event['jobId'] = response['jobId']
        return event
    
    except Exception as e:
        print(e)
        message = 'Error submitting Batch job'
        print(message)
        raise Exception(message)
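Before moving on to Part 4, you can exercise batchSubmitJob from the Lambda console with a test event shaped like the payload the handler expects. The values below are illustrative placeholders built from the job queue and job definition created earlier; the command override passes only a working directory argument and is not a complete Isaac invocation.

{
  "jobName": "test-batch-submit",
  "jobQueue": "highPriority-myenv",
  "jobDefinition": "isaac-myenv:1",
  "containerOverrides": {
    "command": ["--working_dir", "/scratch"]
  }
}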

Conclusion

In part 3 of this series, you successfully set up your data processing, or batch, environment in AWS Batch. We also provided a Python script in the corresponding GitHub repo that takes care of all of the above CLI arguments for you, as well as building out the job definitions for all of the jobs in the workflow: Isaac, Strelka, SAMtools, and snpEff. You can check the script’s README for additional documentation.

In Part 4, you’ll cover the workflow layer using AWS Step Functions and AWS Lambda.

Please leave any questions and comments below.

So You Want To Be An Internet Piracy Investigator?

Post Syndicated from Andy original https://torrentfreak.com/so-you-want-to-be-an-internet-piracy-investigator-170528/

While the authorities would like to paint a picture of Internet pirates as thoughtless thieves only interested in the theft of intellectual property, the truth is more nuanced.

Like every other online and indeed offline location, pirate sites are filled with people from all corners of society, from rich to poor, and from the basically educated to the borderline genius.

What is especially interesting is the extremely thin line between poacher and gamekeeper, between those who want to exploit intellectual property and those who want to protect it. Indeed, it is far from uncommon to find former pirates and renegade coders “going straight” by working for their former enemies.

While a repellent thought to some, it makes perfect sense. Anyone who knows the piracy scene back to front could be a valuable asset to the other side, under the right circumstances. But what does it really take to be an anti-piracy investigator?

As it happens, the UK’s Federation Against Copyright Theft is currently trying to fill exactly such a position. The job of “Internet Investigator” is based in the UK and the successful applicant will report to a manager. While that tends to suggest a lower pay grade, FACT are insistent that applicants meet stringent criteria.

“Working as a proactive member of the investigatory team to support the strategic objectives of FACT. Responsible for the detection, investigation, and protection of clients Intellectual Property whether physical or digital as directed by the Investigations Manager,” the listing reads.

More specifically, FACT is looking for someone with a “strong aptitude for investigation” who is capable of working under minimal supervision. The candidate is also required to have a proven record of liaising with “industry and enforcement organizations”, presumably including entertainment companies and the police.

At this point, things get pretty interesting. FACT says that the job involves assessing and investigating “individuals and entities” responsible for “illegal or infringing activity related to Intellectual Property.” Think torrent, streaming and IPTV site operators and staff, release group members, ‘Kodi Box’ sellers, infringing addon developers, even people flogging dodgy DVDs down the market.

When these investigations are being carried out, FACT expects evidence and intelligence to be gathered “ethically and in accordance with criminal procedure rules”, presumably so that cases don’t collapse when they end up in court. Which they often do.

Also of interest is how closely FACT appears to align its practices with those of the police. While the candidate is expected to liaise with law enforcement, they will also be expected to take part in briefings, seizure of evidence and prosecution support, all while “managing risks” and acting in accordance with UK legislation.

Another aspect of the job is a little cryptic, in that it requires the candidate to “locate offenders” and then undertake action “with an alternative approach to a proportionate solution.” That’s open to interpretation but it sounds very much like the home visits FACT has been known to make to site operators, who are asked to cease and desist while handing over their domains.

Unsurprisingly, FACT are looking for someone with a computer science degree or similar, and good organizational skills. Above that, it’s fairly obvious they’re seeking someone with a legal background, perhaps a law graduate or even a former police officer.

In addition to familiarity with the rules laid down in the Management of Police Information (MOPI) 2010, the candidate will be required to attend court hearings to give evidence. They’ll also need to conduct “intrusive surveillance” in accordance with the Regulation of Investigatory Powers Act 2000 (RIPA) and have knowledge of:

– European Convention on Human Rights Act 2000
– Police and Criminal Evidence Act 1984
– Regulation of Investigatory Powers Act 2000
– Data Protection Act 1998
– Proceeds of Crime Act 2002
– Fraud Act 2006
– Serious Crime Act 2007
– Copyright Designs & Patents Act 1988 and Trade Marks Act 1994
– Computer Misuse Act 1990
– Other applicable legislation

The window to apply has almost run out but given the laundry list of qualities above, it seems unlikely that FACT will be swamped with perfectly suitable candidates right off the bat.

Finally, it’s probably worth mentioning that former torrent site operators and release group members keen to branch out are not specifically mentioned as primary candidates, so the poacher-turned-gamekeeper applicant might want to keep that part under their hat, at least until later.

Otherwise, FACT might just slap the cuffs on there and then, in line with UK legislation and procedure, of course.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Was The Disney Movie ‘Hacking Ransom’ a Giant Hoax?

Post Syndicated from Andy original https://torrentfreak.com/was-the-disney-movie-hacking-ransom-a-giant-hoax-170524/

Last Monday, during a town hall meeting in New York, Disney CEO Bob Iger informed a group of ABC employees that hackers had stolen one of the company’s movies.

The hackers allegedly said they’d keep the leak private if Disney paid them a ransom. In response, Disney indicated that it had no intention of paying. Setting dangerous precedents in this area is unwise, the company no doubt figured.

After Hollywood Reporter broke the news, Deadline followed up with a report which further named the movie as ‘Pirates of the Caribbean: Dead Men Tell No Tales’, a fitting movie to parallel an emerging real-life swashbuckling plot, no doubt.

What the Deadline article didn’t do was offer any proof that Pirates 5 was the movie in question. Out of the blue, however, it did mention that a purported earlier leak of The Last Jedi had been revealed by “online chatter” to be a fake. Disney refused to comment.

Armed with this information, TF decided to have a dig around. Was Pirates 5 being discussed within release groups as being available, perhaps? Initially, our inquiries drew a complete blank but then out of the blue we found ourselves in conversation with the person claiming to be the Disney ‘hacker’.

“I can provide the original emails sent to Disney as well as some other unknown details,” he told us via encrypted mail.

We immediately asked several questions. Was the movie ‘Pirates 5’? How did he obtain the movie? How much did he try to extort from Disney? ‘EMH,’ as we’ll call him, quickly replied.

“It’s The Last Jedi. Bob Iger never made public the title of the film, Deadline was just going off and naming the next film on their release slate,” we were told. “We demanded 2BTC per month until September.”

TF was then given copies of correspondence that EMH had been having with numerous parties about the alleged leak. They included discussions with various release groups, a cyber-security expert, and Disney.

As seen in the screenshot, the email was purportedly sent to Disney on May 1. The Hollywood Reporter article, published two weeks later, noted the following;

“The Disney chief said the hackers demanded that a huge sum be paid in Bitcoin. They said they would release five minutes of the film at first, and then in 20-minute chunks until their financial demands are met,” HWR wrote.

While the email to Disney looked real enough, the proof of any leaked pudding is in the eating. We asked EMH how he had demonstrated to Disney that he actually has the movie in his possession. Had screenshots or clips been sent to the company? We were initially told they had not (plot twists were revealed instead) so this immediately raised suspicions.

Nevertheless, EMH then went on to suggest that release groups had shown interest in the copy and he proved that by forwarding his emails with them to TF.

“Make sure they know there is still work to be done on the CGI characters. There are little dots on their faces that are visible. And the colour grading on some scenes looks a little off,” EMH told one group, who said they understood.

“They all understand its not a completed workprint.. that is why they are sought after by buyers.. exclusive stuff nobody else has or can get,” they wrote back.

“That why they pay big $$$ for it.. a completed WP could b worth $25,000,” the group’s unedited response reads.

But despite all the emails and discussion, we were still struggling to see how EMH had shown to anyone that he really had The Last Jedi. We then learned, however, that screenshots had been sent to blogger Sam Braidley, a Cyber Security MSc and Computer Science BSc Graduate.

Since the information sent to us by EMH confirmed discussion had taken place with Braidley concerning the workprint, we contacted him directly to find out what he knew about the supposed Pirates 5 and/or The Last Jedi leak. He was very forthcoming.

“A user going by the username of ‘Darkness’ commented on my blog about having a leaked copy of The Last Jedi from a contact he knew from within Lucas Films. Of course, this garnered a lot of interest, although most were cynical of its authenticity,” Braidley explained.

The claim that ‘Darkness’ had obtained the copy from a contact within Lucas was certainly of interest, since up to now the press narrative had been that Disney or one of its affiliates had been ‘hacked.’

After confirming that ‘Darkness’ used the same email as our “EMH,” we asked EMH again. Where had the movie been obtained from?

“Wasn’t hacked. Was given to me by a friend who works at a post production company owned by [Lucasfilm],” EMH said. After further prompting he reiterated: “As I told you, we obtained it from an employee.”

If they weren’t ringing loudly enough already, alarm bells were now well and truly clanging. Who would reveal where they’d obtained a super-hot leaked movie from when the ‘friend’ is only one step removed from the person attempting the extortion? Who would take such a massive risk?

Braidley wasn’t buying it either.

“I had my doubts following the recent [Orange is the New Black] leak from ‘The Dark Overlord,’ it seemed like someone trying to live off the back of its press success,” he said.

Braidley told TF that Darkness/EMH seemed keen for him to validate the release, as a member of a well-known release group didn’t believe that it was real, something TF confirmed with the member. A screenshot was duly sent over to Braidley for his seal of approval.

“The quality was very low and the scene couldn’t really show that it was in fact Star Wars, let alone The Last Jedi,” Braidley recalls, noting that other screenshots were considered not to be from the movie in question either.

Nevertheless, Darkness/EMH later told Braidley that another big release group had only declined to release the movie due to the possibility of security watermarks being present in the workprint.

Since no groups had heard of a credible Pirates 5 leak, the claims that release groups were in discussion over the leaking of The Last Jedi intrigued us. So, through trusted sources and direct discussion with members, we tried to learn more.

While all groups admitted being involved or at least being aware of discussions taking place, none appeared to believe that a movie had been obtained from Disney, was being held for ransom, or would ever be leaked.

“Bullshit!” one told us. “Fake news,” said another.

With not even well-known release groups believing that leaks of The Last Jedi or Pirates 5 are anywhere on the horizon, that brought us full circle to the original statement by Disney chief Bob Iger claiming that a movie had been stolen.

What we do know for sure is that everything reported initially by Hollywood Reporter about a ransom demand matches up with statements made by Darkness/EMH to TorrentFreak, Braidley, and several release groups. We also know from copy emails obtained by TF that the discussions with the release groups took place well before HWR broke the story.

With Disney not commenting on the record to either HWR or Deadline (publications known to be Hollywood-friendly) it seemed unlikely that TF would succeed where they had failed.

So, without compromising any of our sources, we gave a basic outline of our findings to a previously receptive Disney contact, in an effort to tie Darkness/EMH to the email address that he told us Disney already knew. Predictably, perhaps, we received no response.

At this point one has to wonder. If no credible evidence of a leak has been made available and the threats to leak the movie haven’t been followed through on, what was the point of the whole affair?

Money appears to have been the motive, but it seems likely that none will be changing hands. But would someone really bluff the leaking of a movie to a company like Disney in order to get a ‘ransom’ payment or scam a release group out of a few dollars? Perhaps.

Braidley informs TF that Darkness/EMH recently claimed that he’d had the copy of The Last Jedi since March but never had any intention of leaking it. He did, however, need money for a personal matter involving a family relative.

With this in mind, we asked Darkness/EMH why he’d failed to carry through with his threats to leak the movie, bit by bit, as his email to Disney claimed. He said there was never any intention of leaking the movie “until we are sure it wont be traced back” but “if the right group comes forward and meets our strict standards then the leak could come as soon as 2-3 weeks.”

With that now seeming increasingly unlikely (but hey, you never know), this might be the final chapter in what turns out to be the famous hacking of Disney that never was. Or, just maybe, undisclosed aces remain up sleeves.

“Just got another comment on my blog from [Darkness],” Braidley told TF this week. “He now claims that the Emoji movie has been leaked and is being held to ransom.”

Simultaneously, he was telling TF the same thing. ‘Hacking’ announcement from Sony coming soon? Stay tuned…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Hello World issue 2: celebrating ten years of Scratch

Post Syndicated from Carrie Anne Philbin original https://www.raspberrypi.org/blog/hello-world-issue-2/

We are very excited to announce that issue 2 of Hello World is out today! Hello World is our magazine about computing and digital making, written by educators, for educators. It is a collaboration between the Raspberry Pi Foundation and Computing at School, part of the British Computer Society.

We’ve been extremely fortunate to be granted an exclusive interview with Mitch Resnick, Leader of the Scratch Team at MIT, and it’s in the latest issue. All around the world, educators and enthusiasts are celebrating ten years of Scratch, MIT’s block-based programming language. Scratch has helped millions of people to learn the building blocks of computer programming through play, and is our go-to tool at Code Clubs everywhere.

Cover of issue 2 of hello world magazine

A magazine by educators, for educators.

This packed edition of Hello World also includes news, features, lesson activities, research and opinions from Computing At School Master Teachers, Raspberry Pi Certified Educators, academics, informal learning leaders and brilliant classroom teachers. Highlights (for me) include:

  • A round-up of digital making research from Oliver Quinlan
  • Safeguarding children online by Penny Patterson
  • Embracing chaos inside and outside the classroom with Code Club’s Rik Cross, Raspberry Jam-maker-in-chief Ben Nuttall, Raspberry Pi Certified Educator Sway Grantham, and CPD trainer Alan O’Donohoe
  • How MicroPython on the Micro:bit is inspiring a generation, by Nicholas Tollervey
  • Incredibly useful lesson activities on programming graphical user interfaces (GUIs) with guizero, simulating logic gates in Minecraft, and introducing variables through storytelling.
  • Exploring computing and gender through Girls Who Code, Cyber First Girls, the BCS Lovelace Colloquium, and Computing At School’s #include initiative
  • A review of browser-based IDEs

Get your copy

Hello World is available as a free Creative Commons download for anyone around the world who is interested in Computer Science and digital making education. Grab the latest issue straight from the Hello World website.

Thanks to the very generous support of our sponsors BT, we are able to offer a free printed version of the magazine to serving educators in the UK. It’s for teachers, Code Club volunteers, teaching assistants, teacher trainers, and others who help children and young people learn about computing and digital making. Remember to subscribe to receive your free copy, posted directly to your home.

Get involved

Are you an educator? Then Hello World needs you! As a magazine for educators by educators, we want to hear about your experiences in teaching technology. If you hear a little niggling voice in your head say “I’m just a teacher, why would my contributions be useful to anyone else?” stop immediately. We want to hear from you, because you are amazing!

Get in touch: [email protected] with your ideas, and we can help get them published.

The post Hello World issue 2: celebrating ten years of Scratch appeared first on Raspberry Pi.

Join Us at the 10th Annual Hadoop Summit / DataWorks Summit, San Jose (Jun 13-15)

Post Syndicated from mikesefanov original https://yahooeng.tumblr.com/post/160966148886

yahoohadoop:

We’re excited to co-host the 10th Annual Hadoop Summit, the leading conference for the Apache Hadoop community, taking place on June 13 – 15 at the San Jose Convention Center. In the last few years, the Hadoop Summit has expanded to cover all things data beyond just Apache Hadoop – such as data science, cloud and operations, IoT and applications – and has been aptly renamed the DataWorks Summit. The three-day program is bursting at the seams! Here are just a few of the reasons why you cannot miss this must-attend event:

  • Familiarize yourself with the cutting edge in Apache project developments from the committers
  • Learn from your peers and industry experts about innovative and real-world use cases, development and administration tips and tricks, success stories and best practices to leverage all your data – on-premise and in the cloud – to drive predictive analytics, distributed deep-learning and artificial intelligence initiatives
  • Attend one of our more than 170 technical deep dive breakout sessions from nearly 200 speakers across eight tracks
  • Check out our keynotes, meetups, trainings, technical crash courses, birds-of-a-feather sessions, Women in Big Data and more
  • Attend the community showcase where you can network with sponsors and industry experts, including a host of startups and large companies like Microsoft, IBM, Oracle, HP, Dell EMC and Teradata

Similar to previous years, we look forward to continuing Yahoo’s decade-long tradition of thought leadership at this year’s summit. Join us for an in-depth look at Yahoo’s Hadoop culture and for the latest in technologies such as Apache Tez, HBase, Hive, Data Highway Rainbow, Mail Data Warehouse and Distributed Deep Learning at the breakout sessions below. Or, stop by Yahoo kiosk #700 at the community showcase.

Also, as a co-host of the event, Yahoo is pleased to offer a 20% discount for the summit with the code MSPO20. Register here for Hadoop Summit, San Jose, California!


DAY 1. TUESDAY June 13, 2017


12:20 – 1:00 P.M. TensorFlowOnSpark – Scalable TensorFlow Learning On Spark Clusters

Andy Feng – VP Architecture, Big Data and Machine Learning

Lee Yang – Sr. Principal Engineer

In this talk, we will introduce a new framework, TensorFlowOnSpark, for scalable TensorFlow learning, that was open sourced in Q1 2017. This new framework enables easy experimentation for algorithm designs, and supports scalable training & inferencing on Spark clusters. It supports all TensorFlow functionalities including synchronous & asynchronous learning, model & data parallelism, and TensorBoard. It provides architectural flexibility for data ingestion to TensorFlow and network protocols for server-to-server communication. With a few lines of code changes, an existing TensorFlow algorithm can be transformed into a scalable application.

2:10 – 2:50 P.M. Handling Kernel Upgrades at Scale – The Dirty Cow Story

Samy Gawande – Sr. Operations Engineer

Savitha Ravikrishnan – Site Reliability Engineer

Apache Hadoop at Yahoo is a massive platform with 36 different clusters spread across YARN, Apache HBase, and Apache Storm deployments, totaling 60,000 servers made up of 100s of different hardware configurations accumulated over generations, presenting unique operational challenges and a variety of unforeseen corner cases. In this talk, we will share methods, tips and tricks to deal with large scale kernel upgrade on heterogeneous platforms within tight timeframes with 100% uptime and no service or data loss through the Dirty COW use case (privilege escalation vulnerability found in the Linux Kernel in late 2016).

5:00 – 5:40 P.M. Data Highway Rainbow – Petabyte Scale Event Collection, Transport, and Delivery at Yahoo

Nilam Sharma – Sr. Software Engineer

Huibing Yin – Sr. Software Engineer

This talk presents the architecture and features of Data Highway Rainbow, Yahoo’s hosted multi-tenant infrastructure which offers event collection, transport and aggregated delivery as a service. Data Highway supports collection from multiple data centers & aggregated delivery in primary Yahoo data centers which provide a big data computing cluster. From a delivery perspective, Data Highway supports endpoints/sinks such as HDFS, Storm and Kafka; with Storm & Kafka endpoints tailored towards latency sensitive consumers.


DAY 2. WEDNESDAY June 14, 2017


9:05 – 9:15 A.M. Yahoo General Session – Shaping Data Platform for Lasting Value

Sumeet Singh  – Sr. Director, Products

With a long history of open innovation with Hadoop, Yahoo continues to invest in and expand the platform capabilities by pushing the boundaries of what the platform can accomplish for the entire organization. In the last 11 years (yes, it is that old!), the Hadoop platform has shown no signs of giving up or giving in. In this talk, we explore what makes the shared multi-tenant Hadoop platform so special at Yahoo.

12:20 – 1:00 P.M. CaffeOnSpark Update – Recent Enhancements and Use Cases

Mridul Jain – Sr. Principal Engineer

Jun Shi – Principal Engineer

By combining salient features from the deep learning framework Caffe and the big-data frameworks Apache Spark and Apache Hadoop, CaffeOnSpark enables distributed deep learning on a cluster of GPU and CPU servers. We released CaffeOnSpark as an open source project in early 2016, and shared its architecture design and basic usage at Hadoop Summit 2016. In this talk, we will update audiences on the recent development of CaffeOnSpark. We will highlight new features and capabilities: a unified data layer that supports multi-label datasets, distributed LSTM training, interleaved testing with training, a monitoring/profiling framework, and Docker deployment.

12:20 – 1:00 P.M. Tez Shuffle Handler – Shuffling at Scale with Apache Hadoop

Jon Eagles – Principal Engineer  

Kuhu Shukla – Software Engineer

In this talk we introduce a new Shuffle Handler for Tez, a YARN Auxiliary Service, that addresses the shortcomings and performance bottlenecks of the legacy MapReduce Shuffle Handler, the default shuffle service in Apache Tez. The Apache Tez Shuffle Handler adds composite fetch which has support for multi-partition fetch to mitigate performance slow down and provides deletion APIs to reduce disk usage for long running Tez sessions. As an emerging technology we will outline future roadmap for the Apache Tez Shuffle Handler and provide performance evaluation results from real world jobs at scale.

2:10 – 2:50 P.M. Achieving HBase Multi-Tenancy with RegionServer Groups and Favored Nodes

Thiruvel Thirumoolan – Principal Engineer

Francis Liu – Sr. Principal Engineer

At Yahoo! HBase has been running as a hosted multi-tenant service since 2013. In a single HBase cluster we have around 30 tenants running various types of workloads (ie batch, near real-time, ad-hoc, etc). We will walk through multi-tenancy features explaining our motivation, how they work as well as our experiences running these multi-tenant clusters. These features will be available in Apache HBase 2.0.

2:10 – 2:50 P.M. Data Driving Yahoo Mail Growth and Evolution with a 50 PB Hadoop Warehouse

Nick Huang – Director, Data Engineering, Yahoo Mail  

Saurabh Dixit – Sr. Principal Engineer, Yahoo Mail

Since 2014, the Yahoo Mail Data Engineering team took on the task of revamping the Mail data warehouse and analytics infrastructure in order to drive the continued growth and evolution of Yahoo Mail. Along the way we have built a 50 PB Hadoop warehouse, and surrounding analytics and machine learning programs that have transformed the way data plays in Yahoo Mail. In this session we will share our experience from this 3 year journey, from the system architecture, analytics systems built, to the learnings from development and drive for adoption.

DAY 3. THURSDAY June 15, 2017


2:10 – 2:50 P.M. OracleStore – A Highly Performant RawStore Implementation for Hive Metastore

Chris Drome – Sr. Principal Engineer  

Jin Sun – Principal Engineer

Today, Yahoo uses Hive in many different spaces, from ETL pipelines to adhoc user queries. Increasingly, we are investigating the practicality of applying Hive to real-time queries, such as those generated by interactive BI reporting systems. In order for Hive to succeed in this space, it must be performant in all aspects of query execution, from query compilation to job execution. One such component is the interaction with the underlying database at the core of the Metastore. As an alternative to ObjectStore, we created OracleStore as a proof-of-concept. Freed of the restrictions imposed by DataNucleus, we were able to design a more performant database schema that better met our needs. Then, we implemented OracleStore with specific goals built-in from the start, such as ensuring the deduplication of data. In this talk we will discuss the details behind OracleStore and the gains that were realized with this alternative implementation. These include a reduction of 97%+ in the storage footprint of multiple tables, as well as query performance that is 13x faster than ObjectStore with DirectSQL and 46x faster than ObjectStore without DirectSQL.

3:00 P.M. – 3:40 P.M. Bullet – A Real Time Data Query Engine

Akshai Sarma – Sr. Software Engineer

Michael Natkovich – Director, Engineering

Bullet is an open sourced, lightweight, pluggable querying system for streaming data without a persistence layer implemented on top of Storm. It allows you to filter, project, and aggregate on data in transit. It includes a UI and WS. Instead of running queries on a finite set of data that arrived and was persisted or running a static query defined at the startup of the stream, our queries can be executed against an arbitrary set of data arriving after the query is submitted. In other words, it is a look-forward system. Bullet is a multi-tenant system that scales independently of the data consumed and the number of simultaneous queries. Bullet is pluggable into any streaming data source. It can be configured to read from systems such as Storm, Kafka, Spark, Flume, etc. Bullet leverages Sketches to perform its aggregate operations such as distinct, count distinct, sum, count, min, max, and average.

3:00 P.M. – 3:40 P.M. Yahoo – Moving Beyond Running 100% of Apache Pig Jobs on Apache Tez

Rohini Palaniswamy – Sr. Principal Engineer

Last year at Yahoo, we spent great effort in scaling, stabilizing and making Pig on Tez production ready and by the end of the year retired running Pig jobs on Mapreduce. This talk will detail the performance and resource utilization improvements Yahoo achieved after migrating all Pig jobs to run on Tez. After successful migration and the improved performance we shifted our focus to addressing some of the bottlenecks we identified and new optimization ideas that we came up with to make it go even faster. We will go over the new features and work done in Tez to make that happen like custom YARN ShuffleHandler, reworking DAG scheduling order, serialization changes, etc. We will also cover exciting new features that were added to Pig for performance such as bloom join and byte code generation.

4:10 P.M. – 4:50 P.M. Leveraging Docker for Hadoop Build Automation and Big Data Stack Provisioning

Evans Ye,  Software Engineer

Apache Bigtop, as an open source Hadoop distribution, focuses on developing packaging, testing, and deployment solutions that help infrastructure engineers build their own customized big data platform as easily as possible. However, packages deployed in production require a solid CI testing framework to ensure their quality, and the many Hadoop components must be verified to work together as well. In this presentation, we’ll talk about how Bigtop delivers its containerized CI framework, which can be directly replicated by Bigtop users. The core innovations here are the newly developed Docker Provisioner, which leverages Docker for Hadoop deployment, and the Docker Sandbox, which lets developers quickly start a big data stack. The content of this talk includes the containerized CI framework, the technical details of the Docker Provisioner and Docker Sandbox, the hierarchy of Docker images we designed, and several components we developed, such as the Bigtop Toolchain, to achieve build automation.

Register here for Hadoop Summit, San Jose, California with a 20% discount code MSPO20

Questions? Feel free to reach out to us at [email protected] Hope to see you there!

Elsevier Wants $15 Million Piracy Damages From Sci-Hub and Libgen

Post Syndicated from Ernesto original https://torrentfreak.com/elsevier-wants-15-million-piracy-damages-from-sci-hub-and-libgen-170518/

Two years ago, academic publisher Elsevier filed a complaint against Sci-Hub, Libgen and several related “pirate” sites.

The publisher accused the websites of making academic papers widely available to the public, without permission.

While Sci-Hub and Libgen are nothing like the average pirate site, they are just as illegal according to Elsevier’s legal team, which swiftly obtained a preliminary injunction from a New York District Court.

The injunction ordered Sci-Hub’s founder Alexandra Elbakyan, who is the only named defendant, to quit offering access to any Elsevier content. This didn’t happen, however.

Sci-Hub and the other websites lost control over several domain names, but were quick to bounce back. They remain operational today and have no intention of shutting down, despite pressure from the Court.

This prompted Elsevier to request a default judgment and a permanent injunction against the Sci-Hub and Libgen defendants. In a motion filed this week, Elsevier’s legal team describes the sites as pirate havens.

“Defendants’ websites exist for the sole purpose of providing unauthorized and unlawful access to the copyrighted works of Elsevier and other scientific publishers. Collectively, Defendants are responsible for the piracy of millions of Elseviers’ copyrighted works as well as millions of works published by others.”

As compensation for the losses it has suffered, Elsevier is now demanding $15 million in damages. The publisher lists 100 works as evidence and argues that the maximum amount of $150,000 in statutory damages is warranted in this case.

“Here, Defendants’ willful conduct rises to the level of truly egregious conduct, justifying the maximum statutory damages of $150,000 per infringed work,” Elsevier’s team writes (pdf).

“The Preliminary Injunction constituted such notice by a court, and Defendants have flagrantly disregarded the Preliminary Injunction by continuing to operate their piracy enterprises.”

Not only did the defendants ignore the preliminary injunction by continuing to operate their websites, Sci-Hub’s operator stated that she chose to willingly disregard the court order.

“Moreover, Elbakyan has publicly stated that she is aware that Sci-Hub’s actions are unlawful and that this Court has enjoined her infringing activities, but that she intends to continue to defy the Court’s Order.”

The amount is also justified based on the scale of infringement, Elsevier stresses. The sites in question offer dozens of millions of copyrighted works which are downloaded hundreds of thousands of times per day.

A good chunk of these papers are copyrighted, many by Elsevier. In fact, when the original complaint was filed, Elsevier had trouble locating ScienceDirect-hosted articles that were not available through Libgen.

“Here, the scale of Defendants’ infringement is so staggering that a reasonable estimate of appropriate damages, even if based on a lower, license-fee-based metric, would be difficult, if not impossible, to calculate,” Elsevier’s legal team writes.

Since the court’s clerk has already entered a default against the defendants, it’s likely that Elsevier will win the case. As a result, Sci-Hub and Libgen will likely have to relocate again. Whether Elsevier will see any damages from the defendants has yet to be seen.

Sci-Hub founder Alexandra Elbakyan wasn’t really sure how to comment on the million dollar claims. She described Elsevier’s requests as “funny” and “ridiculous,” while confirming that the site is not going anywhere.

“The Sci-Hub will continue as usual. In case of problems with the domain names, users can rely on TOR scihub22266oqcxt.onion,” Elbakyan tells us.

In hindsight, Elsevier may regret its decision to take legal action.

Instead of taking Sci-Hub and Libgen down, the lawsuit and the associated media attention only helped them grow. Last year we reported that its users were downloading hundreds of thousands of papers per day from Sci-Hub, a number that has likely increased since.

Also, Elbakyan is now seen as a hero by several prominent academics, illustrated by the fact that the prestigious publication Nature listed her as one of the top ten people that mattered in science last year.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Crash Course Computer Science with Carrie Anne Philbin

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/crash-course-carrie-anne-philbin/

Get your teeth into the history of computer science with our Director of Education, Carrie Anne Philbin, and the team at YouTube’s incredible Crash Course channel.

Crash Course Computer Science Preview

Starting February 22nd, Carrie Anne Philbin will be hosting Crash Course Computer Science! In this series, we’re going to trace the origins of our modern computers, take a closer look at the ideas that gave us our current hardware and software, discuss how and why our smart devices just keep getting smarter, and even look towards the future!

The brainchild of Hank and John Green (the latter of whom is responsible for books such as The Fault in Our Stars and all of my resultant heartbroken tears), Crash Course is an educational YouTube channel specialising in courses for school-age tuition support.

As part of the YouTube Original Channel Initiative, and with their partners PBS Digital Studios, the team has completed courses in subjects such as physics, hosted by Dr. Shini Somara, astronomy with Phil Plait, and sociology with Nicole Sweeney.

Raspberry Pi Carrie Anne Philbin Crash Course

Oh, and they’ve recently released a new series on computer science with Carrie Anne Philbin, whom you may know as Raspberry Pi’s Director of Education and the host of YouTube’s Geek Gurl Diaries.

Computer Science with Carrie Anne

Covering topics such as RAM, Boolean logic, CPU design, and binary, the course is currently up to episode twelve of its run. Episodes are released every Tuesday, and there are lots more to come.

Crash Course Carrie Anne Philbin Raspberry Pi

Following the fast-paced, visual style of the Crash Course brand, Carrie Anne takes her viewers on a journey from early computing with Lovelace and Babbage through to the modern-day electronics that power our favourite gadgets such as tablets, mobile phones, and small single-board microcomputers…

The response so far

A few members of the Raspberry Pi team recently attended VidCon Europe in Amsterdam to learn more about making video content for our community – and also so I could exist in the same space as the Holy Trinity, albeit briefly.

At VidCon, Carrie Anne took part in an engaging and successful Women in Science panel with Sally Le Page, Viviane Lalande, Hana Shoib, Maddie Moate, and fellow Crash Course presenter Dr. Shini Somara. I could see that Crash Course Computer Science was going down well from the number of people who approached Carrie Anne to thank her for the course, from those who were learning for the first time to people who were rediscovering the subject.

Crash Course Carrie Anne Philbin Raspberry Pi

Take part in the conversation

Join in the conversation! Head over to YouTube, watch Crash Course Computer Science, and join the discussion in the comments.

Crash Course Carrie Anne Philbin Raspberry Pi

You can also follow Crash Course on Twitter for release updates, and subscribe on YouTube to get notifications of new content.

Oh, and who can spot the sneaky Raspberry Pi in the video introduction?

“Cheers!”

Crash Course Computer Science Outtakes

In which Carrie Anne presents a new sing-a-long format and faces her greatest challenge yet – signing off an episode.

The post Crash Course Computer Science with Carrie Anne Philbin appeared first on Raspberry Pi.

Build a Healthcare Data Warehouse Using Amazon EMR, Amazon Redshift, AWS Lambda, and OMOP

Post Syndicated from Ryan Hood original https://aws.amazon.com/blogs/big-data/build-a-healthcare-data-warehouse-using-amazon-emr-amazon-redshift-aws-lambda-and-omop/

In the healthcare field, data comes in all shapes and sizes. Despite efforts to standardize terminology, some concepts (e.g., blood glucose) are still often depicted in different ways. This post demonstrates how to convert an openly available dataset called MIMIC-III, which consists of de-identified medical data for about 40,000 patients, into an open source data model known as the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). It describes the architecture and steps for analyzing data across various disconnected sources of health datasets so you can start applying Big Data methods to health research.

Note: If you arrived at this page looking for more info on the movie Mimic 3: Sentinel, you might not enjoy this post.

OMOP overview

The OMOP CDM helps standardize healthcare data and makes it easier to analyze outcomes at a large scale. The CDM is gaining a lot of traction in the health research community, which is deeply involved in developing and adopting a common data model. Community resources are available for converting datasets, and there are software tools to help unlock your data after it’s in the OMOP format. The great advantage of converting data sources into a standard data model like OMOP is that it allows for streamlined, comprehensive analytics and helps remove the variability associated with analyzing health records from different sources.

OMOP ETL with Apache Spark

Observational Health Data Sciences and Informatics (OHDSI) provides the OMOP CDM in a variety of formats, including Apache Impala, Oracle, PostgreSQL, and SQL Server. (See the OHDSI Common Data Model repo in GitHub.) In this scenario, the data is moved to AWS to take advantage of the unbounded scale of Amazon EMR and serverless technologies, and the variety of AWS services that can help make sense of the data in a cost-effective way—including Amazon Machine Learning, Amazon QuickSight, and Amazon Redshift.

This example demonstrates an architecture that can be used to run SQL-based extract, transform, load (ETL) jobs to map any data source to the OMOP CDM. It uses MIMIC ETL code provided by Md. Shamsuzzoha Bayzid. The code was modified to run in Amazon Redshift.

Getting access to the MIMIC-III data

Before you can retrieve the MIMIC-III data, you must request access on the PhysioNet website, which is hosted on Amazon S3 as part of the Amazon Web Services (AWS) Public Dataset Program. However, you don’t need access to the MIMIC-III data to follow along with this post.

Solution architecture and loading process

The following diagram shows the architecture that is used to convert the MIMIC-III dataset to the OMOP CDM.

The data conversion process includes the following steps:

  1. The entire infrastructure is spun up using an AWS CloudFormation template. This includes the Amazon EMR cluster, Amazon SNS topics/subscriptions, an AWS Lambda function and trigger, and AWS Identity and Access Management (IAM) roles.
  2. The MIMIC-III data is read in via an Apache Spark program that is running on Amazon EMR. The files are registered as tables in Spark so that they can be queried by Spark SQL.
  3. The transformation queries are located in a separate Amazon S3 location, which is read in by Spark and executed on the newly registered tables to convert the data into OMOP form.
  4. The data is then written to a staging S3 location, where it is ready to be copied into Amazon Redshift.
  5. As each file is loaded in OMOP form into S3, the Spark program sends a message to an SNS topic that signifies that the load completed successfully.
  6. After that message is pushed, it triggers a Lambda function that consumes the message and executes a COPY command from S3 into Amazon Redshift for the appropriate table.

This architecture provides a scalable way to use various healthcare sources and convert them to OMOP format, where the only changes needed are in the SQL transformation files. The transformation logic is stored in an S3 bucket and is completely de-coupled from the Apache Spark program that runs on EMR and converts the data into OMOP form. This makes the transformation code portable and allows the Spark jar to be reused if other data sources are added—for example, electronic health records (EHR), billing systems, and other research datasets.
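
To make that decoupling concrete, the core of the Spark step can be reduced to a sketch along the following lines. The bucket names, keys, and view name are placeholders, and the actual program in the repo is more elaborate (it loops over every table and publishes an SNS message as each load finishes):

import boto3
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mimic-to-omop").getOrCreate()

# Register a raw MIMIC-III file as a table that Spark SQL can query.
# The bucket and key names are placeholders.
spark.read.csv("s3://my-mimic-bucket/PATIENTS.csv", header=True) \
     .createOrReplaceTempView("patients")

# Pull the transformation SQL from its own S3 location, keeping the
# OMOP mapping logic decoupled from this program.
sql_text = boto3.client("s3").get_object(
    Bucket="my-etl-bucket", Key="transform/person.sql"
)["Body"].read().decode("utf-8")

# Run the mapping and stage the result where the Redshift COPY expects it.
spark.sql(sql_text).write.mode("overwrite") \
     .csv("s3://my-staging-bucket/omop/person/")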

Note: For larger files, you might experience the five-minute timeout limitation in Lambda. In that scenario you can use AWS Step Functions to split the file and load it one piece at a time.
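
For reference, the Lambda function described in step 6 could look roughly like the sketch below. It assumes the SNS message carries the target table name and staging prefix, that the Amazon Redshift connection details arrive through environment variables (hypothetical names), and that a PostgreSQL driver such as psycopg2 is packaged with the function; the actual function in the repo may be organized differently.

import json
import os

import psycopg2  # assumed to be bundled with the deployment package

def lambda_handler(event, context):
    # The Spark job publishes a small JSON payload when a table finishes
    # staging, e.g. {"table": "person", "s3_path": "s3://bucket/omop/person/"}.
    msg = json.loads(event['Records'][0]['Sns']['Message'])

    # Connection settings via environment variables (hypothetical names).
    conn = psycopg2.connect(
        host=os.environ['REDSHIFT_HOST'],
        port=int(os.environ.get('REDSHIFT_PORT', '5439')),
        dbname=os.environ['REDSHIFT_DB'],
        user=os.environ['REDSHIFT_USER'],
        password=os.environ['REDSHIFT_PASSWORD'])
    try:
        with conn.cursor() as cur:
            # COPY the staged OMOP files for this table into Amazon Redshift,
            # using an IAM role attached to the cluster for S3 access.
            cur.execute(
                "COPY {} FROM %s IAM_ROLE %s CSV".format(msg['table']),
                (msg['s3_path'], os.environ['REDSHIFT_COPY_ROLE_ARN']))
        conn.commit()
    finally:
        conn.close()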

Scaling the solution

The transformation code runs in a Spark container that can scale out based on how you define your EMR cluster. There are no single points of failure. As your data grows, your infrastructure can grow without requiring any changes to the underlying architecture.

If you add more data sources, such as EHRs and other research data, the high-level view of the ETL would look like the following:

In this case, the loads of the different systems are completely independent. If the EHR load is four times the size that you expected and uses all the resources, it has no impact on the Research Data or HR System loads because they are in separate containers.

You can scale your EMR cluster based on the size of the data that you anticipate. For example, you can have a 50-node cluster in your container for loading EHR data and a 2-node cluster for loading the HR System. This design helps you scale the resources based on what you consume, as opposed to expensive infrastructure sitting idle.

The only code that is unique to each execution is any diffs between the CloudFormation templates (e.g., cluster size and SQL file locations) and the transformation SQL that resides in S3 buckets. The Spark jar that is executed as an EMR step is reused across all three executions.

Upgrading versions

In this architecture, upgrading the versions of Amazon EMR, Apache Hadoop, or Spark requires a one-time change to one line of code in the CloudFormation template:

"EMRC2SparkBatch": {
      "Type": "AWS::EMR::Cluster",
      "Properties": {
        "Applications": [
          {
            "Name": "Hadoop"
          },
          {
            "Name": "Spark"
          }
        ],
        "Instances": {
          "MasterInstanceGroup": {
            "InstanceCount": 1,
            "InstanceType": "m3.xlarge",
            "Market": "ON_DEMAND",
            "Name": "Master"
          },
          "CoreInstanceGroup": {
            "InstanceCount": 1,
            "InstanceType": "m3.xlarge",
            "Market": "ON_DEMAND",
            "Name": "Core"
          },
          "TerminationProtected": false
        },
        "Name": "EMRC2SparkBatch",
        "JobFlowRole": { "Ref": "EMREC2InstanceProfile" },
        "ServiceRole": { "Ref": "EMRRole" },
        "ReleaseLabel": "emr-5.0.0",
        "VisibleToAllUsers": true
      }
}

Note that this example uses a slightly lower EMR release so that it can run Spark 2.0.0 instead of Spark 2.1.0, because Spark 2.1.0 does not support nulls in CSV files.

You can also select the version in the Release list in the General Configuration section of the EMR console:

The data sources all have different CloudFormation templates, so you can upgrade one data source at a time or upgrade them all together. As long as the reusable Spark jar is compatible with the new version, none of the transformation code has to change.

Executing queries on the data

After all the data is loaded, it’s easy to tear down the CloudFormation stack so you don’t pay for resources that aren’t being used:

CloudFormationManager cf = new CloudFormationManager(); 
cf.terminateStack(stack);    

This includes the EMR cluster, Lambda function, SNS topics and subscriptions, and temporary IAM roles that were created to push the data to Amazon Redshift. The S3 buckets that contain the raw MIMIC-III data and the data in OMOP form remain because they existed outside the CloudFormation stack.
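
The CloudFormationManager helper above presumably comes from the project’s own code; if you would rather tear the stack down directly, the equivalent boto3 call is a one-liner (the stack name below is a placeholder):

import boto3

# Delete the CloudFormation stack and everything it created.
# 'mimic-omop-etl' is a placeholder for your actual stack name.
boto3.client('cloudformation').delete_stack(StackName='mimic-omop-etl')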

You can now connect to the Amazon Redshift cluster and start executing queries on the ten OMOP tables that were created, as shown in the following example:

select *
from drug_exposure
limit 100;

OMOP analytics tools

For information about open source analytics tools that are built on top of the OMOP model, visit the OHDSI Software page.

The following are examples of data visualizations provided by Achilles, an open source visualization tool for OMOP.

Conclusion

This post demonstrated how to convert MIMIC-III data into OMOP form using data tools that are built for scale and flexibility. It compared the architecture against a traditional data warehouse and showed how this design scales by mixing a scale-out technology with EMR and a serverless technology with Lambda. It also showed how you can lower your costs by using CloudFormation to create your data pipeline infrastructure. And by tearing down the stack after the data is loaded, you don’t pay for idle servers.

You can find all the code in the AWS Labs GitHub repo with detailed, step-by-step instructions on how to load the data from MIMIC-III to OMOP using this design.

If you have any questions or suggestions, please add them below.


About the Author

Ryan Hood is a Data Engineer for AWS. He works on big data projects leveraging the newest AWS offerings. In his spare time, he enjoys watching the Cubs win the World Series and attempting to Sous-vide anything he can find in his refrigerator.

RFD: the alien abduction prophecy protocol

Post Syndicated from Michal Zalewski original http://lcamtuf.blogspot.com/2017/05/rfd-alien-abduction-prophecy-protocol.html

“It’s tough to make predictions, especially about the future.”
– variously attributed to Yogi Berra and Niels Bohr

Right. So let’s say you are visited by transdimensional space aliens from outer space. There’s some old-fashioned probing, but eventually, they get to the point. They outline a series of apocalyptic prophecies, beginning with the surprise 2032 election of Dwayne Elizondo Mountain Dew Herbert Camacho as the President of the United States, followed by a limited-scale nuclear exchange with the Grand Duchy of Ruritania in 2036, and culminating with the extinction of all life due to a series of cascading Y2K38 failures that start at an Ohio pretzel reprocessing plant. Long story short, if you want to save mankind, you have to warn others of what’s to come.

But there’s a snag: when you wake up in a roadside ditch in Alabama, you realize that nobody is going to believe your story! If you come forward, your professional and social reputation will be instantly destroyed. If you’re lucky, the vindication of your claims will come fifteen years later; if not, it might turn out that you were pranked by some space alien frat boys who just wanted to have some cheap space laughs. The bottom line is, you need to be certain before you make your move. You figure this means staying mum until the Election Day of 2032.

But wait, this plan is also not very good! After all, how could your future self convince others that you knew about President Camacho all along? Well… if you work in information security, you are probably familiar with a neat solution: write down your account of events in a text file, calculate a cryptographic hash of this file, and publish the resulting value somewhere permanent. Fifteen years later, reveal the contents of your file and point people to your old announcement. Explain that you must have been in the possession of this very file back in 2017; otherwise, you would not have known its hash. Voila – a commitment scheme!

Although elegant, this approach can be risky: historically, the usable life of cryptographic hash functions seemed to hover at somewhere around 15 years – so even if you pick a very modern algorithm, there is a real risk that future advances in cryptanalysis could severely undermine the strength of your proof. No biggie, though! For extra safety, you could combine several independent hashing functions, or increase the computational complexity of the hash by running it in a loop. There are also some less-known hash functions, such as SPHINCS, that are designed with different trade-offs in mind and may offer longer-term security guarantees.
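
For what it’s worth, the whole commitment boils down to a few lines of Python (3.6+ for the SHA-3 family); the file name is a placeholder, and the million-round loop is just the crude hardening mentioned above:

import hashlib

# Commit to the contents of prophecy.txt without revealing it.
with open("prophecy.txt", "rb") as f:
    data = f.read()

# Two unrelated constructions, in case one of them ages badly by 2032...
sha2 = hashlib.sha256(data).digest()
sha3 = hashlib.sha3_512(data).hexdigest()

# ...plus a cheap way to raise the computational cost: iterate the hash.
for _ in range(1_000_000):
    sha2 = hashlib.sha256(sha2).digest()

print("publish this:", sha2.hex(), sha3)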

Of course, the computation of the hash is not enough; it needs to become an immutable part of the public record and remain easy to look up for years to come. There is no guarantee that any particular online publishing outlet is going to stay afloat that long and continue to operate in its current form. The survivability of more specialized and experimental platforms, such as blockchain-based notaries, seems even less clear. Thankfully, you can resort to another kludge: if you publish the hash through a large number of independent online venues, there is a good chance that at least one of them will be around in 2032.

(Offline notarization – whether of the pen-and-paper or the PKI-based variety – offers an interesting alternative. That said, in the absence of an immutable, public ledger, accusations of forgery or collusion would be very easy to make – especially if the fate of the entire planet is at stake.)

Even with this out of the way, there is yet another profound problem with the plan: a current-day scam artist could conceivably generate hundreds or thousands of political predictions, publish the hashes, and then simply discard or delete the ones that do not come true by 2032 – thus creating an illusion of prescience. To convince skeptics that you are not doing just that, you could incorporate a cryptographic proof of work into your approach, attaching a particular CPU time “price tag” to every hash. The future you could then claim that it would have been prohibitively expensive for the former you to attempt the “prediction spam” attack. But this argument seems iffy: a $1,000 proof may already be too costly for a lower middle class abductee, while a determined tech billionaire could easily spend $100,000 to pull off an elaborate prank on the entire world. Not to mention, massive CPU resources can be commandeered with little or no effort by the operators of large botnets and many other actors of this sort.

In the end, my best idea is to rely on an inherently low-bandwidth publication medium, rather than a high-cost one. For example, although a determined hoaxer could place thousands of hash-bearing classifieds in some of the largest-circulation newspapers, such sleight-of-hand would be trivial for future sleuths to spot (at least compared to combing through the entire Internet for an abandoned hash). Or, as per an anonymous suggestion relayed by Thomas Ptacek: just tattoo the signature on your body, then post some pics; there are only so many places for a tattoo to go.

Still, what was supposed to be a nice, scientific proof devolved into a bunch of hand-wavy arguments and poorly-quantified probabilities. For the sake of future abductees: is there a better way?

Roundup of AWS HIPAA Eligible Service Announcements

Post Syndicated from Ana Visneski original https://aws.amazon.com/blogs/aws/roundup-of-aws-hipaa-eligible-service-announcements/

At AWS we have had a number of HIPAA eligible service announcements. Patrick Combes, the Healthcare and Life Sciences Global Technical Leader at AWS, and Aaron Friedman, a Healthcare and Life Sciences Partner Solutions Architect at AWS, have written this post to tell you all about it.

-Ana


We are pleased to announce that the following AWS services have been added to the BAA in recent weeks: Amazon API Gateway, AWS Direct Connect, AWS Database Migration Service, and Amazon SQS. All four of these services facilitate moving data into and through AWS, and we are excited to see how customers will be using these services to advance their solutions in healthcare. While we know the use cases for each of these services are vast, we wanted to highlight some ways that customers might use these services with Protected Health Information (PHI).

As with all HIPAA-eligible services covered under the AWS Business Associate Addendum (BAA), PHI must be encrypted while at-rest or in-transit. We encourage you to reference our HIPAA whitepaper, which details how you might configure each of AWS’ HIPAA-eligible services to store, process, and transmit PHI. And of course, for any portion of your application that does not touch PHI, you can use any of our 90+ services to deliver the best possible experience to your users. You can find some ideas on architecting for HIPAA on our website.

Amazon API Gateway
Amazon API Gateway is a web service that makes it easy for developers to create, publish, monitor, and secure APIs at any scale. With PHI now able to securely transit API Gateway, applications such as patient/provider directories, patient dashboards, medical device reports/telemetry, HL7 message processing and more can securely accept and deliver information to any number and type of applications running within AWS or client presentation layers.

One particular area we are excited to see how our customers leverage Amazon API Gateway is with the exchange of healthcare information. The Fast Healthcare Interoperability Resources (FHIR) specification will likely become the next-generation standard for how health information is shared between entities. With strong support for RESTful architectures, FHIR can be easily codified within an API on Amazon API Gateway. For more information on FHIR, our AWS Healthcare Competency partner, Datica, has an excellent primer.

AWS Direct Connect
Some of our healthcare and life sciences customers, such as Johnson & Johnson, leverage hybrid architectures and need to connect their on-premises infrastructure to the AWS Cloud. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.

In addition to a hybrid-architecture strategy, AWS Direct Connect can assist with the secure migration of data to AWS, which is the first step to using the wide array of our HIPAA-eligible services to store and process PHI, such as Amazon S3 and Amazon EMR. Additionally, you can connect to third-party/externally-hosted applications or partner-provided solutions as well as securely and reliably connect end users to those same healthcare applications, such as a cloud-based Electronic Medical Record system.

AWS Database Migration Service (DMS)
To date, customers have migrated over 20,000 databases to AWS through the AWS Database Migration Service. Customers often use DMS as part of their cloud migration strategy, and now it can be used to securely and easily migrate your core databases containing PHI to the AWS Cloud. As your source database remains fully operational during the migration with DMS, you minimize downtime for these business-critical applications as you migrate your databases to AWS. This service can now be utilized to securely transfer such items as patient directories, payment/transaction record databases, revenue management databases and more into AWS.

Amazon Simple Queue Service (SQS)
Amazon Simple Queue Service (SQS) is a message queueing service for reliably communicating among distributed software components and microservices at any scale. One way that we envision customers using SQS with PHI is to buffer requests between application components that pass HL7 or FHIR messages to other parts of their application. You can leverage features like SQS FIFO queues to ensure that messages containing PHI are delivered in the order they are sent and remain available until a consumer processes and deletes them. This is important for applications that handle patient record updates or process payment information in a hospital.
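
As a small illustration, the snippet below places a message on a FIFO queue with boto3; the queue URL, group ID, and payload are placeholders, and in a real deployment the body would be an encrypted HL7 or FHIR payload rather than plain text:

import boto3

sqs = boto3.client('sqs')

# Placeholder values -- FIFO queue names must end in ".fifo", and the body
# here stands in for an encrypted HL7/FHIR payload.
queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/hl7-intake.fifo'
payload = '<encrypted HL7 segment goes here>'

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=payload,
    MessageGroupId='patient-1234',        # messages in a group are delivered in order
    MessageDeduplicationId='msg-0001'     # or enable content-based deduplication
)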

Let’s get building!
We are beyond excited to see how our customers will use our newly HIPAA-eligible services as part of their healthcare applications. What are you most excited for? Leave a comment below.