Coaxing 2D platforming out of Unity

Post Syndicated from Eevee original https://eev.ee/blog/2017/10/13/coaxing-2d-platforming-out-of-unity/

An anonymous donor asked a question that I can’t even begin to figure out how to answer, but they also said anything else is fine, so here’s anything else.

I’ve been avoiding writing about game physics, since I want to save it for ✨ the book I’m writing ✨, but that book will almost certainly not touch on Unity. Here, then, is a brief run through some of the brick walls I ran into while trying to convince Unity to do 2D platforming.

This is fairly high-level — more narrative than concrete code or helpful diagrams. I’m just getting this out of my head because it’s interesting. If you want more gritty details, I guess you’ll have to wait for ✨ the book ✨.

The setup

I hadn’t used Unity before. I hadn’t even used a “real” physics engine before. My games so far have mostly used LÖVE, a Lua-based engine. LÖVE includes box2d bindings, but for various reasons (not all of them good), I opted to avoid them and instead write my own physics completely from scratch. (How, you ask? ✨ Book ✨!)

I was invited to work on a Unity project, Chaos Composer, that someone else had already started. It had basic movement already implemented; I taught myself Unity’s physics system by hacking on it. It’s entirely possible that none of this is actually the best way to do anything, since I was really trying to reproduce my own homegrown stuff in Unity, but it’s the best I’ve managed to come up with.

Two recurring snags were that you can’t ask Unity to do multiple physics updates in a row, and sometimes getting the information I wanted was difficult. Working with my own code spoiled me a little, since I could invoke it at any time and ask it anything I wanted; Unity, on the other hand, is someone else’s black box with a rigid interface on top.

Also, wow, Googling for a lot of this was not quite as helpful as expected. A lot of what’s out there is just the first thing that works, and often that’s pretty hacky and imposes severe limits on the game design (e.g., “this won’t work with slopes”). Basic movement and collision are the first thing you do, which seems to me like the worst time to be locking yourself out of a lot of design options. I tried very (very, very, very) hard to minimize those kinds of constraints.

Problem 1: Movement

When I showed up, movement was already working. Problem solved!

Like any good programmer, I immediately set out to un-solve it. With a “real” physics engine like the one Unity prominently features, you have two options: ⓐ treat the player as a physics object, or ⓑ don’t. The existing code went with option ⓑ, like I’d done myself with LÖVE, and like I’d seen countless people advise: using a physics sim makes for bad platforming.

But… why? I believed it, but I couldn’t concretely defend it. I had to know for myself. So I started a blank project, drew some physics boxes, and wrote a dozen-line player controller.

Ah! Immediate enlightenment.

If the player was sliding down a wall, and I tried to move them into the wall, they would simply freeze in midair until I let go of the movement key. The trouble is that the physics sim works in terms of forces — moving the player involves giving them a nudge in some direction, like a giant invisible hand pushing them around the level. Surprise! If you press a real object against a real wall with your real hand, you’ll see the same effect — friction will cancel out gravity, and the object will stay in midair.

Platformer movement, as it turns out, doesn’t make any goddamn physical sense. What is air control? What are you pushing against? Nothing, really; we just have it because it’s nice to play with, because not having it is a nightmare.

I looked to see if there were any common solutions to this, and I only really found one: make all your walls frictionless.

Game development is full of hacks like this, and I… don’t like them. I can accept that minor hacks are necessary sometimes, but this one makes an early and widespread change to a fundamental system to “fix” something that was wrong in the first place. It also imposes an “invisible” requirement, something I try to avoid at all costs — if you forget to make a particular wall frictionless, you’ll never know unless you happen to try sliding down it.

And so, I swiftly returned to the existing code. It wasn’t too different from what I’d come up with for LÖVE: it applied gravity by hand, tracked the player’s velocity, computed the intended movement each frame, and moved by that amount. The interesting thing was that it used MovePosition, which schedules a movement for the next physics update and stops the movement if the player hits something solid.

It’s kind of a nice hybrid approach, actually; all the “physics” for conscious actors is done by hand, but the physics engine is still used for collision detection. It’s also used for collision rejection — if the player manages to wedge themselves several pixels into a solid object, for example, the physics engine will try to gently nudge them back out of it with no extra effort required on my part. I still haven’t figured out how to get that to work with my homegrown stuff, which is built to prevent overlap rather than to jiggle things out of it.

But wait, what about…

Our player is a dynamic body with rotation lock and no gravity. Why not just use a kinematic body?

I must be missing something, because I do not understand the point of kinematic bodies. I ran into this with Godot, too, which documented them the same way: as intended for use as players and other manually-moved objects. But by default, they don’t even collide with other kinematic bodies or static geometry. What? There’s a checkbox to turn this on, which I enabled, but then I found out that MovePosition doesn’t stop kinematic bodies when they hit something, so I would’ve had to cast along the intended path of movement to figure out when to stop, thus duplicating the same work the physics engine was about to do.

But that’s impossible anyway! Static geometry generally wants to be made of edge colliders, right? They don’t care about concave/convex. Imagine the player is standing on the ground near a wall and tries to move towards the wall. Both the ground and the wall are different edges from the same edge collider.

If you try to cast the player’s hitbox horizontally, parallel to the ground, you’ll only get one collision: the existing collision with the ground. Casting doesn’t distinguish between touching and hitting. And because Unity only reports one collision per collider, and because the ground will always show up first, you will never find out about the impending wall collision.

So you’re forced to use either raycasts for collision detection or decomposed polygons for world geometry, both of which are slightly worse tools for no real gain.

I ended up sticking with a dynamic body.

Oh, one other thing that doesn’t really fit anywhere else: keep track of units! If you’re adding something called “velocity” directly to something called “position”, something has gone very wrong. Acceleration is distance per time squared; velocity is distance per time; position is distance. You must multiply or divide by time to convert between them.

I never even, say, add a constant directly to position every frame; I always phrase it as velocity and multiply by Δt. It keeps the units consistent: time is always in seconds, not in tics.
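As a concrete sketch of what that bookkeeping looks like (plain Python rather than Unity code; the numbers here are arbitrary):

```python
# Semi-implicit Euler integration with consistent units:
# acceleration is units/s², velocity is units/s, position is units.
GRAVITY = -9.8   # units/s² (an arbitrary example value)
DT = 1.0 / 60.0  # seconds per physics step

velocity = 0.0   # units/s
position = 10.0  # units

# One physics step: each quantity is multiplied by Δt exactly once
# on its way "down" towards position.
velocity += GRAVITY * DT   # units/s² · s → units/s
position += velocity * DT  # units/s  · s → units
```

Written this way, time is always in seconds, so changing the tick rate doesn’t silently change how strong gravity feels.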

Problem 2: Slopes

Ah, now we start to get off in the weeds.

A sort of pre-problem here was detecting whether we’re on a slope, which means detecting the ground. The codebase originally used a manual physics query of the area around the player’s feet to check for the ground, which seems to be somewhat common, but that can’t tell me the angle of the detected ground. (It’s also kind of error-prone, since “around the player’s feet” has to be specified by hand and may not stay correct through animations or changes in the hitbox.)

I replaced that with what I’d eventually settled on in LÖVE: detect the ground by detecting collisions, and looking at the normal of the collision. A normal is a vector that points straight out from a surface, so if you’re standing on the ground, the normal points straight up; if you’re on a 10° incline, the normal points 10° away from straight up.

Not all collisions are with the ground, of course, so I assumed something is ground if the normal pointed away from gravity. (I like this definition more than “points upwards”, because it avoids assuming anything about the direction of gravity, which leaves some interesting doors open for later on.) That’s easily detected by taking the dot product — if it’s negative, the collision was with the ground, and I now have the normal of the ground.
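In code terms, the test is a one-liner (a Python sketch; the vector helpers and names here are made up for illustration, not real Unity API):

```python
import math

GRAVITY_DIR = (0.0, -1.0)  # direction of gravity, as a unit vector

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def is_ground(normal):
    # A surface counts as ground if its normal points away from
    # gravity, i.e. its dot product with gravity is negative.
    return dot(normal, GRAVITY_DIR) < 0

flat_ground = (0.0, 1.0)   # normal points straight up
incline_10 = (-math.sin(math.radians(10)), math.cos(math.radians(10)))
wall = (1.0, 0.0)          # normal points sideways

assert is_ground(flat_ground)
assert is_ground(incline_10)
assert not is_ground(wall)
```

Because only the gravity vector appears here, flipping gravity upside-down makes ceilings count as ground with no extra work — exactly the kind of door this definition leaves open.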

Actually doing this in practice was slightly tricky. With my LÖVE engine, I could cram this right into the middle of collision resolution. With Unity, not quite so much. I went through a couple iterations before I really grasped Unity’s execution order, which I guess I will have to briefly recap for this to make sense.

Unity essentially has two update cycles. It performs physics updates at fixed intervals for consistency, and updates everything else just before rendering. Within a single frame, Unity does as many fixed physics updates as it has spare time for (which might be zero, one, or more), then does a regular update, then renders. User code can implement either or both of Update, which runs during a regular update, and FixedUpdate, which runs just before Unity does a physics pass.
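Schematically, a single frame works something like this (a Python sketch of the fixed-timestep pattern, not Unity’s actual internals):

```python
def run_frame(state, frame_dt, fixed_dt=0.02):
    """Run one rendered frame: catch up on fixed physics steps,
    then do a single regular update."""
    state["accumulator"] += frame_dt
    # Zero or more FixedUpdate + physics passes, until we've caught up.
    while state["accumulator"] >= fixed_dt:
        state["fixed_updates"] += 1
        state["accumulator"] -= fixed_dt
    # Exactly one Update, followed by rendering.
    state["updates"] += 1

state = {"accumulator": 0.0, "fixed_updates": 0, "updates": 0}
run_frame(state, 0.05)   # a slow frame: two physics steps fit
run_frame(state, 0.005)  # a fast frame: no physics step at all
```

This is why FixedUpdate can run zero times, or several times, between any two renders.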

So my solution was:

  • At the very end of FixedUpdate, clear the actor’s “on ground” flag and ground normal.

  • During OnCollisionEnter2D and OnCollisionStay2D (which are called from within a physics pass), if there’s a collision that looks like it’s with the ground, set the “on ground” flag and ground normal. (If there are multiple ground collisions, well, good luck figuring out the best way to resolve that! At the moment I’m just taking the first and hoping for the best.)

That means there’s a brief window between the end of FixedUpdate and Unity’s physics pass during which a grounded actor might mistakenly believe it’s not on the ground, which is a bit of a shame, but there are very few good reasons for anything to be happening in that window.
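In outline, the bookkeeping from those two bullets looks like this (a Python sketch of the control flow; the method names merely mirror Unity’s callbacks):

```python
class Actor:
    GRAVITY_DIR = (0.0, -1.0)

    def __init__(self):
        self.on_ground = False
        self.ground_normal = None

    def fixed_update(self):
        # ... movement code consults self.on_ground up here ...
        # Cleared at the very end, just before the physics pass,
        # so the collision callbacks can set it fresh.
        self.on_ground = False
        self.ground_normal = None

    def on_collision(self, normal):
        # Stand-in for OnCollisionEnter2D / OnCollisionStay2D,
        # which run during the physics pass.
        g = self.GRAVITY_DIR
        if normal[0] * g[0] + normal[1] * g[1] < 0:
            self.on_ground = True
            self.ground_normal = normal
```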

Okay! Now we can do slopes.

Just kidding! First we have to do sliding.

When I first looked at this code, it didn’t apply gravity while the player was on the ground. I think I may have had some problems with detecting the ground as a result, since the player was no longer pushing down against it? Either way, it seemed like a silly special case, so I made gravity always apply.

Lo! I was a fool. The player could no longer move.

Why? Because MovePosition does exactly what it promises. If the player collides with something, they’ll stop moving. Applying gravity means that the player is trying to move diagonally downwards into the ground, and so MovePosition stops them immediately.

Hence, sliding. I don’t want the player to actually try to move into the ground. I want them to move the unblocked part of that movement. For flat ground, that means the horizontal part, which is pretty much the same as discarding gravity. For sloped ground, it’s a bit more complicated!

Okay, but actually it’s less complicated than you’d think. It can be done with some cross products fairly easily, but Unity makes it even easier: there’s a Vector3.ProjectOnPlane function that projects an arbitrary vector onto a plane given by its normal — exactly the thing I want! So I apply that to the attempted movement before passing it along to MovePosition. I do the same thing with the current velocity, to prevent the player from accelerating infinitely downwards while standing on flat ground.
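The projection itself is only a couple of lines of vector math (a Python sketch of what Vector3.ProjectOnPlane computes, assuming a unit normal):

```python
def project_on_plane(v, n):
    """Subtract out the component of v along the unit normal n,
    keeping only the part of v that lies in the plane."""
    d = v[0] * n[0] + v[1] * n[1]  # v · n
    return (v[0] - d * n[0], v[1] - d * n[1])

# Attempted movement: rightwards, plus a bit of downward gravity.
move = (1.0, -0.5)
# Against flat ground (normal straight up), the downward part
# vanishes and the horizontal part survives untouched.
assert project_on_plane(move, (0.0, 1.0)) == (1.0, 0.0)
```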

One other thing: I don’t actually use the detected ground normal for this. The player might be touching two ground surfaces at the same time, and I’d want to project on both of them. Instead, I use the player body’s GetContacts method, which returns contact points (and normals!) for everything the player is currently touching. I believe those contact points are tracked by the physics engine anyway, so asking for them doesn’t require any actual physics work.

(Looking at the code I have, I notice that I still only perform the slide for surfaces facing upwards — but I’d want to slide against sloped ceilings, too. Why did I do this? Maybe I should remove that.)

(Also, I’m pretty sure projecting a vector on a plane is non-commutative, which raises the question of which order the projections should happen in and what difference it makes. I don’t have a good answer.)

(I note that my LÖVE setup does something slightly different: it just tries whatever the movement ought to be, and if there’s a collision, then it projects — and tries again with the remaining movement. But I can’t ask Unity to do multiple moves in one physics update, alas.)

Okay! Now, slopes. But actually, with the above work done, slopes are most of the way there already.

One obvious problem is that the player tries to move horizontally even when on a slope, and the easy fix is to change their movement from speed * Vector2.right to speed * new Vector2(ground.y, -ground.x) while on the ground. That’s the ground normal rotated a quarter-turn clockwise, so for flat ground it still points to the right, and in general it points rightwards along the ground. (Note that it assumes the ground normal is a unit vector, but as far as I’m aware, that’s true for all the normals Unity gives you.)
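That rotation is just a component swap (a Python sketch; as noted, it assumes the normal is a unit vector):

```python
def along_ground(normal):
    # The ground normal rotated a quarter-turn clockwise:
    # (x, y) → (y, -x).
    return (normal[1], -normal[0])

# Flat ground: normal straight up, movement straight to the right.
assert along_ground((0.0, 1.0)) == (1.0, 0.0)

# A slope rising to the right: the result points up-and-right,
# along the surface.
assert along_ground((-0.6, 0.8)) == (0.8, 0.6)
```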

Another issue is that if the player stands motionless on a slope, gravity will cause them to slowly slide down it — because the movement from gravity will be projected onto the slope, and unlike flat ground, the result is no longer zero. For conscious actors only, I counter this by adding the opposite factor to the player’s velocity as part of adding in their walking speed. This matches how the real world works, to some extent: when you’re standing on a hill, you’re exerting some small amount of effort just to stay in place.

(Note that slope resistance is not the same as friction. Okay, yes, in the real world, virtually all resistance to movement happens as a result of friction, but bracing yourself against the ground isn’t the same as being passively resisted.)

From here there are a lot of things you can do, depending on how you think slopes should be handled. You could make the player unable to walk up slopes that are too steep. You could make walking down a slope faster than walking up it. You could make jumping go along the ground normal, rather than straight up. You could raise the player’s max allowed speed while running downhill. Whatever you want, really. Armed with a normal and awareness of dot products, you can do whatever you want.
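For instance, a “too steep to walk” check falls straight out of a dot product (a sketch; the 50° threshold is an arbitrary choice for illustration):

```python
import math

def too_steep(normal, up=(0.0, 1.0), max_slope_degrees=50.0):
    # The dot product of two unit vectors is the cosine of the
    # angle between them; steeper ground gives a smaller value.
    d = normal[0] * up[0] + normal[1] * up[1]
    return d < math.cos(math.radians(max_slope_degrees))

assert not too_steep((0.0, 1.0))     # flat ground
assert not too_steep((-0.17, 0.98))  # ~10° incline
assert too_steep((-0.87, 0.5))       # ~60° incline
```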

But first you might want to fix a few aggravating side effects.

Problem 3: Ground adherence

I don’t know if there’s a better name for this. I rarely even see anyone talk about it, which surprises me; it seems like it should be a very common problem.

The problem is: if the player runs up a slope which then abruptly changes to flat ground, their momentum will carry them into the air. For very fast players going off the top of very steep slopes, this makes sense, but it becomes visible even for relatively gentle slopes. It was a mild nightmare in the original release of our game Lunar Depot 38, which has very “rough” ground made up of lots of shallow slopes — so the player was very frequently slightly off the ground and couldn’t jump, for seemingly no reason. (I even had code to fix this, but I disabled it because of a silly visual side effect that I never got around to fixing.)

Anyway! The reason this is a problem is that game protagonists are generally not boxes sliding around — they have legs. We don’t go flying off the top of real-world hilltops because we put our foot down until it touches the ground.

Simulating this footfall is surprisingly fiddly to get right, especially with someone else’s physics engine. It’s made somewhat easier by Cast, which casts the entire hitbox — no matter what shape it is — in a particular direction, as if it had moved, and tells you all the hypothetical collisions in order.

So I cast the player in the direction of gravity by some distance. If the cast hits something solid with a ground-like collision normal, then the player must be close to the ground, and I move them down to touch it (and set that ground as the new ground normal).

There are some wrinkles.

Wrinkle 1: I only want to do this if the player is off the ground now, but was on the ground last frame, and is not deliberately moving upwards. That latter condition means I want to skip this logic if the player jumps, for example, but also if the player is thrust upwards by a spring or abducted by a UFO or whatever. As long as external code goes through some interface and doesn’t mess with the player’s velocity directly, that shouldn’t be too hard to track.

Wrinkle 2: When does this logic run? It needs to happen after the player moves, which means after a Unity physics pass… but there’s no callback for that point in time. I ended up running it at the beginning of FixedUpdate and the beginning of Update — since I definitely want to do it before rendering happens! That means it’ll sometimes happen twice between physics updates. (I could carefully juggle a flag to skip the second run, but I… didn’t do that. Yet?)

Wrinkle 3: I can’t move the player with MovePosition! Remember, MovePosition schedules a movement, it doesn’t actually perform one; that means if it’s called twice before the physics pass, the first call is effectively ignored. I can’t easily combine the drop with the player’s regular movement, for various fiddly reasons. I ended up doing it “by hand” using transform.Translate, which I think was the “old way” to do manual movement before MovePosition existed. I’m not totally sure if it activates triggers? For that matter, I’m not sure it even notices collisions — but since I did a full-body Cast, there shouldn’t be any anyway.

Wrinkle 4: What, exactly, is “some distance”? I’ve yet to find a satisfying answer for this. It seems like it ought to be based on the player’s current speed and the slope of the ground they’re moving along, but every time I’ve done that math, I’ve gotten totally ludicrous answers that sometimes exceed the size of a tile. But maybe that’s not wrong? Play around, I guess, and think about when the effect should “break” and the player should go flying off the top of a hill.
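For what it’s worth, the naive geometry behind that math: if the player leaves the top of a slope of angle θ at horizontal speed v, the ground falls away by v·Δt·tan(θ) in a single physics step. A back-of-the-envelope Python sketch (my own formula and numbers, not anything from Unity):

```python
import math

def snap_distance(speed, dt, slope_degrees, margin=0.01):
    """How far the ground can fall away beneath the player in one
    physics step, plus a small fudge margin."""
    return speed * dt * math.tan(math.radians(slope_degrees)) + margin

# A fast player (20 units/s) leaving a 45° slope at 50 Hz physics:
# the ground drops about 0.4 units in a single step, which can
# indeed rival the size of a tile.
d = snap_distance(20.0, 0.02, 45.0)
```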

Wrinkle 5: It’s possible that the player will launch off a slope, hit something, and then be adhered to the ground where they wouldn’t have hit it. I don’t much like this edge case, but I don’t see a way around it either.

This problem is surprisingly awkward for how simple it sounds, and the solution isn’t entirely satisfying. Oh, well; the results are much nicer than the solution. As an added bonus, this also fixes occasional problems with running down a hill and becoming detached from the ground due to precision issues or whathaveyou.

Problem 4: One-way platforms

Ah, what a nightmare.

It took me ages just to figure out how to define one-way platforms. Only block when the player is moving downwards? Nope. Only block when the player is above the platform? Nuh-uh.

Well, okay, yes, those approaches might work for convex players and flat platforms. But what about… sloped, one-way platforms? There’s no reason you shouldn’t be able to have those. If Super Mario World can do it, surely Unity can do it almost 30 years later.

The trick is, again, to look at the collision normal. If it faces away from gravity, the player is hitting a ground-like surface, so the platform should block them. Otherwise (or if the player overlaps the platform), it shouldn’t.

Here’s the catch: Unity doesn’t have conditional collision. I can’t decide, on the fly, whether a collision should block or not. In fact, I think that by the time I get a callback like OnCollisionEnter2D, the physics pass is already over.

I could go the other way and use triggers (which are non-blocking), but then I have the opposite problem: I can’t stop the player on the fly. I could move them back to where they hit the trigger, but I envision all kinds of problems as a result. What if they were moving fast enough to activate something on the other side of the platform? What if something else moved to where I’m trying to shove them back to in the meantime? How does this interact with ground detection and listing contacts, which would rightly ignore a trigger as non-blocking?

I beat my head against this for a while, but the inability to respond to collision conditionally was a huge roadblock. It’s all the more infuriating a problem, because Unity ships with a one-way platform modifier thing. Unfortunately, it seems to have been implemented by someone who has never played a platformer. It’s literally one-way — the player is only allowed to move straight upwards through it, not in from the sides. It also tries to block the player if they’re moving downwards while inside the platform, which invokes clumsy rejection behavior. And this all seems to be built into the physics engine itself somehow, so I can’t simply copy whatever they did.

Eventually, I settled on the following. After calculating attempted movement (including sliding), just at the end of FixedUpdate, I do a Cast along the movement vector. I’m not thrilled about having to duplicate the physics engine’s own work, but I do filter to only things on a “one-way platform” physics layer, which should at least help. For each object the cast hits, I use Physics2D.IgnoreCollision to either ignore or un-ignore the collision between the player and the platform, depending on whether the collision was ground-like or not.

(A lot of people suggested turning off collision between layers, but that can’t possibly work — the player might be standing on one platform while inside another, and anyway, this should work for all actors!)

Again, wrinkles! But fewer this time. Actually, maybe just one: handling the case where the player already overlaps the platform. I can’t just check for that with e.g. OverlapCollider, because that doesn’t distinguish between overlapping and merely touching.

I came up with a fairly simple fix: if I was going to un-ignore the collision (i.e. make the platform block), and the cast distance is reported as zero (either already touching or overlapping), I simply do nothing instead. If I’m standing on the platform, I must have already set it blocking when I was approaching it from the top anyway; if I’m overlapping it, I must have already set it non-blocking to get here in the first place.
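Stripped of the Unity specifics, the whole decision amounts to a few lines (a Python sketch; the cast-result shape here is made up for illustration):

```python
GRAVITY_DIR = (0.0, -1.0)

def platform_blocks(normal, distance):
    """Given a cast hit against a one-way platform (its surface
    normal and the cast distance to it), decide whether it should
    collide. Returns True (block), False (pass through), or None
    (leave the current ignore state alone)."""
    ground_like = (normal[0] * GRAVITY_DIR[0]
                   + normal[1] * GRAVITY_DIR[1]) < 0
    if ground_like and distance == 0.0:
        # Already touching or overlapping: the right choice was
        # already made on approach, so do nothing.
        return None
    return ground_like

assert platform_blocks((0.0, 1.0), 0.5) is True    # from above: block
assert platform_blocks((0.0, -1.0), 0.5) is False  # from below: pass
assert platform_blocks((0.0, 1.0), 0.0) is None    # in contact: skip
```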

I can imagine a few cases where this might go wrong. Moving platforms, especially, are going to cause some interesting issues. But this is the best I can do with what I know, and it seems to work well enough so far.

Oh, and our player can deliberately drop down through platforms, which was easy enough to implement; I just decide the platform is always passable while some button is held down.

Problem 5: Pushers and carriers

I haven’t gotten to this yet! Oh boy, can’t wait. I implemented it in LÖVE, but my way was hilariously invasive; I’m hoping that having a physics engine that supports a handwaved “this pushes that” will help. Of course, you also have to worry about sticking to platforms, for which the recommended solution is apparently to parent the cargo to the platform, which sounds goofy to me? I guess I’ll find out when I throw myself at it later.

Overall result

I ended up with a fairly pleasant-feeling system that supports slopes and one-way platforms and whatnot, with all the same pieces as I came up with for LÖVE. The code somehow ended up as less of a mess, too, but it probably helps that I’ve been down this rabbit hole once before and kinda knew what I was aiming for this time.

Animation of a character running smoothly along the top of an irregular dinosaur skeleton

Sorry that I don’t have a big block of code for you to copy-paste into your project. I don’t think there are nearly enough narrative discussions of these fundamentals, though, so hopefully this is useful to someone. If not, well, look forward to ✨ my book, that I am writing ✨!

Predict Billboard Top 10 Hits Using RStudio, H2O and Amazon Athena

Post Syndicated from Gopal Wunnava original https://aws.amazon.com/blogs/big-data/predict-billboard-top-10-hits-using-rstudio-h2o-and-amazon-athena/

Success in the popular music industry is typically measured in terms of the number of Top 10 hits artists have to their credit. The music industry is a highly competitive multi-billion dollar business, and record labels incur various costs in exchange for a percentage of the profits from sales and concert tickets.

Predicting the success of an artist’s release in the popular music industry can be difficult. One release may be extremely popular, resulting in widespread play on TV, radio and social media, while another single may turn out quite unpopular, and therefore unprofitable. Record labels need to be selective in their decision making, and predictive analytics can help them with decision making around the type of songs and artists they need to promote.

In this walkthrough, you leverage H2O.ai, Amazon Athena, and RStudio to make predictions on whether a song might make it to the Top 10 Billboard charts. You explore the GLM, GBM, and deep learning modeling techniques using H2O’s rapid, distributed and easy-to-use open source parallel processing engine. RStudio is a popular IDE, licensed either commercially or under AGPLv3, for working with R. This is ideal if you don’t want to connect to a server via SSH and use code editors such as vi to do analytics. RStudio is available in a desktop version, or a server version that allows you to access R via a web browser. RStudio’s Notebooks feature is used to demonstrate the execution of code and output. In addition, this post showcases how you can leverage Athena for query and interactive analysis during the modeling phase. A working knowledge of statistics and machine learning would be helpful to interpret the analysis being performed in this post.


Your goal is to predict whether a song will make it to the Top 10 Billboard charts. For this purpose, you will be using multiple modeling techniques―namely GLM, GBM and deep learning―and choose the model that is the best fit.

This solution involves the following steps:

  • Install and configure RStudio with Athena
  • Log in to RStudio
  • Install R packages
  • Connect to Athena
  • Create a dataset
  • Create models

Install and configure RStudio with Athena

Use the following AWS CloudFormation stack to install, configure, and connect RStudio on an Amazon EC2 instance with Athena.

Launching this stack creates all required resources and prerequisites:

  • Amazon EC2 instance with Amazon Linux (minimum size of t2.large is recommended)
  • Provisioning of the EC2 instance in an existing VPC and public subnet
  • Installation of Java 8
  • Assignment of an IAM role to the EC2 instance with the required permissions for accessing Athena and Amazon S3
  • Security group allowing access to the RStudio and SSH ports from the internet (I recommend restricting access to these ports)
  • S3 staging bucket required for Athena (referenced within RStudio as ATHENABUCKET)
  • RStudio username and password
  • Setup logs in Amazon CloudWatch Logs (if needed for additional troubleshooting)
  • Amazon EC2 Systems Manager agent, which makes it easy to manage and patch

All AWS resources are created in the US-East-1 Region. To avoid cross-region data transfer fees, launch the CloudFormation stack in the same region. To check the availability of Athena in other regions, see Region Table.

Log in to RStudio

The instance security group has been automatically configured to allow incoming connections on the RStudio port 8787 from any source internet address. You can edit the security group to restrict source IP access. If you have trouble connecting, ensure that port 8787 isn’t blocked by subnet network ACLS or by your outgoing proxy/firewall.

  1. In the CloudFormation stack, choose Outputs, Value, and then open the RStudio URL. You might need to wait for a few minutes until the instance has been launched.
  2. Log in to RStudio with the username and password you provided during setup.

Install R packages

Next, install the required R packages from the RStudio console. You can download the R notebook file containing just the code.

#install pacman – a handy package manager for managing installs
if("pacman" %in% rownames(installed.packages()) == FALSE) install.packages("pacman")
h2o.init(nthreads = -1)
##  Connection successful!
## R is connected to the H2O cluster: 
##     H2O cluster uptime:         2 hours 42 minutes 
##     H2O cluster version: 
##     H2O cluster version age:    4 months and 4 days !!! 
##     H2O cluster name:           H2O_started_from_R_rstudio_hjx881 
##     H2O cluster total nodes:    1 
##     H2O cluster total memory:   3.30 GB 
##     H2O cluster total cores:    4 
##     H2O cluster allowed cores:  4 
##     H2O cluster healthy:        TRUE 
##     H2O Connection ip:          localhost 
##     H2O Connection port:        54321 
##     H2O Connection proxy:       NA 
##     H2O Internal Security:      FALSE 
##     R Version:                  R version 3.3.3 (2017-03-06)
## Warning in h2o.clusterInfo(): 
## Your H2O cluster version is too old (4 months and 4 days)!
## Please download and install the latest version from http://h2o.ai/download/
#install aws sdk if not present (pre-requisite for using Athena with an IAM role)
if (!aws_sdk_present()) {
  install_aws_sdk()
}

Connect to Athena

Next, establish a connection to Athena from RStudio, using an IAM role associated with your EC2 instance. Use ATHENABUCKET to specify the S3 staging directory.

URL <- 'https://s3.amazonaws.com/athena-downloads/drivers/AthenaJDBC41-1.0.1.jar'
fil <- basename(URL)
#download the file into current working directory
if (!file.exists(fil)) download.file(URL, fil)
#verify that the file has been downloaded successfully
## [1] "AthenaJDBC41-1.0.1.jar"
drv <- JDBC(driverClass="com.amazonaws.athena.jdbc.AthenaDriver", fil, identifier.quote="'")

con <- jdbcConnection <- dbConnect(drv, 'jdbc:awsathena://athena.us-east-1.amazonaws.com:443/', s3_staging_dir = Sys.getenv("ATHENABUCKET"))

Verify the connection. The results returned depend on your specific Athena setup.

## <JDBCConnection>
##  [1] "gdelt"               "wikistats"           "elb_logs_raw_native"
##  [4] "twitter"             "twitter2"            "usermovieratings"   
##  [7] "eventcodes"          "events"              "billboard"          
## [10] "billboardtop10"      "elb_logs"            "gdelthist"          
## [13] "gdeltmaster"         "twitter"             "twitter3"

Create a dataset

For this analysis, you use a sample dataset combining information from Billboard and Wikipedia with Echo Nest data in the Million Songs Dataset. Upload this dataset into your own S3 bucket. The table below provides a description of the fields used in this dataset.

  • year: Year that song was released
  • songtitle: Title of the song
  • artistname: Name of the song artist
  • songid: Unique identifier for the song
  • artistid: Unique identifier for the song artist
  • timesignature: Variable estimating the time signature of the song
  • timesignature_confidence: Confidence in the estimate for the timesignature
  • loudness: Continuous variable indicating the average amplitude of the audio in decibels
  • tempo: Variable indicating the estimated beats per minute of the song
  • tempo_confidence: Confidence in the estimate for tempo
  • key: Variable with twelve levels indicating the estimated key of the song (C, C#, …, B)
  • key_confidence: Confidence in the estimate for key
  • energy: Variable that represents the overall acoustic energy of the song, using a mix of features such as loudness
  • pitch: Continuous variable that indicates the pitch of the song
  • timbre_0_min thru timbre_11_min: Variables that indicate the minimum values over all segments for each of the twelve values in the timbre vector
  • timbre_0_max thru timbre_11_max: Variables that indicate the maximum values over all segments for each of the twelve values in the timbre vector
  • top10: Indicator for whether or not the song made it to the Top 10 of the Billboard charts (1 if it was in the top 10, and 0 if not)

Create an Athena table based on the dataset

In the Athena console, select the default database, sampledb, or create a new database.

Run the following CREATE TABLE statement.

create external table if not exists billboard (
year int,
songtitle string,
artistname string,
songID string,
artistID string,
timesignature int,
timesignature_confidence double,
loudness double,
tempo double,
tempo_confidence double,
key int,
key_confidence double,
energy double,
pitch double,
timbre_0_min double,
timbre_0_max double,
timbre_1_min double,
timbre_1_max double,
timbre_2_min double,
timbre_2_max double,
timbre_3_min double,
timbre_3_max double,
timbre_4_min double,
timbre_4_max double,
timbre_5_min double,
timbre_5_max double,
timbre_6_min double,
timbre_6_max double,
timbre_7_min double,
timbre_7_max double,
timbre_8_min double,
timbre_8_max double,
timbre_9_min double,
timbre_9_max double,
timbre_10_min double,
timbre_10_max double,
timbre_11_min double,
timbre_11_max double,
Top10 int
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://aws-bigdata-blog/artifacts/predict-billboard/data'
;

Inspect the table definition for the ‘billboard’ table that you have created. If you chose a database other than sampledb, replace that value with your choice.

dbGetQuery(con, "show create table sampledb.billboard")
##                                      createtab_stmt
## 1       CREATE EXTERNAL TABLE `sampledb.billboard`(
## 2                                       `year` int,
## 3                               `songtitle` string,
## 4                              `artistname` string,
## 5                                  `songid` string,
## 6                                `artistid` string,
## 7                              `timesignature` int,
## 8                `timesignature_confidence` double,
## 9                                `loudness` double,
## 10                                  `tempo` double,
## 11                       `tempo_confidence` double,
## 12                                       `key` int,
## 13                         `key_confidence` double,
## 14                                 `energy` double,
## 15                                  `pitch` double,
## 16                           `timbre_0_min` double,
## 17                           `timbre_0_max` double,
## 18                           `timbre_1_min` double,
## 19                           `timbre_1_max` double,
## 20                           `timbre_2_min` double,
## 21                           `timbre_2_max` double,
## 22                           `timbre_3_min` double,
## 23                           `timbre_3_max` double,
## 24                           `timbre_4_min` double,
## 25                           `timbre_4_max` double,
## 26                           `timbre_5_min` double,
## 27                           `timbre_5_max` double,
## 28                           `timbre_6_min` double,
## 29                           `timbre_6_max` double,
## 30                           `timbre_7_min` double,
## 31                           `timbre_7_max` double,
## 32                           `timbre_8_min` double,
## 33                           `timbre_8_max` double,
## 34                           `timbre_9_min` double,
## 35                           `timbre_9_max` double,
## 36                          `timbre_10_min` double,
## 37                          `timbre_10_max` double,
## 38                          `timbre_11_min` double,
## 39                          `timbre_11_max` double,
## 40                                     `top10` int)
## 41                             ROW FORMAT DELIMITED 
## 42                         FIELDS TERMINATED BY ',' 
## 43                            STORED AS INPUTFORMAT 
## 44       'org.apache.hadoop.mapred.TextInputFormat' 
## 45                                     OUTPUTFORMAT 
## 46  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
## 47                                        LOCATION
## 48    's3://aws-bigdata-blog/artifacts/predict-billboard/data'
## 49                                  TBLPROPERTIES (
## 50            'transient_lastDdlTime'='1505484133')

Run a sample query

Next, run a sample query to obtain a list of all songs from Janet Jackson that made it to the Billboard Top 10 charts.

dbGetQuery(con, "SELECT songtitle, artistname, top10
                 FROM sampledb.billboard
                 WHERE lower(artistname) = 'janet jackson'
                 AND top10 = 1")
##                       songtitle    artistname top10
## 1                       Runaway Janet Jackson     1
## 2               Because Of Love Janet Jackson     1
## 3                         Again Janet Jackson     1
## 4                            If Janet Jackson     1
## 5  Love Will Never Do (Without You) Janet Jackson 1
## 6                     Black Cat Janet Jackson     1
## 7               Come Back To Me Janet Jackson     1
## 8                       Alright Janet Jackson     1
## 9                      Escapade Janet Jackson     1
## 10                Rhythm Nation Janet Jackson     1

Determine how many songs in this dataset are specifically from the year 2010.

dbGetQuery(con, " SELECT count(*)   FROM sampledb.billboard WHERE year = 2010")
##   _col0
## 1   373

The sample dataset provides certain song properties of interest that can be analyzed to gauge their impact on a song’s overall popularity. Look at one such property, timesignature, and determine the value that is most frequent among songs in the database. Time signature is a measure of the number of beats per bar and the type of note that carries the beat.

Running the query directly may result in an error, as shown in the commented lines below. This error is a result of trying to retrieve a large result set over a JDBC connection, which can cause out-of-memory issues at the client level. To address this, reduce the fetch size and run again.

#t<-dbGetQuery(con, " SELECT timesignature FROM sampledb.billboard")
#Note:  Running the preceding query results in the following error: 
#Error in .jcall(rp, "I", "fetch", stride, block): java.sql.SQLException: The requested #fetchSize is more than the allowed value in Athena. Please reduce the fetchSize and try #again. Refer to the Athena documentation for valid fetchSize values.
# Use the dbSendQuery function, reduce the fetch size, and run again
r <- dbSendQuery(con, " SELECT timesignature     FROM sampledb.billboard")
dftimesignature <- fetch(r, n=-1, block=100)
dbClearResult(r)
## [1] TRUE
table(dftimesignature)
## dftimesignature
##    0    1    3    4    5    7 
##   10  143  503 6787  112   19
nrow(dftimesignature)
## [1] 7574

From the results, observe that 6787 songs have a timesignature of 4.

Next, determine the song with the highest tempo.

dbGetQuery(con, " SELECT songtitle,artistname,tempo   FROM sampledb.billboard WHERE tempo = (SELECT max(tempo) FROM sampledb.billboard) ")
##                   songtitle      artistname   tempo
## 1 Wanna Be Startin' Somethin' Michael Jackson 244.307

Create the training dataset

Your model needs to be trained so that it can learn and make accurate predictions. Split the data into training and test datasets, and create the training dataset first. This dataset contains all observations from the year 2009 and earlier. You may face the same JDBC connection issue pointed out earlier, so this query also uses a reduced fetch size.

#BillboardTrain <- dbGetQuery(con, "SELECT * FROM sampledb.billboard WHERE year <= 2009")
#Running the preceding query results in the following error:-
#Error in .verify.JDBC.result(r, "Unable to retrieve JDBC result set for ", : Unable to retrieve #JDBC result set for SELECT * FROM sampledb.billboard WHERE year <= 2009 (Internal error)
#Follow the same approach as before to address this issue.

r <- dbSendQuery(con, "SELECT * FROM sampledb.billboard WHERE year <= 2009")
BillboardTrain <- fetch(r, n=-1, block=100)
dbClearResult(r)
## [1] TRUE
##   year           songtitle artistname timesignature
## 1 2009 The Awkward Goodbye    Athlete             3
## 2 2009        Rubik's Cube    Athlete             3
##   timesignature_confidence loudness   tempo tempo_confidence
## 1                    0.732   -6.320  89.614   0.652
## 2                    0.906   -9.541 117.742   0.542
nrow(BillboardTrain)
## [1] 7201

Create the test dataset

BillboardTest <- dbGetQuery(con, "SELECT * FROM sampledb.billboard WHERE year = 2010")
##   year                           songtitle        artistname key
## 1 2010 This Is the House That Doubt Built A Day to Remember  11
## 2 2010                     Sticks & Bricks A Day to Remember  10
##   key_confidence    energy pitch timbre_0_min
## 1          0.453 0.9666556 0.024        0.002
## 2          0.469 0.9847095 0.025        0.000
nrow(BillboardTest)
## [1] 373

Convert the training and test datasets into H2O dataframes

library(h2o)
h2o.init()

train.h2o <- as.h2o(BillboardTrain)
  |                                                                 |   0%
  |=================================================================| 100%
test.h2o <- as.h2o(BillboardTest)
  |                                                                 |   0%
  |=================================================================| 100%

Inspect the column names in your H2O dataframes.

colnames(train.h2o)
##  [1] "year"                     "songtitle"               
##  [3] "artistname"               "songid"                  
##  [5] "artistid"                 "timesignature"           
##  [7] "timesignature_confidence" "loudness"                
##  [9] "tempo"                    "tempo_confidence"        
## [11] "key"                      "key_confidence"          
## [13] "energy"                   "pitch"                   
## [15] "timbre_0_min"             "timbre_0_max"            
## [17] "timbre_1_min"             "timbre_1_max"            
## [19] "timbre_2_min"             "timbre_2_max"            
## [21] "timbre_3_min"             "timbre_3_max"            
## [23] "timbre_4_min"             "timbre_4_max"            
## [25] "timbre_5_min"             "timbre_5_max"            
## [27] "timbre_6_min"             "timbre_6_max"            
## [29] "timbre_7_min"             "timbre_7_max"            
## [31] "timbre_8_min"             "timbre_8_max"            
## [33] "timbre_9_min"             "timbre_9_max"            
## [35] "timbre_10_min"            "timbre_10_max"           
## [37] "timbre_11_min"            "timbre_11_max"           
## [39] "top10"

Create models

You need to designate the independent and dependent variables prior to applying your modeling algorithms. Because you’re trying to predict the ‘top10’ field, this would be your dependent variable and everything else would be independent.

Create your first model using GLM. Because GLM works best with numeric data, you create your model by dropping non-numeric variables. You only use the variables in the dataset that describe the numerical attributes of the song in the logistic regression model. You won’t use these variables:  “year”, “songtitle”, “artistname”, “songid”, or “artistid”.

y.dep <- 39
x.indep <- c(6:38)
x.indep
##  [1]  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28
## [24] 29 30 31 32 33 34 35 36 37 38

Create Model 1: All numeric variables

Create Model 1 with the training dataset, using GLM as the modeling algorithm and H2O’s built-in h2o.glm function.

modelh1 <- h2o.glm( y = y.dep, x = x.indep, training_frame = train.h2o, family = "binomial")
  |                                                                 |   0%
  |=====                                                            |   8%
  |=================================================================| 100%

Measure the performance of Model 1, using H2O’s built-in performance function.

h2o.performance(modelh1, newdata = test.h2o)
## H2OBinomialMetrics: glm
## MSE:  0.09924684
## RMSE:  0.3150347
## LogLoss:  0.3220267
## Mean Per-Class Error:  0.2380168
## AUC:  0.8431394
## Gini:  0.6862787
## R^2:  0.254663
## Null Deviance:  326.0801
## Residual Deviance:  240.2319
## AIC:  308.2319
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##          0   1    Error     Rate
## 0      255  59 0.187898  =59/314
## 1       17  42 0.288136   =17/59
## Totals 272 101 0.203753  =76/373
## Maximum Metrics: Maximum metrics at their respective thresholds
##                         metric threshold    value idx
## 1                       max f1  0.192772 0.525000 100
## 2                       max f2  0.124912 0.650510 155
## 3                 max f0point5  0.416258 0.612903  23
## 4                 max accuracy  0.416258 0.879357  23
## 5                max precision  0.813396 1.000000   0
## 6                   max recall  0.037579 1.000000 282
## 7              max specificity  0.813396 1.000000   0
## 8             max absolute_mcc  0.416258 0.455251  23
## 9   max min_per_class_accuracy  0.161402 0.738854 125
## 10 max mean_per_class_accuracy  0.124912 0.765006 155
## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
## [1] 0.8431394

The AUC metric provides insight into how well the classifier is able to separate the two classes. In this case, the value of 0.8431394 indicates that the classification is good. (A value of 0.5 indicates a worthless test, while a value of 1.0 indicates a perfect test.)
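To make the AUC figure concrete, here is a small illustrative sketch (not part of the original analysis) that computes AUC for toy scores in base R via the rank-based Mann-Whitney formulation:

```r
# Illustrative sketch only: AUC via the rank-based (Mann-Whitney) formulation.
# labels: 1 = Top 10 hit, 0 = not a hit; scores: predicted probabilities.
auc <- function(scores, labels) {
  n1 <- sum(labels == 1)   # number of positives
  n0 <- sum(labels == 0)   # number of negatives
  r  <- rank(scores)       # ranks of all scores, ties averaged
  (sum(r[labels == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

# Toy data: one positive is outranked by one negative, so AUC = 5/6.
auc(c(0.9, 0.4, 0.5, 0.2, 0.1), c(1, 1, 0, 0, 0))
## [1] 0.8333333
```

In other words, an AUC of 0.5 corresponds to a random ranking, while 1.0 means every Top 10 hit is scored above every non-hit.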

Next, inspect the coefficients of the variables in the dataset.

dfmodelh1 <- as.data.frame(h2o.varimp(modelh1))
dfmodelh1
##                       names coefficients sign
## 1              timbre_0_max  1.290938663  NEG
## 2                  loudness  1.262941934  POS
## 3                     pitch  0.616995941  NEG
## 4              timbre_1_min  0.422323735  POS
## 5              timbre_6_min  0.349016024  NEG
## 6                    energy  0.348092062  NEG
## 7             timbre_11_min  0.307331997  NEG
## 8              timbre_3_max  0.302225619  NEG
## 9             timbre_11_max  0.243632060  POS
## 10             timbre_4_min  0.224233951  POS
## 11             timbre_4_max  0.204134342  POS
## 12             timbre_5_min  0.199149324  NEG
## 13             timbre_0_min  0.195147119  POS
## 14 timesignature_confidence  0.179973904  POS
## 15         tempo_confidence  0.144242598  POS
## 16            timbre_10_max  0.137644568  POS
## 17             timbre_7_min  0.126995955  NEG
## 18            timbre_10_min  0.123851179  POS
## 19             timbre_7_max  0.100031481  NEG
## 20             timbre_2_min  0.096127636  NEG
## 21           key_confidence  0.083115820  POS
## 22             timbre_6_max  0.073712419  POS
## 23            timesignature  0.067241917  POS
## 24             timbre_8_min  0.061301881  POS
## 25             timbre_8_max  0.060041698  POS
## 26                      key  0.056158445  POS
## 27             timbre_3_min  0.050825116  POS
## 28             timbre_9_max  0.033733561  POS
## 29             timbre_2_max  0.030939072  POS
## 30             timbre_9_min  0.020708113  POS
## 31             timbre_1_max  0.014228818  NEG
## 32                    tempo  0.008199861  POS
## 33             timbre_5_max  0.004837870  POS
## 34                                    NA <NA>

Typically, songs with heavier instrumentation tend to be louder (have higher values in the variable “loudness”) and more energetic (have higher values in the variable “energy”). This knowledge is helpful for interpreting the modeling results.

You can make the following observations from the results:

  • The coefficient estimates for the confidence values associated with the time signature, key, and tempo variables are positive. This suggests that higher confidence leads to a higher predicted probability of a Top 10 hit.
  • The coefficient estimate for loudness is positive, meaning that mainstream listeners prefer louder songs with heavier instrumentation.
  • The coefficient estimate for energy is negative, meaning that mainstream listeners prefer songs that are less energetic, which are those songs with light instrumentation.

These coefficients lead to contradictory conclusions for Model 1. This could be due to multicollinearity issues. Inspect the correlation between the variables “loudness” and “energy” in the training set.

cor(train.h2o$loudness, train.h2o$energy)
## [1] 0.7399067

This number indicates that these two variables are highly correlated, and Model 1 does indeed suffer from multicollinearity. Typically, correlation values between 0.5 and 1.0 (or -0.5 and -1.0) indicate strong correlation, while values between -0.1 and 0.1 indicate weak correlation. To avoid this correlation issue, omit one of these two variables and re-create the models.
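As a quick illustration of the kind of check performed above (using made-up vectors, not the actual dataset), base R's cor() computes the Pearson correlation:

```r
# Toy example: two variables that rise together are strongly correlated.
loudness_toy <- c(-10, -8, -6, -4, -2)        # decibels (hypothetical values)
energy_toy   <- c(0.2, 0.35, 0.5, 0.7, 0.9)   # rises with loudness

cor(loudness_toy, energy_toy)   # > 0.99: strong positive correlation
```

When two predictors are this correlated, a regression cannot cleanly attribute the effect to either one, which is exactly the problem observed with loudness and energy.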

You build two variations of the original model:

  • Model 2, in which you keep “energy” and omit “loudness”
  • Model 3, in which you keep “loudness” and omit “energy”

You compare these two models and choose the model with a better fit for this use case.

Create Model 2: Keep energy and omit loudness

##  [1] "year"                     "songtitle"               
##  [3] "artistname"               "songid"                  
##  [5] "artistid"                 "timesignature"           
##  [7] "timesignature_confidence" "loudness"                
##  [9] "tempo"                    "tempo_confidence"        
## [11] "key"                      "key_confidence"          
## [13] "energy"                   "pitch"                   
## [15] "timbre_0_min"             "timbre_0_max"            
## [17] "timbre_1_min"             "timbre_1_max"            
## [19] "timbre_2_min"             "timbre_2_max"            
## [21] "timbre_3_min"             "timbre_3_max"            
## [23] "timbre_4_min"             "timbre_4_max"            
## [25] "timbre_5_min"             "timbre_5_max"            
## [27] "timbre_6_min"             "timbre_6_max"            
## [29] "timbre_7_min"             "timbre_7_max"            
## [31] "timbre_8_min"             "timbre_8_max"            
## [33] "timbre_9_min"             "timbre_9_max"            
## [35] "timbre_10_min"            "timbre_10_max"           
## [37] "timbre_11_min"            "timbre_11_max"           
## [39] "top10"
y.dep <- 39
x.indep <- c(6:7,9:38)
x.indep
##  [1]  6  7  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
## [24] 30 31 32 33 34 35 36 37 38
modelh2 <- h2o.glm( y = y.dep, x = x.indep, training_frame = train.h2o, family = "binomial")
  |                                                                 |   0%
  |=======                                                          |  10%
  |=================================================================| 100%

Measure the performance of Model 2.

h2o.performance(modelh2, newdata = test.h2o)
## H2OBinomialMetrics: glm
## MSE:  0.09922606
## RMSE:  0.3150017
## LogLoss:  0.3228213
## Mean Per-Class Error:  0.2490554
## AUC:  0.8431933
## Gini:  0.6863867
## R^2:  0.2548191
## Null Deviance:  326.0801
## Residual Deviance:  240.8247
## AIC:  306.8247
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##          0  1    Error     Rate
## 0      280 34 0.108280  =34/314
## 1       23 36 0.389831   =23/59
## Totals 303 70 0.152815  =57/373
## Maximum Metrics: Maximum metrics at their respective thresholds
##                         metric threshold    value idx
## 1                       max f1  0.254391 0.558140  69
## 2                       max f2  0.113031 0.647208 157
## 3                 max f0point5  0.413999 0.596026  22
## 4                 max accuracy  0.446250 0.876676  18
## 5                max precision  0.811739 1.000000   0
## 6                   max recall  0.037682 1.000000 283
## 7              max specificity  0.811739 1.000000   0
## 8             max absolute_mcc  0.254391 0.469060  69
## 9   max min_per_class_accuracy  0.141051 0.716561 131
## 10 max mean_per_class_accuracy  0.113031 0.761821 157
## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
dfmodelh2 <- as.data.frame(h2o.varimp(modelh2))
dfmodelh2
##                       names coefficients sign
## 1                     pitch  0.700331511  NEG
## 2              timbre_1_min  0.510270513  POS
## 3              timbre_0_max  0.402059546  NEG
## 4              timbre_6_min  0.333316236  NEG
## 5             timbre_11_min  0.331647383  NEG
## 6              timbre_3_max  0.252425901  NEG
## 7             timbre_11_max  0.227500308  POS
## 8              timbre_4_max  0.210663865  POS
## 9              timbre_0_min  0.208516163  POS
## 10             timbre_5_min  0.202748055  NEG
## 11             timbre_4_min  0.197246582  POS
## 12            timbre_10_max  0.172729619  POS
## 13         tempo_confidence  0.167523934  POS
## 14 timesignature_confidence  0.167398830  POS
## 15             timbre_7_min  0.142450727  NEG
## 16             timbre_8_max  0.093377516  POS
## 17            timbre_10_min  0.090333426  POS
## 18            timesignature  0.085851625  POS
## 19             timbre_7_max  0.083948442  NEG
## 20           key_confidence  0.079657073  POS
## 21             timbre_6_max  0.076426046  POS
## 22             timbre_2_min  0.071957831  NEG
## 23             timbre_9_max  0.071393189  POS
## 24             timbre_8_min  0.070225578  POS
## 25                      key  0.061394702  POS
## 26             timbre_3_min  0.048384697  POS
## 27             timbre_1_max  0.044721121  NEG
## 28                   energy  0.039698433  POS
## 29             timbre_5_max  0.039469064  POS
## 30             timbre_2_max  0.018461133  POS
## 31                    tempo  0.013279926  POS
## 32             timbre_9_min  0.005282143  NEG
## 33                                    NA <NA>

## [1] 0.8431933

You can make the following observations:

  • The AUC metric is 0.8431933.
  • Inspecting the coefficient of the variable energy, Model 2 suggests that songs with high energy levels tend to be more popular. This matches expectations.
  • However, because H2O orders variables by importance, you can see that energy ranks near the bottom of the list and is not significant in this model.

You can conclude that Model 2 is not ideal for this use case, as energy is not significant.

Create Model 3: Keep loudness and omit energy

##  [1] "year"                     "songtitle"               
##  [3] "artistname"               "songid"                  
##  [5] "artistid"                 "timesignature"           
##  [7] "timesignature_confidence" "loudness"                
##  [9] "tempo"                    "tempo_confidence"        
## [11] "key"                      "key_confidence"          
## [13] "energy"                   "pitch"                   
## [15] "timbre_0_min"             "timbre_0_max"            
## [17] "timbre_1_min"             "timbre_1_max"            
## [19] "timbre_2_min"             "timbre_2_max"            
## [21] "timbre_3_min"             "timbre_3_max"            
## [23] "timbre_4_min"             "timbre_4_max"            
## [25] "timbre_5_min"             "timbre_5_max"            
## [27] "timbre_6_min"             "timbre_6_max"            
## [29] "timbre_7_min"             "timbre_7_max"            
## [31] "timbre_8_min"             "timbre_8_max"            
## [33] "timbre_9_min"             "timbre_9_max"            
## [35] "timbre_10_min"            "timbre_10_max"           
## [37] "timbre_11_min"            "timbre_11_max"           
## [39] "top10"
y.dep <- 39
x.indep <- c(6:12,14:38)
x.indep
##  [1]  6  7  8  9 10 11 12 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
## [24] 30 31 32 33 34 35 36 37 38
modelh3 <- h2o.glm( y = y.dep, x = x.indep, training_frame = train.h2o, family = "binomial")
  |                                                                 |   0%
  |========                                                         |  12%
  |=================================================================| 100%
h2o.performance(modelh3, newdata = test.h2o)
## H2OBinomialMetrics: glm
## MSE:  0.0978859
## RMSE:  0.3128672
## LogLoss:  0.3178367
## Mean Per-Class Error:  0.264925
## AUC:  0.8492389
## Gini:  0.6984778
## R^2:  0.2648836
## Null Deviance:  326.0801
## Residual Deviance:  237.1062
## AIC:  303.1062
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##          0  1    Error     Rate
## 0      286 28 0.089172  =28/314
## 1       26 33 0.440678   =26/59
## Totals 312 61 0.144772  =54/373
## Maximum Metrics: Maximum metrics at their respective thresholds
##                         metric threshold    value idx
## 1                       max f1  0.273799 0.550000  60
## 2                       max f2  0.125503 0.663265 155
## 3                 max f0point5  0.435479 0.628931  24
## 4                 max accuracy  0.435479 0.882038  24
## 5                max precision  0.821606 1.000000   0
## 6                   max recall  0.038328 1.000000 280
## 7              max specificity  0.821606 1.000000   0
## 8             max absolute_mcc  0.435479 0.471426  24
## 9   max min_per_class_accuracy  0.173693 0.745763 120
## 10 max mean_per_class_accuracy  0.125503 0.775073 155
## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
dfmodelh3 <- as.data.frame(h2o.varimp(modelh3))
dfmodelh3
##                       names coefficients sign
## 1              timbre_0_max 1.216621e+00  NEG
## 2                  loudness 9.780973e-01  POS
## 3                     pitch 7.249788e-01  NEG
## 4              timbre_1_min 3.891197e-01  POS
## 5              timbre_6_min 3.689193e-01  NEG
## 6             timbre_11_min 3.086673e-01  NEG
## 7              timbre_3_max 3.025593e-01  NEG
## 8             timbre_11_max 2.459081e-01  POS
## 9              timbre_4_min 2.379749e-01  POS
## 10             timbre_4_max 2.157627e-01  POS
## 11             timbre_0_min 1.859531e-01  POS
## 12             timbre_5_min 1.846128e-01  NEG
## 13 timesignature_confidence 1.729658e-01  POS
## 14             timbre_7_min 1.431871e-01  NEG
## 15            timbre_10_max 1.366703e-01  POS
## 16            timbre_10_min 1.215954e-01  POS
## 17         tempo_confidence 1.183698e-01  POS
## 18             timbre_2_min 1.019149e-01  NEG
## 19           key_confidence 9.109701e-02  POS
## 20             timbre_7_max 8.987908e-02  NEG
## 21             timbre_6_max 6.935132e-02  POS
## 22             timbre_8_max 6.878241e-02  POS
## 23            timesignature 6.120105e-02  POS
## 24                      key 5.814805e-02  POS
## 25             timbre_8_min 5.759228e-02  POS
## 26             timbre_1_max 2.930285e-02  NEG
## 27             timbre_9_max 2.843755e-02  POS
## 28             timbre_3_min 2.380245e-02  POS
## 29             timbre_2_max 1.917035e-02  POS
## 30             timbre_5_max 1.715813e-02  POS
## 31                    tempo 1.364418e-02  NEG
## 32             timbre_9_min 8.463143e-05  NEG
## 33                                    NA <NA>
## Warning in h2o.find_row_by_threshold(object, t): Could not find exact
## threshold: 0.5 for this set of metrics; using closest threshold found:
## 0.501855569251422. Run `h2o.predict` and apply your desired threshold on a
## probability column.
## [[1]]
## [1] 0.2033898
## [1] 0.8492389

You can make the following observations:

  • The AUC metric is 0.8492389.
  • From the confusion matrix, the model correctly predicts that 33 songs will be Top 10 hits (true positives). However, it also produces 28 false positives (songs the model predicted would be Top 10 hits but were not) and misses 26 actual hits (false negatives).
  • Loudness has a positive coefficient estimate, meaning that this model predicts that songs with heavier instrumentation tend to be more popular. This is the same conclusion from Model 2.
  • Loudness is significant in this model.

Overall, Model 3 predicts a higher number of top 10 hits with an accuracy rate that is acceptable. To choose the best fit for production runs, record labels should consider the following factors:

  • Desired model accuracy at a given threshold
  • Number of correct predictions for Top 10 hits
  • Tolerable number of false positives or false negatives
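These trade-offs can be quantified directly from a confusion matrix. As a sketch, using Model 3's counts from the output above (33 true positives, 28 false positives, 26 false negatives, 286 true negatives):

```r
# Counts from Model 3's confusion matrix at the F1-optimal threshold.
tp <- 33; fp <- 28; fn <- 26; tn <- 286

precision <- tp / (tp + fp)                   # share of predicted hits that were real
recall    <- tp / (tp + fn)                   # share of actual hits that were caught
accuracy  <- (tp + tn) / (tp + tn + fp + fn)
f1        <- 2 * precision * recall / (precision + recall)

round(c(precision = precision, recall = recall, accuracy = accuracy, f1 = f1), 4)
# precision 0.5410, recall 0.5593, accuracy 0.8552, f1 0.5500
```

Note that the accuracy of 0.8552 and the F1 of 0.55 match the values reported in the model output, so a record label can reproduce these metrics for any candidate threshold.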

Next, make predictions using Model 3 on the test dataset.

predict.regh <- h2o.predict(modelh3, test.h2o)
  |                                                                 |   0%
  |=================================================================| 100%
##   predict        p0          p1
## 1       0 0.9654739 0.034526052
## 2       0 0.9654748 0.034525236
## 3       0 0.9635547 0.036445318
## 4       0 0.9343579 0.065642149
## 5       0 0.9978334 0.002166601
## 6       0 0.9779949 0.022005078
## [373 rows x 3 columns]
##   predict
## 1       0
## 2       0
## 3       0
## 4       0
## 5       0
## 6       0
## [373 rows x 1 column]
dpr <- as.data.frame(predict.regh)
#Rename the predicted column
colnames(dpr)[colnames(dpr) == 'predict'] <- 'predict_top10'
table(dpr$predict_top10)
##   0   1 
## 312  61

The first set of output results specifies the probabilities associated with each predicted observation. For example, observation 1 is 96.54739% likely not to be a Top 10 hit and 3.4526052% likely to be one (predict=1 indicates a Top 10 hit; predict=0 indicates not a Top 10 hit). The second set of results lists the actual predictions made. From the third set of results, this model predicts that 61 songs will be Top 10 hits.
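If the record label wants a stricter or looser cutoff than the default, the probability column can be re-thresholded directly. A sketch with toy probabilities (the name p1 mirrors the column in H2O's prediction frame):

```r
# Toy probabilities of being a Top 10 hit, shaped like H2O's p1 column.
p1 <- c(0.034, 0.65, 0.12, 0.81, 0.40)

threshold <- 0.5                          # pick a cutoff suited to the business risk
predict_top10 <- as.integer(p1 > threshold)
predict_top10
## [1] 0 1 0 1 0
```

Raising the threshold reduces false positives (fewer bad investments) at the cost of missing more true hits.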

Compute the baseline accuracy by assuming that the baseline predicts the most frequent outcome, which is that most songs are not Top 10 hits.

table(BillboardTest$top10)
##   0   1 
## 314  59

Now observe that the baseline model would get 314 observations correct, and 59 wrong, for an accuracy of 314/(314+59) = 0.8418231.
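The baseline arithmetic can be reproduced in a couple of lines of R; the class counts (314 non-hits, 59 hits) come from the table above:

```r
# Baseline: always predict the majority class ("not a Top 10 hit").
counts <- c(not_top10 = 314, top10 = 59)
baseline_accuracy <- unname(counts["not_top10"] / sum(counts))
round(baseline_accuracy, 7)
## [1] 0.8418231
```

Any useful model has to beat this number, since it is achievable without looking at the songs at all.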

It seems that Model 3, with an accuracy of 0.8552, provides you with a small improvement over the baseline model. But is this model useful for record labels?

View the two models from an investment perspective:

  • A production company is interested in investing in songs that are more likely to make it to the Top 10. The company’s objective is to minimize the risk of financial losses attributed to investing in songs that end up unpopular.
  • How many songs does Model 3 correctly predict as a Top 10 hit in 2010? Looking at the confusion matrix, you see that it correctly predicts 33 Top 10 hits at the optimal threshold, which is more than half of the 59 songs that actually made the Top 10 that year.
  • It will be more useful to the record label if you can provide the production company with a list of songs that are highly likely to end up in the Top 10.
  • The baseline model is not useful, as it simply does not label any song as a hit.

Considering the three models built so far, you can conclude that Model 3 proves to be the best investment choice for the record label.

GBM model

H2O provides you with the ability to explore other learning models, such as GBM and deep learning. Explore building a model using the GBM technique, using the built-in h2o.gbm function.

Before you do this, you need to convert the target variable to a factor so that H2O treats this as a classification problem.

train.h2o$top10 <- as.factor(train.h2o$top10)
gbm.modelh <- h2o.gbm(y = y.dep, x = x.indep, training_frame = train.h2o, ntrees = 500, max_depth = 4, learn_rate = 0.01, seed = 1122, distribution = "multinomial")
  |                                                                 |   0%
  |===                                                              |   5%
  |=====                                                            |   7%
  |======                                                           |   9%
  |=======                                                          |  10%
  |======================                                           |  33%
  |=====================================                            |  56%
  |====================================================             |  79%
  |================================================================ |  98%
  |=================================================================| 100%
## H2OBinomialMetrics: gbm
## MSE:  0.09860778
## RMSE:  0.3140188
## LogLoss:  0.3206876
## Mean Per-Class Error:  0.2120263
## AUC:  0.8630573
## Gini:  0.7261146
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##          0  1    Error     Rate
## 0      266 48 0.152866  =48/314
## 1       16 43 0.271186   =16/59
## Totals 282 91 0.171582  =64/373
## Maximum Metrics: Maximum metrics at their respective thresholds
##                       metric threshold    value idx
## 1                     max f1  0.189757 0.573333  90
## 2                     max f2  0.130895 0.693717 145
## 3               max f0point5  0.327346 0.598802  26
## 4               max accuracy  0.442757 0.876676  14
## 5              max precision  0.802184 1.000000   0
## 6                 max recall  0.049990 1.000000 284
## 7            max specificity  0.802184 1.000000   0
## 8           max absolute_mcc  0.169135 0.496486 104
## 9 max min_per_class_accuracy  0.169135 0.796610 104
## 10 max mean_per_class_accuracy  0.169135 0.805948 104
## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
## Warning in h2o.find_row_by_threshold(object, t): Could not find exact
## threshold: 0.5 for this set of metrics; using closest threshold found:
## 0.501205344484314. Run `h2o.predict` and apply your desired threshold on a
## probability column.
## [[1]]
## [1] 0.1355932
## [1] 0.8630573

This model correctly predicts 43 top 10 hits, which is 10 more than the number predicted by Model 3. Moreover, the AUC metric is higher than the one obtained from Model 3.

As seen above, H2O’s API provides the ability to obtain key statistical measures required to analyze the models easily, using several built-in functions. The record label can experiment with different parameters to arrive at the model that predicts the maximum number of Top 10 hits at the desired level of accuracy and threshold.

H2O also allows you to experiment with deep learning models. Deep learning models have the ability to learn features implicitly, but can be more expensive computationally.

Now, create a deep learning model with the h2o.deeplearning function, using the same training and test datasets created before. The time taken to run this model depends on the type of EC2 instance chosen for this purpose.  For models that require more computation, consider using accelerated computing instances such as the P2 instance type.

  system.time(
    dlearning.modelh <- h2o.deeplearning(y = y.dep,
                                         x = x.indep,
                                         training_frame = train.h2o,
                                         epochs = 250,
                                         hidden = c(250, 250),
                                         activation = "Rectifier",
                                         seed = 1122)
  )
  |                                                                 |   0%
  |===                                                              |   4%
  |=====                                                            |   8%
  |========                                                         |  12%
  |==========                                                       |  16%
  |=============                                                    |  20%
  |================                                                 |  24%
  |==================                                               |  28%
  |=====================                                            |  32%
  |=======================                                          |  36%
  |==========================                                       |  40%
  |=============================                                    |  44%
  |===============================                                  |  48%
  |==================================                               |  52%
  |====================================                             |  56%
  |=======================================                          |  60%
  |==========================================                       |  64%
  |============================================                     |  68%
  |===============================================                  |  72%
  |=================================================                |  76%
  |====================================================             |  80%
  |=======================================================          |  84%
  |=========================================================        |  88%
  |============================================================     |  92%
  |==============================================================   |  96%
  |=================================================================| 100%
##    user  system elapsed 
##   1.216   0.020 166.508
## H2OBinomialMetrics: deeplearning
## MSE:  0.1678359
## RMSE:  0.4096778
## LogLoss:  1.86509
## Mean Per-Class Error:  0.3433013
## AUC:  0.7568822
## Gini:  0.5137644
## Confusion Matrix (vertical: actual; across: predicted) for F1-optimal threshold:
##          0  1    Error     Rate
## 0      290 24 0.076433  =24/314
## 1       36 23 0.610169   =36/59
## Totals 326 47 0.160858  =60/373
## Maximum Metrics: Maximum metrics at their respective thresholds
##                       metric threshold    value idx
## 1                     max f1  0.826267 0.433962  46
## 2                     max f2  0.000000 0.588235 239
## 3               max f0point5  0.999929 0.511811  16
## 4               max accuracy  0.999999 0.865952  10
## 5              max precision  1.000000 1.000000   0
## 6                 max recall  0.000000 1.000000 326
## 7            max specificity  1.000000 1.000000   0
## 8           max absolute_mcc  0.999929 0.363219  16
## 9 max min_per_class_accuracy  0.000004 0.662420 145
## 10 max mean_per_class_accuracy  0.000000 0.685334 224
## Gains/Lift Table: Extract with `h2o.gainsLift(<model>, <data>)` or `h2o.gainsLift(<model>, valid=<T/F>, xval=<T/F>)`
## Warning in h2o.find_row_by_threshold(object, t): Could not find exact
## threshold: 0.5 for this set of metrics; using closest threshold found:
## 0.496293348880151. Run `h2o.predict` and apply your desired threshold on a
## probability column.
## [[1]]
## [1] 0.3898305
## [1] 0.7568822

The AUC metric for this model is 0.7568822, which is lower than what you got from the earlier models. I recommend further experimentation with different hyperparameters, such as the learning rate, the number of epochs, or the number of hidden layers.

H2O’s built-in functions provide many key statistical measures that can help measure model performance. Here are some of these key terms.

Metric Description
Sensitivity Measures the proportion of positives that have been correctly identified. It is also called the true positive rate, or recall.
Specificity Measures the proportion of negatives that have been correctly identified. It is also called the true negative rate.
Threshold The cutoff point that balances specificity and sensitivity. While the model may not provide the highest prediction at this point, it is not biased towards positives or negatives.
Precision The fraction of positively classified examples that are actually positive, for example, how many of the predicted Top 10 hits are real hits.

AUC Provides insight into how well the classifier is able to separate the two classes. It is particularly useful when the sample distribution is highly skewed and a model might otherwise overfit to a single class. A common grading scale:

0.90 – 1.00 = excellent (A)

0.80 – 0.90 = good (B)

0.70 – 0.80 = fair (C)

0.60 – 0.70 = poor (D)

0.50 – 0.60 = fail (F)
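These definitions can be checked by hand against the GBM confusion matrix shown earlier (266 true negatives, 48 false positives, 16 false negatives, 43 true positives). A small illustrative sketch in Python (the variable names are mine, not H2O's):

```python
# Confusion matrix values from the GBM output above
# (vertical: actual; across: predicted).
tn, fp = 266, 48   # actual non-hits: 314 total
fn, tp = 16, 43    # actual Top 10 hits: 59 total

sensitivity = tp / (tp + fn)                # true positive rate (recall)
specificity = tn / (tn + fp)                # true negative rate
precision = tp / (tp + fp)                  # fraction of predicted hits that are real
accuracy = (tp + tn) / (tp + tn + fp + fn)

# Mean per-class error, averaging the two per-class error rates;
# this reproduces the 0.2120263 that H2O reported above.
mean_per_class_error = (fp / (tn + fp) + fn / (tp + fn)) / 2

print(f"sensitivity: {sensitivity:.4f}")    # 0.7288
print(f"specificity: {specificity:.4f}")    # 0.8471
print(f"precision:   {precision:.4f}")      # 0.4725
print(f"accuracy:    {accuracy:.4f}")       # 0.8284
print(f"mean per-class error: {mean_per_class_error:.7f}")  # 0.2120263
```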

Here’s a summary of the metrics generated from H2O’s built-in functions for the three models that produced useful results.

Metric   Model 3     GBM Model   Deep Learning Model
AUC      0.8492389   0.8630573   0.7568822

Your options at this point could be narrowed down to Model 3 and the GBM model, based on the AUC and accuracy metrics observed earlier.  If the slightly lower accuracy of the GBM model is deemed acceptable, the record label can choose to go to production with the GBM model, as it can predict a higher number of Top 10 hits.  The AUC metric for the GBM model is also higher than that of Model 3.

Record labels can experiment with different learning techniques and parameters before arriving at a model that proves to be the best fit for their business. Because deep learning models can be computationally expensive, record labels can choose more powerful EC2 instances on AWS to run their experiments faster.


In this post, I showed how the popular music industry can use analytics to predict the type of songs that make the Top 10 Billboard charts. By running H2O’s scalable machine learning platform on AWS, data scientists can easily experiment with multiple modeling techniques and interactively query the data using Amazon Athena, without having to manage the underlying infrastructure. This helps record labels make critical decisions on the type of artists and songs to promote in a timely fashion, thereby increasing sales and revenue.

If you have questions or suggestions, please comment below.

Additional Reading

Learn how to build and explore a simple GEOINT application using SparkR.

About the Authors

Gopal Wunnava is a Partner Solution Architect with the AWS GSI Team. He works with partners and customers on big data engagements, and is passionate about building analytical solutions that drive business capabilities and decision making. In his spare time, he loves all things sports and movies, and is fond of old classics like Asterix and Obelix comics and Hitchcock movies.



Bob Strahan, a Senior Consultant with AWS Professional Services, contributed to this post.



Popcorn Time Creator Readies BitTorrent & Blockchain-Powered Video Platform

Post Syndicated from Andy original https://torrentfreak.com/popcorn-time-creator-readies-bittorrent-blockchain-powered-youtube-competitor-171012/

Without a doubt, YouTube is one of the most important websites available on the Internet today.

Its massive archive of videos brings pleasure to millions on a daily basis but its centralized nature means that owner Google always exercises control.

Over the years, people have looked to decentralize the YouTube concept and the latest project hoping to shake up the market has a particularly interesting player onboard.

Until 2015, only insiders knew that Argentinian designer Federico Abad was actually ‘Sebastian’, the shadowy figure behind notorious content sharing platform Popcorn Time.

Now he’s part of the team behind Flixxo, a BitTorrent and blockchain-powered startup hoping to wrestle a share of the video market from YouTube. Here’s how the team, which features blockchain startup RSK Labs, hope things will play out.

The Flixxo network will have no centralized storage of data, eliminating the need for expensive hosting along with associated costs. Instead, transfers will take place between peers using BitTorrent, meaning video content will be stored on the machines of Flixxo users. In practice, the content will be downloaded and uploaded in much the same way as users do on The Pirate Bay or indeed Abad’s baby, Popcorn Time.

However, there’s a twist to the system that envisions content creators, content consumers, and network participants (seeders) making revenue from their efforts.

At the heart of the Flixxo system are digital tokens (think virtual currency), called Flixx. These Flixx ‘coins’, which will go on sale in 12 days, can be used to buy access to content. Creators can also opt to pay consumers when those people help to distribute their content to others.

“Free from structural costs, producers can share the earnings from their content with the network that supports them,” the team explains.

“This way you get paid for helping us improve Flixxo, and you earn credits (in the form of digital tokens called Flixx) for watching higher quality content. Having no intermediaries means that the price you pay for watching the content that you actually want to watch is lower and fairer.”

The Flixxo team

In addition to earning tokens from helping to distribute content, people in the Flixxo ecosystem can also earn currency by watching sponsored content, i.e. advertisements. While in a traditional system adverts are often considered a nuisance, Flixx tokens have real value, with a promise that users will be able to trade their Flixx not only for videos, but also for tangible and semi-tangible goods.

“Use your Flixx to reward the producers you follow, encouraging them to create more awesome content. Or keep your Flixx in your wallet and use them to buy a movie ticket, a pair of shoes from an online retailer, a chest of coins in your favourite game or even convert them to old-fashioned cash or up-and-coming digital assets, like Bitcoin,” the team explains.

The Flixxo team have big plans. After foundation in early 2016, the second quarter of 2017 saw the completion of a functional alpha release. In a little under two weeks, the project will begin its token generation event, with new offices in Los Angeles planned for the first half of 2018 alongside a premiere of the Flixxo platform.

“A total of 1,000,000,000 (one billion) Flixx tokens will be issued. A maximum of 300,000,000 (three hundred million) tokens will be sold. Some of these tokens (not more than 33% or 100,000,000 Flixx) may be sold with anticipation of the token allocation event to strategic investors,” Flixxo states.

Like all content platforms, Flixxo will live or die by the quality of the content it provides and whether, at least in the first instance, it can persuade people to part with their hard-earned cash. Only time will tell whether its content will be worth a premium over readily accessible YouTube content but with much-reduced costs, it may tempt creators seeking a bigger piece of the pie.

“Flixxo will also educate its community, teaching its users that in this new internet era value can be held and transferred online without intermediaries, a value that can be earned back by participating in a community, by contributing, being rewarded for every single social interaction,” the team explains.

Of course, the elephant in the room is what will happen when people begin sharing copyrighted content via Flixxo. Certainly, the fact that Popcorn Time’s founder is a key player and rival streaming platform Stremio is listed as a partner means that things could get a bit spicy later on.

Nevertheless, the team suggests that piracy and spam content distribution will be limited by mechanisms already built into the system.

“[A]uthors have to time-block tokens in a smart contract (set as a warranty) in order to upload content. This contract will also handle and block their earnings for a certain period of time, so that in the case of a dispute the unfair-uploader may lose those tokens,” they explain.

That being said, Flixxo also says that “there is no way” for third parties to censor content “which means that anyone has the chance of making any piece of media available on the network.” However, Flixxo says it will develop tools for filtering what it describes as “inappropriate content.”

At this point, things start to become a little unclear. On the one hand Flixxo says it could become a “revolutionary tool for uncensorable and untraceable media” yet on the other it says that it’s necessary to ensure that adult content, for example, isn’t seen by kids.

“We know there is a thin line between filtering or curating content and censorship, and it is a fact that we have an open network for everyone to upload any content. However, Flixxo as a platform will apply certain filtering based on clear rules – there should be a behavior-code for uploaders in order to offer the right content to the right user,” Flixxo explains.

To this end, Flixxo says it will deploy a centralized curation function, carried out by 101 delegates elected by the community, which will become progressively decentralized over time.

“This curation will have a cost, paid in Flixx, and will be collected from the warranty blocked by the content uploaders,” they add.

There can be little doubt that if Flixxo begins ‘curating’ unsuitable content, copyright holders will call on it to do the same for their content too. And, if the platform really takes off, 101 curators probably won’t scratch the surface. There’s also the not inconsiderable issue of what might happen to curators’ judgment when they’re incentivized to block content.

Finally, for those sick of “not available in your region” messages, there’s good and bad news. Flixxo insists there will be no geo-blocking of content on its part but individual creators will still have that feature available to them, should they choose.

The Flixx whitepaper can be downloaded here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

SOPA Ghosts Hinder U.S. Pirate Site Blocking Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/sopa-ghosts-hinder-u-s-pirate-site-blocking-efforts-171008/

Website blocking has become one of the entertainment industries’ favorite anti-piracy tools.

All over the world, major movie and music industry players have gone to court demanding that ISPs take action, often with great success.

Internal MPAA research showed that website blockades help to deter piracy, and former boss Chris Dodd said that they are one of the most effective anti-piracy tools available.

While not everyone is in agreement on this, the numbers are used to lobby politicians and convince courts. Interestingly, however, nothing is happening in the United States, which is where most pirate site visitors come from.

This is baffling to many people. Why would US-based companies go out of their way to demand ISP blocking in the most exotic locations, but fail to do the same at home?

We posed this question to Neil Turkewitz, RIAA’s former Executive Vice President International, who currently runs his own consulting group.

The main reason why pirate site blocking requests have not yet been made in the United States is down to SOPA. When the proposed SOPA legislation made headlines five years ago there was a massive backlash against website blocking, which isn’t something copyright groups want to reignite.

“The legacy of SOPA is that copyright industries want to avoid resurrecting the ghosts of SOPA past, and principally focus on ways to creatively encourage cooperation with platforms, and to use existing remedies,” Turkewitz tells us.

Instead of taking the likes of Comcast and Verizon to court, the entertainment industries focused on voluntary agreements, such as the now-defunct Copyright Alerts System. However, that doesn’t mean that website blocking and domain seizures are not an option.

“SOPA made ‘website blocking’ as such a four-letter word. But this is actually fairly misleading,” Turkewitz says.

“There have been a variety of civil and criminal actions addressing the conduct of entities subject to US jurisdiction facilitating piracy, regardless of the source, including hundreds of domain seizures by DHS/ICE.”

Indeed, there are plenty of legal options already available to do much of what SOPA promised. ABS-CBN has taken over dozens of pirate site domain names through the US court system. Most recently even through an ex-parte order, meaning that the site owners had no option to defend themselves before they lost their domains.

ISP and search engine blocking is also around the corner. As we reported earlier this week, a Virginia magistrate judge recently recommended an injunction which would require search engines and Internet providers to prevent users from accessing Sci-Hub.

Still, the major movie and music companies are not yet using these tools to take on The Pirate Bay or other major pirate sites. If it’s so easy, then why not? Apparently, SOPA may still be in the back of their minds.

Interestingly, the RIAA’s former top executive wasn’t a fan of SOPA when it was first announced, as it wouldn’t do much to extend the legal remedies that were already available.

“I actually didn’t like SOPA very much since it mostly reflected existing law and maintained a paradigm that didn’t involve ISP’s in creative interdiction, and simply preserved passivity. To see it characterized as ‘copyright gone wild’ was certainly jarring and incongruous,” Turkewitz says.

Ironically, it looks like a bill that failed to pass, and didn’t impress some copyright holders to begin with, is still holding them back after five years. They’re certainly not using all the legal options available to avoid SOPA comparison. The question is, for how long?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Yarrrr! Dutch ISPs Block The Pirate Bay But It’s Bad Timing for Trolls

Post Syndicated from Andy original https://torrentfreak.com/yarrrr-dutch-isps-block-the-pirate-bay-but-its-bad-timing-for-trolls-171005/

While many EU countries have millions of Internet pirates, few have given citizens the freedom to plunder like the Netherlands. For many years, Dutch Internet users actually went about their illegal downloading with government blessing.

Just over three years ago, downloading and copying movies and music for personal use was not punishable by law. Instead, the Dutch compensated rightsholders through a “piracy levy” on writable media, hard drives and electronic devices with storage capacity, including smartphones.

Following a ruling from the European Court of Justice in 2014, however, all that came to an end. Along with uploading (think BitTorrent sharing), downloading was also outlawed.

Around the same time, The Court of The Hague handed down a decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

Ruling against local anti-piracy outfit BREIN, it was decided that the ISPs wouldn’t have to block The Pirate Bay after all. After a long and tortuous battle, however, the ISPs learned last month that they would have to block the site, pending a decision from the Supreme Court.

On September 22, both ISPs were given 10 business days to prevent subscriber access to the notorious torrent site, or face fines of 2,000 euros per day, up to a maximum of one million euros.

With that time nearly up, yesterday Ziggo broke cover to become the first of the pair to block the site. On a dedicated diversion page, somewhat humorously titled ziggo.nl/yarrr, the ISP explained the situation to now-blocked users.

“You are trying to visit a page of The Pirate Bay. On September 22, the Hague Court obliged us to block access to this site. The pirate flag is thus handled by us. The case is currently at the Supreme Court which judges the basic questions in this case,” the notice reads.

Ziggo Pirate Bay message (translated)

Customers of XS4ALL currently have no problem visiting The Pirate Bay but according to a statement handed to Tweakers by a spokesperson, the blockade will be implemented today.

In addition to the site’s main domains, the injunction will force the ISPs to block 155 URLs and IP addresses in total, a list drawn up by BREIN to include various mirrors, proxies, and alternate access points. XS4ALL says it will publish a list of all the blocked items on its notification page.

While the re-introduction of a Pirate Bay blockade in the Netherlands is an achievement for BREIN, it’s potentially bad timing for the copyright trolls waiting in the wings to snare Dutch file-sharers.

As recently reported, movie outfit Dutch Filmworks (DFW) is preparing a wave of cash-settlement copyright-trolling letters to mimic those sent by companies elsewhere.

There’s little doubt that users of The Pirate Bay would’ve been DFW’s targets but it seems likely that given the introduction of blockades, many Dutch users will start to educate themselves on the use of VPNs to protect their privacy, or at least become more aware of the risks.

Of course, there will be no real shortage of people who’ll continue to download without protection, but DFW are getting into this game just as it’s likely to get more difficult for them. As more and more sites get blocked (and that is definitely BREIN’s overall plan) the low hanging fruit will sit higher and higher up the tree – and the cash with it.

Like all methods of censorship, site-blocking eventually drives communication underground. While anti-piracy outfits all say blocking is necessary, obfuscation and encryption isn’t welcomed by any of them.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Cloudflare Bans Sites For Using Cryptocurrency Miners

Post Syndicated from Andy original https://torrentfreak.com/cloudflare-bans-sites-for-using-cryptocurrency-miners-171004/

After years of accepting donations via Bitcoin, last month various ‘pirate’ sites began to generate digital currency revenues in a brand new way.

It all began with The Pirate Bay, which quietly added a Javascript cryptocurrency miner to its main site, something that first manifested itself as a large spike in CPU utilization on the machines of visitors.

The stealth addition to the platform, which its operators later described as a test, was extremely controversial. While many thought of the miner as a cool and innovative way to generate revenue in a secure fashion, a vocal majority expressed a preference for permission being requested first, in case they didn’t want to participate in the program.

Over the past couple of weeks, several other sites have added similar miners, some which ask permission to run and others that do not. While the former probably aren’t considered problematic, the latter are now being viewed as a serious problem by an unexpected player in the ecosystem.

TorrentFreak has learned that popular CDN service Cloudflare, which is often criticized for not being harsh enough on ‘pirate’ sites, is actively suspending the accounts of sites that deploy cryptocurrency miners on their platforms.

“Cloudflare kicked us from their service for using a Coinhive miner,” the operator of ProxyBunker.online informed TF this morning.

ProxyBunker is a site that links to several other domains that offer unofficial proxy services for the likes of The Pirate Bay, RARBG, KickassTorrents, Torrentz2, and dozens of other sites. It first tested a miner for four days starting September 23. Official implementation began October 1 but was ended abruptly last evening.

“Late last night, all our domains got deleted off Cloudflare without warning so I emailed Cloudflare to ask what was going on,” the operator explained.

Bye bye

As the email above shows, Cloudflare cited only a “possible” terms of service violation. Further clarification was needed to get to the root of the problem.

So, just a few minutes later, the site operator contacted Cloudflare, acknowledging the suspension but pointing out that the notification email was somewhat vague and didn’t give a reason for the violation. A follow-up email from Cloudflare certainly put some meat on the bones.

“Multiple domains in your account were injecting Coinhive mining code without notifying users and without any option to disabling [sic] the mining,” wrote Justin Paine, Head of Trust & Safety at Cloudflare.

“We consider this to be malware, and as such the account was suspended, and all domains removed from Cloudflare.”

Cloudflare: Unannounced miners are malware

ProxyBunker’s operator wrote back to Cloudflare explaining that the Coinhive miner had been running on his domains but that his main domain had a way of disabling mining, as per new code made available from Coinhive.

“We were running the miner on our proxybunker.online domain using Coinhive’s new Javacode Simple Miner UI that lets the user stop the miner at anytime and set the CPU speed it mines at,” he told TF.

Nevertheless, some element of the configuration appears to have fallen short of Cloudflare’s standards. So, shortly after Cloudflare’s explanation, the site operator asked if he could be reinstated if he completely removed the miner from his site. The response was a ‘yes’ but with a stern caveat attached.

“We will remove the account suspension, however do note you’ll need to re-sign up the domains as they were removed as a result of the account suspension. Please note — if we discover similar activity again the domains and account will be permanently blocked,” Cloudflare’s Justin warned.

ProxyBunker’s operator says that while he sees the value in cryptocurrency miners, he can understand why people might be opposed to them too. That being said, he would appreciate it if services like Cloudflare published clear guidelines on what is and is not acceptable.

“We do understand that most users will not like the miner using up a bit of their CPU but we do see the full potential as a new revenue stream,” he explains.

“I think third-party services need to post clear information that they’re not allowed on their services, if that’s the case.”

At time of publication, Cloudflare had not responded to TorrentFreak’s requests for comment.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Judge Recommends ISP and Search Engine Blocking of Sci-Hub in the US

Post Syndicated from Ernesto original https://torrentfreak.com/judge-recommends-isp-search-engine-blocking-sci-hub-us-171003/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site. In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

While the request is rather unprecedented for the US, as it includes search engine and ISP blocking, Magistrate Judge John Anderson has included these measures in his recommendations.

Judge Anderson agrees that Sci-Hub is guilty of copyright and trademark infringement. In addition to $4,800,000 in statutory damages, he recommends a broad injunction that would require search engines, ISPs, domain registrars and other services to block Sci-Hub’s domain names.

“… the undersigned recommends that it be ordered that any person or entity in privity with Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works.”

The recommendation

In addition to the above, domain registries and registrars will also be required to suspend Sci-Hub’s domain names. This also happened previously in a different lawsuit, but Sci-Hub swiftly moved to a new domain at the time.

“Finally, the undersigned recommends that it be ordered that the domain name registries and/or registrars for Sci-Hub’s domain names and websites, or their technical administrators, shall place the domain names on registryHold/serverHold or such other status to render the names/sites non-resolving,” the recommendation adds.

If the U.S. District Court Judge adopts this recommendation, it would mean that Internet providers such as Comcast could be ordered to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States.

This would likely trigger a response from affected Internet services, which generally want to avoid being dragged into these cases. They certainly don’t want such far-reaching measures to be introduced through a default order.

Sci-Hub itself doesn’t seem to be too bothered by the blocking prospect or the millions in damages it faces. The site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

Magistrate Judge John Anderson’s full findings of fact and recommendations are available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Cryptocurrency Miner Targeted by Anti-Virus and Adblock Tools

Post Syndicated from Ernesto original https://torrentfreak.com/cryptocurrency-miner-targeted-by-anti-virus-and-adblock-tools-170926/

Earlier this month The Pirate Bay caused some uproar by adding a Javascript-based cryptocurrency miner to its website.

The miner utilizes CPU power from visitors to generate Monero coins for the site, providing an extra revenue source.

While Pirate Bay only tested the option briefly, it inspired many others to follow suit. Streaming related sites such as Alluc, Vidoza, and Rapidvideo jumped on board, and torrent site Demonoid also ran some tests.

During the weekend, Coinhive’s miner code even appeared on the official website of Showtime. The code was quickly removed and it’s still unclear how it got there, as the company refuses to comment. It’s clear, though, that miners are a hot topic thanks to The Pirate Bay.

The revenue potential is also real. TorrentFreak spoke to Vidoza, who say that with 30,000 users online throughout the day (2M unique visitors), they can make between $500 and $600. That’s with the miner throttled at 50%. Although ads can bring in more, it’s not insignificant.
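Vidoza’s figures invite a quick sanity check. The sketch below is back-of-envelope only: the per-day interpretation of the $500–600 range and the assumption that revenue scales linearly with the CPU throttle are mine, not anything Vidoza stated.

```python
# Back-of-envelope check on the Vidoza figures quoted above.
# Assumptions (mine, not the article's): the $500-600 range is per day,
# and miner revenue scales linearly with the CPU throttle setting.

CONCURRENT_USERS = 30_000   # "30,000 online users throughout the day"
DAILY_REVENUE_USD = 550.0   # midpoint of the quoted $500-600
THROTTLE = 0.5              # miner capped at 50% CPU

per_user_per_day = DAILY_REVENUE_USD / CONCURRENT_USERS
full_throttle_daily = DAILY_REVENUE_USD / THROTTLE  # if scaling were linear

print(f"~${per_user_per_day:.4f} per concurrent user per day")
print(f"~${full_throttle_daily:.0f}/day at 100% CPU (linear assumption)")
```

Under those assumptions, each concurrent user is worth roughly two cents a day, which explains why the model only works at scale.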

That said, all the uproar about cryptocurrency miners and their possible abuse has also attracted the attention of ad-blockers. Some people have coded new browser add-ons to block miners specifically and the popular uBlock Origin added Coinhive to its default blocklist as well. And that’s just after a few days.
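For context, the blocking itself is nothing exotic: ad-blockers work from filter lists, and a miner can be cut off with a couple of network rules. A hypothetical fragment in the Adblock Plus filter syntax (which uBlock Origin also understands) might look like the following; the second domain is illustrative:

```
! Illustrative rules: block the miner library at the network level,
! and any third-party loads of a known alternate domain
||coinhive.com^
||coin-hive.com^$third-party
```

Once a rule like this lands in a default blocklist, every user of that blocker stops mining overnight, which is exactly the revenue problem the site operators describe.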

Needless to say, this limits the number of miners, and thus the money that comes in. And there’s another problem with a similar effect.

In addition to ad-blockers, anti-virus tools are also flagging Coinhive. Malwarebytes is one of the companies that lists it as a malicious activity, warning users about the threat.

The anti-virus angle is one of the issues that worries Demonoid’s operator. The site is used to ad-blockers, but getting flagged by anti-virus companies is of a different order.

“The problem I see there and the reason we will likely discontinue [use of the miner] is that some anti-virus programs block it, and that might get the site on their blacklists,” Deimos informs TorrentFreak.

Demonoid’s miner announcement

Vidoza operator Eugene sees all the blocking as an unwelcome development and hopes that Coinhive will tackle it. The company may want to come out publicly and discuss the issue with ad-blockers and anti-virus companies, he says.

“They should find out under what conditions all these guys will stop blocking the script,” he notes.

The other option would be to circumvent the blocking through proxies and circumvention tools, but that might not be the best choice in the long run.

Coinhive, meanwhile, has chimed in as well. The company says that it wasn’t properly prepared for the massive attention and understands why some ad-blockers have put them on the blacklist.

“Providing a real alternative to ads and users who block them turned out to be a much harder problem. Coinhive, too, is now blocked by many ad-block browser extensions, which – we have to admit – is reasonable at this point.”

Most complaints have been targeted at sites that implemented the miner without the user’s consent. Coinhive doesn’t like this either and will take steps to prevent it in future.

“We’re a bit saddened to see that some of our customers integrate Coinhive into their pages without disclosing to their users what’s going on, let alone asking for their permission,” the Coinhive team notes.

The crypto miner provider is working on a new implementation that requires explicit consent from website visitors in order to run. This should deal with most of the negative responses.

If users start mining voluntarily, then ad-blockers and anti-virus companies should no longer have a reason to block the script. Nor will it be easy for malware peddlers to abuse it.

To be continued.


Canadian ISP Bell Calls For Pirate Site Blacklist in NAFTA Hearing

Post Syndicated from Ernesto original https://torrentfreak.com/canadian-isp-bell-calls-for-nationwide-pirate-site-blacklist-170925/

Website blocking has become a common tool for copyright holders to target online piracy.

In most countries, these blockades are ordered by local courts, which compel Internet providers to restrict access to certain websites.

While most ISPs initially object to such restrictions, the largest Canadian telco Bell is actively calling for such measures. In a hearing before the Standing Committee on International Trade on NAFTA, the company is clear on how online piracy should be curbed.

Rob Malcolmson, Bell’s Senior Vice-President Regulatory Affairs, mentioned that the United States has repeatedly complained about Canada’s apparent lack of copyright enforcement. To make NAFTA “work better” for Canadian culture in the digital economy, stronger enforcement is crucial.

“US interests have long complained that widespread online copyright infringement here in Canada is limiting the growth of the digital economy. In fact, many of the most prominent global players in the piracy ecosystem operate out of Canada as a relative safe harbor,” Malcolmson said.

“We recommend that the Government commits to stronger intellectual property enforcement by having an administrative agency dedicated to such enforcement and by prioritizing enforcement against digital pirates.”

In Bell’s view, all Canadian Internet providers should be required to block access to the most egregious pirate sites, without intervention from the courts.

“We would like to see measures put in place whereby all Internet service providers are required to block consumer access to pirated websites. In our view, that is the only way to stop it,” Malcolmson said.

The telco, which is a copyright holder itself, has clearly thought the plan through. It notes that Internet providers shouldn’t be tasked with determining which sites should be blocked. This should be the job of an independent outfit. Alternatively, the Canadian telecoms regulator CRTC could oversee the blocking scheme.

“In our view, it would be an independent agency that would be charged with that task. You certainly would not want the ISPs acting as censors as to what content is pirate content,” Malcolmson said.

“But, surely, an independent third party agency could be formed, could create a blacklist of pirate sites, and then the ISPs would be required to block it. That is at a high level how we would see it unfolding, perhaps overseen by a regulator like the CRTC.”

In addition to website blocking, Bell also recommends criminalizing commercial copyright infringement, which would support stronger enforcement against online piracy.

Canadian law professor Michael Geist, who picked up Bell’s controversial comments, is very critical of the recommendations. Geist says that the proposal goes above and beyond what US copyright holders have asked for.

“The Bell proposals […] suggest that the company’s position as a common carrier representing the concerns of ISPs and their subscribers is long over,” Geist writes.

“Instead, Bell’s copyright advocacy goes beyond what even some U.S. rights holders have called for, envisioning new methods of using copyright law to police the Internet with oversight from the CRTC and implementing such provisions through NAFTA.”

If the Canadian Government considers the suggestions, there is bound to be pushback from other ISPs on the blocking elements. Internet providers are generally not eager to block content without a court order.

It is also worth keeping in mind that while Bell’s plans are in part a response to criticism from US interests, American ISPs are still not required to block any pirate sites, voluntarily or not.


Belgium Wants to Blacklist Pirate Sites & Hijack Their Traffic

Post Syndicated from Andy original https://torrentfreak.com/belgium-wants-to-blacklist-pirate-sites-hijack-their-traffic-170924/

The thorny issue of how to deal with the online piracy phenomenon used to be focused on punishing site users. Over time, enforcement action progressed to the services themselves, until they became both too resilient and prevalent to tackle effectively.

In Europe in particular, there’s now a trend of isolating torrent, streaming, and hosting platforms from their users. This is mainly achieved by website blocking carried out by local ISPs following an appropriate court order.

While the UK is perhaps best known for this kind of action, Belgium was one of the early pioneers of the practice.

After filing a lawsuit in 2010, the Belgian Anti-Piracy Foundation (BAF) weathered an early defeat at the Antwerp Commercial Court to achieve success at the Court of Appeal. Since then, local ISPs have been forced to block The Pirate Bay.

There have since been several efforts to block more sites, but rightsholders have complained that the process is too costly, lengthy, and cumbersome. Now the government is stepping in to do something about it.

Local media reports that Deputy Prime Minister Kris Peeters has drafted new proposals to tackle online piracy. In his role as Minister of Economy and Employment, Peeters sees authorities urgently tackling pirate sites with a range of new measures.

For starters, he wants to create a new department, formed within the FPS Economy, to oversee the fight against online infringement. The department would be tasked with detecting pirate sites more quickly and rendering them inaccessible in Belgium, along with any associated mirror sites or proxies.

Peeters wants the new department to add all blocked sites to a national ‘pirate blacklist’. Interestingly, when Internet users try to access any of these sites, he wants them to be automatically diverted to legal sites where a fee will have to be paid for content.

While it’s not unusual to try and direct users away from pirate sites, for the most part Internet service providers have been somewhat reluctant to divert subscribers to commercial sites. Their assistance would be needed in this respect, so it will be interesting to see how negotiations pan out.

The Belgian Entertainment Association (BEA), which was formed nine years ago to represent the music, video, software and videogame industries, welcomed Peeters’ plans.

“It’s so important to close the doors to illegal download sites and to actively lead people to legal alternatives,” said chairman Olivier Maeterlinck.

“Surfers should not forget that the motives of illegal download sites are not always obvious. These sites also regularly try to exploit personal data.”

The current narrative that pirate sites are evil places is clearly gaining momentum among anti-piracy bodies, but there’s little sign that the public intends to boycott sites as a result. With that in mind, alternative legal action will still be required.

To that end, Peeters wants to streamline the system so that all piracy cases go through a single court, the Commercial Court of Brussels. This should reduce costs versus the existing model, and there’s also the potential for more consistent rulings.

“It’s a good idea to have a clearer legal framework on this,” says Maeterlinck from BEA.

“There are plenty of legal platforms, streaming services like Spotify, for example, which are constantly developing and reaching an ever-increasing audience. Those businesses have a business model that ensures that the creators of certain media content are properly compensated. The rotten apples must be tackled, and those procedures should be less time-consuming.”

There’s little doubt that BEA could benefit from a little government assistance. Back in February, the group filed a lawsuit at the French-language commercial court in Brussels, asking ISPs to block subscriber access to several ‘pirate’ sites.

“Our action aims to block nine of the most popular streaming sites which offer copyright-protected content on a massive scale and without authorization,” Maeterlinck told TF at the time.

“In accordance with the principles established by the CJEU (UPC Telekabel and GS Media), BEA seeks a court order confirming the infringement and imposing site blocking measures on the ISPs, who are content providers as well.”


Russia’s Largest Torrent Site Celebrates 13 Years Online in a Chinese Restaurant

Post Syndicated from Andy original https://torrentfreak.com/russias-largest-torrent-site-celebrates-13-years-online-in-a-chinese-restaurant-170923/

For most torrent fans around the world, The Pirate Bay is the big symbol of international defiance. Over the years the site has fought, dodged, and thumbed its nose at dozens of battles, yet it still remains online today.

But there is another site, located somewhere in the east, that has been online for nearly as long, has millions more registered members, and has proven just as defiant.

RuTracker, for those who haven’t yet found it, is a Russian-focused treasure trove of both local and international content. For many years the site was frequented only by native speakers but with the wonders of tools like Google Translate, anyone can use the site at the flick of a switch. When people are struggling to find content, it’s likely that RuTracker has it.

This position has attracted the negative attention of a wide range of copyright holders and thanks to legislation introduced during 2013, the site is now subject to complete blocking in Russia. In fact, RuTracker has proven so stubborn to copyright holder demands, it is now permanently blocked in the region by all ISPs.

Surprisingly, especially given the enthusiasm for blockades among copyright holders, this doesn’t seem to have dampened demand for the site’s services. According to SimilarWeb, against all the odds the site is still pulling in around 90 million visitors per month. But the impressive stats don’t stop there.

Impressive stats for a permanently blocked site

This week, RuTracker celebrates its 13th birthday, a relative lifetime for a site that has been front and center of Russia’s most significant copyright battles, trouble that shows no sign of stopping anytime soon.

Back in 2010, for example, RU-Center, Russia’s largest domain name registrar and web-hosting provider, pulled the plug on the site’s former Torrents.ru domain. The Director of Public Relations at RU-Center said that the domain had been blocked on the orders of the Investigative Division of the regional prosecutor’s office in Moscow. The site never got its domain back but carried on regardless, despite the setbacks.

Back then the site had around 4,000,000 members but now, seven years on, its ranks have swelled to a reported 15,382,907. According to figures published by the site this week, 778,317 of those members signed up this year during a period the site was supposed to be completely inaccessible. Needless to say, its operators remain defiant.

“Today we celebrate the 13th anniversary of our tracker, which is the largest Russian (and not only) -language media library on this planet. A tracker strangely banished in the country where most of its audience is located – in Russia,” a site announcement reads.

“But, despite the prohibitions, with all these legislative obstacles, with all these technical difficulties, we see that our tracker still exists and is successfully developing. And we still believe that the library should be open and free for all, and not be subject to censorship or a victim of legislative and executive power lobbied by the monopolists of the media industry.”

It’s interesting to note the tone of the RuTracker announcement. On any other day it could’ve been written by the crew of The Pirate Bay who, in their prime, loved to stick a finger or two up to the copyright lobby and then rub their noses in it. For the team at RuTracker, that still appears to be one of the main goals.

Like The Pirate Bay but unlike many of the basic torrent indexers that have sprung up in recent years, RuTracker relies on users to upload its content. They certainly haven’t been sitting back. RuTracker reveals that during the past year and despite all the problems, users uploaded a total of 171,819 torrents – on average, 470 torrents per day.

Interestingly, the content most uploaded to the site also points to the growing internationalization of RuTracker. During the past year, the NBA / NCAA section proved most popular, closely followed by non-Russian rock music and NHL games. Non-Russian movies accounted for almost 2,000 fresh torrents in just 12 months.

“It is thanks to you this tracker lives!” the site’s operators informed the users.

“It is thanks to you that it was, is, and, for sure, will continue to offer the most comprehensive, diverse and, most importantly, quality content in the Russian Internet. You stayed with us when the tracker lost its original name: torrents.ru. You stayed with us when access to a new name was blocked in Russia: rutracker.org. You stayed with us when [the site’s trackers] were blocked. We will stay with you as long as you need us!”

So as RuTracker plans for another year online, all that remains is to celebrate its 13th birthday in style. That will be achieved tonight when every adult member of RuTracker is invited to enjoy a Chinese meal at the Tian Jin Chinese Restaurant in St. Petersburg.

Turn up early, seating is limited.


Block The Pirate Bay Within 10 Days, Dutch Court Tells ISPs

Post Syndicated from Andy original https://torrentfreak.com/block-the-pirate-bay-within-10-days-dutch-court-tells-isps-170922/

Three years ago, in 2014, the Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

Ruling against local anti-piracy outfit BREIN, which brought the case, the Court decided that a blockade would be ineffective and also restrict the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked while BREIN took its case to the Supreme Court, which in turn referred the matter to the EU Court of Justice for clarification. This June, the ECJ ruled that as a platform effectively communicating copyright works to the public, The Pirate Bay can indeed be blocked.

The ruling meant there were no major obstacles preventing the Dutch Supreme Court from ordering a future ISP blockade. Clearly, however, BREIN wanted a blocking decision more quickly. A decision handed down today means the anti-piracy group will achieve that in just a few days’ time.

The Hague Court of Appeal today ruled (Dutch) that the 2014 decision, which lifted the blockade against The Pirate Bay, is now largely obsolete.

“According to the Court of Appeal, the Hague Court did not give sufficient weight to the interests of the beneficiaries represented by BREIN,” BREIN said in a statement.

“The Court also wrongly looked at whether torrent traffic had been reduced by the blockade. It should have also considered whether visits to the website of The Pirate Bay itself decreased with a blockade, which speaks for itself.”

As a result, an IP address and DNS blockade of The Pirate Bay, similar to those already in place in the UK and other EU countries, will soon be put in place. BREIN says that four IP addresses will be affected along with hundreds of domain names through which the torrent platform can be reached.
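At a technical level, the DNS half of such a blockade is commonly implemented with a response policy zone (RPZ) on the ISP’s resolvers, where a `CNAME .` rewrite makes a name return NXDOMAIN. A minimal, hypothetical BIND RPZ fragment (the exact domains covered by the order are not public here):

```
; Illustrative RPZ records: answer NXDOMAIN for a blocked domain and subdomains
thepiratebay.org    CNAME .
*.thepiratebay.org  CNAME .
```

The IP-address half of such an order is typically handled separately with null routes or firewall rules. Both halves are easily bypassed via VPNs or third-party resolvers, which is why orders like this one also sweep in the hundreds of proxy and mirror domains.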

The ISPs have been given just 10 days to put the blocks in place and if they fail there are fines of 2,000 euros per day, up to a maximum of one million euros.

“It is nice that obviously harmful and illegal sites like The Pirate Bay will be blocked again in the Netherlands,” says BREIN chief Tim Kuik.

“A very bad time for our culture, which was free to access via these sites, is now happily behind us.”

Today’s interim decision by the Court of Appeal will stand until the Supreme Court hands down its decision in the main case between BREIN and Ziggo / XS4ALL.

Looking forward, it seems extremely unlikely that the Supreme Court will hand down a conflicting decision, so we’re probably already looking at the beginning of the end for direct accessibility of The Pirate Bay in the Netherlands.


Indian Movie Actor Mobbed By Press After Arrest of Torrent Site Admin

Post Syndicated from Andy original https://torrentfreak.com/indian-movie-actor-mobbed-by-press-after-airport-torrent-site-arrest-170913/

While most of the headlines relating to Internet piracy are focused on North America and Europe, there are dozens of countries where piracy is a way of life for millions of citizens. India, with its booming economy and growth in technology, is certainly one of them.

According to a recently published report, India now has 355 million Internet users out of a population of more than 1.3 billion. Not only is there massive room for growth, that figure is up from 277 million just two years ago. The rate of growth is astonishing.

Needless to say, Indians love their Internet and increasing numbers of citizens are also getting involved in the piracy game. There are many large sites and prominent release groups operating out of the country, some of them targeting the international market. Carry out a search for DVDSCR (DVD screener) on most search indexes globally and one is just as likely to find Indian movie releases as those emanating from the West.

If people didn’t know it already, India is nurturing a pirate force to be reckoned with, with local torrent and streaming sites pumping out the latest movies at an alarming rate. This has caused an outcry from many in the movie industry who are determined to do something to stem the tide.

One of these is actor Vishal Krishna, who not only stars in movies but is also a producer working in the Tamil film industry. Often referred to simply by his first name, Vishal has spoken out regularly against piracy in his role at the Tamil Film Producers Council.

In May, he referred to the operators of the hugely popular torrent site TamilRockers as ‘Internet Mafias’ while demanding their arrest for leaking the blockbuster Baahubali 2, a movie that pulled in US$120 million in six days. Now, it appears, he may have gotten his way. Well, partially, at least.

Last evening, reports began to surface of an arrest at Chennai airport in southern India. According to local media, Gauri Shankar, an alleged administrator of Tamilrockers.co, was detained by Triplicane police.

This would’ve been a huge coup for Vishal, who has been warning Tamilrockers to close down for the past three years. He even claimed to know the identity of the main perpetrator behind the site, noting that it was only a matter of time before he was brought to justice.

Soon after the initial reports, however, other media outlets claimed that Gauri Shankar is actually an operator at Tamilgun, another popular pirate portal currently blocked by ISPs on the orders of the Indian government.

So was it TamilRockers or TamilGun? According to Indiaglitz.com, Vishal rushed to the scene in Chennai to find out.

Outside the police station

What followed were quite extraordinary scenes outside the Triplicane police station. Emerging from the building flanked by close to 20 men, some in uniform, Vishal addressed an excited crowd of reporters. A swathe of microphones from various news outlets greeted him as he held up his hands urging the crowd to calm down.

“Just give us some time, I will give you the details,” Vishal said in two languages.

“Just give us some time. It is too early. I’ll just give it to you in a bit. It’s something connected to website piracy. Just give me some time. I have to give you all the details, proper details.”

So, even after all the excitement, it’s unclear who the police have in custody. Nevertheless, the attention this event is getting from the press is on a level rarely seen in a piracy case, so more news is bound to follow soon.

In the meantime, both TamilRockers and TamilGun remain online, operating as normal. Clearly, there is much more work to be done.


New UK IP Crime Report Reveals Continued Focus on ‘Pirate’ Kodi Boxes

Post Syndicated from Andy original https://torrentfreak.com/new-uk-ip-crime-report-reveals-continued-focus-on-pirate-kodi-boxes-170908/

The UK’s Intellectual Property Office has published its annual IP Crime Report, spanning the period 2016 to 2017.

It covers key events in the copyright and trademark arenas and is presented with input from the police and trading standards, plus private entities such as the BPI, Premier League, and Federation Against Copyright Theft, to name a few.

The report begins with an interesting statistic. Despite claims that many millions of UK citizens regularly engage in some kind of infringement, figures from the Ministry of Justice indicate that just 47 people were found guilty of offenses under the Copyright, Designs and Patents Act during 2016. That’s down on the 69 found guilty in the previous year.

Despite this low conviction rate, 15% of all internet users aged 12+ are reported to have consumed at least one item of illegal content between March and May 2017. Figures supplied by the Industry Trust for IP indicate that 19% of adults watch content via various IPTV devices – often referred to as set-top, streaming, Android, or Kodi boxes.

“At its cutting edge IP crime is innovative. It exploits technological loopholes before they become apparent. IP crime involves sophisticated hackers, criminal financial experts, international gangs and service delivery networks. Keeping pace with criminal innovation places a burden on IP crime prevention resources,” the report notes.

The report covers a broad range of IP crime, from counterfeit sportswear to foodstuffs, but our focus is obviously on Internet-based infringement. A range of contributors cover the aspects of online activity that affect them most, music industry group BPI among them.

“The main online piracy threats to the UK recorded music industry at present are from BitTorrent networks, linking/aggregator sites, stream-ripping sites, unauthorized streaming sites and cyberlockers,” the BPI notes.

The BPI’s website blocking efforts have been closely reported, with 63 infringing sites blocked to date via various court orders. However, the BPI reports that more than 700 related URLs, IP addresses, and proxy sites/proxy aggregators have also been rendered inaccessible as part of the same action.

“Site blocking has proven to be a successful strategy as the longer the blocks are in place, the more effective they are. We have seen traffic to these sites reduce by an average of 70% or more,” the BPI reports.

While prosecutions against music pirates are a fairly rare event in the UK, the Crown Prosecution Service (CPS) Specialist Fraud Division highlights that their most significant prosecution of the past 12 months involved a prolific music uploader.

As first revealed here on TF, Wayne Evans was an uploader not only on KickassTorrents and The Pirate Bay, but also some of his own sites. Known online as OldSkoolScouse, Evans reportedly cost the UK’s Performing Rights Society more than £1m in a single year. He was sentenced in December 2016 to 12 months in prison.

While Evans has been free for some time already, the CPS places particular emphasis on the importance of the case, “since it provided sentencing guidance for the Copyright, Designs and Patents Act 1988, where before there was no definitive guideline.”

The CPS says the case was useful on a number of fronts. Despite illegal distribution of content being difficult to investigate and piracy losses proving tricky to quantify, the court found that deterrent sentences are appropriate for the kinds of offenses Evans was accused of.

The CPS notes that various factors affect the severity of such sentences, not least the length of time the unlawful activity has persisted and particularly if it has done so after the service of a cease and desist notice. Other factors include the profit made by defendants and/or the loss caused to copyright holders “so far as it can accurately be calculated.”

Importantly, however, the CPS says that beyond issues of personal mitigation and timely guilty pleas, a jail sentence is probably going to be the outcome for others engaging in this kind of activity in future. That’s something for torrent and streaming site operators and their content uploaders to consider.

“[U]nless the unlawful activity of this kind is very amateur, minor or short-lived, or in the absence of particularly compelling mitigation or other exceptional circumstances, an immediate custodial sentence is likely to be appropriate in cases of illegal distribution of copyright infringing articles,” the CPS concludes.

But while a music-related trial provided the highlight of the year for the CPS, the online infringement world is still dominated by the rise of streaming sites and the now omnipresent “fully-loaded Kodi Box” – set-top devices configured to receive copyright-infringing live TV and VOD.

In the IP Crime Report, the Intellectual Property Office references a former US Secretary of Defense to describe the emergence of the threat.

“The echoes of Donald Rumsfeld’s famous aphorism concerning ‘known knowns’ and ‘known unknowns’ reverberate across our landscape perhaps more than any other. The certainty we all share is that we must be ready to confront both ‘known unknowns’ and ‘unknown unknowns’,” the IPO writes.

“Not long ago illegal streaming through Kodi Boxes was an ‘unknown’. Now, this technology updates copyright infringement by empowering TV viewers with the technology they need to subvert copyright law at the flick of a remote control.”

While the set-top box threat has grown in recent times, the report highlights the important legal clarifications that emerged from the BREIN v Filmspeler case, which found itself before the European Court of Justice.

As widely reported, the ECJ determined that the selling of piracy-configured devices amounts to a communication to the public, something which renders their sale illegal. However, in a submission by PIPCU, the Police Intellectual Property Crime Unit, box sellers are said to cast a keen eye on the legal situation.

“Organised criminals, especially those in the UK who distribute set-top boxes, are aware of recent developments in the law and routinely exploit loopholes in it,” PIPCU reports.

“Given recent judgments on the sale of pre-programmed set-top boxes, it is now unlikely criminals would advertise the devices in a way which is clearly infringing by offering them pre-loaded or ‘fully loaded’ with apps and addons specifically designed to access subscription services for free.”

With sellers beginning to clean up their advertising, it seems likely that detection will become more difficult than when selling was considered a gray area. While that will present its own issues, PIPCU still sees problems on two fronts – a lack of clear legislation and a perception of support for ‘pirate’ devices among the public.

“There is no specific legislation currently in place for the prosecution of end users or sellers of set-top boxes. Indeed, the general public do not see the usage of these devices as potentially breaking the law,” the unit reports.

“PIPCU are currently having to try and ‘shoehorn’ existing legislation to fit the type of criminality being observed, such as conspiracy to defraud (common law) to tackle this problem. Cases are yet to be charged and results will be known by late 2017.”

Whether these prosecutions will be effective remains to be seen, but PIPCU’s comments suggest an air of caution set to a backdrop of box-sellers’ tendency to adapt to legal challenges.

“Due to the complexity of these cases it is difficult to substantiate charges under the Fraud Act (2006). PIPCU have convicted one person under the Serious Crime Act (2015) (encouraging or assisting s11 of the Fraud Act). However, this would not be applicable unless the suspect had made obvious attempts to encourage users to use the boxes to watch subscription only content,” PIPCU notes, adding:

“The selling community is close knit and adapts constantly to allow itself to operate in the gray area where current legislation is unclear and where they feel they can continue to sell ‘under the radar’.”

More generally, pirate sites as a whole are still seen as a threat. As reported last month, the current anti-piracy narrative is that pirate sites represent a danger to their users. As a result, efforts are underway to paint torrent and streaming sites as risky places to visit, with users allegedly exposed to malware and other malicious content. The scare strategy is supported by PIPCU.

“Unlike the purchase of counterfeit physical goods, consumers who buy unlicensed content online are not taking a risk. Faulty copyright doesn’t explode, burn or break. For this reason the message as to why the public should avoid copyright fraud needs to be re-focused.

“A more concerted attempt to push out a message relating to malware on pirate websites, the clear criminality and the links to organized crime of those behind the sites are crucial if public opinion is to be changed,” the unit advises.

But while the changing of attitudes is desirable for pro-copyright entities, PIPCU says that winning over the public may not prove to be an easy battle. It was given a small taste of backlash itself, after taking action against the operator of a pirate site.

“The scale of the problem regarding public opinion of online copyright crime is evidenced by our own experience. After PIPCU executed a warrant against the owner of a streaming website, a tweet about the event (read by 200,000 people) produced a reaction heavily weighted against PIPCU’s legitimate enforcement action,” PIPCU concludes.

In summary, it seems likely that more effort will be expended during the next 12 months to target the set-top box threat, but there doesn’t appear to be an abundance of confidence in existing legislation to tackle all but the most egregious offenders. That being said, a line has now been drawn in the sand – if the public is prepared to respect it.

The full IP Crime Report 2016-2017 is available here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Russia Blocks 4,000 Pirate Sites Plus 41,000 Innocent as Collateral Damage

Post Syndicated from Andy original https://torrentfreak.com/russia-blocks-4000-pirate-sites-plus-41000-innocent-as-collateral-damage-170905/

After years of criticism from both international and local rightsholders, in 2013 the Russian government decided to get tough on Internet piracy.

Under new legislation, sites engaged in Internet piracy could find themselves blocked by ISPs, rendering them inaccessible to local citizens and solving the piracy problem. Well, that was the theory, at least.

More than four years on, Russia is still grappling with a huge piracy problem that refuses to go away. It has been blocking thousands of sites at a steady rate, including RuTracker, the country’s largest torrent platform, but still the problem persists.

Now, a new report produced by Roskomsvoboda, the Center for the Protection of Digital Rights, and the Pirate Party of Russia, reveals a system that has not only failed to reach its stated aims but is also having a negative effect on the broader Internet.

“It’s already been four years since the creation of this ‘anti-piracy machine’ in Russia. The first amendments related to the fight against ‘piracy’ in the network came into force on August 1, 2013, and since then this mechanism has been twice revised,” Roskomsvoboda said in a statement.

“[These include] the emergence of additional responsibilities to restrict access to network resources and increase the number of subjects who are responsible for removing and blocking content. Since that time, several ‘purely Russian’ trends in ‘anti-piracy’ and trade in rights have also emerged.”

These revisions, which include the permanent blocking of persistently infringing sites and the planned blocking of mirror sites and anonymizers, have been widely documented. However, the researchers say that they want to shine a light on the effects of blocking procedures and subsequent actions that are causing significant issues for third-parties.

As part of the study, the authors collected data on the cases presented to the Moscow City Court by the most active plaintiffs in anti-piracy actions (mainly TV show distributors and music outfits including Sony Music Entertainment and Universal Music). They describe the court process and system overall as lacking.

“The court does not conduct a ‘triple test’ and ignores the position, rights and interests of respondents and third parties. It does not check the availability of illegal information on sites and appeals against decisions of the Moscow City Court do not bring any results,” the researchers write.

“Furthermore, the cancellation of the unlimited blocking of a site is simply impossible and in respect of hosting providers and security services, those web services are charged with all the legal costs of the case.”

The main reason behind this situation is that ‘pirate’ site operators rarely (if ever) turn up to defend themselves. If found guilty of infringement under the Criminal Code, they can face up to six years in prison, hardly an incentive to enter into a copyright process voluntarily. As a result, hosts and other providers act as respondents.

This means that these third-party companies appear as defendants in the majority of cases, a position they find both “unfair and illogical.” They’re also said to be confused about how they are supposed to fulfill the blocking demands placed upon them by the Court.

“About 90% of court cases take place without the involvement of the site owner, since the requirements are imposed on the hosting provider, who is not responsible for the content of the site,” the report says.

Nevertheless, hosts and other providers have been ordered to block huge numbers of pirate sites.

According to the researchers, the total has now gone beyond 4,000 domains, but the knock-on effect is much more expansive. Due to the legal requirement to block sites by both IP address and other means, third-party sites with shared IP addresses get caught up as collateral damage. The report states that more than 41,000 innocent sites have been blocked as the result of supposedly targeted court orders.

But with collateral damage mounting, the main issue as far as copyright holders are concerned is whether piracy is decreasing as a result. The report draws few conclusions on that front but notes that blocks are a blunt instrument. While they may succeed in stopping some people from accessing ‘pirate’ domains, the underlying infringement carries on regardless.

“Blocks create restrictions only for Internet users who are denied access to sites, but do not lead to the removal of illegal information or prevent intellectual property violations,” the researchers add.

With no sign of the system being overhauled to tackle the issues raised in the study (pdf, Russian), Russia is now set to introduce yet more anti-piracy measures.

As recently reported, new laws requiring search engines to remove listings for ‘pirate’ mirror sites come into effect on October 1. Exactly a month later on November 1, VPNs and anonymization tools will have to be removed too, if they fail to meet the standards required under state regulation.

Datavalet Wi-Fi Blocks TorrentFreak Over ‘Criminal Hacking Skills’

Post Syndicated from Ernesto original https://torrentfreak.com/datavalet-wi-fi-blocks-torrentfreak-over-criminal-hacking-skills-170903/

At TorrentFreak we regularly write about website blocking efforts around the globe, usually related to well-known pirate sites.

Unfortunately, our own news site is not immune to access restrictions either. While no court has ordered ISPs to block access to our articles, some are doing this voluntarily.

This is especially true for companies that provide Wi-Fi hotspots, such as Datavalet. This wireless network provider works with various large organizations, including McDonald’s, Starbucks, and airports, to offer customers free Internet access.

Or rather to a part of the public Internet, we should say.

Over the past several months, we have had several reports from people who are unable to access TorrentFreak on Datavalet’s network. Users who load our website get an ominous warning instead, suggesting that we run some kind of a criminal hacking operation.

“Access to TORRENTFREAK.COM is not permitted as it is classified as: CRIMINAL SKILLS / HACKING.”

Criminal Skills?

Although we see ourselves as skilled at writing news in our small niche, which incidentally covers crime and hacking, our own hacking skills are below par. Admittedly, mistakes are easily made, but Datavalet’s blocking efforts are rather persistent.

The same issue was brought to our attention several years ago. At the time, we reached out to Datavalet and a friendly senior network analyst promised that they would look into it.

“We have forwarded your concerns to the proper resources and as soon as we have an update we will let you know,” was the response. But a few years later the block is still active, or active again.

Datavalet is just one of the many networks where TorrentFreak is blocked. Often, we are categorized as a file-sharing site, probably due to the word “torrent” in our name. This recently happened at the Brooklyn Public Library in NYC, for example.

After a reader kindly informed the library that we’re a news site, we were suddenly moved from the “Peer-to-Peer File Sharing” category to “Proxy Avoidance”.

“It appears that the website you want to access falls under the category ‘Proxy Avoidance’. These are sites that provide information about how to bypass proxy server features or to gain access to URLs in any way that bypass the proxy server,” the library explained.

Still blocked of course.

At least we’re not the only site facing this censorship battle. Datavalet and others regularly engage in overblocking to keep their network and customers safe. For example, Reddit was recently banned because it offered “nudity,” which is another no-go area.

Living up to our “proxy avoidance” reputation, we have to mention that people who regularly face these types of restrictions may want to invest in a VPN. These are generally quite good at bypassing these kinds of blockades. If they are not blocked themselves, that is.

How Much Does ‘Free’ Premier League Piracy Cost These Days?

Post Syndicated from Andy original https://torrentfreak.com/how-much-does-free-premier-league-piracy-cost-these-days-170902/

Right now, the English Premier League is engaged in perhaps the most aggressively innovative anti-piracy operation the Internet has ever seen. After obtaining a new High Court order, it now has the ability to block ‘pirate’ streams of matches, in real-time, with no immediate legal oversight.

If the Premier League believes a server is streaming one of its matches, it can ask ISPs in the UK to block it, immediately. That’s unprecedented anywhere on the planet.

As previously reported, this campaign caused a lot of problems for people trying to access free and premium streams at the start of the season. Many IPTV services were blocked in the UK within minutes of matches starting, with free streams also dropping like flies. According to information obtained by TF, more than 600 illicit streams were blocked during that weekend.

While some IPTV providers and free streams continued without problems, it seems likely that it’s only a matter of time before the EPL begins to pick off more and more suppliers. To be clear, the EPL isn’t taking services or streams down, it’s only blocking them, which means that people using circumvention technologies like VPNs can get around the problem.

However, this raises the big issue again – that of continuously increasing costs. While piracy is often painted as free, it is not, and as setups get fancier, costs increase too.

Below, we take a very general view of a handful of the many ‘pirate’ configurations currently available, to work out how much ‘free’ piracy costs these days. The list is not comprehensive by any means (and excludes more obscure methods such as streaming torrents, which are always free and rarely blocked), but it gives an idea of costs and how the balance of power might eventually tip.

Basic beginner setup

On a base level, people who pirate online need at least some equipment. That could be an Android smartphone and easily installed free software such as Mobdro or Kodi. An Internet connection is a necessity and if the EPL blocks those all important streams, a VPN provider is required to circumvent the bans.

Assuming people already have a phone and the Internet, a VPN can be bought for less than £5 per month. This basic setup is certainly cheap but overall it’s an entry level experience that provides quality equal to the effort and money expended.

Equipment: Phone, tablet, PC
Comms: Fast Internet connection, decent VPN provider
Overall performance: Low quality, unpredictable, often unreliable
Cost: £5pm approx for VPN, plus Internet costs

Big screen, basic

For those who like their matches on the big screen, stepping up the chain costs more money. People need a TV with an HDMI input and a fast Internet connection as a minimum, alongside some kind of set-top device to run the necessary software.

Android devices are the most popular and are roughly split into two groups – the small standalone box type and the plug-in ‘stick’ variant such as Amazon’s Firestick.

A cheap Android set-top box

These cost upwards of £30 to £40 but the software to install on them is free. Like the phone, Mobdro is an option, but most people look to a Kodi setup with third-party addons. That said, all streams received on these setups are now vulnerable to EPL blocking so in the long-term, users will need to run a paid VPN.

The problem here is that some devices (including the 1st gen Firestick) aren’t ideal for running a VPN on top of a stream, so people will need to dump their old device and buy something more capable. That could cost another £30 to £40 and more, depending on requirements.

Importantly, none of this investment guarantees a decent stream – that’s down to what’s available on the day – but invariably the quality is low and/or intermittent, at best.

Equipment: TV, decent Android set-top box or equivalent
Comms: Fast Internet connection, decent VPN provider
Overall performance: Low to acceptable quality, unpredictable, often unreliable
Cost: £30 to £50 for set-top box, £5pm approx for VPN, plus Internet

Premium IPTV – PC or Android based

At this point, premium IPTV services come into play. People have a choice of spending varying amounts of money, depending on the quality of experience they require.

First of all, a monthly IPTV subscription with an established provider that isn’t going to disappear overnight is required, which can be a challenge to find in itself. We’re not here to review or recommend services but needless to say, like official TV packages they come in different flavors to suit varying wallet sizes. Some stick around, many don’t.

A decent one with a Sky-like EPG costs between £7 and £15 per month, depending on the quality and depth of streams, and how far in advance users are prepared to commit.

Fairly typical IPTV with EPG (VOD shown)

Paying for a year in advance tends to yield better prices but with providers regularly disappearing and faltering in their service levels, people are often reluctant to do so. That said, some providers experience few problems so it’s a bit like gambling – research can improve the odds but there’s never a guarantee.

However, even when a provider, price, and payment period have been decided upon, the process of paying for an IPTV service can be less than straightforward.

While some providers are happy to accept PayPal, many will only deal in credit cards, bitcoin, or other obscure payment methods. That sets up more barriers to entry that might deter the less determined customer. And, if time is indeed money, fussing around with new payment processors can be pricey, at least to begin with.

Once subscribed though, watching these streams is pretty straightforward. On a base level, people can use a phone, tablet, or set-top device to receive them, using software such as Perfect Player IPTV, for example. Currently available in free (ad-supported) and premium (£2) variants, this software can be set up in a few clicks and will provide a decent user experience, complete with EPG.

Perfect Player IPTV

Those wanting to go down the PC route have more options but by far the most popular is receiving IPTV via a Kodi setup. For the complete novice, it’s not always easy to set up, but some IPTV providers supply their own free addons, which streamline the process massively. These can also be used on Android-based Kodi setups, of course.

Nevertheless, if the EPL blocks the provider, a VPN is still going to be needed to access the IPTV service.

An Android tablet running Kodi

So, even if we ignore the cost of the PC and Internet connection, users could still find themselves paying between £10 and £20 per month for an IPTV service and a decent VPN. While more channels than simply football will be available from most providers, this is getting dangerously close to the £18 Sky is asking for its latest football package.

Equipment: TV, PC, or decent Android set-top box or equivalent
Comms: Fast Internet connection, IPTV subscription, decent VPN provider
Overall performance: High quality, mostly reliable, user-friendly (once set up)
Cost: PC or £30/£50 for set-top box, IPTV subscription £7 to £15pm, £5pm approx for VPN, plus Internet, plus time and patience for obscure payment methods.
Note: There are zero refunds when IPTV providers disappoint or disappear

Premium IPTV – Deluxe setup

Moving up to the top of the range, things get even more costly. Those looking to give themselves the full home entertainment-like experience will often move away from the PC and into the living room in front of the TV, armed with a dedicated set-top box. Weapon of choice: the Mag254.

Like Amazon’s FireStick, PC or Android tablet, the Mag254 is an entirely legal, content agnostic device. However, enter the credentials provided by many illicit IPTV suppliers and users are presented with a slick Sky-like experience, far removed from anything available elsewhere. The device is operated by remote control and integrates seamlessly with any HDMI-capable TV.

Mag254 IPTV box

Something like this costs around £70 in the UK, plus the cost of a WiFi adaptor on top, if needed. The cost of the IPTV provider needs to be figured in too, plus a VPN subscription if the provider gets blocked by EPL, which is likely. However, in this respect the Mag254 has a problem – it can’t run a VPN natively. This means that if streams get blocked and people need to use a VPN, they’ll need to find an external solution.

Needless to say, this costs more money. People can either do all the necessary research and buy a VPN-capable router/modem that’s also compatible with their provider (this can stretch to a couple of hundred pounds) or they’ll need to invest in a small ‘travel’ router with VPN client features built in.

‘Travel’ router (with tablet running Mobdro for scale)

These devices are available on Amazon for around £25 and sit in between the Mag254 (or indeed any other wireless device) and the user’s own regular router. Once the details of the VPN subscription are entered into the router, all traffic passing through is encrypted and will tunnel through web blocking measures. They usually solve the problem (ymmv) but of course, this is another cost.
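For readers wondering what the router-side setup actually involves, here is a minimal sketch of the kind of OpenVPN client profile such travel routers typically accept. The endpoint, port, and cipher below are placeholders for illustration, not details of any real provider:

```text
# Hypothetical OpenVPN client profile loaded into the travel router
client
dev tun
proto udp
remote vpn.example-provider.net 1194   # placeholder endpoint from the VPN subscription
auth-user-pass                         # router prompts for the subscription's credentials
redirect-gateway def1                  # route ALL traffic through the tunnel
cipher AES-256-GCM
persist-key
persist-tun
verb 3
```

With a profile like this active, every device sitting behind the travel router (the Mag254 included) reaches the Internet through the encrypted tunnel, which is what carries the traffic past ISP-level blocks.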

Equipment: Mag254 or similar, with WiFi
Comms: Fast Internet connection, IPTV subscription, decent VPN provider
Overall performance: High quality, mostly reliable, very user-friendly
Cost: Mag254 around £75 with WiFi, IPTV subscription £7 to £15pm, £5pm for VPN (plus £25 for mini router), plus Internet, plus patience for obscure payment methods.
Note: There are zero refunds when IPTV providers disappoint or disappear


On the whole, people who want a reliable and high-quality Premier League streaming experience cannot get one for free, no matter where they source the content. There are many costs involved, some of which cannot be avoided.

If people aren’t screwing around with annoying and unreliable Kodi streams, they’ll be paying for an IPTV provider, VPN and other equipment. Or, if they want an easy life, they’ll be paying Sky, BT or Virgin Media. That might sound harsh to many pirates but it’s the only truly reliable solution.

However, for those looking for something that’s merely adequate, costs drop significantly. Indeed, if people don’t mind the hassle of wondering whether a sub-VHS quality stream will appear before the big match and stay on throughout, it can all be done on a shoestring.

But perhaps the most important thing to note in respect of costs is the recent changes to the pricing of Premier League content in the UK. As mentioned earlier, Sky now delivers a sports package for £18pm, which sounds like the best deal offered to football fans in recent years. It will be tempting for sure and has all the hallmarks of a price point carefully calculated by Sky.

The big question is whether it will be low enough to tip significant numbers of people away from piracy. The reality is that if another couple of thousand streams get hit hard again this weekend – and the next – and the next – many pirating fans will be watching the season drift away for yet another month, unviewed. That’s got to be frustrating.

The bottom line is that high-quality streaming piracy is becoming a little bit pricey just for football so if it becomes unreliable too – and that’s the Premier League’s goal – the balance of power could tip. At this point, the EPL will need to treat its new customers with respect, in order to keep them feeling both entertained and unexploited.

Fail on those counts – especially the latter – and the cycle will start again.

Search Engines Will Open Systems to Prove Piracy & VPN Blocking

Post Syndicated from Andy original https://torrentfreak.com/search-engines-will-open-systems-to-prove-piracy-vpn-blocking-170901/

Over the past several years, Russia has become something of a world leader when it comes to website blocking. Tens of thousands of websites are now blocked in the country on copyright infringement and a wide range of other grounds.

With circumvention technologies such as VPNs, however, Russian citizens are able to access blocked sites, a position that has irritated Russian authorities who are determined to control what information citizens are allowed to access.

After working on new legislation for some time, in late July President Vladimir Putin signed a new law requiring local telecoms watchdog Roskomnadzor to maintain a list of banned domains while identifying sites, services, and software that provide access to them.

Roskomnadzor is required to contact the operators of such services with a request for them to block banned resources. If they do not, then they themselves will become blocked. In addition, search engines are also required to remove blocked resources from their search results, in order to discourage people from accessing them.

With compliance now a matter of law, attention has turned to how search engines can implement the required mechanisms. This week Roskomnadzor hosted a meeting with representatives of the largest Russian search engines, including Yandex, Sputnik, and Search Mail.ru, where this topic was top of the agenda.

Since failure to comply can result in a fine of around $12,000 per breach, search companies have a vested interest in the systems working well against not only pirate sites, but also mirrors and anonymization tools that provide access to them.

“During the meeting, a consolidated position on the implementation of new legislative requirements was developed,” Roskomnadzor reports.

“It was determined that the list of blocked resources to be removed from search results will be transferred to the operators of search engines in an automated process.”

While sending over lists of domains directly to search engines probably isn’t that groundbreaking, Roskomnadzor wants to ensure that companies like Yandex are also responding to the removal requests properly.

So, instead of simply carrying out test searches itself, it’s been agreed that the watchdog will gain direct access to the search engines’ systems, so that direct verification can take place.

“In addition, preliminary agreements have been reached that the verification of the enforcement of the law by the search engines will be carried out through the interaction of the information systems of Roskomnadzor and the operators of search engines,” Roskomnadzor reports.

Time for search engines to come into full compliance is ticking away. The law requiring them to remove listings for ‘pirate’ mirror sites comes into effect October 1. Exactly a month later on November 1, VPNs and anonymization tools will have to be removed too, if they fail to meet the standards required under state regulation.

Part of that regulation requires anonymization services to disclose the identities of their owners to the government.

Kim Dotcom Wants K.im to Trigger a “Copyright Revolution”

Post Syndicated from Ernesto original https://torrentfreak.com/kim-dotcom-wants-k-im-to-trigger-a-copyright-revolution-170831/

For many people Kim Dotcom is synonymous with Megaupload, the file-sharing giant that was taken down by the U.S. Government in early 2012.

While Megaupload is no more, the New Zealand Internet entrepreneur is working on a new file-sharing site. Initially dubbed Megaupload 2, the new service will be called K.im, and it will be quite different from its predecessor.

This week Dotcom, who’s officially the chief “evangelist” of the service, showed a demo to a few thousand people, revealing more about what it’s going to offer.

K.im is not a central hosting service, quite the contrary. It will allow users to upload content and distribute it to dozens of other services, including Dropbox, Google, Reddit, Storj, and even torrent sites.

The files are distributed across the Internet where they can be accessed freely. However, there is a catch. The uploaders set a price for each download and people who want a copy can only unlock it through the K.im app or browser addon, after they’ve paid.

Pick your price

K.im, paired with Bitcache, is basically a micropayment solution. It allows creators to charge the public for everything they upload. Every download is tied to a Bitcoin transaction, turning files into their own “stores.”

Kim Dotcom tells TorrentFreak that he sees the service as a copyright revolution. It should be a win-win solution for independent creators, rightsholders, and people who are used to pirating stuff.

“I’m working for both sides. For the copyright holders and also for the people who want to pay for content but have been geo-blocked and then are forced to download for free,” Dotcom says.

Like any other site that allows user uploaded content, K.im can also be used by pirates who want to charge a small fee for spreading infringing content. This is something Dotcom is aware of, but he has a solution in mind.

Much like YouTube, which allows rightsholders to “monetize” videos that use their work, K.im will provide an option to claim pirated content. Rightsholders can then change the price and all revenue will go to them.

So, if someone uploads a pirated copy of the Game of Thrones season finale through K.im, HBO can claim that file, charge an appropriate fee, and profit from it. The uploader, meanwhile, maintains his privacy.

“It is the holy grail of copyright enforcement. It is my gift to Hollywood, the movie studios, and everyone else,” Dotcom says.

Dotcom believes that piracy is in large part caused by an availability problem. People often can’t find the content they’re looking for, so it’s K.im’s goal to distribute files as widely as possible. This includes several torrent sites, which are currently featured in the demo.

Torrent uploads?

Interestingly, it will be hard to upload content to sites such as YTS, EZTV, KickassTorrents, and RARBG, as they’ve been shut down or don’t allow user uploads. However, Dotcom stresses that the names are just examples, and that they are still working on partnering with various sites.

Whether torrent sites will be eager to cooperate has yet to be seen. It’s possible that the encrypted files, which can’t be opened without paying, will be seen as “spam” by traditional torrent sites.

Also, from a user perspective, one has to wonder how many people are willing to pay for something if they set out to pirate it. After all, there will always be plenty of free options for those who refuse to or can’t pay.

Dotcom, however, is convinced that K.im can create a “copyright revolution.” He stresses that site owners and uploaders can greatly benefit from it as they receive affiliate fees, even after a pirated file is claimed by a rightsholder.

In addition, he says it will revolutionize copyright enforcement, as copyright holders can monetize the work of pirates. That is, if they are willing to work with the service.

“Rightsholders can turn piracy traffic into revenue and users can access the content on any platform. Since every file is a store, it doesn’t matter where it ends up,” Dotcom says.

Dotcom does have a very valid point here. Many people have simply grown used to pirating because it’s much more convenient than using a dozen different services. In Dotcom’s vision, people can just use one site to access everything.

The ideas don’t stop at sharing files either. In the future, Dotcom also wants to use the micropayment option to let YouTubers and media organizations accept payments from the public, the BBC notes.

There’s still a long way to go before K.im and Bitcache go public though. The launch date is not final yet, but the services are expected to go live in mid-to-late 2018.

Porn Producer Says He’ll Prove That AMC TV Exec is a BitTorrent Pirate

Post Syndicated from Andy original https://torrentfreak.com/porn-producer-says-hell-prove-that-amc-tv-exec-is-a-bittorrent-pirate-170818/

When people are found sharing copyrighted pornographic content online in the United States, there’s always a chance that an angry studio will attempt to track down the perpetrator in pursuit of a cash settlement.

That’s what adult studio Flava Works did recently, after finding its content being shared without permission on a number of gay-focused torrent sites. It’s now clear that their target was Marc Juris, President & General Manager of AMC-owned WE tv. Until this week, however, that information was secret.

As detailed in our report yesterday, Flava Works contacted Juris with an offer of around $97,000 to settle the case before trial. And, crucially, before Juris was publicly named in a lawsuit. If Juris decided not to pay, that amount would increase significantly, Flava Works CEO Phillip Bleicher told him at the time.

Not only did Juris not pay, he actually went on the offensive, filing a ‘John Doe’ complaint in a California district court which accused Flava Works of extortion and blackmail. It’s possible that Juris felt that this would cause Flava Works to back off but in fact, it had quite the opposite effect.

In a complaint filed this week in an Illinois district court, Flava Works named Juris and accused him of a broad range of copyright infringement offenses.

The complaint alleges that Juris was a signed-up member of Flava Works’ network of websites, from where he downloaded pornographic content as his subscription allowed. However, it’s claimed that Juris then uploaded this material elsewhere, in breach of copyright law.

“Defendant downloaded copyrighted videos of Flava Works as part of his paid memberships and, in violation of the terms and conditions of the paid sites, posted and distributed the aforesaid videos on other websites, including websites with peer to peer sharing and torrents technology,” the complaint reads.

“As a result of Defendant’ conduct, third parties were able to download the copyrighted videos, without permission of Flava Works.”

In addition to demanding injunctions against Juris, Flava Works asks the court for a judgment in its favor amounting to a cool $1.2m, more than twelve times the amount it was initially prepared to settle for. It’s a huge amount, but according to CEO Phillip Bleicher, it’s what his company is owed, despite Juris being a former customer.

“Juris was a member of various Flava Works websites at various times dating back to 2006. He is no longer a member and his login info has been blocked by us to prevent him from re-joining,” Bleicher informs TF.

“We allow full downloads, although each download a person performs, it tags the video with a hidden code that identifies who the user was that downloaded it and their IP info and date / time.”

We asked Bleicher how he can be sure that the content downloaded from Flava Works and re-uploaded elsewhere was actually uploaded by Juris. Fine details weren’t provided but he’s insistent that the company’s evidence holds up.

“We identified him directly, this was done by cross referencing all his IP logins with Flava Works, his email addresses he used and his usernames. We can confirm that he is/was a member of Gay-Torrents.org and Gayheaven.org. We also believe (we will find out in discovery) that he is a member of a Russian file sharing site called GayTorrent.Ru,” he says.

While the technicalities of who downloaded and shared what will be something for the court to decide, there’s still Juris’ allegations that Bleicher used extortion-like practices to get him to settle and used his relative fame against him. Bleicher says that’s not how things played out.

“[Juris] hired an attorney and they agreed to settle out of court. But then we saw him still accessing the file sharing sites (one site shows a user’s last login) and we were waiting on the settlement agreement to be drafted up by his attorney,” he explains.

“When he kept pushing the date of when we would see an agreement back we gave him a final deadline and said that after this date we would sue [him] and with all lawsuits – we make a press release.”

Bleicher says at this point Juris replaced his legal team and hired lawyer Mark Geragos, who Bleicher says tried to “bully” him, warning him of potential criminal offenses.

“Your threats in the last couple months to ‘expose’ Mr. Juris knowing he is a high profile individual, i.e., today you threatened to issue a press release, to induce him into wiring you close to $100,000 is outright extortion and subject to criminal prosecution,” Geragos wrote.

“I suggest you direct your attention to various statutes which specifically criminalize your conduct in the various jurisdictions where you have threatened suit.”

Interestingly, Geragos then went on to suggest that the lawsuit may ultimately backfire, since going public might affect Flava Works’ reputation in the gay market.

“With respect to Mr. Juris, your actions have been nothing but extortion and we reject your attempts and will vigorously pursue all available remedies against you,” Geragos’ email reads.

“We intend to use the platform you have provided to raise awareness in the LGBTQ community of this new form of digital extortion that you promote.”

But Bleicher, it seems, is up for a fight.

“Marc knows what he did and enjoyed downloading our videos and sharing them and those of videos of other studios, but now he has been caught,” he told the lawyer.

“This is the kind of case I would like to take all the way to trial, win or lose. It shows people that want to steal our copyrighted videos that we aggressively protect our intellectual property.”

But to the tune of $1.2m? Apparently so.

“We could get up to $150,000 per infringement – we have solid proof of eight full videos – not to mention we have caught [Juris] downloading many other studios’ videos too – I think – but not sure – the number was over 75,” Bleicher told TF.

It’s quite rare for this kind of dispute to play out in public, especially considering Juris’ profile and occupation. Only time will tell whether this ultimately ends in a settlement, but at this stage both Bleicher and Juris seem determined to stand their ground and fight it out in court.

Complaint (pdf)
