
GoMovies/123Movies Launches Anime Streaming Site

Post Syndicated from Ernesto original https://torrentfreak.com/gomovies123movies-launches-anime-streaming-site-171204/

Pirate video streaming sites are booming. Their relative ease of use through on-demand viewing makes them a viable alternative to P2P file-sharing, which traditionally dominated the piracy arena.

The popular movie streaming site GoMovies, formerly known as 123movies, is one of the most-used streaming sites. Despite the rebranding and several domain changes, it has built a steady base of millions of users over the past year and a half.

And it’s not done yet, we learn today.

The site, currently operating from the Gostream.is domain name, recently launched a new spinoff targeting anime fans. Animehub.to is currently promoted on GoMovies and the site’s operators aim to turn it into the leading streaming site for anime content.

Animehub.to

Someone connected to GoMovies told us that they’ve received a lot of requests from users to add anime content. Anime has traditionally been a large niche on file-sharing sites and the same is true on streaming platforms.

Technically speaking, GoMovies could have easily filled up the original site with anime content, but the owners prefer a different outlet.

With a separate anime site, they hope to draw in more visitors, TorrentFreak was told by an insider. For one, this makes it possible to rank better in search engines. It also allows the operators to cater specifically to the anime audience, with anime-specific categories and release schedules.

Anime copyright holders will not be pleased with the new initiative, that’s for sure, but GoMovies is not new to legal pressure.

Earlier this year the US Ambassador to Vietnam called on the local Government to criminally prosecute people behind 123movies, the previous iteration of the site. In addition, the MPAA reported the site to the US Government in its recent overview of notorious pirate sites.

Pressure or not, it appears that GoMovies has no intention of slowing down or changing its course, although we’ve heard that yet another rebranding is on the horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more.

Object models

Post Syndicated from Eevee original https://eev.ee/blog/2017/11/28/object-models/

Anonymous asks, with dollars:

More about programming languages!

Well then!

I’ve written before about what I think objects are: state and behavior, which in practice mostly means method calls.

I suspect that the popular impression of what objects are, and also how they should work, comes from whatever C++ and Java happen to do. From that point of view, the whole post above is probably nonsense. If the baseline notion of “object” is a rigid definition woven tightly into the design of two massively popular languages, then it doesn’t even make sense to talk about what “object” should mean — it does mean the features of those languages, and cannot possibly mean anything else.

I think that’s a shame! It piles a lot of baggage onto a fairly simple idea. Polymorphism, for example, has nothing to do with objects — it’s an escape hatch for static type systems. Inheritance isn’t the only way to reuse code between objects, but it’s the easiest and fastest one, so it’s what we get. Frankly, it’s much closer to a speed tradeoff than a fundamental part of the concept.

We could do with more experimentation around how objects work, but that’s impossible in the languages most commonly thought of as object-oriented.

Here, then, is a (very) brief run through the inner workings of objects in four very dynamic languages. I don’t think I really appreciated objects until I’d spent some time with Python, and I hope this can help someone else whet their own appetite.

Python 3

Of the four languages I’m going to touch on, Python will look the most familiar to the Java and C++ crowd. For starters, it actually has a class construct.

class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    def __neg__(self):
        return Vector(-self.x, -self.y)

    def __truediv__(self, denom):
        return Vector(self.x / denom, self.y / denom)

    @property
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

    def normalized(self):
        return self / self.magnitude

The __init__ method is an initializer, which is like a constructor but named differently (because the object already exists in a usable form by the time the initializer is called). Operator overloading is done by implementing methods with other special __dunder__ names. Properties can be created with @property, where the @ is syntax for applying a wrapper function to a function as it’s defined. You can do inheritance, even multiply:

class Foo(A, B, C):
    def bar(self, x, y, z):
        # do some stuff
        super().bar(x, y, z)

Cool, a very traditional object model.
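Before poking at the details, it's worth seeing the class actually run. Here's a trimmed-down version of the Vector above (note that the Python 3 spelling of the division overload is __truediv__):

```python
class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    # __truediv__ is the Python 3 name for the / overload
    def __truediv__(self, denom):
        return Vector(self.x / denom, self.y / denom)

    @property
    def magnitude(self):
        return (self.x ** 2 + self.y ** 2) ** 0.5

    def normalized(self):
        return self / self.magnitude

v = Vector(3, 4)
print(v.magnitude)    # 5.0 -- a property, so no parentheses
n = v.normalized()    # calls __truediv__ under the hood
print(n.x, n.y)       # 0.6 0.8
```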

Except… for some details.

Some details

For one, Python objects don’t have a fixed layout. Code both inside and outside the class can add or remove whatever attributes it wants from whatever object it wants. The underlying storage is just a dict, Python’s mapping type. (Or, rather, something like one. Also, it’s possible to change this, which will probably be the case for everything I say here.)
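A quick sketch of that flexibility, with an invented Point class:

```python
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
p.label = "home"   # attach a brand-new attribute from outside the class
del p.x            # remove one just as easily
print(vars(p))     # the underlying storage dict: {'y': 2, 'label': 'home'}
```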

If you create some attributes at the class level, you’ll start to get a peek behind the curtains:

class Foo:
    values = []

    def add_value(self, value):
        self.values.append(value)

a = Foo()
b = Foo()
a.add_value('a')
print(a.values)  # ['a']
b.add_value('b')
print(b.values)  # ['a', 'b']

The [] assigned to values isn’t a default assigned to each object. In fact, the individual objects don’t know about it at all! You can use vars(a) to get at the underlying storage dict, and you won’t see a values entry in there anywhere.

Instead, values lives on the class, which is a value (and thus an object) in its own right. When Python is asked for self.values, it checks to see if self has a values attribute; in this case, it doesn’t, so Python keeps going and asks the class for one.

Python’s object model is secretly prototypical — a class acts as a prototype, as a shared set of fallback values, for its objects.

In fact, this is also how method calls work! They aren’t syntactically special at all, which you can see by separating the attribute lookup from the call.

print("abc".startswith("a"))  # True
meth = "abc".startswith
print(meth("a"))  # True

Reading obj.method looks for a method attribute; if there isn’t one on obj, Python checks the class. Here, it finds one: it’s a function from the class body.

Ah, but wait! In the code I just showed, meth seems to “know” the object it came from, so it can’t just be a plain function. If you inspect the resulting value, it claims to be a “bound method” or “built-in method” rather than a function, too. Something funny is going on here, and that funny something is the descriptor protocol.

Descriptors

Python allows attributes to implement their own custom behavior when read from or written to. Such an attribute is called a descriptor. I’ve written about them before, but here’s a quick overview.

If Python looks up an attribute, finds it in a class, and the value it gets has a __get__ method… then instead of using that value, Python will use the return value of its __get__ method.

The @property decorator works this way. The magnitude property in my original example was shorthand for doing this:

class MagnitudeDescriptor:
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return (instance.x ** 2 + instance.y ** 2) ** 0.5

class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    magnitude = MagnitudeDescriptor()

When you ask for somevec.magnitude, Python checks somevec but doesn’t find magnitude, so it consults the class instead. The class does have a magnitude, and it’s a value with a __get__ method, so Python calls that method and somevec.magnitude evaluates to its return value. (The instance is None check is because __get__ is called even if you get the descriptor directly from the class via Vector.magnitude. A descriptor intended to work on instances can’t do anything useful in that case, so the convention is to return the descriptor itself.)

You can also intercept attempts to write to or delete an attribute, and do absolutely whatever you want instead. But note that, similar to operator overloading in Python, the descriptor must be on a class; you can’t just slap one on an arbitrary object and have it work.
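As a sketch of the write-interception side (all the names here are invented for illustration), a descriptor with a __set__ method can validate assignments:

```python
class NonNegative:
    def __set_name__(self, owner, name):
        # Python 3.6+ tells the descriptor which class attribute it was bound to
        self.name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return vars(instance).get(self.name, 0)

    def __set__(self, instance, value):
        # intercept every write and reject bad values
        if value < 0:
            raise ValueError(f"{self.name} must be non-negative")
        vars(instance)[self.name] = value

class Account:
    balance = NonNegative()

acct = Account()
acct.balance = 10
print(acct.balance)  # 10
```

Because NonNegative defines __set__, it’s a “data descriptor” and takes priority even over the instance dict, which is why reads still go through __get__.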

This brings me right around to how “bound methods” actually work. Functions are descriptors! The function type implements __get__, and when a function is retrieved from a class via an instance, that __get__ bundles the function and the instance together into a tiny bound method object. It’s essentially:

class FunctionType:
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return functools.partial(self, instance)

The self passed as the first argument to methods is not special or magical in any way. It’s built out of a few simple pieces that are also readily accessible to Python code.

Note also that because obj.method() is just an attribute lookup and a call, Python doesn’t actually care whether method is a method on the class or just some callable thing on the object. You won’t get the auto-self behavior if it’s on the object, but otherwise there’s no difference.
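A tiny demonstration of that difference, with invented names:

```python
class Greeter:
    def hello(self):
        return "hello from the class"

g = Greeter()
print(g.hello())  # hello from the class: found on the class, bound to g

# stick a plain function directly on this one object
g.hello = lambda: "hello from the object"
print(g.hello())  # hello from the object: no self, but the call works the same
```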

More attribute access, and the interesting part

Descriptors are one of several ways to customize attribute access. Classes can implement __getattr__ to intervene when an attribute isn’t found on an object; __setattr__ and __delattr__ to intervene when any attribute is set or deleted; and __getattribute__ to implement unconditional attribute access. (That last one is a fantastic way to create accidental recursion, since any attribute access you do within __getattribute__ will of course call __getattribute__ again.)
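Here’s a minimal sketch of __getattr__, the gentlest of the bunch; the class and attribute names are made up:

```python
class Defaults:
    color = "white"

    def __getattr__(self, name):
        # only runs after normal lookup (instance, then class) has failed
        if name.startswith("default_"):
            return None
        raise AttributeError(name)

d = Defaults()
print(d.color)         # white: found on the class, __getattr__ never fires
print(d.default_size)  # None: normal lookup failed, so __getattr__ answered
```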

Here’s what I really love about Python. It might seem like a magical special case that descriptors only work on classes, but it really isn’t. You could implement exactly the same behavior yourself, in pure Python, using only the things I’ve just told you about. Classes are themselves objects, remember, and they are instances of type, so the reason descriptors only work on classes is that type effectively does this:

class type:
    def __getattribute__(self, name):
        value = super().__getattribute__(name)
        # like all op overloads, __get__ must be on the type, not the instance
        ty = type(value)
        if hasattr(ty, '__get__'):
            # it's a descriptor!  this is a class access so there is no instance
            return ty.__get__(value, None, self)
        else:
            return value

You can even trivially prove to yourself that this is what’s going on by skipping over type’s behavior:

class Descriptor:
    def __get__(self, instance, owner):
        print('called!')

class Foo:
    bar = Descriptor()

Foo.bar  # called!
type.__getattribute__(Foo, 'bar')  # called!
object.__getattribute__(Foo, 'bar')  # ...

And that’s not all! The mysterious super function, used to exhaustively traverse superclass method calls even in the face of diamond inheritance, can also be expressed in pure Python using these primitives. You could write your own superclass calling convention and use it exactly the same way as super.
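Here’s a sketch of the idea (this is not how super is actually implemented, just a hand-rolled equivalent built out of the MRO and the descriptor protocol, with invented class names):

```python
class A:
    def greet(self):
        return "A"

class B(A):
    def greet(self):
        return "B then " + self.manual_super("greet")()

    def manual_super(self, name):
        # a stand-in for super(): search the MRO past B itself
        mro = type(self).__mro__
        for klass in mro[mro.index(B) + 1:]:
            if name in vars(klass):
                # functions are descriptors, so bind by hand via __get__
                return vars(klass)[name].__get__(self, klass)
        raise AttributeError(name)

print(B().greet())  # B then A
```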

This is one of the things I really like about Python. Very little of it is truly magical; virtually everything about the object model exists in the types rather than the language, which means virtually everything can be customized in pure Python.

Class creation and metaclasses

A very brief word on all of this stuff, since I could talk forever about Python and I have three other languages to get to.

The class block itself is fairly interesting. It looks like this:

class Name(*bases, **kwargs):
    # code

I’ve said several times that classes are objects, and in fact the class block is one big pile of syntactic sugar for calling type(...) with some arguments to create a new type object.

The Python documentation has a remarkably detailed description of this process, but the gist is:

  • Python determines the type of the new class — the metaclass — by looking for a metaclass keyword argument. If there isn’t one, Python uses the “lowest” type among the provided base classes. (If you’re not doing anything special, that’ll just be type, since every class inherits from object and object is an instance of type.)

  • Python executes the class body. It gets its own local scope, and any assignments or method definitions go into that scope.

  • Python now calls type(name, bases, attrs, **kwargs). The name is whatever was right after class; the bases are positional arguments; and attrs is the class body’s local scope. (This is how methods and other class attributes end up on the class.) The brand new type is then assigned to Name.
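You can do the same thing by hand; this invented example builds a class without any class block at all:

```python
def bark(self):
    return f"{self.name} says woof"

# the same three ingredients a class block gathers: name, bases, body scope
Dog = type("Dog", (object,), {"bark": bark, "legs": 4})

d = Dog()
d.name = "Rex"
print(d.bark())  # Rex says woof
print(Dog.legs)  # 4
```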

Of course, you can mess with most of this. You can implement __prepare__ on a metaclass, for example, to use a custom mapping as storage for the local scope — including any reads, which allows for some interesting shenanigans. The only part you can’t really implement in pure Python is the scoping bit, which has a couple extra rules that make sense for classes. (In particular, functions defined within a class block don’t close over the class body; that would be nonsense.)
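As a sketch of __prepare__ (every name here is invented), here’s a metaclass that hands the class body a custom mapping and records the order in which names are assigned:

```python
class RecordingDict(dict):
    def __init__(self):
        super().__init__()
        self.order = []

    def __setitem__(self, key, value):
        # every assignment in the class body lands here
        if key not in self:
            self.order.append(key)
        super().__setitem__(key, value)

class OrderedMeta(type):
    @classmethod
    def __prepare__(mcls, name, bases, **kwargs):
        # this mapping becomes the class body's local scope
        return RecordingDict()

    def __new__(mcls, name, bases, namespace, **kwargs):
        cls = super().__new__(mcls, name, bases, dict(namespace))
        # filter out __module__, __qualname__, etc.
        cls._definition_order = [
            k for k in namespace.order if not k.startswith('__')
        ]
        return cls

class Form(metaclass=OrderedMeta):
    username = "text"
    password = "secret"

print(Form._definition_order)  # ['username', 'password']
```

Since Python 3.7 plain dicts are ordered anyway, but the same hook can intercept reads too, since name lookups in the class body also go through the mapping.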

Object creation

Finally, there’s what actually happens when you create an object — including a class, which remember is just an invocation of type(...).

Calling Foo(...) is implemented as, well, a call. Any type can implement calls with the __call__ special method, and you’ll find that type itself does so. It looks something like this:

# oh, a fun wrinkle that's hard to express in pure python: type is a class, so
# it's an instance of itself
class type:
    def __call__(self, *args, **kwargs):
        # remember, here 'self' is a CLASS, an instance of type.
        # __new__ is a true constructor: object.__new__ allocates storage
        # for a new blank object
        instance = self.__new__(self, *args, **kwargs)
        # you can return whatever you want from __new__ (!), and __init__
        # is only called on it if it's of the right type
        if isinstance(instance, self):
            instance.__init__(*args, **kwargs)
        return instance

Again, you can trivially confirm this by asking any type for its __call__ method. Assuming the type in question doesn’t implement __call__ itself, you’ll get back a bound version of type’s implementation.

>>> list.__call__
<method-wrapper '__call__' of type object at 0x7fafb831a400>

You can thus implement __call__ in your own metaclass to completely change how subclasses are created — including skipping the creation altogether, if you like.
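For example (a hypothetical metaclass, not anything from the standard library), a __call__ override that skips creation after the first instance:

```python
class Singleton(type):
    def __call__(cls, *args, **kwargs):
        # create on the first call, then hand back the same instance forever
        if not hasattr(cls, "_instance"):
            cls._instance = super().__call__(*args, **kwargs)
        return cls._instance

class Config(metaclass=Singleton):
    def __init__(self):
        self.values = {}

a = Config()
b = Config()
print(a is b)  # True
```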

And… there’s a bunch of stuff I haven’t even touched on.

The Python philosophy

Python offers something that, on the surface, looks like a “traditional” class/object model. Under the hood, it acts more like a prototypical system, where failed attribute lookups simply defer to a superclass or metaclass.

The language also goes to almost superhuman lengths to expose all of its moving parts. Even the prototypical behavior is an implementation of __getattribute__ somewhere, which you are free to completely replace in your own types. Proxying and delegation are easy.

Also very nice is that these features “bundle” well, by which I mean a library author can do all manner of convoluted hijinks, and a consumer of that library doesn’t have to see any of it or understand how it works. You only need to inherit from a particular class (which has a metaclass), or use some descriptor as a decorator, without ever having to learn any new syntax.

This meshes well with Python culture, which is pretty big on the principle of least surprise. These super-advanced features tend to be tightly confined to single simple purposes (like “makes a weak attribute”) or cordoned off in DSLs (e.g., defining a form/struct/database table with a class body). In particular, I’ve never seen a metaclass in the wild implement its own __call__.

I have mixed feelings about that. It’s probably a good thing overall that the Python world shows such restraint, but I wonder if there are some very interesting possibilities we’re missing out on. I implemented a metaclass __call__ myself, just once, in an entity/component system that strove to minimize fuss when communicating between components. It never saw the light of day, but I enjoyed seeing some new things Python could do with the same relatively simple syntax. I wouldn’t mind seeing, say, an object model based on composition (with no inheritance) built atop Python’s primitives.

Lua

Lua doesn’t have an object model. Instead, it gives you a handful of very small primitives for building your own object model. This is pretty typical of Lua — it’s a very powerful language, but has been carefully constructed to be very small at the same time. I’ve never encountered anything else quite like it, and “but it starts indexing at 1!” really doesn’t do it justice.

The best way to demonstrate how objects work in Lua is to build some from scratch. We need two key features. The first is metatables, which bear a passing resemblance to Python’s metaclasses.

Tables and metatables

The table is Lua’s mapping type and its primary data structure. Keys can be any value other than nil. Lists are implemented as tables whose keys are consecutive integers starting from 1. Nothing terribly surprising. The dot operator is sugar for indexing with a string key.

local t = { a = 1, b = 2 }
print(t['a'])  -- 1
print(t.b)  -- 2
t.c = 3
print(t['c'])  -- 3

A metatable is a table that can be associated with another value (usually another table) to change its behavior. For example, operator overloading is implemented by assigning a function to a special key in a metatable.

local t = { a = 1, b = 2 }
--print(t + 0)  -- error: attempt to perform arithmetic on a table value

local mt = {
    __add = function(left, right)
        return 12
    end,
}
setmetatable(t, mt)
print(t + 0)  -- 12

Now, the interesting part: one of the special keys is __index, which is consulted when the base table is indexed by a key it doesn’t contain. Here’s a table that claims every key maps to itself.

local t = {}
local mt = {
    __index = function(table, key)
        return key
    end,
}
setmetatable(t, mt)
print(t.foo)  -- foo
print(t.bar)  -- bar
print(t[3])  -- 3

__index doesn’t have to be a function, either. It can be yet another table, in which case that table is simply indexed with the key. If the key still doesn’t exist and that table has a metatable with an __index, the process repeats.

With this, it’s easy to have several unrelated tables that act as a single table. Call the base table an object, fill the __index table with functions and call it a class, and you have half of an object system. You can even get prototypical inheritance by chaining __indexes together.

At this point things are a little confusing, since we have at least three tables going on, so here’s a diagram. Keep in mind that Lua doesn’t actually have anything called an “object”, “class”, or “method” — those are just convenient nicknames for a particular structure we might build with Lua’s primitives.

                    ╔═══════════╗        ...
                    ║ metatable ║         ║
                    ╟───────────╢   ┌─────╨───────────────────────┐
                    ║ __index   ╫───┤ lookup table ("superclass") │
                    ╚═══╦═══════╝   ├─────────────────────────────┤
  ╔═══════════╗         ║           │ some other method           ┼─── function() ... end
  ║ metatable ║         ║           └─────────────────────────────┘
  ╟───────────╢   ┌─────╨──────────────────┐
  ║ __index   ╫───┤ lookup table ("class") │
  ╚═══╦═══════╝   ├────────────────────────┤
      ║           │ some method            ┼─── function() ... end
      ║           └────────────────────────┘
┌─────╨─────────────────┐
│ base table ("object") │
└───────────────────────┘

Note that a metatable is not the same as a class; it defines behavior, not methods. Conversely, if you try to use a class directly as a metatable, it will probably not do much. (This is pretty different from e.g. Python, where operator overloads are just methods with funny names. One nice thing about the Lua approach is that you can keep interface-like functionality separate from methods, and avoid clogging up arbitrary objects’ namespaces. You could even use a dummy table as a key and completely avoid name collisions.)

Anyway, code!

local class = {
    foo = function(a)
        print("foo got", a)
    end,
}
local mt = { __index = class }
-- setmetatable returns its first argument, so this is nice shorthand
local obj1 = setmetatable({}, mt)
local obj2 = setmetatable({}, mt)
obj1.foo(7)  -- foo got 7
obj2.foo(9)  -- foo got 9

Wait, wait, hang on. Didn’t I call these methods? How do they get at the object? Maybe Lua has a magical this variable?

Methods, sort of

Not quite, but this is where the other key feature comes in: method-call syntax. It’s the lightest touch of sugar, just enough to have method invocation.

-- note the colon!
a:b(c, d, ...)

-- exactly equivalent to this
-- (except that `a` is only evaluated once)
a.b(a, c, d, ...)

-- which of course is really this
a["b"](a, c, d, ...)

Now we can write methods that actually do something.

local class = {
    bar = function(self)
        print("our score is", self.score)
    end,
}
local mt = { __index = class }
local obj1 = setmetatable({ score = 13 }, mt)
local obj2 = setmetatable({ score = 25 }, mt)
obj1:bar()  -- our score is 13
obj2:bar()  -- our score is 25

And that’s all you need. Much like Python, methods and data live in the same namespace, and Lua doesn’t care whether obj:method() finds a function on obj or gets one from the metatable’s __index. Unlike Python, the function will be passed self either way, because self comes from the use of : rather than from the lookup behavior.

(Aside: strictly speaking, any Lua value can have a metatable — and if you try to index a non-table, Lua will always consult the metatable’s __index. Strings all have the string library as a metatable, so you can call methods on them: try ("%s %s"):format(1, 2). I don’t think Lua lets user code set the metatable for non-tables, so this isn’t that interesting, but if you’re writing Lua bindings from C then you can wrap your pointers in metatables to give them methods implemented in C.)

Bringing it all together

Of course, writing all this stuff every time is a little tedious and error-prone, so instead you might want to wrap it all up inside a little function. No problem.

local function make_object(body)
    -- create a metatable
    local mt = { __index = body }
    -- create a base table to serve as the object itself
    local obj = setmetatable({}, mt)
    -- and, done
    return obj
end

-- you can leave off parens if you're only passing in a table literal
local Dog = {
    -- this acts as a "default" value; if obj.barks is missing, __index will
    -- kick in and find this value on the class.  but if obj.barks is assigned
    -- to, it'll go in the object and shadow the value here.
    barks = 0,

    bark = function(self)
        self.barks = self.barks + 1
        print("woof!")
    end,
}

local mydog = make_object(Dog)
mydog:bark()  -- woof!
mydog:bark()  -- woof!
mydog:bark()  -- woof!
print(mydog.barks)  -- 3
print(Dog.barks)  -- 0

It works, but it’s fairly barebones. The nice thing is that you can extend it pretty much however you want. I won’t reproduce an entire serious object system here — lord knows there are enough of them floating around — but the implementation I have for my LÖVE games lets me do this:

local Animal = Object:extend{
    cries = 0,
}

-- called automatically by Object
function Animal:init()
    print("whoops i couldn't think of anything interesting to put here")
end

-- this is just nice syntax for adding a first argument called 'self', then
-- assigning this function to Animal.cry
function Animal:cry()
    self.cries = self.cries + 1
end

local Cat = Animal:extend{}

function Cat:cry()
    print("meow!")
    Cat.__super.cry(self)
end

local cat = Cat()
cat:cry()  -- meow!
cat:cry()  -- meow!
print(cat.cries)  -- 2

When I say you can extend it however you want, I mean that. I could’ve implemented Python (2)-style super(Cat, self):cry() syntax; I just never got around to it. I could even make it work with multiple inheritance if I really wanted to — or I could go the complete opposite direction and only implement composition. I could implement descriptors, customizing the behavior of individual table keys. I could add pretty decent syntax for composition/proxying. I am trying very hard to end this section now.

The Lua philosophy

Lua’s philosophy is to… not have a philosophy? It gives you the bare minimum to make objects work, and you can do absolutely whatever you want from there. Lua does have something resembling prototypical inheritance, but it’s not so much a first-class feature as an emergent property of some very simple tools. And since you can make __index be a function, you could avoid the prototypical behavior and do something different entirely.

The very severe downside, of course, is that you have to find or build your own object system — which can get pretty confusing very quickly, what with the multiple small moving parts. Third-party code may also have its own object system with subtly different behavior. (Though, in my experience, third-party code tries very hard to avoid needing an object system at all.)

It’s hard to say what the Lua “culture” is like, since Lua is an embedded language that’s often a little different in each environment. I imagine it has a thousand millicultures, instead. I can say that the tedium of building my own object model has led me into something very “traditional”, with prototypical inheritance and whatnot. It’s partly what I’m used to, but it’s also just really dang easy to get working.

Likewise, while I love properties in Python and use them all the dang time, I’ve yet to use a single one in Lua. They wouldn’t be particularly hard to add to my object model, but having to add them myself (or shop around for an object model with them and also port all my code to use it) adds a huge amount of friction. I’ve thought about designing an interesting ECS with custom object behavior, too, but… is it really worth the effort? For all the power and flexibility Lua offers, the cost is that by the time I have something working at all, I’m too exhausted to actually use any of it.

JavaScript

JavaScript is notable for being preposterously heavily used, yet not having a class block.

Well. Okay. Yes. It has one now. It didn’t for a very long time, and even the one it has now is sugar.

Here’s a vector class again:

class Vector {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }

    get magnitude() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    }

    dot(other) {
        return this.x * other.x + this.y * other.y;
    }
}

In “classic” JavaScript, this would be written as:

function Vector(x, y) {
    this.x = x;
    this.y = y;
}

Object.defineProperty(Vector.prototype, 'magnitude', {
    configurable: true,
    enumerable: true,
    get: function() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    },
});


Vector.prototype.dot = function(other) {
    return this.x * other.x + this.y * other.y;
};

Hm, yes. I can see why they added class.

The JavaScript model

In JavaScript, a new type is defined in terms of a function, which is its constructor.

Right away we get into trouble here. There is a very big difference between these two invocations, which I actually completely forgot about just now after spending four hours writing about Python and Lua:

let vec = Vector(3, 4);
let vec = new Vector(3, 4);

The first calls the function Vector. It assigns some properties to this, which here is going to be window, so now you have a global x and y. It then returns nothing, so vec is undefined.

The second calls Vector with this set to a new empty object, then evaluates to that object. The result is what you’d actually expect.

(You can detect this situation with the strange new.target expression, but I have never once remembered to do so.)

From here, we have true, honest-to-god, first-class prototypical inheritance. The word “prototype” is even right there. When you write this:

vec.dot(vec2)

JavaScript will look for dot on vec and (presumably) not find it. It then consults vec’s prototype, an object you can see for yourself by using Object.getPrototypeOf(). Since vec is a Vector, its prototype is Vector.prototype.

I stress that Vector.prototype is not the prototype for Vector. It’s the prototype for instances of Vector.

(I say “instance”, but the true type of vec here is still just object. If you want to find Vector, it’s automatically assigned to the constructor property of its own prototype, so it’s available as vec.constructor.)

Of course, Vector.prototype can itself have a prototype, in which case the process would continue if dot were not found. A common (and, arguably, very bad) way to simulate single inheritance is to set Class.prototype to an instance of a superclass to get the prototype right, then tack on the methods for Class. Nowadays we can do Object.create(Superclass.prototype).

Now that I’ve been through Python and Lua, though, this isn’t particularly surprising. I kinda spoiled it.

I suppose one difference in JavaScript is that you can tack arbitrary attributes directly onto Vector all you like, and they will remain invisible to instances since they aren’t in the prototype chain. This is kind of backwards from Lua, where you can squirrel stuff away in the metatable.

Another difference is that every single object in JavaScript has a bunch of properties already tacked on — the ones in Object.prototype. Every object (and by “object” I mean any mapping) has a prototype, and that prototype defaults to Object.prototype, and it has a bunch of ancient junk like isPrototypeOf.

(Nit: it’s possible to explicitly create an object with no prototype via Object.create(null).)

Like Lua, and unlike Python, JavaScript doesn’t distinguish between keys found on an object and keys found via a prototype. Properties can be defined on prototypes with Object.defineProperty(), but that works just as well directly on an object, too. JavaScript doesn’t have a lot of operator overloading, but some things like Symbol.iterator also work on both objects and prototypes.

About this

You may, at this point, be wondering what this is. Unlike Lua and Python (and the last language below), this is a special built-in value — a context value, invisibly passed for every function call.

It’s determined by where the function came from. If the function was the result of an attribute lookup, then this is set to the object containing that attribute. Otherwise, this is set to the global object, window. (You can also set this to whatever you want via the call method on functions.)

This decision is made lexically, i.e. from the literal source code as written. There are no Python-style bound methods. In other words:


// this = obj
obj.method()
// this = window
let meth = obj.method
meth()

Also, because this is reassigned on every function call, it cannot be meaningfully closed over, which makes using closures within methods incredibly annoying. The old approach was to assign this to some other regular name like self (which got syntax highlighting since it’s also a built-in name in browsers); then we got Function.bind, which produced a callable thing with a fixed context value, which was kind of nice; and now finally we have arrow functions, which explicitly close over the current this when they’re defined and don’t change it when called. Phew.
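All three generations of workaround, side by side (counter is a made-up example):

```javascript
let counter = {
  count: 0,
  // the old workaround: stash `this` in a regular variable
  incrementerOld() {
    let self = this;
    return function () { self.count++; };
  },
  // Function.bind: a callable with a fixed context value
  incrementerBound() {
    return function () { this.count++; }.bind(this);
  },
  // arrow function: closes over the `this` in effect where it's written
  incrementerArrow() {
    return () => { this.count++; };
  },
};

counter.incrementerOld()();
counter.incrementerBound()();
counter.incrementerArrow()();
console.log(counter.count);  // 3
```

A plain inner function in any of these would have gotten `this` set to the global object (or undefined in strict mode) when called bare.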

Class syntax

I already showed class syntax, and it’s really just one big macro for doing all the prototype stuff The Right Way. It even prevents you from calling the type without new. The underlying model is exactly the same, and you can inspect all the parts.

class Vector { ... }

console.log(Vector.prototype);  // { dot: ..., magnitude: ..., ... }
let vec = new Vector(3, 4);
console.log(Object.getPrototypeOf(vec));  // same as Vector.prototype

// i don't know why you would subclass vector but let's roll with it
class Vectest extends Vector { ... }

console.log(Vectest.prototype);  // { ... }
console.log(Object.getPrototypeOf(Vectest.prototype))  // same as Vector.prototype

Alas, class syntax has a couple shortcomings. You can’t use the class block to assign arbitrary data to either the type object or the prototype — apparently it was deemed too confusing that mutations would be shared among instances. Which… is… how prototypes work. How Python works. How JavaScript itself, one of the most popular languages of all time, has worked for twenty-two years. Argh.

You can still do whatever assignment you want outside of the class block, of course. It’s just a little ugly, and not something I’d think to look for with a sugary class.
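For instance, a shared value still goes on the prototype the pre-class way (sketched with a bare-bones Vector of my own):

```javascript
class Vector {
  constructor(x, y) { this.x = x; this.y = y; }
}

// shared data can't go in the class block, but the prototype is right there
Vector.prototype.dimensions = 2;

let a = new Vector(1, 2);
let b = new Vector(3, 4);
console.log(a.dimensions, b.dimensions);  // 2 2 -- one shared value

// and yes, mutating it is visible through every instance
Vector.prototype.dimensions = 3;
console.log(a.dimensions);  // 3
```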

A more subtle result of this behavior is that a class block isn’t quite the same syntax as an object literal. The check for data isn’t a runtime thing; class Foo { x: 3 } fails to parse. So JavaScript now has two largely but not entirely identical styles of key/value block.

Attribute access

Here’s where things start to come apart at the seams, just a little bit.

JavaScript doesn’t really have an attribute protocol. Instead, it has two… extension points, I suppose.

One is Object.defineProperty, seen above. For common cases, there's also the get syntax inside an object literal, which does the same thing. But unlike Python's @property, these aren't wrappers around some simple primitives; they are the primitives. JavaScript is the only language of these four to have “property that runs code on access” as a completely separate first-class concept.
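Both spellings of the same primitive, sketched on a throwaway object:

```javascript
// get syntax in an object literal...
let vec = {
  x: 3,
  y: 4,
  get magnitude() { return Math.sqrt(this.x ** 2 + this.y ** 2); },
};
console.log(vec.magnitude);  // 5 -- computed on access, no parentheses

// ...does the same thing as Object.defineProperty with a getter
let vec2 = { x: 3, y: 4 };
Object.defineProperty(vec2, "magnitude", {
  get() { return Math.sqrt(this.x ** 2 + this.y ** 2); },
});
console.log(vec2.magnitude);  // 5
```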

If you want to intercept arbitrary attribute access (and some kinds of operators), there's a completely different primitive: the Proxy type. It doesn't bolt interception onto an existing object; instead, it produces a wrapper object that supports interception and defers to the wrapped object by default.

It’s cool to see composition used in this way, but also, extremely weird. If you want to make your own type that overloads in or calling, you have to return a Proxy that wraps your own type, rather than actually returning your own type. And (unlike the other three languages in this post) you can’t return a different type from a constructor, so you have to throw that away and produce objects only from a factory. And instanceof would be broken, but you can at least fix that with Symbol.hasInstance — which is really operator overloading, implemented in yet another completely different way.
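A minimal sketch of the two unrelated mechanisms (everything here is an invented example):

```javascript
// the `has` trap on a Proxy overloads the `in` operator for the wrapper only
let point = { x: 3, y: 4 };
let wrapped = new Proxy(point, {
  has(target, key) { return key === "x" || key === "y"; },
});
console.log("x" in wrapped);         // true
console.log("toString" in wrapped);  // false, even though Object.prototype has it

// Symbol.hasInstance overloads instanceof, via a totally separate mechanism
class VectorLike {
  static [Symbol.hasInstance](obj) {
    return typeof obj.x === "number" && typeof obj.y === "number";
  }
}
console.log(point instanceof VectorLike);  // true, with no inheritance involved
```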

I know the design here is a result of legacy and speed — if any object could intercept all attribute access, then all attribute access would be slowed down everywhere. Fair enough. It still leaves the surface area of the language a bit… bumpy?

The JavaScript philosophy

It’s a little hard to tell. The original idea of prototypes was interesting, but it was hidden behind some very awkward syntax. Since then, we’ve gotten a bunch of extra features awkwardly bolted on to reflect the wildly varied things the built-in types and DOM API were already doing. We have class syntax, but it’s been explicitly designed to avoid exposing the prototype parts of the model.

I admit I don’t do a lot of heavy JavaScript, so I might just be overlooking it, but I’ve seen virtually no code that makes use of any of the recent advances in object capabilities. Forget about custom iterators or overloading call; I can’t remember seeing any JavaScript in the wild that even uses properties yet. I don’t know if everyone’s waiting for sufficient browser support, nobody knows about them, or nobody cares.

The model has advanced recently, but I suspect JavaScript is still shackled to its legacy of “something about prototypes, I don’t really get it, just copy the other code that’s there” as an object model. Alas! Prototypes are so good. Hopefully class syntax will make it a bit more accessible, as it has in Python.

Perl 5

Perl 5 also doesn’t have an object system and expects you to build your own. But where Lua gives you two simple, powerful tools for building one, Perl 5 feels more like a puzzle with half the pieces missing. Clearly they were going for something, but they only gave you half of it.

In brief, a Perl object is a reference that has been blessed with a package.

I need to explain a few things. Honestly, one of the biggest problems with the original Perl object setup was how many strange corners and unique jargon you had to understand just to get off the ground.

(If you want to try running any of this code, you should stick a use v5.26; as the first line. Perl is very big on backwards compatibility, so you need to opt into breaking changes, and even the mundane say builtin is behind a feature gate.)

References

A reference in Perl is sort of like a pointer, but its main use is very different. See, Perl has the strange property that its data structures try very hard to spill their contents all over the place. Despite having dedicated syntax for arrays — @foo is an array variable, distinct from the single scalar variable $foo — it’s actually impossible to nest arrays.

my @foo = (1, 2, 3, 4);
my @bar = (@foo, @foo);
# @bar is now a flat list of eight items: 1, 2, 3, 4, 1, 2, 3, 4

The idea, I guess, is that an array is not one thing. It’s not a container, which happens to hold multiple things; it is multiple things. Anywhere that expects a single value, such as an array element, cannot contain an array, because an array fundamentally is not a single value.

And so we have “references”, which are a form of indirection, but also have the nice property that they’re single values. They add containment around arrays, and in general they make working with most of Perl’s primitive types much more sensible. A reference to a variable can be taken with the \ operator, or you can use [ ... ] and { ... } to directly create references to anonymous arrays or hashes.

my @foo = (1, 2, 3, 4);
my @bar = (\@foo, \@foo);
# @bar is now a nested list of two items: [1, 2, 3, 4], [1, 2, 3, 4]

(Incidentally, this is the sole reason I initially abandoned Perl for Python. Non-trivial software kinda requires nesting a lot of data structures, so you end up with references everywhere, and the syntax for going back and forth between a reference and its contents is tedious and ugly.)

A Perl object must be a reference. Perl doesn’t care what kind of reference — it’s usually a hash reference, since hashes are a convenient place to store arbitrary properties, but it could just as well be a reference to an array, a scalar, or even a sub (i.e. function) or filehandle.

I’m getting a little ahead of myself. First, the other half: blessing and packages.

Packages and blessing

Perl packages are just namespaces. A package looks like this:

package Foo::Bar;

sub quux {
    say "hi from quux!";
}

# now Foo::Bar::quux() can be called from anywhere

Nothing shocking, right? It’s just a named container. A lot of the details are kind of weird, like how a package exists in some liminal quasi-value space, but the basic idea is a Bag Of Stuff.

The final piece is “blessing,” which is Perl’s funny name for binding a package to a reference. A very basic class might look like this:

package Vector;

# the name 'new' is convention, not special
sub new {
    # perl argument passing is weird, don't ask
    my ($class, $x, $y) = @_;

    # create the object itself -- here, unusually, an array reference makes sense
    my $self = [ $x, $y ];

    # associate the package with that reference
    # note that $class here is just the regular string, 'Vector'
    bless $self, $class;

    return $self;
}

sub x {
    my ($self) = @_;
    return $self->[0];
}

sub y {
    my ($self) = @_;
    return $self->[1];
}

sub magnitude {
    my ($self) = @_;
    return sqrt($self->x ** 2 + $self->y ** 2);
}

# switch back to the "default" package
package main;

# -> is method call syntax, which passes the invocant as the first argument;
# for a package, that's just the package name
my $vec = Vector->new(3, 4);
say $vec->magnitude;  # 5

A few things of note here. First, $self->[0] has nothing to do with objects; it’s normal syntax for getting the value at index 0 out of an array reference called $self. (Most classes are based on hashrefs and would use $self->{value} instead.) A blessed reference is still a reference and can be treated like one.

In general, -> is Perl’s dereferencey operator, but its exact behavior depends on what follows. If it’s followed by brackets, then it’ll apply the brackets to the thing in the reference: ->{} to index a hash reference, ->[] to index an array reference, and ->() to call a function reference.

But if -> is followed by an identifier, then it’s a method call. For packages, that means calling a function in the package and passing the package name as the first argument. For objects — blessed references — that means calling a function in the associated package and passing the object as the first argument.

This is a little weird! A blessed reference is a superposition of two things: its normal reference behavior, and some completely orthogonal object behavior. Also, object behavior has no notion of methods vs data; it only knows about methods. Perl lets you omit parentheses in a lot of places, including when calling a method with no arguments, so $vec->magnitude is really $vec->magnitude().

Perl’s blessing bears some similarities to Lua’s metatables, but ultimately Perl is much closer to Ruby’s “message passing” approach than the above three languages’ approaches of “get me something and maybe it’ll be callable”. (But this is no surprise — Ruby is a spiritual successor to Perl 5.)

All of this leads to one little wrinkle: how do you actually expose data? Above, I had to write x and y methods. Am I supposed to do that for every single attribute on my type?

Yes! But don’t worry, there are third-party modules to help with this incredibly fundamental task. Take Class::Accessor::Fast, so named because it’s faster than Class::Accessor:

package Foo;
use base qw(Class::Accessor::Fast);
__PACKAGE__->mk_accessors(qw(fred wilma barney));

(__PACKAGE__ is the lexical name of the current package; qw(...) is a list literal that splits its contents on whitespace.)

This assumes you’re using a hashref with keys of the same names as the attributes. $obj->fred will return the fred key from your hashref, and $obj->fred(4) will change it to 4.

You also, somewhat bizarrely, have to inherit from Class::Accessor::Fast. Speaking of which,

Inheritance

Inheritance is done by populating the package-global @ISA array with some number of (string) names of parent packages. Most code instead opts to write use base ...;, which does the same thing. Or, more commonly, use parent ...;, which… also… does the same thing.

Every package implicitly inherits from UNIVERSAL, which can be freely modified by Perl code.

A method can call its superclass method with the SUPER:: pseudo-package:

sub foo {
    my ($self) = @_;
    $self->SUPER::foo;
}

However, this does a depth-first search, which means it almost certainly does the wrong thing when faced with multiple inheritance. For a while the accepted solution involved a third-party module, but Perl eventually grew an alternative you have to opt into: C3, which may be more familiar to you as the order Python uses.

use mro 'c3';

sub foo {
    my ($self) = @_;
    $self->next::method;
}

Offhand, I’m not actually sure how next::method works, seeing as it was originally implemented in pure Perl code. I suspect it involves peeking at the caller’s stack frame. If so, then this is a very different style of customizability from e.g. Python — the MRO was never intended to be pluggable, and the use of a special pseudo-package means it isn’t really, but someone was determined enough to make it happen anyway.

Operator overloading and whatnot

Operator overloading looks a little weird, though really it’s pretty standard Perl.

package MyClass;

use overload '+' => \&_add;

sub _add {
    my ($self, $other, $swap) = @_;
    ...
}

use overload here is a pragma, where “pragma” means “regular-ass module that does some wizardry when imported”.

\&_add is how you get a reference to the _add sub so you can pass it to the overload module. If you just said &_add or _add, that would call it.

And that’s it; you just pass a map of operators to functions to this built-in module. No worry about name clashes or pollution, which is pretty nice. You don’t even have to give references to functions that live in the package, if you don’t want them to clog your namespace; you could put them in another package, or even inline them anonymously.

One especially interesting thing is that Perl lets you overload every operator. Perl has a lot of operators. It considers some math builtins like sqrt and trig functions to be operators, or at least operator-y enough that you can overload them. You can also overload the file test operators, such as -e $path to test whether a file exists. You can overload conversions, including implicit conversion to a regex. And most fascinating to me, you can overload dereferencing — that is, the thing Perl does when you say $hashref->{key} to get at the underlying hash. So a single object could pretend to be references of multiple different types, including a subref to implement callability. Neat.

Somewhat related: you can overload basic operators (indexing, etc.) on basic types (not references!) with the tie function, which is designed completely differently and looks for methods with fixed names. Go figure.

You can intercept calls to nonexistent methods by implementing a function called AUTOLOAD, within which the $AUTOLOAD global will contain the name of the method being called. Originally this feature was, I think, intended for loading binary components or large libraries on-the-fly only when needed, hence the name. Offhand I’m not sure I ever saw it used the way __getattr__ is used in Python.

Is there a way to intercept all method calls? I don’t think so, but it is Perl, so I must be forgetting something.

Actually no one does this any more

Like a decade ago, a council of elder sages sat down and put together a whole whizbang system that covers all of it: Moose.

package Vector;
use Moose;

has x => (is => 'rw', isa => 'Int');
has y => (is => 'rw', isa => 'Int');

sub magnitude {
    my ($self) = @_;
    return sqrt($self->x ** 2 + $self->y ** 2);
}

Moose has its own way to do pretty much everything, and it’s all built on the same primitives. Moose also adds metaclasses, somehow, despite that the underlying model doesn’t actually support them? I’m not entirely sure how they managed that, but I do remember doing some class introspection with Moose and it was much nicer than the built-in way.

(If you’re wondering, the built-in way begins with looking at the hash called %Vector::. No, that’s not a typo.)

I really cannot stress enough just how much stuff Moose does, but I don’t want to delve into it here since Moose itself is not actually the language model.

The Perl philosophy

I hope you can see what I meant with what I first said about Perl, now. It has multiple inheritance with an MRO, but uses the wrong one by default. It has extensive operator overloading, which looks nothing like how inheritance works, and also some of it uses a totally different mechanism with special method names instead. It only understands methods, not data, leaving you to figure out accessors by hand.

There’s 70% of an object system here with a clear general design it was gunning for, but none of the pieces really look anything like each other. It’s weird, in a distinctly Perl way.

The result is certainly flexible, at least! It’s especially cool that you can use whatever kind of reference you want for storage, though even as I say that, I acknowledge it’s no different from simply subclassing list or something in Python. It feels different in Perl, but maybe only because it looks so different.

I haven’t written much Perl in a long time, so I don’t know what the community is like any more. Moose was already ubiquitous when I left, which you’d think would let me say “the community mostly focuses on the stuff Moose can do” — but even a decade ago, Moose could already do far more than I had ever seen done by hand in Perl. It’s always made a big deal out of roles (read: interfaces), for instance, despite that I’d never seen anyone care about them in Perl before Moose came along. Maybe their presence in Moose has made them more popular? Who knows.

Also, I wrote Perl seriously, but in the intervening years I’ve only encountered people who only ever used Perl for one-offs. Maybe it’ll come as a surprise to a lot of readers that Perl has an object model at all.

End

Well, that was fun! I hope any of that made sense.

Special mention goes to Rust, which doesn’t have an object model you can fiddle with at runtime, but does do things a little differently.

It’s been really interesting thinking about how tiny differences make a huge impact on what people do in practice. Take the choice of storage in Perl versus Python. Perl’s massively common URI class uses a string as the storage, nothing else; I haven’t seen anything like that in Python aside from markupsafe, which is specifically designed as a string type. I would guess this is partly because Perl makes you choose — using a hashref is an obvious default, but you have to make that choice one way or the other. In Python (especially 3), inheriting from object and getting dict-based storage is the obvious thing to do; the ability to use another type isn’t quite so obvious, and doing it “right” involves a tiny bit of extra work.

Or, consider that Lua could have descriptors, but the extra bit of work (especially design work) has been enough of an impediment that I’ve never implemented them. I don’t think the object implementations I’ve looked at have included them, either. Super weird!

In that light, it’s only natural that objects would be so strongly associated with the features Java and C++ attach to them. I think that makes it all the more important to play around! Look at what Moose has done. No, really, you should bear in mind my description of how Perl does stuff and flip through the Moose documentation. It’s amazing what they’ve built.

Despite Massive Site-Blocking, Russian Pirate Video Market Doubles

Post Syndicated from Andy original https://torrentfreak.com/despite-massive-site-blocking-russian-pirate-video-market-doubles-171029/

On many occasions, outgoing MPAA chief Chris Dodd has praised countries that have implemented legislation that allows for widespread site-blocking.

In 2014, when the UK had blocked just a few dozen sites, Dodd described the practice as both “balanced and proportionate” while noting that it had made “tremendous progress in tackling infringing websites.” This, the former senator said, made it “one of the most effective anti-piracy measures in the world.”

With many thousands of ‘pirate’ sites blocked across the country, perhaps Dodd should ask the Russians how they’re getting on. Spoiler: Not that great.

According to new research published by Group-IB and reported by Izvestia, Internet pirates have been adapting to their new reality, finding new and stable ways of doing business while growing their turnover.

In fact, according to the ‘Economics of Pirate Sites Report 2016’, they’ve been so successful that the market for Internet pirate video more than doubled in value during 2016, reaching a peak of 3.3 billion rubles ($57.2m) versus just 1.5 billion rubles ($26m) in 2015. Overall Internet piracy in 2016 was valued at a billion rubles more ($74.5m), Group-IB notes.

According to the report, Russian pirates operated with impunity until 2012, at which point the government – under pressure from rightsholders – began introducing tough anti-piracy legislation, which included the blocking of pirate domains. Since then there have been a number of amendments which further tightened the law but pirates have adapted each time, protecting their revenue with new business plans.

Group-IB says that the most successful pirate site in 2016 was Seasonvar.ru, which pulled in a million visitors every day generating just over $3.3m in revenue. Second place was taken by My-hit.org, with almost 400,000 daily visitors generating an annual income of $1.2m. HDrezka.me served more than 315,000 people daily and made roughly the same to take third. Fourth and fifth spots were taken by Kinokrad.co and Baskino.Club.

Overall, it’s estimated that the average pirate video site makes around $156,000 per year via advertising, subscriptions, or via voluntary donations. They’re creative with their money channels too.

According to Maxim Ryabyko, Director General of Association for the Protection of Copyright on the Internet (AZAPO), sites use middle-men for dealing with both advertisers and payment processors, which enables operators to remain anonymous.

Like in the United States, UK and elsewhere in Europe, lists of pirate sites have been drawn up to give advertisers and networks guidance on where not to place their ads. However, plenty of companies aren’t involved in the initiative. When challenged over their ads appearing on pirate sites, some protest that it hasn’t been proven that the sites are acting illegally, so the business will continue.

“There is no negative attitude towards piracy by intermediaries. Their money is not objectionable. Sometimes they say: ‘Go and sue. There will be a court hearing, there will be a decision, and we will draw conclusions then’,” Ryabyko told Izvestia.

Dealing with pirates in the criminal arena isn’t easy either, with key players admitting that the police have other things to concentrate on.

“Law enforcers do not know how to search for pirates, they have other priorities,” says Sergei Semenov of the Film and TV Producers Association.

“Criminal prosecutions in this area are a great strain – single cases require a lot of time for investigation and going to court does not achieve the desired effect. We therefore need to diversify the grounds for bringing people to justice. We need to see how this can be applied.”

One idea is to prosecute pirates for non-payment of taxes. Well, it worked for Al Capone….

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MPAA Reports Pirate Sites, Hosts and Ad-Networks to US Government

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-reports-pirate-sites-hosts-and-ad-networks-to-us-government-171004/

Responding to a request from the Office of the US Trade Representative (USTR), the MPAA has submitted an updated list of “notorious markets” that it says promote the illegal distribution of movies and TV-shows.

These annual submissions help to guide the U.S. Government’s position towards foreign countries when it comes to copyright enforcement.

What stands out in the MPAA’s latest overview is that it no longer includes offline markets, only sites and services that are available on the Internet. This suggests that online copyright infringement is seen as a priority.

The MPAA’s report includes more than two dozen alleged pirate sites in various categories. While this is not an exhaustive list, the movie industry specifically highlights some of the worst offenders in various categories.

“Content thieves take advantage of a wide constellation of easy-to-use online technologies, such as direct download and streaming, to create infringing sites and applications, often with the look and feel of legitimate content distributors, luring unsuspecting consumers into piracy,” the MPAA writes.

According to the MPAA, torrent sites remain popular, serving millions of torrents to tens of millions of users at any given time.

The Pirate Bay has traditionally been one of the main targets. Based on data from Alexa and SimilarWeb, the MPAA says that TPB has about 62 million unique visitors per month. The other torrent sites mentioned are 1337x.to, Rarbg.to, Rutracker.org, and Torrentz2.eu.

MPAA calls out torrent sites

The second highlighted category covers various linking and streaming sites. This includes the likes of Fmovies.is, Gostream.is, Primewire.ag, Kinogo.club, MeWatchSeries.to, Movie4k.tv and Repelis.tv.

Direct download sites and video hosting services also get a mention. Nowvideo.sx, Openload.co, Rapidgator.net, Uploaded.net and the Russian social network VK.com. Many of these services refuse to properly process takedown notices, the MPAA claims.

The last category is new and centers around piracy apps. These sites offer mobile applications that allow users to stream pirated content, such as IpPlayBox.tv, MoreTV, 3DBoBoVR, TVBrowser, and KuaiKa, which are particularly popular in Asia.

Aside from listing specific sites, the MPAA also draws the US Government’s attention to the streaming box problem. The report specifically mentions that Kodi-powered boxes are regularly abused for infringing purposes.

“An emerging global threat is streaming piracy which is enabled by piracy devices preloaded with software to illicitly stream movies and television programming and a burgeoning ecosystem of infringing add-ons,” the MPAA notes.

“The most popular software is an open source media player software, Kodi. Although Kodi is not itself unlawful, and does not host or link to unlicensed content, it can be easily configured to direct consumers toward unlicensed films and television shows.”

Pirate streaming boxes

There are more than 750 websites offering infringing devices, the Hollywood group notes, adding that the rapid growth of this problem is startling. Interestingly, the report mentions TVAddons.ag as a “piracy add-on repository,” noting that it’s currently offline. Whether the new TVAddons is also seen as problematic is unclear.

The MPAA also continues its trend of calling out third-party intermediaries, including hosting providers. These companies refuse to take pirate sites offline following complaints, even when the MPAA views them as blatantly violating the law.

“Hosting companies provide the essential infrastructure required to operate a website,” the MPAA writes. “Given the central role of hosting providers in the online ecosystem, it is very concerning that many refuse to take action upon being notified…”

The Hollywood group specifically mentions Private Layer and Netbrella as notorious markets. CDN provider CloudFlare is also named. As a US-based company, the latter can’t be included in the list. However, the MPAA explains that it is often used as an anonymization tool by sites and services that are mentioned in the report.

Another group of intermediaries that plays a role in fueling piracy (mentioned for the first time) are advertising networks. The MPAA specifically calls out the Canadian company WWWPromoter, which works with sites such as Primewire.ag, Projectfreetv.at and 123movies.to.

“The companies connecting advertisers to infringing websites inadvertently contribute to the prevalence and prosperity of infringing sites by providing funding to the operators of these sites through advertising revenue,” the MPAA writes.

The MPAA’s full report is available here (pdf). The USTR will use this input to make up its own list of notorious markets, which helps to identify current threats and call on foreign governments to take appropriate action.


US Court Orders Dozens of “Pirate” Site Domain Seizures

Post Syndicated from Ernesto original https://torrentfreak.com/us-court-orders-dozens-of-pirate-site-domain-seizures-170927/

ABS-CBN, the largest media and entertainment company in the Philippines, has delivered another strike to pirate sites in the United States.

Last week a federal court in Florida signed a default judgment against 43 websites that offered copyright-infringing streams of ABS-CBN owned movies, including Star Cinema titles.

The order was signed exactly one day after the complaint was filed, in what appears to be a streamlined process.

The media company accused the websites of trademark and copyright infringement by making free streams of its content available without permission. It then asked the court for assistance to shut these sites down as soon as possible.

“Defendants’ websites operating under the Subject Domain Names are classic examples of pirate operations, having no regard whatsoever for the rights of ABS-CBN and willfully infringing ABS-CBN’s intellectual property.

“As a result, ABS-CBN requires this Court’s intervention if any meaningful stop is to be put to Defendants’ piracy,” ABS-CBN wrote.

Instead of a lengthy legal process that can take years to complete, ABS-CBN went for an “ex-parte” request for domain seizures, which means that the websites in question are not notified or involved in the process before the order is issued.

After reviewing the proposed injunction, US District Judge Beth Bloom signed off on it. This means that all the associated registrars must hand over the domain names in question.

“The domain name registrars for the Subject Domain Names shall immediately assist in changing the registrar of record for the Subject Domain Names, to a holding account with a registrar of Plaintiffs’ choosing..,” the order (pdf) reads.

In the days that followed, several streaming-site domains were indeed taken over. Movieonline.io, 1movies.tv, 123movieshd.us, 4k-movie.us, icefilms.ws and others are now linking to a notice page with information about the lawsuit instead.

The notice

Gomovies.es, which is also included, has not been transferred yet, but the operator appears to be aware of the lawsuit as the site now redirects to Gomovies.vg. Other domains, such as Onlinefullmovie.me, Putlockerm.live and Newasiantv.io remain online as well.

While the targeted sites together are good for thousands of daily visitors, they’re certainly not the biggest fish.

That said, the most significant thing about the case is not that these domain names have been taken offline. What stands out is the ability of an ex-parte request from a copyright holder to easily take out dozens of sites in one swoop.

Given ABS-CBN’s legal track record, this is likely not the last effort of this kind. The question now is if others will follow suit.

The full list of targeted domains is as follows.

1 movieonline.io
2 1movies.tv
3 gomovies.es
4 123movieshd.us
5 4k-movie.us
6 desitvflix.net
7 globalpinoymovies.com
8 icefilms.ws
9 jhonagemini.com
10 lambinganph.info
11 mrkdrama.com
12 newasiantv.me
13 onlinefullmovie.me
14 pariwiki.net
15 pinoychannel.live
16 pinoychannel.mobi
17 pinoyfullmovies.net
18 pinoyhdtorrent.com
19 pinoylibangandito.pw
20 pinoymoviepedia.ch
21 pinoysharetv.com
22 pinoytambayanhd.com
23 pinoyteleseryerewind.info
24 philnewsnetwork.com
25 pinoytvrewind.info
26 pinoytzater.com
27 subenglike.com
28 tambayantv.org
29 teleseryi.com
30 thepinoy1tv.com
31 thepinoychannel.com
32 tvbwiki.com
33 tvnaa.com
34 urpinoytv.com
35 vikiteleserye.com
36 viralsocialnetwork.com
37 watchpinoymoviesonline.com
38 pinoysteleserye.xyz
39 pinoytambayan.world
40 lambingan.lol
41 123movies.film
42 putlockerm.live
43 yonip.zone
44 yonipzone.rocks

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches/fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as AWS access and secret key, number of master and worker nodes, and instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post shows using the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster for up to 10 nodes can be created by creating a free account at Get Tectonic for Kubernetes. After signup, a CoreOS License and Pull Secret files are provided on your CoreOS account page. Download these files as they are needed by the installer to boot the cluster.

IAM user permission

The IAM user to create the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server. This allows kubectl to be able to talk directly with the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility such as dig or a web-based DNS lookup. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.
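The same check can be scripted: compare the delegation set returned by Route 53 with the NS records actually visible in DNS. A minimal sketch, where both lists are stand-ins taken from the example output above; in practice the observed list would come from `dig +short NS tectonic.kubernetes-aws.io`:

```shell
# Compare expected vs. observed nameservers (order-insensitive).
# Stand-in data; substitute the real delegation set and dig output.
expected=$(printf '%s\n' ns-1924.awsdns-48.co.uk ns-501.awsdns-62.com | sort)
observed=$(printf '%s\n' ns-501.awsdns-62.com ns-1924.awsdns-48.co.uk | sort)
if [ "$expected" = "$observed" ]; then
  echo "NS records match"
else
  echo "NS records differ" >&2
fi
```

Sorting both lists first makes the comparison independent of the order in which the nameservers are returned.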

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.
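Rather than substituting $PLATFORM by hand, it can be derived from uname. A small sketch; the installer directory layout is taken from the extraction step above:

```shell
# Derive the installer platform ("darwin" or "linux") from the local OS
PLATFORM=$(uname | tr '[:upper:]' '[:lower:]')
INSTALLER="./tectonic/tectonic-installer/$PLATFORM/installer"
echo "$INSTALLER"
```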

Specify the Access Key ID and Secret Access Key for the IAM role that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place updates to the Kubernetes cluster, select the checkbox next to Automated Updates. This also enables updates to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have an option to generate a new key. This allows you to later connect using SSH into the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a six-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all of Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use an alpha support for cluster provisioning using the etcd operator. The etcd operator is used for automated operations of the etcd master nodes for the cluster itself, as well as for etcd instances that are created for application usage. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. At another time, you can download the Terraform assets by choosing Manually boot. This allows you to boot the cluster later.

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the console shows that the Terraform scripts were successfully applied, the domain name was resolved successfully, and that the console has started. The domain works successfully if the DNS resolution worked earlier, and it’s the address where the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally deploys an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console versions are shown under Software Details. Cluster health shows All systems go, which means that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in all namespaces are shown. A specific namespace may be chosen from a menu at the top of the screen. Administration tasks such as managing namespaces, listing nodes, and configuring RBAC are available as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest version of kubectl can be downloaded using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, Kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.
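That backup step can be scripted. A sketch, demonstrated against a scratch directory so nothing real is overwritten; point the variable at ~/.kube for actual use:

```shell
# Back up an existing kubeconfig before moving the downloaded one into place.
KUBEDIR=$(mktemp -d)                            # stand-in for ~/.kube
echo "old-cluster-config" > "$KUBEDIR/config"   # stand-in for the existing file
[ -f "$KUBEDIR/config" ] && cp "$KUBEDIR/config" "$KUBEDIR/config.backup"
echo "new-cluster-config" > "$KUBEDIR/config"   # stand-in for the downloaded file
cat "$KUBEDIR/config.backup"                    # the old config is preserved
```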

Now you can run the commands to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.

Upgrade the Kubernetes cluster

Tectonic allows the in-place upgrade of the cluster. This is an experimental feature as of this release. The clusters can be updated either automatically, or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release and should only be used on clusters that can be easily replaced. It may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory like <cluster-name>_TIMESTAMP. In that directory, run the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

You may have forgotten to download the assets. There is a copy of the assets in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory with the name <cluster-name>_TIMESTAMP contains your assets.
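When several installs have left assets behind, the newest <cluster-name>_TIMESTAMP directory can be picked out by sorting on the name. A sketch with hypothetical directory and timestamp names:

```shell
# Find the most recent assets directory for a given cluster name.
CLUSTERS_DIR=$(mktemp -d)    # stand-in for tectonic/tectonic-installer/darwin/clusters
mkdir -p "$CLUSTERS_DIR/mycluster_2017-08-01" "$CLUSTERS_DIR/mycluster_2017-08-15"
ASSETS=$(ls -d "$CLUSTERS_DIR"/mycluster_* | sort | tail -n 1)
echo "$ASSETS"
```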

Conclusion

This post explained how to manage Kubernetes clusters using the CoreOS Tectonic graphical installer.  For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

Perfect 10 Takes Giganews to Supreme Court, Says It’s Worse Than Megaupload

Post Syndicated from Andy original https://torrentfreak.com/perfect-10-takes-giganews-supreme-court-says-worse-megaupload-170906/

Adult publisher Perfect 10 has developed a reputation for being a serial copyright litigant.

Over the years the company targeted a number of high-profile defendants, including Google, Amazon, Mastercard, and Visa. Around two dozen of Perfect 10’s lawsuits ended in cash settlements and defaults, in the publisher’s favor.

Perhaps buoyed by this success, the company went after Usenet provider Giganews but instead of a company willing to roll over, Perfect 10 found a highly defensive and indeed aggressive opponent. The initial copyright case filed by Perfect 10 alleged that Giganews effectively sold access to Perfect 10 content but things went badly for the publisher.

In November 2014, the U.S. District Court for the Central District of California found that Giganews was not liable for the infringing activities of its users. Perfect 10 was ordered to pay Giganews $5.6m in attorney’s fees and costs. Perfect 10 lost again at the Court of Appeals for the Ninth Circuit.

As a result of these failed actions, Giganews is owed millions by Perfect 10 but the publisher has thus far refused to pay up. That resulted in Giganews filing a $20m lawsuit, accusing Perfect 10 and President Dr. Norman Zada of fraud.

With all this litigation boiling around in the background and Perfect 10 already bankrupt as a result, one might think the story would be near to a conclusion. That doesn’t seem to be the case. In a fresh announcement, Perfect 10 says it has now appealed its case to the US Supreme Court.

“This is an extraordinarily important case, because for the first time, an appellate court has allowed defendants to copy and sell movies, songs, images, and other copyrighted works, without permission or payment to copyright holders,” says Zada.

“In this particular case, evidence was presented that defendants were copying and selling access to approximately 25,000 terabytes of unlicensed movies, songs, images, software, and magazines.”

Referencing an Amicus brief previously filed by the RIAA which described Giganews as “blatant copyright pirates,” Perfect 10 accuses the Ninth Circuit of allowing Giganews to copy and sell trillions of dollars of other people’s intellectual property “because their copying and selling was done in an automated fashion using a computer.”

Noting that “everything is done via computer” these days and with an undertone that the ruling encouraged others to infringe, Perfect 10 says there are now 88 companies similar to Giganews which rely on the automation defense to commit infringement – even involving content owned by people in the US Government.

“These exploiters of other people’s property are fearless. They are copying and selling access to pirated versions of pretty much every movie ever made, including films co-produced by treasury secretary Steven Mnuchin,” Zada says.

“You would think the justice department would do something to protect the viability of this nation’s movie and recording studios, as unfettered piracy harms jobs and tax revenues, but they have done nothing.”

But Zada doesn’t stop at blaming Usenet services, the California District Court, the Ninth Circuit, and the United States Department of Justice for his problems – Congress is to blame too.

“Copyright holders have nowhere to turn other than the Federal courts, whose judges are ridiculously overworked. For years, Congress has failed to provide the Federal courts with adequate funding. As a result, judges can make mistakes,” he adds.

For Zada, those mistakes are particularly notable, especially since at least one other high-profile company was shut down in the most aggressive manner possible for allegedly being involved in less piracy than Giganews.

Pointing to the now-infamous Megaupload case, Perfect 10 notes that the Department of Justice completely shut that operation down, filing charges of criminal copyright infringement against Kim Dotcom and seizing $175 million “for selling access to movies and songs which they did not own.”

“Perfect 10 provided evidence that [Giganews] offered more than 200 times as many full length movies as did megaupload.com. But our evidence fell on deaf ears,” Zada complains.

In contrast, Perfect 10 adds, a California District Court found that Giganews had done nothing wrong, allowed it to continue copying and selling access to Perfect 10’s content, and awarded the Usenet provider $5.63m in attorneys fees.

“Prior to this case, no court had ever awarded fees to an alleged infringer, unless they were found to either own the copyrights at issue, or established a fair use defense. Neither was the case here,” Zada adds.

While Perfect 10 has filed a petition with the Supreme Court, the odds of being granted a review are particularly small. Only time will tell how this case will end, but it seems unlikely that the adult publisher will enjoy a happy ending, one in which it doesn’t have to pay Giganews millions of dollars in attorney’s fees.

Police Intellectual Property Crime Unit Secures Funding Until 2019

Post Syndicated from Andy original https://torrentfreak.com/police-intellectual-property-crime-unit-secures-funding-until-2019-170823/

When compared to the wide range of offenses usually handled by the police, copyright infringement is a relatively rare offense.

Historically most connected to physical counterfeiting, in recent years infringement has regularly featured a significant online component.

Formed four years ago and run by the City of London Police, the Police Intellectual Property Crime Unit (PIPCU) has a mission to tackle IP crime wherever it takes place, with a special online focus. It is tightly linked to the music, movie, and publishing industries, so it can most often be seen protecting their products from infringement.

PIPCU announced its arrival in the summer of 2013 and officially launched a few months later in December 2013, complete with £2.56million in funding from the UK government’s Intellectual Property Office (IPO). However, the unit had already been in operation for some time, writing warning letters to torrent and streaming sites advising them to shut down – or else.

PIPCU’s initial funding secured the future of the unit until June 2015 but in October 2014, well in advance of that deadline, PIPCU secured another £3m from the IPO to fund the unit to September 2017.

Having received £5.56 million in public funds over three years, PIPCU needed to show some bang for its buck. As a result, the unit publicised numerous actions including streaming arrests, attempted domain seizures, torrent site closures and advertising disruptions. PIPCU also shut down several sports streaming and ebook sites plus a large number of proxies.

With August 2017 already upon us, PIPCU should be officially out of funds in a month’s time, but according to the Law Gazette, the unit is going nowhere.

An Intellectual Property Office (IPO) spokesperson told the publication that PIPCU has received £3.32m in additional funding from the government which runs from July 1, 2017, to June 30, 2019 – the unit’s sixth anniversary.

Much of PIPCU’s more recent activity appears to have been focused in two key areas, both operated under its ‘Operation Creative’ banner. The first concerns PIPCU’s Infringing Website List, which aims to deter advertisers from inadvertently funding ‘pirate’ sites.

Earlier this year, PIPCU claimed success after revealing a 64% drop in “mainstream advertising” revenue on 200 unauthorized platforms between January 2016 and January 2017. More recently, PIPCU revealed that gambling advertising, which is often seen on ‘pirate’ platforms, had reduced by 87% on IWL sites over the previous 12 months.

Finally, PIPCU has been taking action alongside local police forces, FACT, Sky, Virgin, BT, and The Premier League, against suppliers of so-called ‘fully loaded’ set-top boxes, many featuring Kodi bundled with illicit third party addons. However, after a fairly sustained initial flurry, the last publicized operation was in February 2017.

Defending anti-netneutrality arguments

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/defending-anti-netneutrality-arguments.html

Last week, activists proclaimed a “NetNeutrality Day”, trying to convince the FCC to regulate NetNeutrality. As a libertarian, I tweeted many reasons why NetNeutrality is stupid. NetNeutrality is exactly the sort of government regulation Libertarians hate most. Somebody tweeted the following challenge, which I thought I’d address here.

The links point to two separate cases.

  • the Comcast BitTorrent throttling case
  • a lawsuit against Time Warner for poor service

The tone of the tweet suggests that my anti-NetNeutrality stance cannot be defended in light of these cases. But of course this is wrong. The short answers are:

  • the Comcast BitTorrent throttling benefits customers
  • poor service has nothing to do with NetNeutrality

The long answers are below.

The Comcast BitTorrent Throttling

The presumption is that any sort of packet-filtering is automatically evil, and against the customer’s interests. That’s not true.

Take GoGoInflight’s internet service for airplanes. They block access to video sites like NetFlix. That’s because they often have as little as 1-mbps for the entire plane, which is enough to support many people checking email and browsing Facebook, but a single person trying to watch video will overload the internet connection for everyone. Therefore, their Internet service won’t work unless they filter video sites.

GoGoInflight breaks a lot of other NetNeutrality rules, such as providing free access to Amazon.com or promotion deals where users of a particular phone get free Internet access that everyone else pays for. And all this is allowed by the FCC, which lets GoGoInflight break NetNeutrality rules because it’s clearly in the customer interest.

Comcast’s throttling of BitTorrent is likewise clearly in the customer interest. Until the FCC stopped them, BitTorrent users were allowed unlimited downloads. Afterwards, Comcast imposed a 300-gigabyte/month bandwidth cap.

Internet access is a series of tradeoffs. BitTorrent causes congestion during prime time (6pm to 10pm). Comcast has to solve it somehow — not solving it wasn’t an option. Their options were:

  • Charge all customers more, so that the 99% not using BitTorrent subsidize the 1% who do.
  • Impose a bandwidth cap, preventing heavy BitTorrent usage.
  • Throttle BitTorrent packets during prime-time hours when the network is congested.

Option 3 is clearly the best. BitTorrent downloads take hours, days, and sometimes weeks. BitTorrent users don’t mind throttling during prime-time congested hours. That’s preferable to the other option, bandwidth caps.

I’m a BitTorrent user, and a heavy downloader (I scan the Internet on a regular basis from cloud machines, then download the results to home, which can often be 100-gigabytes in size for a single scan). I want prime-time BitTorrent throttling rather than bandwidth caps. The EFF/FCC’s action that prevented BitTorrent throttling forced me to move to Comcast Business Class, which doesn’t have bandwidth caps, charging me $100 more a month. It’s why I don’t contribute to the EFF — if they had not agitated for this, taking such choices away from customers, I’d have $1200 more per year to donate to worthy causes.

Ask any user of BitTorrent which they prefer: a 300-gigabyte monthly bandwidth cap or BitTorrent throttling during prime-time congested hours (6pm to 10pm). The FCC’s action did not help Comcast’s customers, it hurt them. Packet-filtering would’ve been a good thing, not a bad thing.
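The tradeoff can be put in rough numbers (the download rate here is hypothetical; the 300-gigabyte cap and the 6pm-10pm window are from the text above). At 25mbps, continuous downloading exhausts the cap in just over a day, while throttling only costs the four congested evening hours:

```shell
# Back-of-the-envelope: bandwidth cap vs. prime-time throttling
awk 'BEGIN {
  cap_gb = 300; rate_mbps = 25                      # rate is a hypothetical example
  gb_per_hour = rate_mbps * 3600 / 8 / 1000         # ~11.25 GB per hour
  printf "hours to exhaust the cap: %.1f\n", cap_gb / gb_per_hour
  printf "prime-time hours throttled per day: %d\n", 4   # 6pm to 10pm
}'
```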

The Time-Warner Case
First of all, no matter how you define the case, it has nothing to do with NetNeutrality. NetNeutrality is about filtering packets, giving some priority over others. This case is about providing slow service for everyone.

Secondly, it’s not true. Time Warner provided the same access speeds as everyone else. Just because they promise 10mbps download speeds doesn’t mean you get 10mbps to NetFlix. That’s not how the Internet works — that’s not how any of this works.

To prove this, look at NetFlix’s connection speed graphs. They show Time Warner Cable is average for the industry. It had the same congestion problems most ISPs had in 2014, and it has the same inability to provide more than 3mbps during prime-time (6pm-10pm) that all ISPs have today.

The YouTube video quality diagnostic pages show Time Warner Cable to be similar to other providers around the country. They also show the prime-time bump between 6pm and 10pm.

Congestion is an essential part of the Internet design. When an ISP like Time Warner promises you 10mbps bandwidth, that’s only “best effort”. There’s no way they can promise a 10mbps stream to everybody on the Internet, especially not to a site like NetFlix that gets overloaded during prime-time.

Indeed, it’s the defining feature of the Internet compared to the old “telecommunications” network. The old phone system guaranteed you a steady 64-kbps stream between any two points in the phone network, but it cost a lot of money. Today’s Internet provides a multi-megabit stream for free video calls (Skype, Facetime) around the world — but with occasional dropped packets because of congestion.

Whatever lawsuit money-hungry lawyers come up with isn’t about how an ISP like Time Warner works. It’s only about how they describe the technology. They work no differently than every other ISP — no differently than how the rest of the Internet works.
Conclusion

The short answer to the above questions is this: Comcast’s BitTorrent throttling benefits customers, and the Time Warner issue has nothing to do with NetNeutrality at all.

The tweet demonstrates what NetNeutrality really means. It has nothing to do with the facts of any case, especially given how often people point to ISP ills that have nothing actually to do with NetNeutrality. Instead, what NetNeutrality is really about is socialism. People are convinced corporations are evil and want the government to run the Internet. The Comcast/BitTorrent case is a prime example of why this is a bad idea: the government’s definition of what customers want is far different from what customers actually want.

TVStreamCMS Brings Pirate Streaming Site Clones to The Masses

Post Syndicated from Ernesto original https://torrentfreak.com/tvstreamcms-brings-pirate-streaming-site-clones-to-the-masses-170723/

In recent years many pirates have moved from more traditional download sites and tools, to streaming portals.

These streaming sites come in all shapes and sizes, and there is fierce competition among site owners to grab the most traffic. More traffic means more money, after all.

While building a streaming site from scratch is quite an operation, there are scripts on the market that allow virtually anyone to set up their own streaming index in just a few minutes.

TVStreamCMS is one of the leading players in this area. To find out more we spoke to one of the people behind the project, who prefers to stay anonymous, but for the sake of this article, we’ll call him Rick.

“The idea came up when I wanted to make my own streaming site. I saw that they make a lot of money, and many people had them,” Rick tells us.

After discovering that there were already a few streaming site scripts available, Rick saw an opportunity. None of the popular scripts at the time offered automatic updates with freshly pirated content, a gap that was waiting to be filled.

“I found out that TVStreamScript and others on ThemeForest like MTDB were available, but these were not automatized. Instead, they were kinda generic and hard to update. We wanted to make our own site, but as we made it, we also thought about reselling it.”

Soon after, TVStreamCMS was born. In addition to using it for his own project, Rick also decided to offer it to others who wanted to run their own streaming portal, for a monthly subscription fee.

TVStreamCMS website

According to Rick, the script’s automated content management system has been its key selling point. The buyers don’t have to update or change much themselves, as pretty much everything is automated.

This has generated hundreds of sales over the years, according to the developer. And several of the sites that run on the script are successfully “stealing” traffic from the original, such as gomovies.co, which ranks well above the real GoMovies in Google’s search results.

“Currently, a lot of the sites competing against the top level streaming sites are using our script. This includes 123movies.co, gomovies.co and putlockers.tv, keywords like yesmovies fmovies gomovies 123movies, even in different Languages like Portuguese, French and Italian,” Rick says.

The pirated videos that appear on these sites come from a database maintained by the TVStreamCMS team. These are hosted on their own servers, but also by third parties such as Google and Openload.

When we looked at one of the sites we noticed a few dead links, but according to Rick, these are regularly replaced.

“Dead links are maintained by our team, DMCA removals are re-uploaded, and so on. This allows users not to worry about re-uploading or adding content daily and weekly as movies and episodes release,” Rick explains.

While this all sounds fine and dandy for prospective pirates, there are some significant drawbacks.

Aside from the obvious legal risks that come with operating one of these sites, there is also a financial hurdle. The full package costs $399 plus a monthly fee of $99, and the basic option is $399 and $49 per month.

TVStreamCMS subscription plans

There are apparently plenty of site owners who don’t mind paying this kind of money. That said, not everyone is happy with the script. TorrentFreak spoke to a source at one of the larger streaming sites, who believes that these clones are misleading their users.

TVStreamCMS is not impressed by the criticism. They know very well what they are doing. Their users asked for these clone templates, and they are delivering them, so both sides can make more money.

“We’re are in the business to make money and grow the sales,” Rick says.

“So we have made templates looking like 123movies, Yesmovies, Fmovies and Putlocker to accommodate the demands of the buyers. A similar design gets buyers traffic and is very, very effective for new sites, as users who come from Google they think it is the real website.”

The fact that 123Movies changed its name to GoMovies and recently changed to a GoStream.is URL, only makes it easier for clones to get traffic, according to the developer.

“This provides us with a lot of business because every time they change their name the buyers come back and want another site with the new name. GoMovies, for instance, and now Gostream,” Rick notes.

Of course, the infringing nature of the clone sites means that there are many copyright holders who would rather see the script and its associated sites gone. Previously, the Hollywood group FACT managed to shut down TVstreamScript, taking down hundreds of sites that relied on it, and it’s likely that TVStreamCMS is being watched too.

For now, however, more and more clones continue to flood the web with pirated streams.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

GoMovies Moves to GoStream.is and Evades Google ‘Ban’

Post Syndicated from Ernesto original https://torrentfreak.com/gomovies-moves-to-gostream-is-and-evades-google-ban-170714/

Pirate video streaming sites are booming. Their relative ease of use through on-demand viewing makes them a viable alternative to P2P file-sharing, which traditionally dominated the piracy arena.

The popular movie streaming site GoMovies, formerly known as 123movies, is one of the most-used streaming sites. While it’s built a steady userbase of millions of users over the past year, the site’s home keeps changing.

The latest move came this week. Going forward, the site will operate from the Icelandic GoStream.is domain name.

While the site hasn’t officially commented on the reason for the move, on Twitter a site representative mentioned a Google ‘penalty’ as the main driver behind the recent change.

Penalized

When we looked at the issue more closely, we found that it’s not so much a penalty, but rather a response to a DMCA takedown request.

Earlier this week the site’s homepage was removed from Google’s search engine following a takedown notice from Warner Bros. This made it harder for users to find the site through Google, as various knockoffs were ranked higher in the search results for the “Gomovies” keyword.

In addition to relocating to a new domain name, the site has also changed the look of its homepage. Instead of a page filled with the most popular movies and TV-shows, it now lists a basic search box.

New GoMovies homepage

The homepage change is likely a response to Google’s search engine removal as well. The previous GoMovies domain was targeted by Warner Bros. because it listed a link to a pirated movie, but such links are no longer present on the new homepage.

That said, users who prefer the old look can still access it with a single click, which is prominently mentioned on the site.

Despite the domain name change, the GoMovies brand hasn’t changed. The logo and all other references to the site’s name remain intact. Confusingly, people who search for GoMovies on Google still won’t see the Gostream.is URL in the top results, but perhaps that will change in the future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Cost of Cloud Storage

Post Syndicated from Tim Nufire original https://www.backblaze.com/blog/cost-of-cloud-storage/

the cost of the cloud as a percentage of revenue

This week, we’re celebrating the one year anniversary of the launch of Backblaze B2 Cloud Storage. Today’s post is focused on giving you a peek behind the curtain about the costs of providing cloud storage. Why? Over the last 10 years, the most common question we get is still “how do you do it?” In this multi-billion dollar, global industry exhibiting exponential growth, none of the other major players seem to be willing to discuss the underlying costs. By exposing a chunk of the Backblaze financials, we hope to provide a better understanding of what it costs to run “the cloud,” and continue our tradition of sharing information for the betterment of the larger community.

Context
Backblaze built one of the industry’s largest cloud storage systems and we’re proud of that accomplishment. We bootstrapped the business and funded our growth through a combination of our own business operations and just $5.3M in equity financing ($2.8M of which was invested into the business – the other $2.5M was a tender offer to shareholders). To do this, we had to build our storage system efficiently and run as a real, self-sustaining, business. After over a decade in the data storage business, we have developed a deep understanding of cloud storage economics.

Definitions
I promise we’ll get into the costs of cloud storage soon, but some quick definitions first:

    Revenue: Money we collect from customers.
    Cost of Goods Sold (“COGS”): The costs associated with providing the service.
    Operating Expenses (“OpEx”): The costs associated with developing and selling the service.
    Income/Loss: What is left after subtracting COGS and OpEx from Revenue.

I’m going to focus today’s discussion on the Cost of Goods Sold (“COGS”): What goes into it, how it breaks down, and what percent of revenue it makes up. Backblaze is a roughly break-even business with COGS accounting for 47% of our revenue and the remaining 53% spent on our Operating Expenses (“OpEx”) like developing new features, marketing, sales, office rent, and other administrative costs that are required for us to be a functional company.
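As a quick sanity check, the split above can be expressed in a few lines of arithmetic (the $100 revenue figure below is hypothetical, chosen only to make the percentages readable):

```python
# A break-even business: COGS is 47% of revenue, OpEx the remaining 53%.
revenue = 100.00          # hypothetical revenue, in dollars
cogs = revenue * 0.47     # cost of goods sold (47% of revenue, per the post)
opex = revenue * 0.53     # operating expenses (the remaining 53%)
income = revenue - cogs - opex

print(f"COGS: ${cogs:.2f}, OpEx: ${opex:.2f}, Income: ${income:.2f}")
# For every $100 of revenue, roughly nothing is left over as profit.
```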

This post’s focus on COGS should let us answer the commonly asked question of “how do you provide cloud storage for such a low cost?”

Breaking Down Cloud COGS

Providing a cloud storage service requires the following components (COGS and OpEx; below we break out COGS):
cloud infrastructure costs as a percentage of revenue

  • Hardware: 23% of Revenue

    Backblaze stores data on hard drives. Those hard drives are “wrapped” with servers so they can connect to the public and store data. We’ve discussed our approach to how this works with our Vaults and Storage Pods. Our infrastructure is purpose built for data storage. That is, we thought about how data storage ought to work, and then built it from the ground up. Other companies may use different storage media like Flash, SSD, or even tape. But it all serves the same function of being the thing that data actually is stored on. For today, we’ll think of all this as “hardware.”

    We buy storage hardware that, on average, will last 5 years (60 months) before needing to be replaced. To account for hardware costs in a way that can be compared to our monthly expenses, we amortize them and recognize 1/60th of the purchase price each month.
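The amortization described above is straight-line over 60 months. A minimal sketch, using a made-up $3,000 purchase price purely for illustration:

```python
# Straight-line amortization over a 60-month (5-year) useful life.
PURCHASE_PRICE = 3000.00   # hypothetical cost of one storage pod, in dollars
LIFESPAN_MONTHS = 60       # per the post: hardware lasts 5 years on average

monthly_expense = PURCHASE_PRICE / LIFESPAN_MONTHS   # 1/60th recognized monthly

def book_value(months_elapsed):
    """Remaining unamortized value after a given number of months."""
    return PURCHASE_PRICE - monthly_expense * min(months_elapsed, LIFESPAN_MONTHS)

print(f"Recognized each month: ${monthly_expense:.2f}")   # $50.00
print(f"Book value after a year: ${book_value(12):.2f}")  # $2400.00
```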

    Storage Pods and hard drives are not the only hardware in our environment. We also have to buy the cabinets and rails that hold the servers, core servers that manage accounts/billing/etc., switches, routers, power strips, cables, and more. (Our post on bringing up a data center goes into some of this detail.) However, Storage Pods and the drives inside them make up about 90% of all the hardware cost.

  • Data Center (Space & Power): 8% of Revenue

    “The cloud” is a great marketing term and one that has caught on for our industry. That said, all “clouds” store data on something physical like hard drives. Those hard drives (and servers) are actual, tangible things that take up actual space on earth, not in the clouds.

    At Backblaze, we lease space in colocation facilities which offer a secure, temperature controlled, reliable home for our equipment. Other companies build their own data centers. It’s the classic rent vs buy decision; but it always ends with hardware in racks in a data center.

    Hardware also needs power to function. Not everyone realizes it, but electricity is a significant cost of running cloud storage. In fact, some data center space is billed simply as a function of an electricity bill.

    Every hard drive storing data adds incremental space and power need. This is a cost that scales with storage growth.

    I also want to make a comment on taxes. We pay sales and property tax on hardware, which is amortized as part of the hardware section above. However, taxes are worth considering alongside the data center, since the location of the hardware drives the amount of tax levied on it.

  • People: 7% of Revenue

    Running a data center requires humans to make sure things go smoothly. The more data we store, the more human hands we need in the data center. All drives will fail eventually. When they fail, “stuff” needs to happen to get a replacement drive physically mounted inside the data center and filled with the customer data (all customer data is redundantly stored across multiple drives). The individuals that are associated specifically with managing the data center operations are included in COGS since, as you deploy more hard drives and servers, you need more of these people.

    Customer Support is the other group of people that are part of COGS. As customers use our services, questions invariably arise. To service our customers and get questions answered expediently, we staff customer support from our headquarters in San Mateo, CA. They do an amazing job! Staffing models, internally, are a function of the number of customers and the rate of acquiring new customers.

  • Bandwidth: 3% of Revenue

    We have over 350 PB of customer data being stored across our data centers. The bulk of that has been uploaded by customers over the Internet (the other option, our Fireball service, is 6 months old and is seeing great adoption). Uploading data over the Internet requires bandwidth – basically, an Internet connection similar to the one running to your home or office. But, for a data center, instead of contracting with Time Warner or Comcast, we go “upstream.” Effectively, we’re buying wholesale.

    Understanding how that dynamic plays out with your customer base is a significant driver of how a cloud provider sets its pricing. Being in business for a decade has explicit advantages here. Because we understand our customer behavior, and have reached a certain scale, we are able to buy bandwidth in sufficient bulk to offer the industry’s best download pricing at $0.02 / Gigabyte (compared to $0.05 from Amazon, Google, and Microsoft).
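At those rates, the difference adds up quickly for large downloads. A small sketch comparing egress costs (the 1 TB restore size is a hypothetical figure):

```python
# Egress cost comparison at the per-gigabyte rates quoted in the post.
BACKBLAZE_RATE = 0.02    # dollars per GB
COMPETITOR_RATE = 0.05   # dollars per GB (Amazon, Google, Microsoft, per the post)

downloaded_gb = 1000     # hypothetical: restoring 1 TB of backed-up data
b2_cost = downloaded_gb * BACKBLAZE_RATE
other_cost = downloaded_gb * COMPETITOR_RATE
savings_pct = 100 * (other_cost - b2_cost) / other_cost

print(f"B2: ${b2_cost:.2f}  others: ${other_cost:.2f}  saving: {savings_pct:.0f}%")
```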

    Why does optimizing download bandwidth charges matter for customers of a data storage business? Because it has a direct relationship to you being able to retrieve and use your data, which is important.

  • Other Fees: 6% of Revenue

    We have grouped the remaining costs inside of “Other Fees.” This includes fees we pay to our payment processor as well as the costs of running our Restore Return Refund program.

    A payment processor is required for businesses like ours that need to accept credit cards securely over the Internet. The bulk of the money we pay to the payment processor is actually passed through to pay the credit card companies like AmEx, Visa, and Mastercard.

    The Restore Return Refund program is a unique program for our consumer and business backup business. Customers can download any and all of their files directly from our website. We also offer customers the ability to order a hard drive with some or all of their data on it, we then FedEx it to the customer wherever in the world she is. If the customer chooses, she can return the drive to us for a full refund. Customers love the program, but it does cost Backblaze money. We choose to subsidize the cost associated with this service in an effort to provide the best customer experience we can.

The Big Picture

At the beginning of the post, I mentioned that Backblaze is, effectively, a break even business. The reality is that our products drive a profitable business but those profits are invested back into the business to fund product development and growth. That means growing our team as the size and complexity of the business expands; it also means being fortunate enough to have the cash on hand to fund “reserves” of extra hardware, bandwidth, data center space, etc. In our first few years as a bootstrapped business, having sufficient buffer was a challenge. Having weathered that storm, we are particularly proud of being in a financial place where we can afford to make things a bit more predictable.

All this adds up to answer the question of how Backblaze has managed to carve out its slice of the cloud market – a market that is a key focus for some of the largest companies of our time. We have innovated a novel, purpose built storage infrastructure with our Vaults and Pods. That infrastructure allows us to keep costs very, very low. Low costs enable us to offer the world’s most affordable, reliable cloud storage.

Does reliable, affordable storage matter? For a company like Vintage Aerial, it enables them to digitize 50 years’ worth of aerial photography of rural America and share that national treasure with the world. Having the best download pricing in the storage industry means Austin City Limits, a PBS show out of Austin, can digitize and preserve over 550 concerts.

We think offering purpose built, affordable storage is important. It empowers our customers to monetize existing assets, make sure data is backed up (and not lost), and focus on their core business because we can handle their data storage needs.

The post The Cost of Cloud Storage appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

How The Intercept Outed Reality Winner

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/how-intercept-outed-reality-winner.html

Today, The Intercept released documents on election tampering from an NSA leaker. Later, the arrest warrant request for an NSA contractor named “Reality Winner” was published, showing how they tracked her down because she had printed out the documents and sent them to The Intercept. The document posted by The Intercept isn’t the original PDF file, but a PDF containing images of the printed version, which was later scanned back in.

As the warrant says, she confessed while interviewed by the FBI. Had she not confessed, the documents still contained enough evidence to convict her: the printed document was digitally watermarked.

The problem is that most new printers print nearly invisible yellow dots that record exactly when and where any document is printed. Because the NSA logs all printing jobs on its printers, it can use this to match up precisely who printed the document.

In this post, I show how.

You can download the document from the original article here. You can then open it in a PDF viewer, such as the normal “Preview” app on macOS. Zoom into some whitespace on the document, and take a screenshot of this. On macOS, hit [Command-Shift-3] to take a screenshot of the full screen. There are yellow dots in this image, but you can barely see them, especially if your screen is dirty.

We need to highlight the yellow dots. Open the screenshot in an image editor, such as the “Paintbrush” program for macOS. Now use the option to “Invert Colors” in the image, to get something like this. You should see a roughly rectangular checkerboard pattern in the whitespace.
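The “Invert Colors” step works because each 8-bit channel value c is replaced by 255 - c: white paper turns black, and a barely visible yellow dot becomes a dark blue dot with high contrast. A minimal sketch, with no imaging library required (the faint-yellow value is made up):

```python
# Invert an RGB pixel: each 8-bit channel value c becomes 255 - c.
def invert(pixel):
    return tuple(255 - c for c in pixel)

white = (255, 255, 255)         # blank paper background
faint_yellow = (250, 250, 190)  # a barely visible tracking dot (made-up value)

print(invert(white))         # (0, 0, 0): the background turns black
print(invert(faint_yellow))  # (5, 5, 65): the dot is now dark blue, easy to
                             # spot against the black background
```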

It’s upside down, so we need to rotate it 180 degrees, or flip-horizontal and flip-vertical:

Now we go to the EFF page and manually click on the pattern so that their tool can decode the meaning:

This produces the following result:

The document leaked by the Intercept was from a printer with model number 54, serial number 29535218. The document was printed on May 9, 2017 at 6:20. The NSA almost certainly has a record of who used the printer at that time.
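The decoding principle behind the EFF tool is that each column of the dot grid is read as a binary number, with each row standing for one bit. The sketch below shows only that column-to-integer step; the dot positions are made up, and the mapping of columns to fields (minute, hour, serial number, parity) follows the EFF’s published chart rather than anything in this code:

```python
# Read one column of the tracking-dot grid as a binary-coded value.
# Illustrative only: real column meanings come from the EFF's DocuColor chart.
def column_value(dot_rows):
    """dot_rows: set of row indices (row 1 = least significant bit) with a dot."""
    return sum(2 ** (row - 1) for row in dot_rows)

# A hypothetical column with dots in rows 3 and 5 encodes 4 + 16 = 20;
# in the grid's "minute" column that would mean a document printed at :20.
print(column_value({3, 5}))  # 20
```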

The situation is similar to how Vice outed the location of John McAfee, by publishing JPEG photographs of him with the EXIF GPS coordinates still hidden in the file. Or it’s how PDFs are often redacted by adding a black bar on top of the image, leaving the underlying contents still in the file for people to read, such as in this NYTimes accident with a Snowden document. Or how opening a Microsoft Office document, then accidentally saving it, leaves fingerprints identifying you behind, as repeatedly happened with the Wikileaks election leaks. These sorts of failures are common with leaks. To fix this yellow-dot problem, use a black-and-white printer, black-and-white scanner, or convert to black-and-white with an image editor.

Copiers/printers have two features put in there by the government to be evil to you. The first is that scanners/copiers (when using scanner feature) recognize a barely visible pattern on currency, so that they can’t be used to counterfeit money, as shown on this $20 below:

The second is that when they print things out, they include these invisible dots, so documents can be tracked. In other words, those dots on bills prevent them from being scanned in, and the dots produced by printers help the government track what was printed out.

Yes, this code the government forces into our printers is a violation of our 3rd Amendment rights.




Comments:
https://news.ycombinator.com/item?id=14494818

Open source energy monitoring using Raspberry Pi

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/open-source-energy-monitoring-raspberry-pi/

OpenEnergyMonitor, who make open-source tools for energy monitoring, have been using Raspberry Pi since we launched in 2012. Like Raspberry Pi, they manufacture their hardware in Wales and send it to people all over the world. We invited co-founder Glyn Hudson to tell us why they do what they do, and how Raspberry Pi helps.

Hi, I’m Glyn from OpenEnergyMonitor. The OpenEnergyMonitor project was founded out of a desire for open-source tools to help people understand and relate to their use of energy, their energy systems, and the challenge of sustainable energy.

Photo: an emonPi energy monitoring unit in an aluminium case with an aerial and an LCD display, a mobile phone showing daily energy use as a histogram, and a bunch of daffodils in a glass bottle

The next 20 years will see a revolution in our energy systems, as we switch away from fossil fuels towards a zero-carbon energy supply.

By using energy monitoring, modelling, and assessment tools, we can take an informed approach to determine the best energy-saving measures to apply. We can then check to ensure solutions achieve their expected performance over time.

We started the OpenEnergyMonitor project in 2009, and the first versions of our energy monitoring system used an Arduino with Ethernet Shield, and later a Nanode RF with an embedded Ethernet controller. These early versions were limited by a very basic TCP/IP stack; running any sort of web application locally was totally out of the question!

I can remember my excitement at getting hold of the very first version of the Raspberry Pi in early 2012. Within a few hours of tearing open the padded envelope, we had Emoncms (our open-source web logging, graphing, and visualisation application) up and running locally on the Raspberry Pi. The Pi quickly became our web-connected base station of choice (emonBase). The following year, 2013, we launched the RFM12Pi receiver board (now updated to RFM69Pi). This allowed the Raspberry Pi to receive data via low-power RF 433Mhz from our emonTx energy monitoring unit, and later from our emonTH remote temperature and humidity monitoring node.
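Readings from a node such as the emonTx arrive as a node ID followed by a list of sensor values. The sketch below parses such a payload into named readings; the payload layout and field names here are illustrative assumptions, not OpenEnergyMonitor’s actual wire format:

```python
# Parse a space-separated sensor payload into named readings.
# The field names and payload layout are illustrative assumptions only.
FIELDS = ["power1_w", "power2_w", "vrms_v", "temp_c"]

def parse_payload(line):
    parts = line.split()
    node_id = int(parts[0])                 # first token: node ID
    values = [float(v) for v in parts[1:]]  # remaining tokens: sensor readings
    return node_id, dict(zip(FIELDS, values))

node, readings = parse_payload("10 243.0 87.5 241.3 19.6")
print(node, readings["power1_w"], readings["temp_c"])  # 10 243.0 19.6
```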

Diagram: communication between OpenEnergyMonitor monitoring units, base station and web interface

In 2015 we went all-in with Raspberry Pi when we launched the emonPi, an all-in-one Raspberry Pi energy monitoring unit, via Kickstarter. Thanks to the hard work of the Raspberry Pi Foundation, the emonPi has enjoyed several upgrades: extra processing power from the Raspberry Pi 2, then even more power and integrated wireless LAN thanks to the Raspberry Pi 3. With all this extra processing power, we have been able to build an open software stack including Emoncms, MQTT, Node-RED, and openHAB, allowing the emonPi to function as a powerful home automation hub.

Screenshot: Emoncms Apps interface to emonPi home automation hub, with histogram of daily electricity use

Emoncms Apps interface to emonPi home automation hub

Inspired by the Raspberry Pi Foundation, we manufacture and assemble our hardware in Wales, UK, and ship worldwide via our online store.

All of our work is fully open source. We believe this is a better way of doing things: we can learn from and build upon each other’s work, creating better solutions to the challenges we face. Using Raspberry Pi has allowed us to draw on the expertise and work of many other projects. With lots of help from our fantastic community, we have built an online learning resource section of our website to help others get started: it covers things like basic AC power theory, Arduino, and the bigger picture of sustainable energy.

To learn more about OpenEnergyMonitor systems, take a look at our Getting Started User Guide. We hope you’ll join our community.

The post Open source energy monitoring using Raspberry Pi appeared first on Raspberry Pi.

Kim Dotcom Says Family Trust Could Sue Mega Investor

Post Syndicated from Andy original https://torrentfreak.com/kim-dotcom-says-family-trust-could-sue-mega-investor-170511/

One year after the raid on Megaupload and his sprawling mansion, Kim Dotcom fought back in grand fashion by launching new file-hosting site Mega.

It was a roaring success, signing up hundreds of thousands of users in the first few hours alone. Mega, it seemed, might soon be nipping at the heels of Megaupload’s unprecedented traction.

While Mega continued to grow, in July 2015 Dotcom indicated that his previously warm connections with the site may have soured.

“I’m not involved in Mega anymore. Neither in a managing nor in a shareholder capacity,” he said.

Dotcom went on to claim that a then-unnamed Chinese investor (wanted in China for fraud) had used straw-men and businesses to accumulate more and more Mega shares, shares that were later seized as part of an investigation by the New Zealand government.

Mega bosses angrily denied that there had been any hostile takeover, noting that “those shareholders” who had decided not to subscribe to recent issues had “…been diluted accordingly. That has been their choice.”

But a year later and the war of words between Dotcom and Mega was still simmering, with the Chinese investor now being openly named as Bill Liu.

A notorious high-roller who allegedly gambled $293m at New Zealand’s SkyCity casino, Liu was soon being described by Dotcom as China’s “fifth most-wanted criminal” due to a huge investigation into the businessman’s dealings taking place back home.

Mega saw things a little differently, however.

“Mr Liu has a shareholding interest but has no management or board position so he certainly doesn’t control Mega,” the company insisted at the time.

Dotcom disagreed strongly with that assertion and this week, more than a year later, the topic has raised its head yet again.

“In a nutshell, Bill Liu has taken control of Mega by using straw men to buy shares for him, ultimately giving him the majority on the board,” Dotcom informs TF.

In common with the raid on Megaupload, the Mega/Liu backstory is like something out of a Hollywood movie.

This week the NZ Herald published an amazing report detailing Liu’s life since he first entered New Zealand in 2001. A section explains how he first got involved with Mega.

Tony Lentino, who was the founder of domain name registrar Instra, was also Mega’s first CEO. It’s reported that he later fell out with Dotcom and wanted to sell his shares in the company.

Bill Liu wanted to invest so Lentino went to meet him at his penthouse apartment on the 35th floor of the Metropolis tower in central Auckland.

Lentino later told police that Liu opened a bottle of Penfolds Grange wine during the meeting – no joke at $800 per bottle. That developed into a discussion about Liu buying Lentino’s stake in Mega and a somewhat interesting trip back home for Lentino.

“You want one of my cars to take home?” Liu allegedly asked Lentino.

The basement contained a Porsche, a Bentley and a Rolls-Royce – and Lentino was invited to take his pick. He took the NZ$400,000 Rolls as part of the NZ$4.2 million share in Mega he transferred to Liu.

Well, not quite to Liu, directly at least.

“When it came time to sign the deal, the shares were to be split into two parcels: one in the name of Zhao Wu Shen, a close friend of [Liu], and a trust company,” NZ Herald reports.

“It was the third transaction where Yan [as Liu is also known] had been quietly buying into Mega – nothing was in his name, but he now controlled 18.8 per cent.”

It is not clear how much Liu currently owns but Lentino later told police (who believed that Liu was hiding his assets) that the Chinese businessman was the “invisible CEO” of Mega.

Speaking with TF this week, Dotcom says that Liu achieved his status by holding Mega back.

“Liu used his power to prevent Mega from monetizing its traffic via advertising sales or premium account sales and by doing so he created an artificial situation in which Mega had to raise more money to survive,” Dotcom says.

“He then pumped double-digit millions of dollars into the business via his straw men in order to dilute all other shareholders to almost zero.”

Dotcom says that Mega could’ve been “instantly profitable,” but instead Liu intentionally forced the company into a loss-making situation, safe in the knowledge he could “turn on profitability at the push of a button.”

Dotcom says Liu chose not to do that until he directly or indirectly owned “almost all” of the shares in Mega. That, he says, came at the expense of his family, who had invested in Mega.

“The family trust that was setup for the benefit of my children owned the majority of Mega until Bill Liu entered the stage with his unlawful actions to take control of the company,” Dotcom says.

“He ran it at a loss when it could have been profitable, and then diluted other shareholders.”

According to Dotcom, the people behind his family trust are now considering their options, including legal action against Liu and others.

“The trustees of the family trust are now considering legal action against all parties involved in this dilution scam in light of the new information that has become public today from other court proceedings against Bill Liu,” Dotcom concludes.

It’s difficult to find a more colorful character than Dotcom, but Bill Liu certainly gives Dotcom a run for his money. His story can be found here, it’s almost unbelievable.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Foxtel Targets Pirate Streaming Sites in New ISP Blocking Case

Post Syndicated from Andy original https://torrentfreak.com/foxtel-targets-pirate-streaming-sites-in-new-isp-blocking-case-170508/

When the Australian government introduced new legislation to allow ‘pirate’ sites to be blocked Down Under, there was never any doubt that rightsholders would put it to use.

In December last year, following a lawsuit brought by Roadshow Films, Foxtel, Disney, Paramount, Columbia, and 20th Century Fox, the Federal Court ordered ISPs to block The Pirate Bay, Torrentz, TorrentHound, IsoHunt and the streaming service SolarMovie.

This February the same rightsholders were back again, this time with even more targets in mind including ExtraTorrent, RarBG, Demonoid, LimeTorrents, YTS and EZTV, plus streaming portals 123Movies, CouchTuner, Icefilms, Movie4K, PrimeWire, Viooz, Putlocker and many more.

With blocking efforts gathering momentum, the fifth case seeking injunctions against pirate sites has just hit Australia’s Federal Court. It’s the second to be filed by Foxtel and again targets streaming sites including Yes Movies, Los Movies, Watch Series and Project Free TV.

In common with earlier cases, ISPs named in the latest application include TPG, Telstra, Optus and Vocus/M2. Once various subsidiaries are included, blocking becomes widespread across Australia, often encompassing dozens of smaller providers.

Speaking with ABC, a Foxtel spokesperson said the company has confidence that the Federal Court will ultimately order the sites to be blocked.

“Foxtel believes that the new site blocking regime is an effective measure in the fight to prevent international operators illegitimately profiting from the creative endeavours of others,” he said.

Indeed, the earlier cases brought by both the studios and record companies have pioneered a streamlined process that can be tackled relatively easily by rightsholders and presented to the court in a non-confrontational and easily understood format.

ISPs are not proving too much of a hindrance either, now that the issue of costs appears to be behind them. In Foxtel’s earlier case involving The Pirate Bay, the judge said that ISPs must be paid AUS$50 per domain blocked. That now appears to be the standard.

So what we have here is a quickly maturing process that has already developed into somewhat of a cookie-cutter site-blocking mechanism.

Applications are made against a particular batch of sites and after the court assesses the evidence, an injunction is handed down. If further similar and related sites (such as proxies and mirrors) need to be blocked, those are dealt with in a separate and simplified process.

That was highlighted last week when an application by Universal Music, Warner Music, Sony Music and J Albert & Son resulted in a range of KickassTorrents spin-off sites being approved for blocking by the Federal Court. The ISPs in question, 20 in total, have been given two weeks to block the sites.

Whether this will have the desired effect remains to be seen. Australians are well-versed in unblocking solutions such as VPNs. Ironically, most learned of their existence when trying to gain access to legal services such as Netflix, which were available overseas for years before hitting Aussie shores.

Since that has now been remedied with a local launch, rightsholders and companies such as Foxtel are hoping that pirate services will be less attractive options.

“We trust that Australians recognize that there are increasing numbers of ways to access content in a timely manner and at reasonable prices. [This] ensures that revenue goes back to the people who create and invest in original ideas,” a Foxtel spokesperson said.

If the United Kingdom is any template (and all signs suggest that it is), expect hundreds of similar ‘pirate’ sites to be blocked in Australia in the coming months.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The RIAA is Now Copyright Troll Rightscorp’s Biggest Customer

Post Syndicated from Andy original https://torrentfreak.com/the-riaa-is-now-copyright-troll-rightscorps-biggest-customer-170424/

Nurturing what appears to be a failing business model, anti-piracy outfit Rightscorp has been on life-support for a number of years, never making a cent while losing millions of dollars.

As a result, every annual report filed by the company is expected to reveal yet more miserable numbers. This year’s, filed two weeks late a few days ago, doesn’t break the trend. It is, however, a particularly interesting read.

For those out of the loop, Rightscorp generates revenue from monitoring BitTorrent networks, logging infringements, and sending warning notices to ISPs. It hopes those ISPs will forward notices to customers who are asked to pay $20 or $30 per offense. Once paid, Rightscorp splits this revenue with its copyright holder customers.

The company’s headline sales figures for 2016 are somewhat similar to those of the previous year. In 2015 the company generated $832,215 in revenue but in 2016 that had dropped to $778,215. While yet another reduction in revenue won’t be welcome, the company excelled in trimming its costs.

In 2015, Rightscorp’s total operating costs were almost $5.47m, something which led the company to file an eye-watering $4.63m operational loss.

In 2016, the company somehow managed to reduce its costs to ‘just’ $2.73m, a vast improvement over the previous year. But, despite the effort, Rightscorp still couldn’t make money in 2016. In its latest accounts, the company reveals an operational loss of $1.95m and little salvation on the bottom line.

“During the year ended December 31, 2016, the Company incurred a net loss of $1,355,747 and used cash in operations of $807,530, and at December 31, 2016, the Company had a stockholders’ deficit of $2,092,060,” the company reveals.

While a nose-diving Rightscorp has been a familiar story in recent years, there are some nuggets of information in 2016’s report that make it stand out.

According to Rightscorp, in 2014 BMG Rights Management accounted for 76% of the company’s sales, with Warner Bros. Entertainment making up a token 13%. In 2015 it was a similar story, but during 2016, big developments took place with a brand new and extremely important customer.

“For the year ended December 31, 2016, our contract with Recording Industry Association of America accounted for approximately 44% of our sales, and our contract with BMG Rights Management accounted for 23% of our sales,” the company’s report reveals.

The fact that the RIAA is now Rightscorp’s biggest customer to the tune of $342,000 in business during 2016 is a pretty big reveal, not only for the future of the anti-piracy company but also the interests of millions of BitTorrent users around the United States.

While it’s certainly possible that the RIAA plans to start sending settlement demands to torrent users (Warner has already done so), there are very clear signs that the RIAA sees value in Rightscorp elsewhere. As shown in the table below, between 2015 and 2016 there has been a notable shift in how Rightscorp reports its revenue.

In 2015, all of Rightscorp’s revenue came from copyright settlements. In 2016, roughly 50% of its revenue (a little over the amount accounted for by the RIAA’s business) is listed as ‘consulting revenue’. It seems more than likely that the lion’s share of this revenue came from the RIAA, but why?

On Friday the RIAA filed a big lawsuit against Texas-based ISP Grande Communications. Detailed here, the multi-million suit accuses the ISP of failing to disconnect subscribers accused of infringement multiple times.

The data being used to prosecute that case was obtained by the RIAA from Rightscorp, who in turn collected that data from BitTorrent networks. The company obtained a patent under its previous Digital Rights Corp. guise which specifically covers repeat infringer identification. It has been used successfully in the ongoing case against another ISP, Cox Communications.

In short, the RIAA seems to be planning to do to Grande Communications what BMG and Rightscorp have already done to Cox. They will be seeking to show that Grande knew that its subscribers were multiple infringers yet failed to disconnect them from the Internet. This inaction, they will argue, means that Grande loses its protection from liability under the safe harbor provisions of the DMCA.

Winning the case against Grande Communications is extremely important for the RIAA and for reasons best understood by the parties involved, it clearly places value on the data held by Rightscorp. Whether the RIAA will pay another few hundred thousand dollars to the anti-piracy outfit in 2017 remains to be seen, but Rightscorp will be hoping so as it’s desperate for the cash.

The company’s year-end filing raises “substantial doubt about the Company’s ability to continue as a going concern” while noting that its management believes that the company will need at least another $500,000 to $1,000,000 to fund operations in 2017.

This new relationship between the RIAA and Rightscorp is an interesting one and one that’s likely to prove controversial. Grande Communications is being sued today, but the big question is which other ISPs will follow in the months and years to come.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Encrypt and Decrypt Amazon Kinesis Records Using AWS KMS

Post Syndicated from Temitayo Olajide original https://aws.amazon.com/blogs/big-data/encrypt-and-decrypt-amazon-kinesis-records-using-aws-kms/

Customers with strict compliance or data security requirements often require data to be encrypted at all times, including at rest or in transit within the AWS cloud. This post shows you how to build a real-time streaming application using Kinesis in which your records are encrypted while at rest or in transit.

Amazon Kinesis overview

The Amazon Kinesis platform enables you to build custom applications that analyze or process streaming data for specialized needs. Amazon Kinesis can continuously capture and store terabytes of data per hour from hundreds of thousands of sources such as website clickstreams, financial transactions, social media feeds, IT logs, and transaction tracking events.

Through the use of HTTPS, Amazon Kinesis Streams encrypts data in flight between clients and the service, which protects against someone eavesdropping on records being transferred. However, the records encrypted by HTTPS are decrypted once the data enters the service. This data is stored at rest for 24 hours (configurable up to 168 hours) to ensure that your applications have enough headroom to process, replay, or catch up if they fall behind.

Walkthrough

In this post you build encryption and decryption into sample Kinesis producer and consumer applications using the Amazon Kinesis Producer Library (KPL), the Amazon Kinesis Consumer Library (KCL), AWS KMS, and the aws-encryption-sdk. The methods and the techniques used in this post to encrypt and decrypt Kinesis records can be easily replicated into your architecture. Some constraints:

  • AWS charges for KMS API requests used for encryption and decryption; for more information, see AWS KMS Pricing.
  • You cannot use Amazon Kinesis Analytics to query Amazon Kinesis Streams with records encrypted by clients in this sample application.
  • If your application requires low-latency processing, note that client-side encryption and decryption add a small amount of latency.

The following diagram shows the architecture of the solution.

Encrypting the records at the producer

Before you call the PutRecord or PutRecords API, you will encrypt the string record by calling KinesisEncryptionUtils.toEncryptedString.

In this example, we used a sample stock sales ticker object:

{"tickerSymbol": "AMZN", "salesPrice": "900", "orderId": "300", "timestamp": "2017-01-30 02:41:38"}
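For illustration, a minimal POJO matching this JSON might look like the following. The field names mirror the sample object and the toString format matches the log output shown later in the post; the class itself is a sketch, not necessarily the repository's exact code.

```java
// Hypothetical sketch of the TickerSalesObject POJO serialized above;
// field names mirror the sample JSON, everything else is illustrative.
public class TickerSalesObject {
    private final String tickerSymbol;
    private final String salesPrice;
    private final String orderId;
    private final String timeStamp;

    public TickerSalesObject(String tickerSymbol, String salesPrice,
                             String orderId, String timeStamp) {
        this.tickerSymbol = tickerSymbol;
        this.salesPrice = salesPrice;
        this.orderId = orderId;
        this.timeStamp = timeStamp;
    }

    @Override
    public String toString() {
        // Matches the "TickerSalesObject{...}" format in the sample log lines
        return "TickerSalesObject{tickerSymbol='" + tickerSymbol
                + "', salesPrice='" + salesPrice
                + "', orderId='" + orderId
                + "', timeStamp='" + timeStamp + "'}";
    }
}
```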

The method (KinesisEncryptionUtils.toEncryptedString) call takes four parameters:

  • com.amazonaws.encryptionsdk.AwsCrypto
  • stock sales ticker object
  • com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider
  • java.util.Map of an encryption context

A ciphertext is returned to the caller, which is then checked for size by calling KinesisEncryptionUtils.calculateSizeOfObject. Encryption increases the size of an object, so to prevent the record from being throttled, the size of the payload (one or more records) is validated to ensure it is not greater than 1 MB. In this example, encrypted records whose payload exceeds 1 MB are logged as a warning. If the size is below the limit, addUserRecord is called if you are using the KPL, or PutRecord/PutRecords if you are using the Kinesis Streams API.
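The size check itself can be sketched in plain Java. The helper name and the 1,024,000-byte threshold come from the sample code below; the UTF-8 byte-length implementation is an assumption about what calculateSizeOfObject does internally.

```java
import java.nio.charset.StandardCharsets;

// Sketch of the payload size check described above. The UTF-8 byte-length
// measurement is assumed; the threshold matches the sample's 1024000 check.
public class RecordSizeCheck {
    static final int MAX_PAYLOAD_BYTES = 1024000;

    static int calculateSizeOfObject(String record) {
        // Encrypted records are base64-style strings, so UTF-8 byte
        // length is a reasonable proxy for the on-the-wire payload size
        return record.getBytes(StandardCharsets.UTF_8).length;
    }

    static boolean mayBeThrottled(String encryptedRecord) {
        return calculateSizeOfObject(encryptedRecord) > MAX_PAYLOAD_BYTES;
    }
}
```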

Example: Encrypting records with KPL

//Encrypting the records
String encryptedString = KinesisEncryptionUtils.toEncryptedString(crypto, ticker, prov, context);
log.info("Size of encrypted object is : " + KinesisEncryptionUtils.calculateSizeOfObject(encryptedString));
//check if size of record is greater than 1MB
if (KinesisEncryptionUtils.calculateSizeOfObject(encryptedString) > 1024000)
    log.warn("Record added is greater than 1MB and may be throttled");
//UTF-8 encoding of encrypted record
ByteBuffer data = KinesisEncryptionUtils.toEncryptedByteStream(encryptedString);
//Adding the encrypted record to stream
ListenableFuture<UserRecordResult> f = producer.addUserRecord(streamName, randomPartitionKey(), data);
Futures.addCallback(f, callback);

In the above code, the example sales ticker record is passed to KinesisEncryptionUtils.toEncryptedString and an encrypted record is returned. The encryptedString value is also passed to KinesisEncryptionUtils.calculateSizeOfObject, and the size of the encrypted payload is returned and checked to see if it is less than 1 MB. If it is, the payload is then UTF-8 encoded (KinesisEncryptionUtils.toEncryptedByteStream) and sent to the stream for processing.
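The randomPartitionKey() call in the snippet above is another of the sample's helpers. A plausible implementation (an assumption, not necessarily the repository's exact code) simply generates a random UUID, which spreads records evenly across the stream's shards:

```java
import java.util.UUID;

// Hypothetical implementation of the randomPartitionKey() helper used in
// the producer snippets: a random UUID distributes records across shards.
public class PartitionKeys {
    static String randomPartitionKey() {
        return UUID.randomUUID().toString();
    }
}
```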

Example: Encrypting the records with Streams PutRecord

//Encrypting the records
String encryptedString = KinesisEncryptionUtils.toEncryptedString(crypto, ticker, prov, context);
log.info("Size of encrypted object is : " + KinesisEncryptionUtils.calculateSizeOfObject(encryptedString));
//check if size of record is greater than 1MB
if (KinesisEncryptionUtils.calculateSizeOfObject(encryptedString) > 1024000)
    log.warn("Record added is greater than 1MB and may be throttled");
//UTF-8 encoding of encrypted record
ByteBuffer data = KinesisEncryptionUtils.toEncryptedByteStream(encryptedString);
putRecordRequest.setData(data);
putRecordRequest.setPartitionKey(randomPartitionKey());
//putting the record into the stream
kinesis.putRecord(putRecordRequest);

Verifying that records are encrypted

After the call to KinesisEncryptionUtils.toEncryptedString, you can print out the encrypted string record just before UTF-8 encoding. An example of what is printed to standard output when running this sample application is shown below.

[main] INFO kinesisencryption.streams.EncryptedProducerWithStreams - String Record is TickerSalesObject{tickerSymbol='FB', salesPrice='184.285409142', orderId='2a0358f1-9f8a-4bbe-86b3-c2929047e15d', timeStamp='2017-01-30 02:41:38'} and Encrypted Record String is AYADeMf6zmVg9JvIkGNv5M39rhUAbgACAAdLaW5lc2lzAARjYXJzABVhd3MtY3J5cHRvLXB1YmxpYy1rZXkAREFpUkpCaG1UOFQ3UTZQZ253dm9FSU9iUDZPdE1xTHdBZ1JjNlZxN2doMDZ3QlBEWUZndWdJSEFKaXNvT0ZPUGsrdz09AAEAB2F3cy1rbXMAS2Fybjphd3M6a21zOnVzLWVhc3QtMTo1NzM5MDY1ODEwMDI6a2V5LzM3ZGM5MGRjLTNmMWMtNGE3Ny1hNTFkLWE2NTNiMTczZmNkYgCnAQEBAHgbPoaYTiF/oIMp49yPBkZmVVotylZpUqwkkzJJicLjLQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDCCYBk+hfB3tOGVx7QIBEIA7FqaEcOWpic+gKNeT+dUe4yttB9dsZSFPAUTlz2L2zlyLXSLMh1otRH24SO485ov+TCTtRCgiA8a9rYQCAAAAAAwAABAArlGWPO8BavNSJIpJOtJekRUhOwbM+WM1NBVXB/////8AAAABXNZnRND3J7u8EZx3AAAAkfSxVPMYUv0Ovrd4AIUTmMcaiR0Z+IcJNAXqAhvMmDKpsJaQG76Q6pYExarolwT+6i87UOi6TGvAiPnH74GbkEniWe66rAF6mOra2JkffK6pBdhh95mEOGLaVPBqs2jswUTfdcBJQl9NEb7wx9XpFX8fNDF56Vly7u6f8OQ7lY6fNrOupe5QBFnLvwehhtogd72NTQ/yEbDDoPKUZN3IlWIEAGYwZAIwISFw+zdghALtarsHSIgPMs7By7/Yuda2r3hqSmqlCyCXy7HMFIQxHcEILjiLp76NAjB1D8r8TC1Zdzsfiypi5X8FvnK/6EpUyFoOOp3y4nEuLo8M2V/dsW5nh4u2/m1oMbw=

You can also verify that the record stayed encrypted in Streams by printing out the UTF-8 decoded received record immediately after the getRecords API call. An example of the print output when running the sample application is shown below.

[Thread-2] INFO kinesisencryption.utils.KinesisEncryptionUtils - Verifying object received from stream is encrypted. -Encrypted UTF-8 decoded : AYADeBJz/kt7Fm3L1lvS8Wy8jhAAbgACAAdLaW5lc2lzAARjYXJzABVhd3MtY3J5cHRvLXB1YmxpYy1rZXkAREFrM2N4K2s1ODJuOGVlNWF3TVJ1dk1UUHZQc2FHeGoxQisxb09kNWtDUExHYjJDS0lMZW5LSnlYakRmdFR4dzQyUT09AAEAB2F3cy1rbXMAS2Fybjphd3M6a21zOnVzLWVhc3QtMTo1NzM5MDY1ODEwMDI6a2V5LzM3ZGM5MGRjLTNmMWMtNGE3Ny1hNTFkLWE2NTNiMTczZmNkYgCnAQEBAHgbPoaYTiF/oIMp49yPBkZmVVotylZpUqwkkzJJicLjLQAAAH4wfAYJKoZIhvcNAQcGoG8wbQIBADBoBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDAGI3oWLlIJ2p6kffQIBEIA7JVUOTsLtEyNK8vS4GIS9iyTejuB2xhIpRXfG8o0lUfHawcrCbNbNH8XLm/8RW5JbgXo10EpOs8dSjkICAAAAAAwAABAAy64r24sGVKWN4C1gXCwJYHvZkLpJJj16SZlhpv////8AAAABg2pPFchIiaM7D9VuAAAAkwh10ul5sZQ08KsgkFszOOvFoQu95CiY7cK8H+tBloVOZglMqhhhvoIIZLr9hmI8/lQvRXzGDdo7Xkp0FAT5Jpztt8Hq/ZuLfZtNYIWOw594jShqpZt6uXMdMnpb/38R3e5zLK5vrYkM6NS4WPMFrHsOKN5tn0CDForgojRcdpmCJ8+cWLNltb2S+EJiWiyWS+ibw2vJ/RFm6WZO6nD+MXn3vyMAZzBlAjAuIUTYL1cbQ3ENxDIeXHJAWQguNPqxq4HgaCmCEI9/rn/GAKSc2nT9ln3UsVq/2dgCMQC7yNJ3DCTnppavfxTbcVS+rXaDDpZZx/ZsluMqXAFM5/FFvKRqr0dVML28tGunxmU=

Decrypting the records at the consumer

After you receive the records into your consumer as a list, you can get the data as a ByteBuffer by calling record.getData. You then decode and decrypt the ByteBuffer by calling KinesisEncryptionUtils.decryptByteStream. This method takes five parameters:

  • com.amazonaws.encryptionsdk.AwsCrypto
  • record ByteBuffer
  • com.amazonaws.encryptionsdk.kms.KmsMasterKeyProvider
  • key ARN string
  • java.util.Map of your encryption context

A string representation of the ticker sales object is returned to the caller for further processing. In this example, this representation is just printed to standard output.

[Thread-2] INFO kinesisencryption.streams.DecryptShardConsumerThread - Decrypted Text Result is TickerSalesObject{tickerSymbol='AMZN', salesPrice='304.958313333', orderId='50defaf0-1c37-4e84-85d7-bc15597355eb', timeStamp='2017-01-30 02:41:38'}

Example: Decrypting records with the KCL and Streams API

ByteBuffer buffer = record.getData();
//Decrypting the encrypted record data
String decryptedResult = KinesisEncryptionUtils.decryptByteStream(crypto, buffer, prov, this.getKeyArn(), this.getContext());
log.info("Decrypted Text Result is " + decryptedResult);

With the above code, records in the Kinesis stream are decrypted using the same key ARN and encryption context that were previously used to encrypt them on the producer side.
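Because the encryption context travels with the ciphertext, the aws-encryption-sdk returns the context it found in the message header, and a consumer should confirm that context matches its expectations before trusting the plaintext. A minimal sketch of such a check follows; the class and method names are illustrative, not part of the sample repository.

```java
import java.util.Map;

// Sketch of the encryption-context verification a consumer should perform
// after decrypting: every expected key/value pair must be present in the
// context recovered from the ciphertext header. (The recovered context may
// also contain extra entries added by the SDK, such as the public key.)
public class EncryptionContextCheck {
    static boolean contextMatches(Map<String, String> expected,
                                  Map<String, String> recovered) {
        for (Map.Entry<String, String> e : expected.entrySet()) {
            if (!e.getValue().equals(recovered.get(e.getKey()))) {
                return false;
            }
        }
        return true;
    }
}
```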

Maven dependencies

To use the implementation I’ve outlined in this post, you need the Maven dependencies outlined below in your pom.xml, together with the Bouncy Castle libraries. Bouncy Castle provides a cryptography API for Java.

<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcprov-ext-jdk15on</artifactId>
    <version>1.54</version>
</dependency>
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-encryption-sdk-java</artifactId>
    <version>0.0.1</version>
</dependency>

Summary

You can incorporate the above sample code snippets into your application code, or use them as a guide, to start encrypting and decrypting your records to and from an Amazon Kinesis stream.

A complete producer and consumer example application, and a more detailed step-by-step example of developing an Amazon Kinesis producer and consumer application on AWS with encrypted records, is available at the kinesisencryption GitHub repository.

If you have questions or suggestions, please comment below.


About the Author

Temitayo Olajide is a Cloud Support Engineer with Amazon Web Services. He works with customers to provide architectural solutions, support, and guidance for implementing high-velocity streaming data applications in the cloud. In his spare time, he plays ping-pong and hangs out with family and friends.



Pirate Streaming Site 123Movies Rebrands as GoMovies

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-streaming-site-123movies-rebrands-as-gomovies-170329/

Pirate movie streaming sites and services continue to gain popularity, with 123movies at the forefront of this trend.

However, growth doesn’t always come easy. Over the past week the site has been suffering significant downtime, for unknown reasons. The site eventually returned during the weekend on the domain 123movieshd.to, but that wasn’t the end of the upheaval.

Yesterday the site switched to a new home, first renaming itself to Memovies, only to switch to Gomovies a few hours later.

While the site’s operators are staying quiet on the exact reason for the downtime, a site representative told TorrentFreak that GoMovies.to will be the site’s official domain, at least for the time being.

When asked about the reason for the name change, the representative said that it is in part to set itself apart from the many fake sites and proxies that appeared online recently.

The operators grew tired of the many fake sites that ranked very high in search engines with the 123movies brand. With the change, GoMovies will try to regain momentum.

The old 123movies.is and 123movies.to domain names are now linking to the new GoMovies one, and the site also confirmed the surprising brand change on its official Twitter account.

“We’re back at GoMovies.to – This is our Official Domain – new Home for 123Movies users. Old data remains the same. Please spread.”

123movies / Gomovies on Twitter

While GoMovies has a new domain name and logo, the rest of the site is pretty much intact. In fact, most of the references to 123movies are still present on the site, even in the site title and the FAQ section.

Although the turbulent week with several days of downtime remains largely unexplained, the site representative hopes that they will remain on the new domain for a while. However, we were told that it’s possible that they will have to change again in the future.

In addition to the site’s millions of users, 123movies / GoMovies developments are also being closely watched by copyright holders, who see the site as one of their main targets.

Hollywood previously reported the site to the U.S. Government’s Trade Representative, labeling it one of the most “notorious” pirate markets, and the site is blocked by UK ISPs as well. Last week, the US Ambassador to Vietnam turned up the pressure by urging a local minister to prosecute the site’s operators, who allegedly reside there.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.