Coaxing 2D platforming out of Unity

Post Syndicated from Eevee original https://eev.ee/blog/2017/10/13/coaxing-2d-platforming-out-of-unity/

An anonymous donor asked a question that I can’t even begin to figure out how to answer, but they also said anything else is fine, so here’s anything else.

I’ve been avoiding writing about game physics, since I want to save it for ✨ the book I’m writing ✨, but that book will almost certainly not touch on Unity. Here, then, is a brief run through some of the brick walls I ran into while trying to convince Unity to do 2D platforming.

This is fairly high-level — there are no blocks of code or helpful diagrams. I’m just getting this out of my head because it’s interesting. If you want more gritty details, I guess you’ll have to wait for ✨ the book ✨.

The setup

I hadn’t used Unity before. I hadn’t even used a “real” physics engine before. My games so far have mostly used LÖVE, a Lua-based engine. LÖVE includes box2d bindings, but for various reasons (not all of them good), I opted to avoid them and instead write my own physics completely from scratch. (How, you ask? ✨ Book ✨!)

I was invited to work on a Unity project, Chaos Composer, that someone else had already started. It had basic movement already implemented; I taught myself Unity’s physics system by hacking on it. It’s entirely possible that none of this is actually the best way to do anything, since I was really trying to reproduce my own homegrown stuff in Unity, but it’s the best I’ve managed to come up with.

Two recurring snags were that you can’t ask Unity to do multiple physics updates in a row, and sometimes getting the information I wanted was difficult. Working with my own code spoiled me a little, since I could invoke it at any time and ask it anything I wanted; Unity, on the other hand, is someone else’s black box with a rigid interface on top.

Also, wow, Googling for a lot of this was not quite as helpful as expected. A lot of what’s out there is just the first thing that works, and often that’s pretty hacky and imposes severe limits on the game design (e.g., “this won’t work with slopes”). Basic movement and collision are the first thing you do, which seems to me like the worst time to be locking yourself out of a lot of design options. I tried very (very, very, very) hard to minimize those kinds of constraints.

Problem 1: Movement

When I showed up, movement was already working. Problem solved!

Like any good programmer, I immediately set out to un-solve it. Given a “real” physics engine like the one Unity prominently features, you have two options: ⓐ treat the player as a physics object, or ⓑ don’t. The existing code went with option ⓑ, like I’d done myself with LÖVE, and like I’d seen countless people advise — using a physics sim makes for bad platforming.

But… why? I believed it, but I couldn’t concretely defend it. I had to know for myself. So I started a blank project, drew some physics boxes, and wrote a dozen-line player controller.

Ah! Immediate enlightenment.

If the player was sliding down a wall, and I tried to move them into the wall, they would simply freeze in midair until I let go of the movement key. The trouble is that the physics sim works in terms of forces — moving the player involves giving them a nudge in some direction, like a giant invisible hand pushing them around the level. Surprise! If you press a real object against a real wall with your real hand, you’ll see the same effect — friction will cancel out gravity, and the object will stay in midair.

Platformer movement, as it turns out, doesn’t make any goddamn physical sense. What is air control? What are you pushing against? Nothing, really; we just have it because it’s nice to play with, because not having it is a nightmare.

I looked to see if there were any common solutions to this, and I only really found one: make all your walls frictionless.

Game development is full of hacks like this, and I… don’t like them. I can accept that minor hacks are necessary sometimes, but this one makes an early and widespread change to a fundamental system to “fix” something that was wrong in the first place. It also imposes an “invisible” requirement, something I try to avoid at all costs — if you forget to make a particular wall frictionless, you’ll never know unless you happen to try sliding down it.

And so, I swiftly returned to the existing code. It wasn’t too different from what I’d come up with for LÖVE: it applied gravity by hand, tracked the player’s velocity, computed the intended movement each frame, and moved by that amount. The interesting thing was that it used MovePosition, which schedules a movement for the next physics update and stops the movement if the player hits something solid.

It’s kind of a nice hybrid approach, actually; all the “physics” for conscious actors is done by hand, but the physics engine is still used for collision detection. It’s also used for collision rejection — if the player manages to wedge themselves several pixels into a solid object, for example, the physics engine will try to gently nudge them back out of it with no extra effort required on my part. I still haven’t figured out how to get that to work with my homegrown stuff, which is built to prevent overlap rather than to jiggle things out of it.

But wait, what about…

Our player is a dynamic body with rotation lock and no gravity. Why not just use a kinematic body?

I must be missing something, because I do not understand the point of kinematic bodies. I ran into this with Godot, too, which documented them the same way: as intended for use as players and other manually-moved objects. But by default, they don’t even collide with other kinematic bodies or static geometry. What? There’s a checkbox to turn this on, which I enabled, but then I found out that MovePosition doesn’t stop kinematic bodies when they hit something, so I would’ve had to cast along the intended path of movement to figure out when to stop, thus duplicating the same work the physics engine was about to do.

But that’s impossible anyway! Static geometry generally wants to be made of edge colliders, right? They don’t care about concave/convex. Imagine the player is standing on the ground near a wall and tries to move towards the wall. Both the ground and the wall are different edges from the same edge collider.

If you try to cast the player’s hitbox horizontally, parallel to the ground, you’ll only get one collision: the existing collision with the ground. Casting doesn’t distinguish between touching and hitting. And because Unity only reports one collision per collider, and because the ground will always show up first, you will never find out about the impending wall collision.

So you’re forced to either use raycasts for collision detection or decomposed polygons for world geometry, both of which are slightly worse tools for no real gain.

I ended up sticking with a dynamic body.


Oh, one other thing that doesn’t really fit anywhere else: keep track of units! If you’re adding something called “velocity” directly to something called “position”, something has gone very wrong. Acceleration is distance per time squared; velocity is distance per time; position is distance. You must multiply or divide by time to convert between them.

I never even, say, add a constant directly to position every frame; I always phrase it as velocity and multiply by Δt. It keeps the units consistent: time is always in seconds, not in tics.
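As a concrete illustration of that unit discipline — a hypothetical Python sketch, not the project’s actual C# — every per-frame change to position flows through a velocity and one multiplication by Δt:

```python
def integrate(position, velocity, acceleration, dt):
    """One frame of movement with consistent units:
    acceleration (m/s^2) * dt (s) -> velocity (m/s);
    velocity (m/s) * dt (s) -> position (m)."""
    velocity = velocity + acceleration * dt
    position = position + velocity * dt
    return position, velocity
```

Nothing here is Unity-specific; the point is only that a raw constant never touches position directly.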

Problem 2: Slopes

Ah, now we start to get off into the weeds.

A sort of pre-problem here was detecting whether we’re on a slope, which means detecting the ground. The codebase originally used a manual physics query of the area around the player’s feet to check for the ground, which seems to be somewhat common, but that can’t tell me the angle of the detected ground. (It’s also kind of error-prone, since “around the player’s feet” has to be specified by hand and may not stay correct through animations or changes in the hitbox.)

I replaced that with what I’d eventually settled on in LÖVE: detect the ground by detecting collisions, and looking at the normal of the collision. A normal is a vector that points straight out from a surface, so if you’re standing on the ground, the normal points straight up; if you’re on a 10° incline, the normal points 10° away from straight up.

Not all collisions are with the ground, of course, so I assumed something is ground if the normal pointed away from gravity. (I like this definition more than “points upwards”, because it avoids assuming anything about the direction of gravity, which leaves some interesting doors open for later on.) That’s easily detected by taking the dot product — if it’s negative, the collision was with the ground, and I now have the normal of the ground.
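In sketch form (Python rather than the project’s C#, with a made-up function name), the ground test is just a sign check on the dot product:

```python
def is_ground(normal, gravity):
    """A contact counts as ground if its normal points away from gravity,
    i.e. the dot product of the two vectors is negative."""
    return normal[0] * gravity[0] + normal[1] * gravity[1] < 0
```

A wall’s normal is perpendicular to gravity, so its dot product is zero and it’s correctly rejected; a ceiling’s is positive.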

Actually doing this in practice was slightly tricky. With my LÖVE engine, I could cram this right into the middle of collision resolution. With Unity, not quite so much. I went through a couple iterations before I really grasped Unity’s execution order, which I guess I will have to briefly recap for this to make sense.

Unity essentially has two update cycles. It performs physics updates at fixed intervals for consistency, and updates everything else just before rendering. Within a single frame, Unity runs however many fixed physics updates have come due (which might be zero, one, or more), then does a regular update, then renders. User code can implement either or both of Update, which runs during a regular update, and FixedUpdate, which runs just before Unity does a physics pass.

So my solution was:

  • At the very end of FixedUpdate, clear the actor’s “on ground” flag and ground normal.

  • During OnCollisionEnter2D and OnCollisionStay2D (which are called from within a physics pass), if there’s a collision that looks like it’s with the ground, set the “on ground” flag and ground normal. (If there are multiple ground collisions, well, good luck figuring out the best way to resolve that! At the moment I’m just taking the first and hoping for the best.)

That means there’s a brief window between the end of FixedUpdate and Unity’s physics pass during which a grounded actor might mistakenly believe it’s not on the ground, which is a bit of a shame, but there are very few good reasons for anything to be happening in that window.

Okay! Now we can do slopes.

Just kidding! First we have to do sliding.

When I first looked at this code, it didn’t apply gravity while the player was on the ground. I think I may have had some problems with detecting the ground as a result, since the player was no longer pushing down against it? Either way, it seemed like a silly special case, so I made gravity always apply.

Lo! I was a fool. The player could no longer move.

Why? Because MovePosition does exactly what it promises. If the player collides with something, they’ll stop moving. Applying gravity means that the player is trying to move diagonally downwards into the ground, and so MovePosition stops them immediately.

Hence, sliding. I don’t want the player to actually try to move into the ground. I want them to move the unblocked part of that movement. For flat ground, that means the horizontal part, which is pretty much the same as discarding gravity. For sloped ground, it’s a bit more complicated!

Okay, but actually it’s less complicated than you’d think. It can be done with some cross products fairly easily, but Unity makes it even easier with a handy built-in. There’s a Vector3.ProjectOnPlane function that projects an arbitrary vector onto a plane given by its normal — exactly the thing I want! So I apply that to the attempted movement before passing it along to MovePosition. I do the same thing with the current velocity, to prevent the player from accelerating infinitely downwards while standing on flat ground.
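The projection itself is tiny; here’s a hypothetical 2D Python equivalent of what Vector3.ProjectOnPlane does, assuming the normal is a unit vector:

```python
def project_on_plane(v, normal):
    """Subtract from v its component along the (unit) plane normal,
    leaving only the part of the movement that lies along the surface."""
    dot = v[0] * normal[0] + v[1] * normal[1]
    return (v[0] - dot * normal[0], v[1] - dot * normal[1])
```

On flat ground (normal (0, 1)), this strips the vertical component and leaves horizontal movement untouched; on a slope, downward movement gets redirected along the incline.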

One other thing: I don’t actually use the detected ground normal for this. The player might be touching two ground surfaces at the same time, and I’d want to project on both of them. Instead, I use the player body’s GetContacts method, which returns contact points (and normals!) for everything the player is currently touching. I believe those contact points are tracked by the physics engine anyway, so asking for them doesn’t require any actual physics work.

(Looking at the code I have, I notice that I still only perform the slide for surfaces facing upwards — but I’d want to slide against sloped ceilings, too. Why did I do this? Maybe I should remove that.)

(Also, I’m pretty sure that projecting a vector onto two different planes in sequence is non-commutative, which raises the question of which order the projections should happen in and what difference it makes. I don’t have a good answer.)

(I note that my LÖVE setup does something slightly different: it just tries whatever the movement ought to be, and if there’s a collision, then it projects — and tries again with the remaining movement. But I can’t ask Unity to do multiple moves in one physics update, alas.)

Okay! Now, slopes. But actually, with the above work done, slopes are most of the way there already.

One obvious problem is that the player tries to move horizontally even when on a slope, and the easy fix is to change their movement from speed * Vector2.right to speed * new Vector2(ground.y, -ground.x) while on the ground. That’s the ground normal rotated a quarter-turn clockwise, so for flat ground it still points to the right, and in general it points rightwards along the ground. (Note that it assumes the ground normal is a unit vector, but as far as I’m aware, that’s true for all the normals Unity gives you.)
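That quarter-turn is easy to sanity-check; here is a minimal Python stand-in (hypothetical names, not project code):

```python
def along_ground(ground_normal, speed):
    """Rotate the (unit) ground normal a quarter-turn clockwise to get
    the rightward direction along the surface, scaled by walk speed."""
    return (speed * ground_normal[1], speed * -ground_normal[0])
```

Flat ground (normal (0, 1)) yields pure rightward movement; a slope rising to the right yields a vector pointing up and to the right along the surface.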

Another issue is that if the player stands motionless on a slope, gravity will cause them to slowly slide down it — because the movement from gravity will be projected onto the slope, and unlike flat ground, the result is no longer zero. For conscious actors only, I counter this by adding the opposite factor to the player’s velocity as part of adding in their walking speed. This matches how the real world works, to some extent: when you’re standing on a hill, you’re exerting some small amount of effort just to stay in place.
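In sketch form (Python, hypothetical names), that counter-term is the negation of gravity’s along-slope component:

```python
def slope_resistance(gravity_step, ground_normal):
    """Project this frame's gravity contribution onto the slope (normal
    assumed to be a unit vector); a motionless actor adds the negation
    of that slide so the net along-slope velocity is zero."""
    dot = gravity_step[0] * ground_normal[0] + gravity_step[1] * ground_normal[1]
    slide = (gravity_step[0] - dot * ground_normal[0],
             gravity_step[1] - dot * ground_normal[1])
    return (-slide[0], -slide[1])
```

On flat ground the slide is zero, so the resistance is zero too — the correction only kicks in on slopes.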

(Note that slope resistance is not the same as friction. Okay, yes, in the real world, virtually all resistance to movement happens as a result of friction, but bracing yourself against the ground isn’t the same as being passively resisted.)

From here there are a lot of things you can do, depending on how you think slopes should be handled. You could make the player unable to walk up slopes that are too steep. You could make walking down a slope faster than walking up it. You could make jumping go along the ground normal, rather than straight up. You could raise the player’s max allowed speed while running downhill. Whatever you want, really. Armed with a normal and awareness of dot products, you can do whatever you want.

But first you might want to fix a few aggravating side effects.

Problem 3: Ground adherence

I don’t know if there’s a better name for this. I rarely even see anyone talk about it, which surprises me; it seems like it should be a very common problem.

The problem is: if the player runs up a slope which then abruptly changes to flat ground, their momentum will carry them into the air. For very fast players going off the top of very steep slopes, this makes sense, but it becomes visible even for relatively gentle slopes. It was a mild nightmare in the original release of our game Lunar Depot 38, which has very “rough” ground made up of lots of shallow slopes — so the player is very frequently slightly off the ground, which meant they couldn’t jump, for seemingly no reason. (I even had code to fix this, but I disabled it because of a silly visual side effect that I never got around to fixing.)

Anyway! The reason this is a problem is that game protagonists are generally not boxes sliding around — they have legs. We don’t go flying off the top of real-world hilltops because we put our foot down until it touches the ground.

Simulating this footfall is surprisingly fiddly to get right, especially with someone else’s physics engine. It’s made somewhat easier by Cast, which casts the entire hitbox — no matter what shape it is — in a particular direction, as if it had moved, and tells you all the hypothetical collisions in order.

So I cast the player in the direction of gravity by some distance. If the cast hits something solid with a ground-like collision normal, then the player must be close to the ground, and I move them down to touch it (and set that ground as the new ground normal).
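As a rough Python sketch of that scan — the hit list here is a hypothetical stand-in for what Unity’s Cast returns, and the names are made up:

```python
def find_snap_distance(cast_hits, gravity, max_distance):
    """cast_hits: (distance, normal) pairs from casting the full hitbox
    along gravity, nearest first. Returns how far to drop the player to
    touch ground, or None if no ground-like surface is within range."""
    for distance, normal in cast_hits:
        ground_like = normal[0] * gravity[0] + normal[1] * gravity[1] < 0
        if ground_like and distance <= max_distance:
            return distance
    return None
```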

There are some wrinkles.

Wrinkle 1: I only want to do this if the player is off the ground now, but was on the ground last frame, and is not deliberately moving upwards. That latter condition means I want to skip this logic if the player jumps, for example, but also if the player is thrust upwards by a spring or abducted by a UFO or whatever. As long as external code goes through some interface and doesn’t mess with the player’s velocity directly, that shouldn’t be too hard to track.

Wrinkle 2: When does this logic run? It needs to happen after the player moves, which means after a Unity physics pass… but there’s no callback for that point in time. I ended up running it at the beginning of FixedUpdate and the beginning of Update — since I definitely want to do it before rendering happens! That means it’ll sometimes happen twice between physics updates. (I could carefully juggle a flag to skip the second run, but I… didn’t do that. Yet?)

Wrinkle 3: I can’t move the player with MovePosition! Remember, MovePosition schedules a movement, it doesn’t actually perform one; that means if it’s called twice before the physics pass, the first call is effectively ignored. I can’t easily combine the drop with the player’s regular movement, for various fiddly reasons. I ended up doing it “by hand” using transform.Translate, which I think was the “old way” to do manual movement before MovePosition existed. I’m not totally sure if it activates triggers? For that matter, I’m not sure it even notices collisions — but since I did a full-body Cast, there shouldn’t be any anyway.

Wrinkle 4: What, exactly, is “some distance”? I’ve yet to find a satisfying answer for this. It seems like it ought to be based on the player’s current speed and the slope of the ground they’re moving along, but every time I’ve done that math, I’ve gotten totally ludicrous answers that sometimes exceed the size of a tile. But maybe that’s not wrong? Play around, I guess, and think about when the effect should “break” and the player should go flying off the top of a hill.

Wrinkle 5: It’s possible that the player will launch off a slope, hit something, and then be adhered to the ground where they wouldn’t have hit it. I don’t much like this edge case, but I don’t see a way around it either.

This problem is surprisingly awkward for how simple it sounds, and the solution isn’t entirely satisfying. Oh, well; the results are much nicer than the solution. As an added bonus, this also fixes occasional problems with running down a hill and becoming detached from the ground due to precision issues or whathaveyou.

Problem 4: One-way platforms

Ah, what a nightmare.

It took me ages just to figure out how to define one-way platforms. Only block when the player is moving downwards? Nope. Only block when the player is above the platform? Nuh-uh.

Well, okay, yes, those approaches might work for convex players and flat platforms. But what about… sloped, one-way platforms? There’s no reason you shouldn’t be able to have those. If Super Mario World can do it, surely Unity can do it almost 30 years later.

The trick is, again, to look at the collision normal. If it faces away from gravity, the player is hitting a ground-like surface, so the platform should block them. Otherwise (or if the player overlaps the platform), it shouldn’t.

Here’s the catch: Unity doesn’t have conditional collision. I can’t decide, on the fly, whether a collision should block or not. In fact, I think that by the time I get a callback like OnCollisionEnter2D, the physics pass is already over.

I could go the other way and use triggers (which are non-blocking), but then I have the opposite problem: I can’t stop the player on the fly. I could move them back to where they hit the trigger, but I envision all kinds of problems as a result. What if they were moving fast enough to activate something on the other side of the platform? What if something else moved to where I’m trying to shove them back to in the meantime? How does this interact with ground detection and listing contacts, which would rightly ignore a trigger as non-blocking?

I beat my head against this for a while, but the inability to respond to collision conditionally was a huge roadblock. It’s all the more infuriating a problem, because Unity ships with a one-way platform modifier thing. Unfortunately, it seems to have been implemented by someone who has never played a platformer. It’s literally one-way — the player is only allowed to move straight upwards through it, not in from the sides. It also tries to block the player if they’re moving downwards while inside the platform, which invokes clumsy rejection behavior. And this all seems to be built into the physics engine itself somehow, so I can’t simply copy whatever they did.

Eventually, I settled on the following. After calculating attempted movement (including sliding), just at the end of FixedUpdate, I do a Cast along the movement vector. I’m not thrilled about having to duplicate the physics engine’s own work, but I do filter to only things on a “one-way platform” physics layer, which should at least help. For each object the cast hits, I use Physics2D.IgnoreCollision to either ignore or un-ignore the collision between the player and the platform, depending on whether the collision was ground-like or not.

(A lot of people suggested turning off collision between layers, but that can’t possibly work — the player might be standing on one platform while inside another, and anyway, this should work for all actors!)

Again, wrinkles! But fewer this time. Actually, maybe just one: handling the case where the player already overlaps the platform. I can’t just check for that with e.g. OverlapCollider, because that doesn’t distinguish between overlapping and merely touching.

I came up with a fairly simple fix: if I was going to un-ignore the collision (i.e. make the platform block), and the cast distance is reported as zero (either already touching or overlapping), I simply do nothing instead. If I’m standing on the platform, I must have already set it blocking when I was approaching it from the top anyway; if I’m overlapping it, I must have already set it non-blocking to get here in the first place.

I can imagine a few cases where this might go wrong. Moving platforms, especially, are going to cause some interesting issues. But this is the best I can do with what I know, and it seems to work well enough so far.

Oh, and our player can deliberately drop down through platforms, which was easy enough to implement; I just decide the platform is always passable while some button is held down.
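Putting the whole decision together — a hedged Python sketch of the per-hit logic, not the actual C#; the caller would feed the result into Physics2D.IgnoreCollision, with None meaning “leave the current state alone”:

```python
def platform_should_block(hit_normal, hit_distance, gravity, drop_held=False):
    """Decide whether a one-way platform should block this actor.
    Ground-like hits block; hits from below or the sides don't; a cast
    distance of zero (already touching or overlapping) keeps whatever
    was decided before; holding the drop button makes it passable."""
    if drop_held:
        return False
    ground_like = hit_normal[0] * gravity[0] + hit_normal[1] * gravity[1] < 0
    if ground_like and hit_distance == 0:
        return None  # keep the previous ignore/un-ignore state
    return ground_like
```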

Problem 5: Pushers and carriers

I haven’t gotten to this yet! Oh boy, can’t wait. I implemented it in LÖVE, but my way was hilariously invasive; I’m hoping that having a physics engine that supports a handwaved “this pushes that” will help. Of course, you also have to worry about sticking to platforms, for which the recommended solution is apparently to parent the cargo to the platform, which sounds goofy to me? I guess I’ll find out when I throw myself at it later.

Overall result

I ended up with a fairly pleasant-feeling system that supports slopes and one-way platforms and whatnot, with all the same pieces as I came up with for LÖVE. The code somehow ended up as less of a mess, too, but it probably helps that I’ve been down this rabbit hole once before and kinda knew what I was aiming for this time.

Animation of a character running smoothly along the top of an irregular dinosaur skeleton

Sorry that I don’t have a big block of code for you to copy-paste into your project. I don’t think there are nearly enough narrative discussions of these fundamentals, though, so hopefully this is useful to someone. If not, well, look forward to ✨ my book, that I am writing ✨!

Cryptocurrency Miner Targeted by Anti-Virus and Adblock Tools

Post Syndicated from Ernesto original https://torrentfreak.com/cryptocurrency-miner-targeted-by-anti-virus-and-adblock-tools-170926/

Earlier this month The Pirate Bay caused some uproar by adding a Javascript-based cryptocurrency miner to its website.

The miner utilizes CPU power from visitors to generate Monero coins for the site, providing an extra revenue source.

While Pirate Bay only tested the option briefly, it inspired many others to follow suit. Streaming related sites such as Alluc, Vidoza, and Rapidvideo jumped on board, and torrent site Demonoid also ran some tests.

During the weekend, Coinhive’s miner code even appeared on the official website of Showtime. The code was quickly removed and it’s still unclear how it got there, as the company refuses to comment. It’s clear, though, that miners are a hot topic thanks to The Pirate Bay.

The revenue potential is also real. TorrentFreak spoke to Vidoza, whose operator says that with 30,000 users online throughout the day (2M unique visitors), the site can make between $500 and $600. That’s with the miner throttled at 50%. Although ads can bring in more, it’s not insignificant.

That said, all the uproar about cryptocurrency miners and their possible abuse has also attracted the attention of ad-blockers. Some people have coded new browser add-ons to block miners specifically and the popular uBlock Origin added Coinhive to its default blocklist as well. And that’s just after a few days.

Needless to say, this limits the number of miners, and thus the money that comes in. And there’s another problem with a similar effect.

In addition to ad-blockers, anti-virus tools are also flagging Coinhive. Malwarebytes is one of the companies that lists it as a malicious activity, warning users about the threat.

The anti-virus angle is one of the issues that worries Demonoid’s operator. The site is used to ad-blockers, but getting flagged by anti-virus companies is of a different order.

“The problem I see there and the reason we will likely discontinue [use of the miner] is that some anti-virus programs block it, and that might get the site on their blacklists,” Deimos informs TorrentFreak.

Demonoid’s miner announcement

Vidoza operator Eugene sees all the blocking as an unwelcome development and hopes that Coinhive will tackle it. Coinhive may want to come out in public and start to discuss the issue with ad-blockers and anti-virus companies, he says.

“They should find out under what conditions all these guys will stop blocking the script,” he notes.

The other option would be to circumvent the blocking through proxies and circumvention tools, but that might not be the best choice in the long run.

Coinhive, meanwhile, has chimed in as well. The company says that it wasn’t properly prepared for the massive attention and understands why some ad-blockers have put them on the blacklist.

“Providing a real alternative to ads and users who block them turned out to be a much harder problem. Coinhive, too, is now blocked by many ad-block browser extensions, which – we have to admit – is reasonable at this point.”

Most complaints have been targeted at sites that implemented the miner without the user’s consent. Coinhive doesn’t like this either and will take steps to prevent it in future.

“We’re a bit saddened to see that some of our customers integrate Coinhive into their pages without disclosing to their users what’s going on, let alone asking for their permission,” the Coinhive team notes.

The crypto miner provider is working on a new implementation that requires explicit consent from website visitors in order to run. This should deal with most of the negative responses.

If users start mining voluntarily, then ad-blockers and anti-virus companies should no longer have a reason to block the script. Nor will it be easy for malware peddlers to abuse it.

To be continued.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pirates Leak Copy of Kim Dotcom Documentary Online

Post Syndicated from Ernesto original https://torrentfreak.com/pirates-leak-copy-of-kim-dotcom-documentary-online-170824/

In recent years, we have written dozens of articles on Kim Dotcom, Megaupload’s shutdown, and all the intrigue surrounding the case.

It’s a story worth documenting and not just in writing. This is what the people behind the documentary Kim Dotcom: Caught in the Web realized as well.

With cooperation from the mastermind behind the defunct file-sharing site, they made a thrilling documentary that captures the essence of the story, which is far from over.

This week the film was released to the wider public, made available for sale on various online platforms including iTunes and Amazon Prime. Thus far things are going well, with the movie making its way into various top charts, including a second place in the iTunes documentary category.

However, if we believe entertainment industry rhetoric, this meteoric rise will soon be all over.

Earlier today the first pirated copies of “Caught in The Web” started to appear online. It is widely available on The Pirate Bay, for example, and shows up on various other “pirate” download and streaming sites as well.

The leaked documentary

Leaks happen every day, and this one’s not any different. That being said, people who followed the Dotcom saga may appreciate the irony, since Megaupload was a popular destination for pirates as well. So, a chunk of the site’s former users probably prefers to grab a free version. To sample, of course.

This is especially true for those who hit several roadblocks in trying to access the film from official outlets. Over the past few days, some people complained that “Caught in the Web” isn’t legally available through their preferred legal channel due to geographical restrictions.

Dotcom, still accused by the US Government of depriving copyright holders of $500 million in one of the country’s largest copyright infringement cases, responded appropriately when a Twitter follower pointed this out.

Not available

“They are wondering why people are pirating? If you’re willing to pay but you can’t find it legally, why is it your or my fault?” he wrote.

“If the Megaupload documentary is only available in the US iTunes store then I totally understand if you download or stream it elsewhere,” Dotcom added in another tweet.

The documentary is available in more countries, but not in all Amazon or iTunes stores. So, with the sympathy of the documentary’s main subject, people with no legal alternatives don’t have to feel as bad when they choose to pirate it instead.

That doesn’t make it less illegal, of course, but we doubt that the makers will actively pursue people for it.

Meanwhile, the people who were tasked with distributing the film may want to have another chat with Kim Dotcom. In recent years he has repeatedly sent out a concise list of tips on how to stop piracy.

Worth a read.


DMCA Used to Remove Ad Server URL From Easylist Ad Blocklist

Post Syndicated from Andy original https://torrentfreak.com/dmca-used-to-remove-ad-server-url-from-easylist-ad-blocklist-170811/

The default business model on the Internet is “free” for consumers. Users largely expect websites to load without paying a dime but of course, there’s no such thing as a free lunch. To this end, millions of websites are funded by advertising revenue.

Sensible sites ensure that any advertising displayed is unobtrusive to the visitor but lots seem to think that bombarding users with endless ads, popups, and other hindrances is the best way to do business. As a result, ad blockers are now deployed by millions of people online.

In order to function, ad-blocking tools – such as uBlock Origin or Adblock – utilize lists of advertising domains compiled by third parties. One of the most popular is Easylist, which is distributed by authors fanboy, MonztA, Famlam, and Khrin, under dual Creative Commons Attribution-ShareAlike and GNU General Public Licenses.
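In rough terms, a filter list like Easylist is a series of rules, the simplest of which (`||domain^`) blocks a domain and all of its subdomains. The following is a minimal illustrative sketch, not Adblock Plus’s actual matching engine (which supports many more rule types, exception rules, and element hiding); the `ads.example` rule is a made-up placeholder:

```python
from urllib.parse import urlparse

def parse_domain_rules(filter_lines):
    """Extract blocked domains from Easylist-style '||domain^' rules."""
    domains = set()
    for line in filter_lines:
        line = line.strip()
        if line.startswith("||") and line.endswith("^"):
            domains.add(line[2:-1].lower())
    return domains

def is_blocked(url, domains):
    """Block if the URL's host is a listed domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    parts = host.lower().split(".")
    # Check the host itself and every parent domain, e.g. a.b.com -> b.com -> com
    return any(".".join(parts[i:]) in domains for i in range(len(parts)))

rules = ["||functionalclam.com^", "||ads.example^", "! this is a comment"]
blocked = parse_domain_rules(rules)
print(is_blocked("https://cdn.functionalclam.com/ad.js", blocked))  # True
print(is_blocked("https://example.org/page", blocked))              # False
```

This also shows why removing a single line from the list matters: once `||functionalclam.com^` is gone, requests to that domain sail straight through.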

With the freedom afforded by those licenses, copyright tends not to figure high on the agenda for Easylist. However, a legal problem that has just raised its head is causing serious concern among those in the ad-blocking community.

Two days ago a somewhat unusual commit appeared in the Easylist repo on Github. As shown in the image below, a domain URL previously added to Easylist had been removed following a DMCA takedown notice filed with Github.

Domain text taken down by DMCA?

The DMCA notice in question has not yet been published but it’s clear that it targets the domain ‘functionalclam.com’. A user called ‘ameshkov’ helpfully points out a post by a new Github user called ‘DMCAHelper’ which coincided with the start of the takedown process more than three weeks ago.

A domain in a list circumvents copyright controls?

Aside from the curious claims of a URL “circumventing copyright access controls” (domains themselves cannot be copyrighted), the big questions are (i) who filed the complaint and (ii) who operates Functionalclam.com? The domain WHOIS is hidden but according to a helpful sleuth on Github, it’s operated by anti-ad-blocking company Admiral.

Ad-blocking means money down the drain….

If that is indeed the case, we have the intriguing prospect of a startup attempting to protect its business model by using a novel interpretation of copyright law to have a domain name removed from a list. How this will pan out is unclear but a notice recently published on Functionalclam.com suggests the route the company wishes to take.

“This domain is used by digital publishers to control access to copyrighted content in accordance with the Digital Millenium Copyright Act and understand how visitors are accessing their copyrighted content,” the notice begins.

Combined with the comments by DMCAHelper on Github, this statement suggests that the complainants believe that interference with the ad display process (ads themselves could be the “copyrighted content” in question) represents a breach of section 1201 of the DMCA.

If it does, that could have huge consequences for online advertising but we will need to see the original DMCA notice to have a clearer idea of what this is all about. Thus far, Github hasn’t published it but already interest is growing. A representative from the EFF has already contacted the Easylist team, so this battle could heat up pretty quickly.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MagPi 60: the ultimate troubleshooting guide

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-60/

Hey folks, Rob from The MagPi here! It’s the last Thursday of the month, and that can only mean one thing: a brand-new The MagPi issue is out! In The MagPi 60, we’re bringing you the top troubleshooting tips for your Raspberry Pi, sourced directly from our amazing community.

The MagPi 60 cover with DVD slip case shown

The MagPi #60 comes with a huge troubleshooting guide

The MagPi 60

Our feature-length guide covers snags you might encounter while using a Raspberry Pi, and it is written for newcomers and veterans alike! Do you hit a roadblock while booting up your Pi? Are you having trouble connecting it to a network? Don’t worry – in this issue you’ll find troubleshooting advice you can use to solve your problem. And, as always, if you’re still stuck, you can head over to the Raspberry Pi forums for help.

More than troubleshooting

That’s not all though – Issue 60 also includes a disc with Raspbian-x86! This version of Raspbian for PCs contains all the recent updates and additions, such as offline Scratch 2.0 and the new Thonny IDE. And – *drumroll* – the disc version can be installed to your PC or Mac. The last time we had a Raspbian disc on the cover, many of you requested an installable version, so here you are! There is an installation guide inside the mag, so you’ll be all set to get going.

On top of that, you’ll find our usual array of amazing tutorials, projects, and reviews. There’s a giant guitar, Siri voice control, Pi Zeros turned into wireless-connected USB drives, and even a review of a new robot kit. You won’t want to miss it!

A spread from The MagPi 60 showing a giant Raspberry Pi-powered guitar

I wasn’t kidding about the giant guitar

How to get a copy

Grab your copy today in the UK from WHSmith, Sainsbury’s, Asda, and Tesco. Copies will be arriving very soon in US stores, including Barnes & Noble and Micro Center. You can also get the new issue online from our store, or digitally via our Android or iOS app. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Some of you have asked me about the goodies that we give out to subscribers. This is how it works: if you take out a twelve-month print subscription of The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

Alright, I think I’ve covered everything! So that’s it. I’ll see you next month.

Jean-Luc Picard sitting at a desk playing with a pen and sighing

The post MagPi 60: the ultimate troubleshooting guide appeared first on Raspberry Pi.

Pirate Bay Founder Wants to Save Lives With His New App

Post Syndicated from Andy original https://torrentfreak.com/pirate-bay-founder-wants-to-save-lives-with-his-new-app-170714/

Of all the early founders of The Pirate Bay, it is Peter Sunde who has remained most obviously in the public eye. Now distanced from the site, Sunde has styled himself as a public speaker and entrepreneur.

Earlier this year the Swede (who is of both Norwegian and Finnish ancestry) sold his second most famous project Flattr to the parent company of Adblock Plus. Now, however, he has another digital baby to nurture, and this one is quite interesting.

Like many countries, Sweden operates a public early warning system. Popularly known as ‘Hesa Fredrik’, it consists of extremely loud outdoor sirens accompanied by radio and television messages.

The sirens can be activated in specific areas of the country wherever the problems exist. Fire, floods, gas leaks, threats to the water system, terrorist attacks or even war could trigger the alarm.

Just recently the ‘Hesa Fredrik’ alarm was sounded in Sweden, yet there was no planned test and no emergency. The public didn’t know that, though, and as people struggled to find information, authority websites crashed under the strain. The earliest news report indicating that it was a false alarm appeared behind a news site’s paywall. The national police site published no information.

The false alarm

Although Sunde heard the sirens, it was an earlier incident that motivated him to find a better solution. Speaking with Swedish site Breakit, Sunde says he got the idea during the Västmanland wildfire, which burned for six weeks straight in 2014 and became the largest fire in Sweden for 40 years.

“I got the idea during Västmanland fire. It took several days before text messages were sent to everyone in the area but by then it was already out of control. I thought that was so very bad when it is so easy to build something better,” Sunde said.

Sunde’s solution is the Hesa Fredrika app, which is currently under development by himself and several former members of the Flattr team.

“The goal is for everyone to download the app and then forget about it,” Sunde says.

When one thinks about the problem Sunde is trying to solve (i.e. the lack of decent and timely information in a crisis), today’s mobile phones provide the perfect solution. Not only do most people have one (or are near someone who does), they provide the perfect platform to immediately deliver emergency services’ advice to people in a precise location.

“It is not enough for a small text to appear in the corner of the screen. I want to build something that makes the phone vibrate and sound so that you notice it properly,” Sunde told Breakit.

But while such an app could genuinely save lives in an admittedly rare event, Sunde has bigger ideas for the software that could extend its usefulness significantly.

Users will also be invited to add information about themselves, such as their doctor’s name or if they are a blood donor. The app user could then be messaged if there was an urgent need for a particular match. But while the app will be rolled out soon, it won’t be rushed.

“Since it is extremely important to the quality of the messages, we want as many partnerships as possible before we launch something,” Sunde says, adding that in true Pirate Bay style, it will be completely free for everyone.

“So it will remain forever,” he says. “My philosophy is such that I do not want people to pay for things that can save their lives.”

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Burner laptops for DEF CON

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/burner-laptops-for-def-con.html

Hacker summer camp (Defcon, Blackhat, BSidesLV) is upon us, so I thought I’d write up some quick notes about bringing a “burner” laptop. Chrome is your best choice in terms of security, but I need Windows/Linux tools, so I got a Windows laptop.

I chose the Asus e200ha for $199 from Amazon with free (and fast) shipping. There are similar notebooks with roughly the same hardware and price from other manufacturers (HP, Dell, etc.), so I’m not sure how this compares against those other ones. However, it fits my needs as a “burner” laptop, namely:

  • cheap
  • lasts 10 hours easily on battery
  • weighs 2.2 pounds (1 kilogram)
  • 11.6 inch and thin

Some other specs are:

  • 4 gigs of RAM
  • 32 gigs of eMMC flash memory
  • quad core 1.44 GHz Intel Atom CPU
  • Windows 10
  • free Microsoft Office 365 for one year
  • good, large keyboard
  • good, large touchpad
  • USB 3.0
  • microSD
  • WiFi ac
  • no fans, completely silent

There are compromises, of course.

  • The Atom CPU is slow, though it’s only noticeable when churning through heavy webpages. Adblocking addons or Brave are a necessity. Most things are usably fast, such as using Microsoft Word.
  • Crappy sound and video, though VLC does a fine job playing movies with headphones on the airplane. Using in bright sunlight will be difficult.
  • micro-HDMI output; keep in mind that if you intend to do presentations from it, you’ll need an HDMI adapter.
  • It has limited storage: 32 gigs in theory, about half that usable.
  • It uses a special compressed Windows 10 install that you can’t actually upgrade without a completely new install. It doesn’t have the latest Windows 10 Creators Update. I lost a gig thinking I could compress system files.

Copying files across the 802.11ac WiFi to the disk was quite fast, at several hundred megabits per second. The eMMC isn’t as fast as an SSD, but it’s a lot faster than typical SD card speeds.

The first thing I did once I got the notebook was to install the free VeraCrypt full disk encryption. The CPU has AES acceleration, so it’s fast. There is a problem with the keyboard driver during boot that makes it really hard to enter long passwords — you have to carefully type one key at a time to prevent extra keystrokes from being entered.

You can’t really install Linux on this computer, but you can use virtual machines. I installed VirtualBox and downloaded the Kali VM. I had some problems attaching USB devices to the VM. First of all, VirtualBox requires a separate downloaded extension to get USB working. Second, it conflicts with USBpcap that I installed for Wireshark.

It comes with one year of free Office 365. Obviously, Microsoft is hoping to hook the user into a longer term commitment, but in practice next year at this time I’d get another burner $200 laptop rather than spend $99 on extending the Office 365 license.

Let’s talk about the CPU. It’s Intel’s “Atom” processor, not their mainstream (Core i3 etc.) processor. Even though it has roughly the same GHz as the processor in an 11-inch MacBook Air and twice the cores, it’s noticeably and painfully slower. This is especially noticeable on ad-heavy web pages, while other things seem to work just fine. It has hardware acceleration for most video formats, though I had trouble getting Netflix to work.

The tradeoff for a slow CPU is phenomenal battery life. It seems to last forever on battery. It’s really pretty cool.

Conclusion

A Chromebook is likely more secure, but for my needs, this $200 laptop is perfect.

Chrome’s Default ‘Ad-Blocker’ is Bad News for Torrent Sites

Post Syndicated from Ernesto original https://torrentfreak.com/chromes-default-ad-blocker-is-bad-news-for-torrent-sites-170705/

Online advertising can be quite a nuisance. Flashy and noisy banners, or intrusive pop-ups, are a thorn in the side of many Internet users.

These types of ads are particularly popular on pirate sites, so it’s no surprise that their users are more likely to have an ad-blocker installed.

The increasing popularity of these ad-blocking tools hasn’t done the income of site owners any good and the trouble on this front is about to increase.

A few weeks ago Google announced that its Chrome browser will start blocking ‘annoying’ ads in the near future, by default. This applies to all ads that don’t fall within the “better ads standards,” including popups and sticky ads.

Since Chrome is the leading browser on many pirate sites, this is expected to have a serious effect on torrent sites and other pirate platforms. TorrentFreak spoke to the operator of one of the largest torrent sites, who’s sounding the alarm bell.

The owner, who prefers not to have his site mentioned, says that it’s already hard to earn enough money to pay for hardware and hosting to keep the site afloat. This, despite millions of regular visitors.

“The torrent site economy is in a bad state. Profits are very low. Profits are f*cked up compared to previous years,” the torrent site owner says.

At the moment, 40% of the site’s users already have an ad-blocker installed, but when Chrome joins in with its default filter, it’s going to get much worse. A third of all visitors to the torrent site in question use the Chrome browser, either through mobile or desktop.

“Chrome’s ad-blocker will kill torrent sites. If they don’t at least cover their costs, no one is going to use money out of his pocket to keep them alive. I won’t be able to do so at least,” the site owner says.

It’s too early to assess how broad Chrome’s ad filtering will be, but torrent site owners may have to look for cleaner ads. That’s easier said than done though, as it’s usually the lower tier advertisers that are willing to work with these sites and they often serve more annoying ads.

The torrent site owner we spoke with isn’t very optimistic about the future. While he’s tested alternative revenue sources, he sees advertising as the only viable option. And with Chrome lining up to target part of their advertising inventory, revenue may soon dwindle.

“I’ve tested all types of ads and affiliates that are safe to work with, and advertising is the only way to cover costs. Also, most services that you can make good money promoting don’t work with torrent sites,” the torrent site owner notes.

Just a few months ago popular torrent site TorrentHound decided to shut down, citing a lack in revenue as one of the main reasons. This is by no means an isolated incident. TorrentFreak spoke to other site owners who confirm that it’s becoming harder and harder to pay the bills through advertisements.

The operator of Torlock, for example, confirms that those who are in the business to make a profit are having a hard time.

“All in all it’s a tough time for torrent sites but those that do it for the money will have a far more difficult time in the current climate than those who do this as a hobby and as a passion. We do it for the love of it so it doesn’t really affect us as much,” Torlock’s operator says.

Still, there is plenty of interest from advertisers, some of whom are trying their best to circumvent ad-blockers.

“Every day we receive emails from willing advertisers wanting to work with us so the market is definitely still there and most of them have the technology in place to circumvent adblockers, including Chrome’s default one,” he adds.

Google’s decision to ship Chrome with a default ad-blocker appears to be self-serving in part. If users see less annoying ads, they are less likely to install a third-party ad-blocker which blocks more of Google’s own advertisements.

Inadvertently, however, they may have also announced their most effective anti-piracy strategy to date.

If pirate sites are unable to generate enough revenue through advertisements, there are few options left. In theory, they could start charging visitors money, but most pirates go to these sites to avoid paying.

Asking for voluntary donations is an option, but that’s unlikely to cover all the costs.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Torrent Sites See Traffic Boost After ExtraTorrent Shutdown

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-sites-see-traffic-boost-after-extratorrent-shutdown-170528/

When ExtraTorrent shut down last week, millions of people were left without their favorite spot to snatch torrents.

This meant that after the demise of KickassTorrents and Torrentz last summer, another major exodus commenced.

The search for alternative torrent sites is nicely illustrated by Google Trends. Immediately after ExtraTorrent shut down, worldwide searches for “torrent sites” shot through the roof, as seen below.

“Torrent sites” searches (30 days)

As is often the case, most users spread across sites that are already well-known to the file-sharing public.

TorrentFreak spoke to several people connected to top torrent sites who all confirmed that they had witnessed a significant visitor boost over the past week and a half. As the largest torrent site around, many see The Pirate Bay as the prime alternative.

And indeed, a TPB staffer confirms that they have seen a big wave of new visitors coming in, to the extent that it was causing “gateway errors,” making the site temporarily unreachable.

Thus far the new visitors remain rather passive though. The Pirate Bay hasn’t seen a large uptick in registrations and participation in the forum remains normal as well.

“Registrations haven’t suddenly increased or anything like that, and visitor numbers to the forum are about the same as usual,” TPB staff member Spud17 informs TorrentFreak.

Another popular torrent site, which prefers not to be named, reported a surge in traffic too. For a few days in a row, this site handled 100,000 extra unique visitors. A serious number, but the operator estimates that he only received about ten percent of ET’s total traffic.

More than 40% of these new visitors come from India, where ExtraTorrent was relatively popular. The site operator further notes that about two thirds have an adblocker, adding that this makes the new traffic pretty much useless, for those who are looking to make money.

That brings us to the last category of site owners, the opportunist copycats, who are actively trying to pull estranged ExtraTorrent visitors on board.

Earlier this week we wrote about the attempts of ExtraTorrent.cd, which falsely claims to have a copy of the ET database, to lure users. In reality, however, it’s nothing more than a Pirate Bay mirror with an ExtraTorrent skin.

And then there are the copycats over at ExtraTorrent.ag. These are the same people who successfully hijacked the EZTV and YIFY/YTS brands earlier. With ExtraTorrent.ag they now hope to expand their portfolio.

Over the past few days, we received several emails from other ExtraTorrent “copies”, all trying to get a piece of the action. Not unexpected, but pretty bold, particularly considering the fact that ExtraTorrent operator SaM specifically warned people not to fall for these fakes and clones.

With millions of people moving to new sites, it’s safe to say that the torrent ‘community’ is in turmoil once again, trying to find a new status quo. But this probably won’t last for very long.

While some of the die-hard ExtraTorrent fans will continue to mourn the loss of their home, history has told us that, in general, the torrent community is quick to adapt. Until the next site goes down…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Sony Files Lawsuits to Block Video Game Piracy Sites

Post Syndicated from Andy original https://torrentfreak.com/sony-files-lawsuits-block-video-game-piracy-sites-170519/

Once the preserve of countries like China whose government likes to routinely censor and control information, website blocking is now a regular occurrence elsewhere.

With commercial interests at their core, most website blocking efforts now take place under copyright law, to protect the business models of the world’s leading entertainment companies. While that usually involves those in the movie and music industries, occasionally others get involved too.

That’s now the case in Russia, where the UK division of Sony Interactive Entertainment (SIE) is currently taking steps to prevent the illegal distribution of its videogame products via online platforms.

According to local news outlet Izvestia, SIE has filed seven lawsuits at the Moscow City Court targeting sites that offer Sony titles without obtaining permission.

While they have not yet been named, the lawsuits indicate that copyright action has been taken against the sites before. This means that under Russia’s strict anti-piracy laws, these repeat offenders can be subjected to the so-called “eternal lock.” Under that regime, once ISP blockades are put in place, they stay in place forever.

Sergey Klisho, General Manager of Playstation in Russia, says that the lawsuits and subsequent court orders will enable the company to deal with the worst offenders.

“Positive changes in legislation aimed at protecting rightsholders, plus greater attention by state bodies to intellectual property rights violations, allows us today to begin to fight against piracy on the Internet,” Klisho says.

According to Vadim Ampelonsky, a spokesman for telecoms watchdog Roskomnadzor, protection of gaming titles is becoming more commonplace, with companies such as Sony and Ubisoft resorting to legal action against sites offering pirated titles.

For Sony, it appears this action might only be the beginning, with a company representative indicating that more lawsuits are likely to follow in the future. But just how effective are these blockades?

Russian torrent giant RuTracker, which is permanently blocked by all local ISPs, believes that the effect on its operations is limited. Just recently the site’s tracker ‘announce’ URLs were added to Russia’s blocklist, on top of the site’s main URLs which have been banned for some time.

That resulted in the site offering its own special app on Github this month, which allows users to automatically find proxy workarounds that render the current blocking efforts ineffective.

The tool is already proving a bit of a headache for Russian authorities. Internet Ombudsman Dmitry Marinichev says that Roskomnadzor won’t be able to ban the software since it can spread by many means.

“I do not believe that Roskomnadzor can block any application,” Marinichev says.

“You can prevent Google Play or Apple’s iTunes from distributing them. But there is still one hundred and one ways left for these applications to spread. Stopping the application itself from working on the device of a particular user is a daunting task.”

Interestingly, Marinichev also believes that targeting RuTracker is the wrong strategy, since the site itself isn’t distributing infringing content, its users are.

“Rightsholders can not punish RuTracker. They are not engaged in piracy. Piracy is carried out by the ones who distribute and duplicate. It is impossible for the law to solve technological problems,” he concludes.

It’s an opinion shared by many in the pirate community, who continue to find technical solutions to many legal roadblocks.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Judge Threatens to Bar ‘Copyright Troll’ Cases Over Lacking IP-location Evidence

Post Syndicated from Ernesto original https://torrentfreak.com/judge-threatens-to-bar-copyright-troll-cases-over-lacking-ip-location-evidence-170212/

While relatively underreported, many U.S. district courts are still swamped with lawsuits against alleged film pirates.

The copyright holders who initiate these cases generally rely on an IP address as evidence. This information is collected from BitTorrent swarms and linked to a geographical location using geolocation tools.
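In rough terms, a geolocation database maps sorted, non-overlapping IP ranges to locations, and a lookup is a binary search over the range starts. Below is a toy sketch of that mechanism; the ranges (drawn from documentation address space) and place names are made up for illustration and bear no relation to Maxmind’s actual data or file format:

```python
import bisect
from ipaddress import ip_address

# Toy table of (range_start, range_end, location) tuples. Real databases
# hold millions of ranges; these entries are purely hypothetical.
RANGES = sorted([
    (int(ip_address("203.0.113.0")), int(ip_address("203.0.113.255")), "Springfield, US"),
    (int(ip_address("198.51.100.0")), int(ip_address("198.51.100.255")), "Lyon, FR"),
])
STARTS = [r[0] for r in RANGES]

def locate(ip):
    """Return the location of the range containing `ip`, or None."""
    n = int(ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2]
    return None  # unmapped; some tools instead fall back to a default location

print(locate("203.0.113.42"))  # Springfield, US
print(locate("192.0.2.1"))     # None
```

The weak point is visible in the last line: an address that falls in no range (or a stale one) yields no answer, or a coarse default, which is exactly the inaccuracy courts have questioned.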

With this information in hand, they then ask the courts to grant a subpoena, directing Internet providers to hand over the personal details of the associated account holders.

Malibu Media, the Los Angeles-based company behind the ‘X-Art’ adult movies, is behind most of these cases. The company has filed thousands of lawsuits in recent years, targeting Internet subscribers whose accounts were allegedly used to share Malibu’s films via BitTorrent.

Increasingly, judges around the country have grown wary of these litigation efforts. This includes US Federal Judge William Alsup, who’s tasked with handling all such cases in the Northern District of California.

Responding to a recent request, Judge Alsup highlights the fact that Malibu filed a “monsoon” of hundreds of lawsuits over the past 18 months, but later dismissed many of them without specifying a reason.

The judge is skeptical about the motivation for these dismissals. In particular, because courts have previously highlighted that Maxmind’s geolocation tools, which are cited in the complaints, may not be entirely accurate. This could mean that the cases have been filed in the wrong court.

“Malibu Media’s voluntary dismissal without prejudice of groups of its cases is not a new pattern. A sizable portion of the cases from previous waves were terminated in the same way,” Judge Alsup writes (pdf).

“The practice has just become more frequent, and it follows skepticism by the undersigned judge and others around the country about the accuracy of the Maxmind database,” he adds.

This is not the first time that geolocation tools have been called into doubt, and to move the accuracy claims beyond Maxmind’s own “hearsay,” Judge Alsup now demands extra evidence.

In his order he denies the request to continue a case management conference in one of their cases. Instead, he will use that hearing to address the geolocation issues. In addition, all Malibu cases in the district may be barred if the accuracy of these tools isn’t “fully vetted.”

“That request is DENIED. Instead, Malibu Media is hereby ordered to SHOW CAUSE at that hearing, why the Court should not bar further Malibu Media cases in this district until the accuracy of the geolocation technology is fully vetted,” the order reads.

“To be clear, this order applies even if Malibu Media voluntarily dismisses this action,” Judge Alsup adds.

Denied

SJD, who follows the developments closely and first reported on the order, suspects that the IP-address ‘error rate’ may in fact be higher than most people believe. She therefore recommends that defense lawyers depose ISP employees to get to the bottom of the issue.

“If you are a defense attorney who litigates one of the BitTorrent infringement cases, I suggest deposing a Comcast employee tasked with subpoena processing. I suspect that the error rate is much higher than trolls want everyone to believe, and such testimony has a potential to become a heavy weapon in every troll victim’s arsenal,” SJD says.

In any case, it’s no secret that geolocation databases are far from perfect. Most are not updated instantly, which means that the information could be outdated, and other entries are plainly inaccurate.

This is something the residents of a Kansas farm know all too well, as their house is the default location of 600 million IP-addresses, which causes them quite a bit of trouble.

It will be interesting to see if Malibu will make any efforts to properly “vet” Maxmind’s database. It’s clear, however, that Judge Alsup will not let the company use his court before fully backing up their claims.

To be continued.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

New Torrent Search Engine Abuses Wikipedia to Get Traffic

Post Syndicated from Andy original https://torrentfreak.com/new-torrent-search-engine-abuses-wikipedia-to-get-traffic-170503/

In the world of file-sharing, few will argue that the environment in 2017 is very, very different from that of 2007. Running sites is far from straightforward, with all kinds of roadblocks likely to appear along the way.

One of the early problems is getting new sites off the ground. Ten years ago it was easy to find mainstream technology sites touting the latest additions to the pirate landscape. These days, however, reporting is mainly restricted to innovative platforms or others with some particularly newsworthy aspect.

With those loose advertising opportunities now largely off-limits, new sites and those on the fringes are often taking more unusual approaches. Today another raised its head revealing a particularly poorly judged promotional effort.

Back in April, a new torrent site hit the scene. Called RapidTorrent, it’s a meta-search engine that by definition indexes other torrent sites. Like many others, it’s doing whatever it can to get noticed, but it’s probably the first to try and do that by using Wikipedia.

Early today, the Wikipedia pages of a whole range of defunct and live torrent sites were edited to include links to RapidTorrent. One of the first was the page for defunct meta-search engine BTDigg.

“In May 2017 BTDig (sic) staff launched rapidtorrent, a fast torrent search engine,” the page now reads, along with a link to the new torrent site.

Similar edits could also be found for Demonoid’s page, which was also defaced to note that “In May 2017 Demonoid launched rapidtorrent, a fast torrent search engine.”

In fact, links to the new torrent site were inserted in a range of other pages including The Pirate Bay, Mininova, isoHunt and ExtraTorrent.

While many people might like the opportunity to discover a new torrent site, there can be few who appreciate the defacing of Wikipedia to achieve that goal. Millions of people rely on the platform for information, so when that is compromised by spam and what amounts to lies, people are seriously misled.

Indeed, striking while the iron’s hot, the Wikipedia spam this morning also extended to the French language Wikipedia page of NYAA, a site that unexpectedly shut down only this week.

As shown in the image below, the site’s real domain has been completely removed only to be replaced with RapidTorrent’s URL.

While the other edits are bad enough, this one seems particularly cruel as people looking for information on the disappeared site (which is in the top 500 sites in the world) will now be led directly to a non-affiliated domain.

Those that do follow the link are greeted with another message on the site itself which claims that the search engine is being run by the original NYAA team, while at the same time soliciting bitcoin donations.

For new torrent sites looking for an early boost in traffic, times are indeed hard, so it’s no surprise that some turn to unorthodox methods. However, undermining free and valuable resources like Wikipedia is certainly not the way to do it, will not produce the required results, and is only likely to annoy when the deception is unveiled.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Adblock Plus Acquires Pirate Bay Founder’s Micropayment Service Flattr

Post Syndicated from Ernesto original https://torrentfreak.com/adblock-plus-acquires-pirate-bay-founders-micropayment-service-flattr-170405/

After Pirate Bay co-founder Peter Sunde cut his ties with the notorious torrent site he moved on to several new projects.

The micropayment system Flattr is one of his best-known ventures. With Flattr, people can easily send money to the websites and services they like, without having to enter their payment details time and time again.

Last year Flattr partnered with Adblock Plus to launch a new service, Flattr Plus, allowing publishers to generate revenue directly from readers instead of forcing ads upon them.

Flattr Plus is built on the existing micropayment platform that was launched in 2010. Through a new browser add-on it allows users to automatically share money with website owners when an ad is blocked.

Today, the cooperation between the two companies is strengthened even further after eyeo, the parent company of Adblock Plus, acquired Flattr.

“Over the past ten months, we collaborated closely and in fact, became one team with a joint vision. So it was just natural to remove the remaining structural barriers and make it official,” Sunde says, commenting on the announcement.

“We’re excited to continue our work on the Flattr project to give back control to the users of the internet. They should decide how they want to use the internet and how they want to support the content they enjoy.”

Talking to TorrentFreak, Sunde says that he’ll stay on as an unpaid advisor. He has no official stake in Flattr so Hollywood shouldn’t expect to see any of the proceeds of the deal.

That said, he’s put a lot of work in the company over the past eight years, building it from the ground up, so it’s a big step to let someone else take over.

“It’s just that Flattr is my baby and she got married to someone who will take care of her from now,” says Sunde, summarizing his feelings.

Flattr co-founder Linus Olsson will stay on to lead the Flattr operation, and other staff members will keep their jobs as well. Sunde will have an advisory role in the company, and continues to work on various side-projects, including a new privacy service he’ll launch soon.


Assert() in the hands of bad coders

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/assert-in-hands-of-bad-coders.html

Using assert() creates better code, as programmers double-check assumptions. But only if used correctly. Unfortunately, bad programmers tend to use them badly, making code worse than if no asserts were used at all. They are a nuanced concept that most programmers don’t really understand.
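Used correctly, an assert states a condition that no valid execution can violate, so a failure always points at a bug in the calling code rather than at bad input. A minimal generic sketch (the function and its names are illustrative, not taken from any project discussed here):

```cpp
#include <cassert>
#include <cstddef>

// The assert documents a precondition that correct calling code must
// always satisfy; if it ever fires, a programmer made a mistake
// upstream. It never fires on user input, because user input should
// be validated before reaching this function.
std::size_t midpoint_index(std::size_t lo, std::size_t hi) {
    assert(lo <= hi && "caller must pass a well-formed range");
    return lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
}
```

Note that the assert has no side effects and only reads its arguments, so compiling it out with NDEBUG changes nothing about the program's behavior.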

We saw this recently with the crash of “Bitcoin Unlimited”, a version of Bitcoin that allows more transactions. They used an assert() to check the validity of input, and when they received bad input, most of the nodes in the network crashed.

The Bitcoin Classic/Unlimited code is full of bad uses of assert. The following examples are all from the file main.cpp.



Example #1: this line of code:

    if (nPos >= coins->vout.size() || coins->vout[nPos].IsNull())
        assert(false);

This use of assert is silly. The code should look like this:

    assert(nPos < coins->vout.size());
    assert(!coins->vout[nPos].IsNull());

This is the least of their problems. It’s understandable that as code ages, and things are added/changed, odd-looking code like this appears. But still, it’s an example of wrong thinking about asserts. Among the problems this would cause: if asserts were ever turned off, you’d have to deal with dead-code-elimination warnings in static analyzers.

Example #2: this line of code:

    assert(view.Flush());

The code within an assert is supposed to only read values, not change them. In this example, the Flush function changes things. Normally, asserts are compiled only into debug versions of the code and removed for release versions. However, doing so for Bitcoin would cause the program to behave incorrectly, as things like the Flush() function would no longer be called. That’s why they put the following at the top of the code, to inform people that debug must be left on:

#if defined(NDEBUG)
# error "Bitcoin cannot be compiled without assertions."
#endif

Example #3: this line of code:

    CBlockIndex* pindexNew = new CBlockIndex(block);
    assert(pindexNew);

The new operator never returns NULL, but throws its own exception instead. Not only is this a misconception about what new does, it’s also a misconception about assert. The assert is supposed to check for bad code, not to check for errors.
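As an aside, if a null-on-failure allocation contract is genuinely wanted, C++ does provide one, but it must be requested explicitly with the std::nothrow form of new. A hedged sketch (CBlockIndexStub is a stand-in for the real class, not the actual Bitcoin type):

```cpp
#include <new>      // std::nothrow, std::bad_alloc
#include <cassert>

// Stand-in for the real CBlockIndex class (illustrative only).
struct CBlockIndexStub { int height = 0; };

CBlockIndexStub* allocate_index() {
    // A plain `new CBlockIndexStub()` throws std::bad_alloc on
    // failure and never yields a null pointer, so an assert on its
    // result is dead code. To get null-on-failure, ask for it:
    return new (std::nothrow) CBlockIndexStub();
}
```

With the plain form, the right response to allocation failure is to let the std::bad_alloc exception propagate or catch it, not to test the pointer.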

Example #4: this line of code:

    BlockMap::iterator mi = mapBlockIndex.find(inv.hash);
    CBlock block;
    const Consensus::Params& consensusParams = Params().GetConsensus();
    if (!ReadBlockFromDisk(block, (*mi).second, consensusParams))
        assert(!"cannot load block from disk");

This is the feature that crashed Bitcoin Unlimited, and would also crash main Bitcoin nodes that use the “XTHIN” feature. The problem comes from parsing input (inv.hash). If the parsed input is bad, then the block won’t exist on the disk, and the assert will fail, and the program will crash.

Again, assert is for catching bad code that leads to impossible conditions, not for checking errors in input or errors returned by system functions.
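That distinction can be sketched in a few lines: expected runtime failures (bad input, missing files) get an error path the caller can handle, while assert is reserved for conditions only a bug can produce. The names below are illustrative, not taken from the Bitcoin source:

```cpp
#include <cassert>

// Expected failures get a result the caller can handle.
enum class LoadResult { Ok, NotFound };

LoadResult load_block(int block_height, bool exists_on_disk) {
    // Internal invariant: callers only pass non-negative heights.
    // If this fires, the *code* is wrong somewhere upstream.
    assert(block_height >= 0);

    // A missing block is an *expected* runtime condition, e.g. when
    // a peer sends an unknown hash: report it, don't crash the node.
    if (!exists_on_disk)
        return LoadResult::NotFound;

    // ... read the block from disk ...
    return LoadResult::Ok;
}
```

Under this structure, a peer feeding the node a bogus hash produces a handled error instead of the network-wide crash described above.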


Conclusion

The above examples were taken from only one file in the Bitcoin Classic source code. They demonstrate the typical ways bad programmers misuse asserts. It’d be a great example to show students of programming how not to write code.

More generally, though, it shows why there’s a difference between 1x and 10x programmers. 1x programmers, like those writing Bitcoin code, make the typical mistake of treating assert() as error checking. The nuance of assert is lost on them.


Updated to reflect that I’m referring to the “Bitcoin Classic” source code, which isn’t the “Bitcoin Core” source code. However, all the problems above appear to also be problems in the Bitcoin Core source code.

Brave: A Privacy Focused Browser With Built-in Torrent Streaming

Post Syndicated from Ernesto original https://torrentfreak.com/brave-a-privacy-focused-browser-with-built-in-torrent-streaming-170219/

After a reign of roughly a decade, basic old-fashioned BitTorrent clients have lost most of their appeal.

While they’re still one of the quickest tools to transfer data over the Internet, the software became somewhat outdated with the rise of video streaming sites and services.

But what if you could have the best of both worlds without having to install any separate applications?

This is where the Brave web browser comes in. First launched two months ago, the new browser is designed for privacy-conscious people who want to browse the web securely without any unnecessary clutter.

On top of that, it also supports torrent downloads out of the box, and even instant torrent streaming. To find out more, we reached out to lead developer Brian Bondy, who co-founded the project with his colleague Brendan Eich.

“Brave is a new, open source browser designed for both speed and security. It has a built-in adblocker that’s on by default to provide an ad-free and seamless browsing experience,” Bondy tells us.

Bondy says that Brave significantly improves browsing speeds while shielding users against malicious ads. It also offers a wide range of privacy and security features such as HTTPS Everywhere, script blocking, and third-party cookie blocking.

What caught our eye, however, was the built-in support for BitTorrent transfers that came out a short while ago. Powered by the novel WebTorrent technology, Brave can download torrents, through magnet links, directly from the browser.

While torrent downloading in a browser isn’t completely new (Opera has a similar feature, for example) Brave also supports torrent streaming. This means that users can view videos instantly as they would do on a streaming site.

“WebTorrent support lets Brave users stream torrents from their favorite sites right from the browser. There’s no need to use a separate program. This makes using torrents a breeze for beginners, a group that has sometimes found the technology a challenge to work with,” Bondy says.

Brave downloading

The image above shows the basic download page where users can also click on any video file to start streaming instantly. We tested the feature on a variety of magnet links, and it works very well.

On the implementation side, Brave received support from WebTorrent founder Feross Aboukhadijeh, who continues to lend a hand. Right now it is compatible with all traditional torrent clients and support for web peers will be added later.

“WebTorrent in Brave is compatible with all torrent apps. It uses TCP connections, the oldest and most widely supported way for BitTorrent clients to connect. We’re working on adding WebRTC support so that Brave users can connect to ‘web peers’,” Bondy says.

While the downloading and streaming process works well, there is also room for improvement. The user interface is fairly limited, for example, and basic features such as canceling or pausing a torrent are not available yet.

“Currently, we treat magnet links just like any other piece of web content, like a PDF file. To cancel a download, just close the tab,” Bondy notes.

What people should keep in mind though, considering Brave’s focus on privacy, is that torrent transfers are far from anonymous. Without a VPN or other anonymizer, third-party tracking outfits are bound to track the downloads or streams.

In addition to torrent streaming, the browser also comes with a Bitcoin-based micropayments system called Brave Payments. This enables users to automatically and privately pay their favorite websites, without being tracked.

Those who are interested in giving the browser a spin can head over to the official website. Brave is currently available on a variety of platforms including Windows, Linux, OS X, Android, and iOS.


Create Tables in Amazon Athena from Nested JSON and Mappings Using JSONSerDe

Post Syndicated from Rick Wiggins original https://aws.amazon.com/blogs/big-data/create-tables-in-amazon-athena-from-nested-json-and-mappings-using-jsonserde/

Most systems use JavaScript Object Notation (JSON) to log event information. Although it’s efficient and flexible, deriving information from JSON is difficult.

In this post, you will use the tightly coupled integration of Amazon Kinesis Firehose for log delivery, Amazon S3 for log storage, and Amazon Athena with JSONSerDe to run SQL queries against these logs without the need for data transformation or insertion into a database. It’s done in a completely serverless way. There’s no need to provision any compute.

Amazon SES provides highly detailed logs for every message that travels through the service and, with SES event publishing, makes them available through Firehose. However, parsing detailed logs for trends or compliance data would require a significant investment in infrastructure and development time. Athena is a boon to these data seekers because it can query this dataset at rest, in its native format, with zero code or architecture. On top of that, it uses largely native SQL queries and syntax.

Walkthrough: Establishing a dataset

We start with a dataset of an SES send event that looks like this:

{
	"eventType": "Send",
	"mail": {
		"timestamp": "2017-01-18T18:08:44.830Z",
		"source": "[email protected]",
		"sourceArn": "arn:aws:ses:us-west-2:111222333:identity/[email protected]",
		"sendingAccountId": "111222333",
		"messageId": "01010159b2c4471e-fc6e26e2-af14-4f28-b814-69e488740023-000000",
		"destination": ["[email protected]"],
		"headersTruncated": false,
		"headers": [{
				"name": "From",
				"value": "[email protected]"
			}, {
				"name": "To",
				"value": "[email protected]"
			}, {
				"name": "Subject",
				"value": "Bounced Like a Bad Check"
			}, {
				"name": "MIME-Version",
				"value": "1.0"
			}, {
				"name": "Content-Type",
				"value": "text/plain; charset=UTF-8"
			}, {
				"name": "Content-Transfer-Encoding",
				"value": "7bit"
			}
		],
		"commonHeaders": {
			"from": ["[email protected]"],
			"to": ["[email protected]"],
			"messageId": "01010159b2c4471e-fc6e26e2-af14-4f28-b814-69e488740023-000000",
			"subject": "Test"
		},
		"tags": {
			"ses:configuration-set": ["Firehose"],
			"ses:source-ip": ["54.55.55.55"],
			"ses:from-domain": ["amazon.com"],
			"ses:caller-identity": ["root"]
		}
	},
	"send": {}
}

This dataset contains a lot of valuable information about this SES interaction. There are thousands of datasets in the same format to parse for insights. Getting this data is straightforward.

1. Create a configuration set in the SES console or CLI that uses a Firehose delivery stream to send and store logs in S3 in near real-time.
NestedJson_1

2. Use SES to send a few test emails. Be sure to define your new configuration set during the send.

To do this, when you create your message in the SES console, choose More options. This will display more fields, including one for Configuration Set.
NestedJson_2
You can also use your SES verified identity and the AWS CLI to send messages to the mailbox simulator addresses.

$ aws ses send-email --to [email protected] --from [email protected] --subject "Bounced Like a Bad Check" --text "This should bounce" --configuration-set-name Firehose

3. Select your S3 bucket to see that logs are being created.
NestedJson_3

Walkthrough: Querying with Athena

Amazon Athena is an interactive query service that makes it easy to use standard SQL to analyze data resting in Amazon S3. Athena requires no servers, so there is no infrastructure to manage. You pay only for the queries you run. This makes it perfect for a variety of standard data formats, including CSV, JSON, ORC, and Parquet.

You now need to supply Athena with information about your data and define the schema for your logs with a Hive-compliant DDL statement. Athena uses Presto, a distributed SQL engine, to run queries. It also uses Apache Hive DDL syntax to create, drop, and alter tables and partitions. Athena uses an approach known as schema-on-read, which allows you to use this schema at the time you execute the query. Essentially, you are going to be creating a mapping for each field in the log to a corresponding column in your results.

If you are familiar with Apache Hive, you might find creating tables on Athena to be pretty similar. You can create tables by writing the DDL statement in the query editor or by using the wizard or JDBC driver. An important part of this table creation is the SerDe, a short name for “Serializer and Deserializer.” Because your data is in JSON format, you will be using org.openx.data.jsonserde.JsonSerDe, natively supported by Athena, to help you parse the data. Along the way, you will address two common problems with Hive/Presto and JSON datasets:

  • Nested or multi-level JSON.
  • Forbidden characters (handled with mappings).

In the Athena Query Editor, use the following DDL statement to create your first Athena table. For LOCATION, use the path to the S3 bucket for your logs:

CREATE EXTERNAL TABLE sesblog (
  eventType string,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:boolean,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>
              > 
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/' 

In this DDL statement, you are declaring each of the fields in the JSON dataset along with its Presto data type. You are using Hive collection data types like Array and Struct to set up groups of objects.

Walkthrough: Nested JSON

Defining the mail key is interesting because the JSON inside is nested three levels deep. In the example, you are creating a top-level struct called mail which has several other keys nested inside. This includes fields like messageId and destination at the second level. You can also see that the field timestamp is surrounded by the backtick (`) character. timestamp is also a reserved Presto data type so you should use backticks here to allow the creation of a column of the same name without confusing the table creation command. On the third level is the data for headers. It contains a group of entries in name:value pairs. You define this as an array with the structure of <name:string,value:string> defining your schema expectations here. You must enclose `from` in the commonHeaders struct with backticks to allow this reserved word column creation.

Now that you have created your table, you can fire off some queries!

SELECT * FROM sesblog limit 10;

This output shows your two top-level columns (eventType and mail) but this isn’t useful except to tell you there is data being queried. You can use some nested notation to build more relevant queries to target data you care about.

“Which messages did I bounce from Monday’s campaign?”

SELECT eventtype as Event,
       mail.destination as Destination, 
       mail.messageId as MessageID,
       mail.timestamp as Timestamp
FROM sesblog
WHERE eventType = 'Bounce' and mail.timestamp like '2017-01-09%'

“How many messages have I bounced to a specific domain?”

SELECT COUNT(*) as Bounces 
FROM sesblog
WHERE eventType = 'Bounce' and mail.destination like '%amazonses.com%'

“Which messages did I bounce to the domain amazonses.com?”

SELECT eventtype as Event,
       mail.destination as Destination, 
       mail.messageId as MessageID 
FROM sesblog
WHERE eventType = 'Bounce' and mail.destination like '%amazonses.com%'

There are much deeper queries that can be written from this dataset to find the data relevant to your use case. You might have noticed that your table creation did not specify a schema for the tags section of the JSON event. You’ll do that next.

Walkthrough: Handling forbidden characters with mappings

Here is a major roadblock you might encounter during the initial creation of the DDL to handle this dataset: you have little control over the data format provided in the logs and Hive uses the colon (:) character for the very important job of defining data types. You need to give the JSONSerDe a way to parse these key fields in the tags section of your event. This is some of the most crucial data in an auditing and security use case because it can help you determine who was responsible for a message creation.

In the Athena query editor, use the following DDL statement to create your second Athena table. For LOCATION, use the path to the S3 bucket for your logs:

CREATE EXTERNAL TABLE sesblog2 (
  eventType string,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:boolean,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>,
              tags:struct<ses_configurationset:string,ses_source_ip:string,ses_from_domain:string,ses_caller_identity:string>
              > 
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  "mapping.ses_configurationset"="ses:configuration-set",
  "mapping.ses_source_ip"="ses:source-ip", 
  "mapping.ses_from_domain"="ses:from-domain", 
  "mapping.ses_caller_identity"="ses:caller-identity"
  )
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/' 

In your new table creation, you have added a section for SERDEPROPERTIES. This allows you to give the SerDe some additional information about your dataset. For your dataset, you are using the mapping property to work around your data containing a column name with a colon smack in the middle of it. ses:configuration-set would be interpreted as a column named ses with the datatype of configuration-set. Unlike your earlier implementation, you can’t surround an operator like that with backticks. The JSON SERDEPROPERTIES mapping section allows you to account for any illegal characters in your data by remapping the fields during the table’s creation.

For example, you have simply defined that the column in the ses data known as ses:configuration-set will now be known to Athena and your queries as ses_configurationset. This mapping doesn’t do anything to the source data in S3. This is a Hive concept only. It won’t alter your existing data. You have set up mappings in the Properties section for the four fields in your dataset (changing all instances of colon to the better-supported underscore) and in your table creation you have used those new mapping names in the creation of the tags struct.

Now that you have access to these additional authentication and auditing fields, your queries can answer some more questions.

“Who is creating all of these bounced messages?”

SELECT eventtype as Event,
         mail.timestamp as Timestamp,
         mail.tags.ses_source_ip as SourceIP,
         mail.tags.ses_caller_identity as AuthenticatedBy,
         mail.commonHeaders."from" as FromAddress,
         mail.commonHeaders.to as ToAddress
FROM sesblog2
WHERE eventtype = 'Bounce'

Of special note here is the handling of the column mail.commonHeaders."from". Because from is a reserved operational word in Presto, surround it in quotation marks (") to keep it from being interpreted as an action.

Walkthrough: Querying using SES custom tagging

What makes this mail.tags section so special is that SES will let you add your own custom tags to your outbound messages. Now you can label messages with tags that are important to you, and use Athena to report on those tags. For example, if you wanted to add a Campaign tag to track a marketing campaign, you could use the --tags flag to send a message from the SES CLI:

$ aws ses send-email --to [email protected] --from [email protected] --subject "Perfume Campaign Test" --text "Buy our Smells" --configuration-set-name Firehose --tags Name=Campaign,Value=Perfume

This results in a new entry in your dataset that includes your custom tag.

…
		"tags": {
			"ses:configuration-set": ["Firehose"],
			"Campaign": ["Perfume"],
			"ses:source-ip": ["54.55.55.55"],
			"ses:from-domain": ["amazon.com"],
			"ses:caller-identity": ["root"],
			"ses:outgoing-ip": ["54.240.27.11"]
		}
…

You can then create a third table to account for the Campaign tagging.

CREATE EXTERNAL TABLE sesblog3 (
  eventType string,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:string,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>,
              tags:struct<ses_configurationset:string,Campaign:string,ses_source_ip:string,ses_from_domain:string,ses_caller_identity:string>
              > 
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  "mapping.ses_configurationset"="ses:configuration-set",
  "mapping.ses_source_ip"="ses:source-ip", 
  "mapping.ses_from_domain"="ses:from-domain", 
  "mapping.ses_caller_identity"="ses:caller-identity"
  )
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/' 

Then you can query on this custom value, which you can define on each outbound email.

SELECT eventtype as Event,
       mail.destination as Destination, 
       mail.messageId as MessageID,
       mail.tags.Campaign as Campaign
FROM sesblog3
where mail.tags.Campaign like '%Perfume%'

NestedJson_4

Walkthrough: Building your own DDL programmatically with hive-json-schema

In all of these examples, your table creation statements were based on a single SES interaction type, send. SES has other interaction types like delivery, complaint, and bounce, all of which have some additional fields. I’ll leave you with this: a master DDL that can parse all the different SES eventTypes and can create one table where you can begin querying your data.

Building a properly working JSONSerDe DDL by hand is tedious and a bit error-prone, so this time around you’ll be using an open source tool commonly used by AWS Support. All you have to do manually is set up your mappings for the unsupported SES columns that contain colons.

This sample JSON file contains all possible fields from across the SES eventTypes. It has been run through hive-json-schema, which is a great starting point to build nested JSON DDLs.

Here is the resulting “master” DDL to query all types of SES logs:

CREATE EXTERNAL TABLE sesmaster (
  eventType string,
  complaint struct<arrivaldate:string, 
                   complainedrecipients:array<struct<emailaddress:string>>,
                   complaintfeedbacktype:string, 
                   feedbackid:string, 
                   `timestamp`:string, 
                   useragent:string>,
  bounce struct<bouncedrecipients:array<struct<action:string, diagnosticcode:string, emailaddress:string, status:string>>,
                bouncesubtype:string, 
                bouncetype:string, 
                feedbackid:string,
                reportingmta:string, 
                `timestamp`:string>,
  mail struct<`timestamp`:string,
              source:string,
              sourceArn:string,
              sendingAccountId:string,
              messageId:string,
              destination:string,
              headersTruncated:boolean,
              headers:array<struct<name:string,value:string>>,
              commonHeaders:struct<`from`:array<string>,to:array<string>,messageId:string,subject:string>,
              tags:struct<ses_configurationset:string,ses_source_ip:string,ses_outgoing_ip:string,ses_from_domain:string,ses_caller_identity:string>
              >,
  send string,
  delivery struct<processingtimemillis:int,
                  recipients:array<string>, 
                  reportingmta:string, 
                  smtpresponse:string, 
                  `timestamp`:string>
  )           
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
  "mapping.ses_configurationset"="ses:configuration-set",
  "mapping.ses_source_ip"="ses:source-ip", 
  "mapping.ses_from_domain"="ses:from-domain", 
  "mapping.ses_caller_identity"="ses:caller-identity",
  "mapping.ses_outgoing_ip"="ses:outgoing-ip"
  )
LOCATION 's3://<YOUR BUCKET HERE>/FH2017/'

Conclusion

In this post, you’ve seen how to use Amazon Athena in real-world use cases to query the JSON used in AWS service logs. Some of these use cases are operational, like bounce and complaint handling. Others report on trends and marketing data, like querying deliveries from a campaign. Still others provide audit and security insights, like answering the question of which machine or user is sending all of these messages. You’ve also seen how to handle both nested JSON and SerDe mappings so that you can use your dataset in its native format without making changes to the data to get your queries running.

With the new AWS QuickSight suite of tools, you also now have a data source that can be used to build dashboards. This makes reporting on this data even easier. For information about using Athena as a QuickSight data source, see this blog post.

There are also optimizations you can make to these tables to increase query performance or to set up partitions to query only the data you need and restrict the amount of data scanned. If you only need to report on data for a finite amount of time, you could optionally set up S3 lifecycle configuration to transition old data to Amazon Glacier or to delete it altogether.

Feel free to leave questions or suggestions in the comments.

About the Author

Rick Wiggins is a Cloud Support Engineer for AWS Premium Support. He works with our customers to build solutions for Email, Storage and Content Delivery, helping them spend more time on their business and less time on infrastructure. In his spare time, he enjoys traveling the world with his family and volunteering at his children’s school teaching lessons in Computer Science and STEM.

Internet Backbone Provider Cogent Blocks Pirate Bay and other “Pirate” Sites

Post Syndicated from Ernesto original https://torrentfreak.com/internet-backbone-provider-cogent-blocks-pirate-bay-and-other-pirate-sites-170209/

Internet backbone providers are an important part of the Internet ecosystem. These commercial Internet services have datacenters all over the world and help the traffic of millions of people flow from A to B.

When the average Internet user types in a domain name, a request is sent through a series of networks before it finally reaches the server of the website.

This also applies to The Pirate Bay and other pirate sites such as Primewire, Movie4k, TorrentProject and TorrentButler. However, for more than a week now the US-based backbone provider Cogent has stopped passing on traffic to these sites.

The sites in question all use CloudFlare, which assigned them the public IP-address 104.31.19.30. While this can be reached just fine by most people, users attempting to pass requests through Cogent’s network are unable to access them.

The issue is not limited to a single ISP and affects a small portion of users all over the world, the United States and Europe included. According to Cogent’s own backbone routing check, it applies to the company’s entire global network.

No route to The Pirate Bay

Since routing problems can sometimes occur by mistake, TorrentFreak reached out to Cogent to ask if the block is intentional and if so, what purpose it serves.

A Cogent spokesperson informed us that they looked into the issue but that the company “does not discuss such decisions with third parties,” while adding that they do not control the DNS records of these sites.

The fact that the IP-address of The Pirate Bay and the other sites remains inaccessible suggests that it is indeed intentional. But for now, we can only speculate what the reason or target is.

Since so many of the sites involved are accused of facilitating copyright infringement, it seems reasonable to view that as a possible cause. However, this remains unconfirmed for now.

The Pirate Bay team is aware of the issue and tells us that users affected by the roadblock should contact Cogent with their complaints, hoping that will change things.

In the meantime, people who want to access the blocked sites have no other option than to come up with a workaround of their own. According to various users the ‘roadblock’ can be bypassed with a VPN or Tor, and some proxy sites appear to work fine too.

The websites themselves can still update their DNS records and switch to a new IP-address, which some appear to have done, but if they are the target then it’s likely that their new IP-address will be blocked soon after.

The following sites are affected by the Cogent blackhole, but there may be more.

The Pirate Bay, Primewire, Movie4k, Torrentproject, Couch-tuner, Cyro.se, Watchseriesfree, Megashare, Hdmovieswatch, Torrentbutler.eu, Afdah, Movie.to, Mp3monkey, Rnbxclusive.me, Torrentcd, Moviesub, Iptorrents, Putlocker.com and Torrentz.cd.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Livestream ‘Piracy Fest’ on Facebook Shut Down by Foxtel

Post Syndicated from Ernesto original https://torrentfreak.com/livestream-piracy-fest-on-facebook-shut-down-by-foxtel-lawsuits-are-coming-170204/

On Friday evening millions of Australians tuned in to the long-awaited rematch between Australian boxers Anthony Mundine and Danny Green.

Those who wanted to watch it live couldn’t do so cheaply, as it was streamed exclusively by the pay TV provider Foxtel for AUS$59.95.

However, the Internet wouldn’t be the Internet if people didn’t try to find ways around this expensive ‘roadblock.’ And indeed, as soon as the broadcast started, tens of thousands of people tuned in to unauthorized live streams, including several homebrew re-broadcasts through Facebook.

While it’s not uncommon for unauthorized sports streams to appear on social media, the boxing match triggered a true piracy fest. At one point more than 150,000 fans streamed a feed that was shown from the account of Facebook user Darren Sharpe, who gained instant fame.

Unfortunately for him, this didn’t go unnoticed by the rightsholders. Foxtel was quick to track down Mr. Sharpe and rang him up during the match, a call the Facebook streamer recorded and later shared on YouTube.


“Sorry mate, I just had to chuck that on mute. So you want me to turn off my Foxtel because I can’t stream it?” Darren asked the Foxtel representative.

“No. I want you to stop streaming it on Facebook. Just keep watching the fight at home, there’s no dramas with that. Just don’t stream it on Facebook,” the Foxtel rep replied.

“Mate, I’ve got 78,000 viewers here that aren’t going to be happy with you. I just don’t see why it’s [not] legal. I’m not doing anything wrong, mate. What can you do to me?” Darren said in response.

“It’s a criminal offence against the copyright act, mate. We’ve got technical protection methods inside the box so exactly this thing can’t happen,” the representative replied.


Mr. Sharpe didn’t seem to be very impressed by the allegations, but Foxtel soon showed how serious it was. Since Facebook didn’t turn off the infringing streams right away, the pay TV provider decided to display customers’ account numbers on the video streams so it could disable the associated feeds.

According to Foxtel CEO Peter Tonagh, the streamers in question will soon face legal action. This means that the “free” streaming bonanza could turn out to be quite expensive after all.

ABC reports that Brett Hevers, another Facebook user whose unauthorized broadcast reached more than 150,000 people at its peak, believes he has done nothing wrong.

“I streamed the Mundine and Green fight mainly just so a few mates could watch it. A few people couldn’t afford the fee or didn’t have Foxtel so I just thought I’d put it up for them,” Hevers said.

“All of a sudden 153,000 people I think at the peak were watching it,” he adds.

Anticipating significant legal bills, fellow Facebook streamer Darren Sharpe has already decided to start a GoFundMe campaign to cover the cost. At the time of writing, the campaign has already reached over a quarter of the $10,000 goal.


Demonoid Suffers Extended Downtime Due to Hosting Issue

Post Syndicated from Ernesto original https://torrentfreak.com/demonoid-suffers-extended-downtime-due-hosting-issue-170126/

As one of the oldest torrent communities online, the semi-private Demonoid tracker has had its fair share of troubles over the years.

The site has gone offline on several occasions in the past. Most notable was the 20-month downtime streak, which began in 2012 following a DDoS attack and legal troubles in Ukraine.

Since then, Demonoid has slowly but steadily rebuilt its community to the point where it now draws millions of visitors per month, putting it back among the largest torrent sites once again.

However, to the surprise of many, the site went dark again earlier this week. People who try to access the latest Dnoid.me domain will find that nothing comes up at all.

Initially, the downtime seemed to be little to worry about. On Tuesday the Demonoid crew announced that there was going to be a planned server change, cautioning users not to panic.

Don’t panic


Not everyone had seen the announcement though, and for those who did see it, an outage of two full days for a server move seemed a bit much.

To find out more, TorrentFreak reached out to the Demonoid team via the official Twitter account. They informed us that they’ve run into some unforeseen problems, but nothing that can’t be overcome.

The team is currently working on a fix and hopes to bring the site back online as soon as possible. But, depending on how things go, it may take a couple of extra days. The team made clear that there are no legal issues, but for now they prefer to keep the finer details in-house.

For now, Demonoid users have no option but to wait patiently until the site returns, or to find an alternative in the meantime.

This is easier said than done for some. While the active Demonoid community is a bit smaller now than it was at its height, it is still a prime location for users who are sharing more obscure content that’s hard to find on public sites.


Canadian Stock Exchange Blocked Megaupload 2.0 Plans

Post Syndicated from Ernesto original https://torrentfreak.com/canadian-stock-exchange-blocked-megaupload-2-0-plans-170124/

Last Friday marked exactly five years since the original Megaupload service was taken offline as part of a U.S. criminal investigation.

Kim Dotcom wanted to use this special date to announce new details about its successor Megaupload 2.0 and the associated Bitcache service. However, minutes before the announcement, something got in the way.

Today, Kim Dotcom, chief “evangelist” of the service, explains what happened. The original idea was to announce a prominent merger deal with a Canadian company that would bring in an additional $12 million in capital.

Megaupload 2.0 and Bitcache already secured their initial investment round last October. Through Max Keiser’s crowdfunding platform Bank to the Future, the venture raised well over a million dollars from 354 investors in just two weeks.

To bring in more capital, the startup had quietly struck a stock and cash merger deal with a publicly listed company on the Canadian stock exchange, at a $100 million valuation.

This news was supposed to break last Friday, but just minutes before going public the Canadian Securities Exchange got in the way, according to Dotcom.

The Canadian company sent a draft press release of its merger plans to the exchange, which swiftly came back with some objections, effectively blocking the announcement.

“Trading of the stock was halted while waiting for a response. The Exchange demonstrated a bias against the merger and requested further detailed and intrusive information,” a statement released by Dotcom says.

Dotcom doesn’t reveal what the concerns of the Exchange were, but it’s not unlikely that the links to a pending criminal Megaupload case in the United States may play a role.

Megaupload 2.0 and Bitcache put their lawyers on the case, but the company eventually decided to back away from the planned merger.

“Bitcache feels it is important as a technology startup to stay nimble and reduce corporate complexity in favor of technology development. The experience of dealing with the Exchange has only served to encourage that view,” Dotcom’s announcement reads.

While the original plan has been scuppered, Dotcom and his team will now focus on getting the service ready for a first beta release. A proof of concept is scheduled to come out during the second quarter of the year, soon followed by a closed beta.

The first open release is penciled in for the end of the year under the current plan, Dotcom informs us.

From what has been revealed thus far, Megaupload 2.0 and the associated Bitcache platform will allow people to share and store files, linking every file-transfer to a bitcoin transaction.

Unlike the original Megaupload, the new version isn’t going to store all files itself. Instead, it plans to use third-party providers such as Maidsafe and Storj.

“Megaupload 2 will be a caching provider for popular files on special high-speed servers that serve the files from ram. Long term storage will mostly be provided by numerous third-party sites that we are partnering with. You can expect more details on January 20,” Dotcom previously told us.

Prospective users who are eager to see what the service has in store have to be patient for a little longer, but Dotcom is confident that it will be a game-changer on multiple fronts.
