Tag Archives: Locks

Enabling Two-Factor Authentication For Your Web Application

Post Syndicated from Bozho original https://techblog.bozho.net/enabling-two-factor-authentication-web-application/

It’s almost always a good idea to support two-factor authentication (2FA), especially for back-office systems. 2FA comes in many different forms, including SMS, TOTP, and even hardware tokens.

Enabling any of them follows a similar flow:

  • The user goes to their profile page (skip this if you want to force 2FA upon registration)
  • Clicks “Enable two-factor authentication”
  • Enters some data to enable the particular 2FA method (phone number, TOTP verification code, etc.)
  • The next time they log in, in addition to the username and password, the login form requests the second factor (verification code) and sends it along with the credentials

I will focus on Google Authenticator, which uses TOTP (a time-based one-time password) to generate a sequence of verification codes. The idea is that the server and the client application share a secret key. Based on that key and on the current time, both come up with the same code. Of course, clocks are not perfectly synced, so there’s a window of a few codes that the server accepts as valid.
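To make the math concrete, here’s a minimal, self-contained Java sketch of the TOTP computation from RFC 6238 (my own illustration, not production code): it derives a six-digit code from the shared secret and the current 30-second time step.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TotpSketch {
    // Returns the 6-digit TOTP code for the given secret and Unix time,
    // using the standard 30-second time step and HMAC-SHA1.
    static int totp(byte[] secret, long unixSeconds) throws Exception {
        long step = unixSeconds / 30;
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) { // 8-byte big-endian counter
            msg[i] = (byte) (step & 0xff);
            step >>= 8;
        }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);
        int offset = hash[hash.length - 1] & 0x0f; // dynamic truncation (RFC 4226)
        int binary = ((hash[offset] & 0x7f) << 24)
                | ((hash[offset + 1] & 0xff) << 16)
                | ((hash[offset + 2] & 0xff) << 8)
                | (hash[offset + 3] & 0xff);
        return binary % 1_000_000; // keep the last six digits
    }
}

Both sides run this same computation; a real server additionally accepts codes from a few neighbouring time steps, which is exactly the validity window mentioned above.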

How to implement that with Java (on the server)? Using the GoogleAuth library. The flow is as follows:

  • The user goes to their profile page
  • Clicks “Enable two-factor authentication”
  • The server generates a secret key, stores it as part of the user profile and returns a URL to a QR code
  • The user scans the QR code with their Google Authenticator app thus creating a new profile in the app
  • The user enters the verification code shown in the app in a field that has appeared together with the QR code and clicks “confirm”
  • The server marks the 2FA as enabled in the user profile
  • If the user doesn’t scan the code or doesn’t complete the verification, the user profile will contain just an orphaned secret key, but 2FA won’t be marked as enabled
  • There should be an option to later disable the 2FA from their user profile page

The most important bit from a theoretical point of view here is the sharing of the secret key. The crypto is symmetric, so both sides (the authenticator app and the server) have the same key, which is shared via a QR code that the user scans. If an attacker has control of the user’s machine at that point, the secret can be leaked and the 2FA abused by the attacker as well. But that’s not in the threat model: in other words, if the attacker has access to the user’s machine, the damage is already done anyway.

Upon login, the flow is as follows:

  • The user enters username and password and clicks “Login”
  • Using an AJAX request, the page asks the server whether this email has 2FA enabled
  • If 2FA is not enabled, just submit the username & password form
  • If 2FA is enabled, the login form is not submitted, but instead an additional field is shown to let the user input the verification code from the authenticator app
  • After the user enters the code and presses login, the form is submitted. This can reuse the same login button, use a separate “verify” button, or move the verification input and button to an entirely new screen (hiding the username/password inputs).
  • The server then checks again whether the user has 2FA enabled and, if so, verifies the verification code. If it matches, login succeeds; if not, login fails and the user can re-enter the credentials and the verification code. Note that you can return different responses depending on whether the username/password or the verification code was wrong. You can also verify the username and password before even showing the verification code input; that approach is arguably better, because it doesn’t reveal to a potential attacker whether a given user has 2FA enabled.

While I’m speaking of username and password, this can apply to any other authentication method. After you get a success confirmation from an OAuth / OpenID Connect / SAML provider, or after you get a token from SecureLogin, you can request the second factor (code).

In code, the above processes look as follows (using Spring MVC; I’ve merged the controller and service layers for brevity; you can replace the @AuthenticationPrincipal bit with your own way of supplying the currently logged-in user’s details to the controllers). The methods are assumed to live in a controller mapped to “/user/”:

@RequestMapping(value = "/init2fa", method = RequestMethod.POST)
@ResponseBody
public String initTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    // generate a new shared secret and persist it; 2FA is not yet marked as enabled
    GoogleAuthenticatorKey googleAuthenticatorKey = googleAuthenticator.createCredentials();
    user.setTwoFactorAuthKey(googleAuthenticatorKey.getKey());
    dao.update(user);
    return GoogleAuthenticatorQRGenerator.getOtpAuthURL(GOOGLE_AUTH_ISSUER, user.getEmail(), googleAuthenticatorKey);
}

@RequestMapping(value = "/confirm2fa", method = RequestMethod.POST)
@ResponseBody
public boolean confirmTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token, @RequestParam("code") int code) {
    User user = getLoggedInUser(token);
    // authorize() verifies the code against the stored secret, within a small clock-drift window
    boolean result = googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code);
    user.setTwoFactorAuthEnabled(result);
    dao.update(user);
    return result;
}

@RequestMapping(value = "/disable2fa", method = RequestMethod.GET)
@ResponseBody
public void disableTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    user.setTwoFactorAuthKey(null);
    user.setTwoFactorAuthEnabled(false);
    dao.update(user);
}

@RequestMapping(value = "/requires2fa", method = RequestMethod.POST)
@ResponseBody
public boolean login(@RequestParam("email") String email) {
    // TODO consider verifying the password here in order not to reveal that a given user uses 2FA
    return userService.getUserDetailsByEmail(email).isTwoFactorAuthEnabled();
}
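The one step not shown above is the login-time check itself. Here’s a minimal sketch of what it could look like, assuming the same googleAuthenticator bean plus a hypothetical checkPassword method on userService (in a real application this logic would more likely live in your authentication provider than in a controller):

@RequestMapping(value = "/login", method = RequestMethod.POST)
@ResponseBody
public boolean performLogin(@RequestParam("email") String email,
        @RequestParam("password") String password,
        @RequestParam(value = "code", required = false) Integer code) {
    User user = userService.getUserDetailsByEmail(email);
    // verify the password first, so 2FA status is not revealed for bad credentials
    if (user == null || !userService.checkPassword(user, password)) {
        return false;
    }
    if (user.isTwoFactorAuthEnabled()) {
        // reject if no code was supplied or it doesn't match the shared secret
        if (code == null || !googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code)) {
            return false;
        }
    }
    // ...establish the session or issue a token here...
    return true;
}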

On the client side, these are simple AJAX requests to the above methods (sidenote: I kind of feel the term AJAX is no longer trendy, but I don’t know what else to call them. Async? Background? JavaScript?).

$("#two-fa-init").click(function() {
    $.post("/user/init2fa", function(qrImage) {
	$("#two-fa-verification").show();
	$("#two-fa-qr").prepend($('<img>',{id:'qr',src:qrImage}));
	$("#two-fa-init").hide();
    });
});

$("#two-fa-confirm").click(function() {
    var verificationCode = $("#verificationCode").val().replace(/ /g, '');
    $.post("/user/confirm2fa?code=" + verificationCode, function() {
       $("#two-fa-verification").hide();
       $("#two-fa-qr").hide();
       $.notify("Successfully enabled two-factor authentication", "success");
       $("#two-fa-message").html("Successfully enabled");
    });
});

$("#two-fa-disable").click(function() {
    $.post("/user/disable2fa", function(qrImage) {
       window.location.reload();
    });
});

The login form code depends very much on the existing login form you are using, but the point is to call /requires2fa with the email (and password) to check whether 2FA is enabled, and then show a verification code input.

Overall, the implementation of two-factor authentication is simple, and I’d recommend it for most systems where security is more important than simplicity of the user experience.


Backing Up Linux to Backblaze B2 with Duplicity and Restic

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/backing-linux-backblaze-b2-duplicity-restic/

Linux users have a variety of options for handling data backup. The choices range from free and open-source programs to paid commercial tools, and include applications that are purely command-line based (CLI) and others that have a graphical interface (GUI), or both.

If you take a look at our Backblaze B2 Cloud Storage Integrations page, you will see a number of offerings that enable you to back up your Linux desktops and servers to Backblaze B2. These include CloudBerry, Duplicity, Duplicacy, 45 Drives, GoodSync, HashBackup, QNAP, Restic, and Rclone, plus other choices for NAS and hybrid uses.

In this post, we’ll discuss two popular open-source command-line programs: one older, Duplicity, and a new player, Restic.

Old School vs. New School

We’re highlighting Duplicity and Restic today because they exemplify two different philosophical approaches to data backup: “Old School” (Duplicity) vs “New School” (Restic).

Old School (Duplicity)

In the old school model, data is written sequentially to the storage medium. Once a section of data is recorded, new data is written starting where that section of data ends. It’s not possible to go back and change the data that’s already been written.

This old-school model has long been associated with the use of magnetic tape, a prime example of which is the LTO (Linear Tape-Open) standard. In this “write once” model, files are always appended to the end of the tape. If a file is modified and overwritten or removed from the volume, the associated tape blocks are not freed up: they are simply marked as unavailable, and the used volume capacity is not recovered. Data is deleted and capacity recovered only if the whole tape is reformatted. As a Linux/Unix user, you are undoubtedly familiar with the TAR archive format, which is an acronym for Tape ARchive. TAR has been around since 1979 and was originally developed to write data to sequential I/O devices with no file system of their own.

It is from the use of tape that we get the full backup/incremental backup approach to backups. A backup sequence begins with a full backup of data. Each incremental backup contains what’s changed since the previous backup, until the next full backup is made and the process starts over, filling more and more tape or whatever medium is being used.

This is the model used by Duplicity: full and incremental backups. Duplicity backs up files by producing encrypted, digitally signed, versioned, TAR-format volumes and uploading them to a remote location, including Backblaze B2 Cloud Storage. Released under the terms of the GNU General Public License (GPL), Duplicity is free software.

With Duplicity, the first archive is a complete (full) backup, and subsequent (incremental) backups only add differences from the latest full or incremental backup. Chains consisting of a full backup and a series of incremental backups can be recovered to the point in time that any of the incremental steps were taken. If any of the incremental backups are missing, then reconstructing a complete and current backup is much more difficult and sometimes impossible.

Duplicity is available under many Unix-like operating systems (such as Linux, BSD, and Mac OS X) and ships with many popular Linux distributions including Ubuntu, Debian, and Fedora. It also can be used with Windows under Cygwin.

We recently published a KB article on How to configure Backblaze B2 with Duplicity on Linux that demonstrates how to set up Duplicity with B2 and back up and restore a directory from Linux.

New School (Restic)

With the arrival of non-sequential storage media, such as disk drives, and of new ideas such as deduplication, comes the new-school approach used by Restic. Data can be written and changed anywhere on the storage medium. Much of the efficiency comes from deduplication, a process that eliminates redundant copies of data and reduces storage overhead. Deduplication ensures that only one unique instance of each piece of data is retained on the storage media, greatly increasing storage efficiency and flexibility.
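To make the idea concrete, here’s a toy sketch of chunk-level deduplication (my own illustration; Restic actually uses content-defined chunking with a rolling hash rather than the fixed-size chunks used here). Each chunk is stored under its hash, so a chunk that has been seen before costs no additional space:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

public class DedupSketch {
    // chunk hash -> chunk bytes; duplicate chunks are stored only once
    private final Map<String, byte[]> store = new HashMap<>();

    public void backup(Path file) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) {
                byte[] chunk = Arrays.copyOf(buf, n);
                String key = Base64.getEncoder().encodeToString(sha.digest(chunk));
                store.putIfAbsent(key, chunk); // no-op if this chunk is already stored
            }
        }
    }
}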

Restic is a newer multi-platform command-line backup program designed to be fast, efficient, and secure. It supports a variety of backends for storing backups, including a local server, an SFTP server, an HTTP REST server, and a number of cloud storage providers, including Backblaze B2.

Files are uploaded to a B2 bucket as deduplicated, encrypted chunks. Each time a backup runs, only changed data is backed up. On each backup run, a snapshot is created enabling restores to a specific date or time.

Restic assumes that the storage location for its repository is shared, so it always encrypts the backed-up data. This is in addition to any encryption and security provided by the storage provider.

Restic is free and open-source software, licensed under the BSD 2-Clause License and actively developed on GitHub.

There’s a lot more you can do with Restic, including adding tags, mounting a repository locally, and scripting. To learn more, you can review the documentation at https://restic.readthedocs.io.

To coincide with this blog post, we published a KB article, How to configure Backblaze B2 with Restic on Linux, in which we show how to set up Restic for use with B2 and how to back up and restore a home directory from Linux to B2.

Which is Right for You?

While Duplicity is a popular, widely available, and useful program, many users of cloud storage solutions such as B2 are moving to new-school solutions like Restic that take better advantage of the non-sequential access capabilities and speed of the modern storage media used by cloud storage providers.

Tell us how you’re backing up Linux

Please let us know in the comments what you’re using for Linux backups, and if you have experience using Duplicity, Restic, or other backup software with Backblaze B2.


Coaxing 2D platforming out of Unity

Post Syndicated from Eevee original https://eev.ee/blog/2017/10/13/coaxing-2d-platforming-out-of-unity/

An anonymous donor asked a question that I can’t even begin to figure out how to answer, but they also said anything else is fine, so here’s anything else.

I’ve been avoiding writing about game physics, since I want to save it for ✨ the book I’m writing ✨, but that book will almost certainly not touch on Unity. Here, then, is a brief run through some of the brick walls I ran into while trying to convince Unity to do 2D platforming.

This is fairly high-level — there are no blocks of code or helpful diagrams. I’m just getting this out of my head because it’s interesting. If you want more gritty details, I guess you’ll have to wait for ✨ the book ✨.

The setup

I hadn’t used Unity before. I hadn’t even used a “real” physics engine before. My games so far have mostly used LÖVE, a Lua-based engine. LÖVE includes box2d bindings, but for various reasons (not all of them good), I opted to avoid them and instead write my own physics completely from scratch. (How, you ask? ✨ Book ✨!)

I was invited to work on a Unity project, Chaos Composer, that someone else had already started. It had basic movement already implemented; I taught myself Unity’s physics system by hacking on it. It’s entirely possible that none of this is actually the best way to do anything, since I was really trying to reproduce my own homegrown stuff in Unity, but it’s the best I’ve managed to come up with.

Two recurring snags were that you can’t ask Unity to do multiple physics updates in a row, and sometimes getting the information I wanted was difficult. Working with my own code spoiled me a little, since I could invoke it at any time and ask it anything I wanted; Unity, on the other hand, is someone else’s black box with a rigid interface on top.

Also, wow, Googling for a lot of this was not quite as helpful as expected. A lot of what’s out there is just the first thing that works, and often that’s pretty hacky and imposes severe limits on the game design (e.g., “this won’t work with slopes”). Basic movement and collision are the first thing you do, which seems to me like the worst time to be locking yourself out of a lot of design options. I tried very (very, very, very) hard to minimize those kinds of constraints.

Problem 1: Movement

When I showed up, movement was already working. Problem solved!

Like any good programmer, I immediately set out to un-solve it. Given a “real” physics engine like the one Unity prominently features, you have two options: ⓐ treat the player as a physics object, or ⓑ don’t. The existing code went with option ⓑ, like I’d done myself with LÖVE, and like I’d seen countless people advise: using a physics sim makes for bad platforming.

But… why? I believed it, but I couldn’t concretely defend it. I had to know for myself. So I started a blank project, drew some physics boxes, and wrote a dozen-line player controller.

Ah! Immediate enlightenment.

If the player was sliding down a wall, and I tried to move them into the wall, they would simply freeze in midair until I let go of the movement key. The trouble is that the physics sim works in terms of forces — moving the player involves giving them a nudge in some direction, like a giant invisible hand pushing them around the level. Surprise! If you press a real object against a real wall with your real hand, you’ll see the same effect — friction will cancel out gravity, and the object will stay in midair.

Platformer movement, as it turns out, doesn’t make any goddamn physical sense. What is air control? What are you pushing against? Nothing, really; we just have it because it’s nice to play with, because not having it is a nightmare.

I looked to see if there were any common solutions to this, and I only really found one: make all your walls frictionless.

Game development is full of hacks like this, and I… don’t like them. I can accept that minor hacks are necessary sometimes, but this one makes an early and widespread change to a fundamental system to “fix” something that was wrong in the first place. It also imposes an “invisible” requirement, something I try to avoid at all costs — if you forget to make a particular wall frictionless, you’ll never know unless you happen to try sliding down it.

And so, I swiftly returned to the existing code. It wasn’t too different from what I’d come up with for LÖVE: it applied gravity by hand, tracked the player’s velocity, computed the intended movement each frame, and moved by that amount. The interesting thing was that it used MovePosition, which schedules a movement for the next physics update and stops the movement if the player hits something solid.

It’s kind of a nice hybrid approach, actually; all the “physics” for conscious actors is done by hand, but the physics engine is still used for collision detection. It’s also used for collision rejection — if the player manages to wedge themselves several pixels into a solid object, for example, the physics engine will try to gently nudge them back out of it with no extra effort required on my part. I still haven’t figured out how to get that to work with my homegrown stuff, which is built to prevent overlap rather than to jiggle things out of it.

But wait, what about…

Our player is a dynamic body with rotation lock and no gravity. Why not just use a kinematic body?

I must be missing something, because I do not understand the point of kinematic bodies. I ran into this with Godot, too, which documented them the same way: as intended for use as players and other manually-moved objects. But by default, they don’t even collide with other kinematic bodies or static geometry. What? There’s a checkbox to turn this on, which I enabled, but then I found out that MovePosition doesn’t stop kinematic bodies when they hit something, so I would’ve had to cast along the intended path of movement to figure out when to stop, thus duplicating the same work the physics engine was about to do.

But that’s impossible anyway! Static geometry generally wants to be made of edge colliders, right? They don’t care about concave/convex. Imagine the player is standing on the ground near a wall and tries to move towards the wall. Both the ground and the wall are different edges from the same edge collider.

If you try to cast the player’s hitbox horizontally, parallel to the ground, you’ll only get one collision: the existing collision with the ground. Casting doesn’t distinguish between touching and hitting. And because Unity only reports one collision per collider, and because the ground will always show up first, you will never find out about the impending wall collision.

So you’re forced to either use raycasts for collision detection or decomposed polygons for world geometry, both of which are slightly worse tools for no real gain.

I ended up sticking with a dynamic body.


Oh, one other thing that doesn’t really fit anywhere else: keep track of units! If you’re adding something called “velocity” directly to something called “position”, something has gone very wrong. Acceleration is distance per time squared; velocity is distance per time; position is distance. You must multiply or divide by time to convert between them.

I never even, say, add a constant directly to position every frame; I always phrase it as velocity and multiply by Δt. It keeps the units consistent: time is always in seconds, not in tics.
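To pin the units down, here’s a minimal sketch in Java syntax (the project itself is in C#; this is just my illustration of the bookkeeping):

class Body {
    double position;     // metres
    double velocity;     // metres per second
    double acceleration; // metres per second squared

    // Advance the simulation by dt seconds. Every quantity added to
    // position has been multiplied by dt enough times to be a distance.
    void step(double dt) {
        velocity += acceleration * dt;
        position += velocity * dt;
    }
}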

Problem 2: Slopes

Ah, now we start to get off in the weeds.

A sort of pre-problem here was detecting whether we’re on a slope, which means detecting the ground. The codebase originally used a manual physics query of the area around the player’s feet to check for the ground, which seems to be somewhat common, but that can’t tell me the angle of the detected ground. (It’s also kind of error-prone, since “around the player’s feet” has to be specified by hand and may not stay correct through animations or changes in the hitbox.)

I replaced that with what I’d eventually settled on in LÖVE: detect the ground by detecting collisions, and looking at the normal of the collision. A normal is a vector that points straight out from a surface, so if you’re standing on the ground, the normal points straight up; if you’re on a 10° incline, the normal points 10° away from straight up.

Not all collisions are with the ground, of course, so I assumed something is ground if the normal pointed away from gravity. (I like this definition more than “points upwards”, because it avoids assuming anything about the direction of gravity, which leaves some interesting doors open for later on.) That’s easily detected by taking the dot product — if it’s negative, the collision was with the ground, and I now have the normal of the ground.
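In code, that test is a one-liner. A minimal sketch (in Java syntax for consistency with the rest of this archive; in Unity you’d use Vector2.Dot in C#):

class GroundCheck {
    // A contact counts as ground if its normal points away from gravity,
    // i.e. the dot product of the two vectors is negative.
    static boolean isGround(double nx, double ny, double gx, double gy) {
        return nx * gx + ny * gy < 0;
    }
}

With gravity (0, -9.8) and a flat-floor normal (0, 1), the dot product is -9.8, which is negative, so the contact counts as ground; a ceiling normal of (0, -1) gives +9.8 and is rejected.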

Actually doing this in practice was slightly tricky. With my LÖVE engine, I could cram this right into the middle of collision resolution. With Unity, not quite so much. I went through a couple iterations before I really grasped Unity’s execution order, which I guess I will have to briefly recap for this to make sense.

Unity essentially has two update cycles. It performs physics updates at fixed intervals for consistency, and updates everything else just before rendering. Within a single frame, Unity does as many fixed physics updates as it has spare time for (which might be zero, one, or more), then does a regular update, then renders. User code can implement either or both of Update, which runs during a regular update, and FixedUpdate, which runs just before Unity does a physics pass.

So my solution was:

  • At the very end of FixedUpdate, clear the actor’s “on ground” flag and ground normal.

  • During OnCollisionEnter2D and OnCollisionStay2D (which are called from within a physics pass), if there’s a collision that looks like it’s with the ground, set the “on ground” flag and ground normal. (If there are multiple ground collisions, well, good luck figuring out the best way to resolve that! At the moment I’m just taking the first and hoping for the best.)

That means there’s a brief window between the end of FixedUpdate and Unity’s physics pass during which a grounded actor might mistakenly believe it’s not on the ground, which is a bit of a shame, but there are very few good reasons for anything to be happening in that window.

Okay! Now we can do slopes.

Just kidding! First we have to do sliding.

When I first looked at this code, it didn’t apply gravity while the player was on the ground. I think I may have had some problems with detecting the ground as a result, since the player was no longer pushing down against it? Either way, it seemed like a silly special case, so I made gravity always apply.

Lo! I was a fool. The player could no longer move.

Why? Because MovePosition does exactly what it promises. If the player collides with something, they’ll stop moving. Applying gravity means that the player is trying to move diagonally downwards into the ground, and so MovePosition stops them immediately.

Hence, sliding. I don’t want the player to actually try to move into the ground. I want them to move the unblocked part of that movement. For flat ground, that means the horizontal part, which is pretty much the same as discarding gravity. For sloped ground, it’s a bit more complicated!

Okay but actually it’s less complicated than you’d think. It can be done with some cross products fairly easily, but Unity makes it even easier with a couple casts. There’s a Vector3.ProjectOnPlane function that projects an arbitrary vector on a plane given by its normal — exactly the thing I want! So I apply that to the attempted movement before passing it along to MovePosition. I do the same thing with the current velocity, to prevent the player from accelerating infinitely downwards while standing on flat ground.
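For reference, projecting onto a plane with unit normal n just subtracts the vector’s component along n. A sketch of the math (my own illustration in Java syntax, not Unity’s implementation):

class Vec2 {
    final double x, y;
    Vec2(double x, double y) { this.x = x; this.y = y; }

    double dot(Vec2 o) { return x * o.x + y * o.y; }

    // Remove this vector's component along the unit normal n, leaving
    // only the part parallel to the surface.
    Vec2 projectOnPlane(Vec2 n) {
        double d = dot(n);
        return new Vec2(x - n.x * d, y - n.y * d);
    }
}

Applied to the attempted movement with a flat-ground normal, this simply strips the vertical component; on a slope, it leaves the part of the movement parallel to the surface.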

One other thing: I don’t actually use the detected ground normal for this. The player might be touching two ground surfaces at the same time, and I’d want to project on both of them. Instead, I use the player body’s GetContacts method, which returns contact points (and normals!) for everything the player is currently touching. I believe those contact points are tracked by the physics engine anyway, so asking for them doesn’t require any actual physics work.

(Looking at the code I have, I notice that I still only perform the slide for surfaces facing upwards — but I’d want to slide against sloped ceilings, too. Why did I do this? Maybe I should remove that.)

(Also, I’m pretty sure projecting a vector on a plane is non-commutative, which raises the question of which order the projections should happen in and what difference it makes. I don’t have a good answer.)

(I note that my LÖVE setup does something slightly different: it just tries whatever the movement ought to be, and if there’s a collision, then it projects — and tries again with the remaining movement. But I can’t ask Unity to do multiple moves in one physics update, alas.)

Okay! Now, slopes. But actually, with the above work done, slopes are most of the way there already.

One obvious problem is that the player tries to move horizontally even when on a slope, and the easy fix is to change their movement from speed * Vector2.right to speed * new Vector2(ground.y, -ground.x) while on the ground. That’s the ground normal rotated a quarter-turn clockwise, so for flat ground it still points to the right, and in general it points rightwards along the ground. (Note that it assumes the ground normal is a unit vector, but as far as I’m aware, that’s true for all the normals Unity gives you.)

Another issue is that if the player stands motionless on a slope, gravity will cause them to slowly slide down it — because the movement from gravity will be projected onto the slope, and unlike flat ground, the result is no longer zero. For conscious actors only, I counter this by adding the opposite factor to the player’s velocity as part of adding in their walking speed. This matches how the real world works, to some extent: when you’re standing on a hill, you’re exerting some small amount of effort just to stay in place.

(Note that slope resistance is not the same as friction. Okay, yes, in the real world, virtually all resistance to movement happens as a result of friction, but bracing yourself against the ground isn’t the same as being passively resisted.)

From here there are a lot of things you can do, depending on how you think slopes should be handled. You could make the player unable to walk up slopes that are too steep. You could make walking down a slope faster than walking up it. You could make jumping go along the ground normal, rather than straight up. You could raise the player’s max allowed speed while running downhill. Whatever you want, really. Armed with a normal and awareness of dot products, you can do whatever you want.

But first you might want to fix a few aggravating side effects.

Problem 3: Ground adherence

I don’t know if there’s a better name for this. I rarely even see anyone talk about it, which surprises me; it seems like it should be a very common problem.

The problem is: if the player runs up a slope which then abruptly changes to flat ground, their momentum will carry them into the air. For very fast players going off the top of very steep slopes, this makes sense, but it becomes visible even for relatively gentle slopes. It was a mild nightmare in the original release of our game Lunar Depot 38, which has very “rough” ground made up of lots of shallow slopes — so the player was very frequently slightly off the ground and couldn’t jump, for seemingly no reason. (I even had code to fix this, but I disabled it because of a silly visual side effect that I never got around to fixing.)

Anyway! The reason this is a problem is that game protagonists are generally not boxes sliding around — they have legs. We don’t go flying off the top of real-world hilltops because we put our foot down until it touches the ground.

Simulating this footfall is surprisingly fiddly to get right, especially with someone else’s physics engine. It’s made somewhat easier by Cast, which casts the entire hitbox — no matter what shape it is — in a particular direction, as if it had moved, and tells you all the hypothetical collisions in order.

So I cast the player in the direction of gravity by some distance. If the cast hits something solid with a ground-like collision normal, then the player must be close to the ground, and I move them down to touch it (and set that ground as the new ground normal).

There are some wrinkles.

Wrinkle 1: I only want to do this if the player is off the ground now, but was on the ground last frame, and is not deliberately moving upwards. That latter condition means I want to skip this logic if the player jumps, for example, but also if the player is thrust upwards by a spring or abducted by a UFO or whatever. As long as external code goes through some interface and doesn’t mess with the player’s velocity directly, that shouldn’t be too hard to track.

Wrinkle 2: When does this logic run? It needs to happen after the player moves, which means after a Unity physics pass… but there’s no callback for that point in time. I ended up running it at the beginning of FixedUpdate and the beginning of Update — since I definitely want to do it before rendering happens! That means it’ll sometimes happen twice between physics updates. (I could carefully juggle a flag to skip the second run, but I… didn’t do that. Yet?)

Wrinkle 3: I can’t move the player with MovePosition! Remember, MovePosition schedules a movement, it doesn’t actually perform one; that means if it’s called twice before the physics pass, the first call is effectively ignored. I can’t easily combine the drop with the player’s regular movement, for various fiddly reasons. I ended up doing it “by hand” using transform.Translate, which I think was the “old way” to do manual movement before MovePosition existed. I’m not totally sure if it activates triggers? For that matter, I’m not sure it even notices collisions — but since I did a full-body Cast, there shouldn’t be any anyway.

Wrinkle 4: What, exactly, is “some distance”? I’ve yet to find a satisfying answer for this. It seems like it ought to be based on the player’s current speed and the slope of the ground they’re moving along, but every time I’ve done that math, I’ve gotten totally ludicrous answers that sometimes exceed the size of a tile. But maybe that’s not wrong? Play around, I guess, and think about when the effect should “break” and the player should go flying off the top of a hill.

Wrinkle 5: It’s possible that the player will launch off a slope, hit something, and then be adhered to the ground where they wouldn’t have hit it. I don’t much like this edge case, but I don’t see a way around it either.

This problem is surprisingly awkward for how simple it sounds, and the solution isn’t entirely satisfying. Oh, well; the results are much nicer than the solution. As an added bonus, this also fixes occasional problems with running down a hill and becoming detached from the ground due to precision issues or whathaveyou.

Problem 4: One-way platforms

Ah, what a nightmare.

It took me ages just to figure out how to define one-way platforms. Only block when the player is moving downwards? Nope. Only block when the player is above the platform? Nuh-uh.

Well, okay, yes, those approaches might work for convex players and flat platforms. But what about… sloped, one-way platforms? There’s no reason you shouldn’t be able to have those. If Super Mario World can do it, surely Unity can do it almost 30 years later.

The trick is, again, to look at the collision normal. If it faces away from gravity, the player is hitting a ground-like surface, so the platform should block them. Otherwise (or if the player overlaps the platform), it shouldn’t.

Here’s the catch: Unity doesn’t have conditional collision. I can’t decide, on the fly, whether a collision should block or not. In fact, I think that by the time I get a callback like OnCollisionEnter2D, the physics pass is already over.

I could go the other way and use triggers (which are non-blocking), but then I have the opposite problem: I can’t stop the player on the fly. I could move them back to where they hit the trigger, but I envision all kinds of problems as a result. What if they were moving fast enough to activate something on the other side of the platform? What if something else moved to where I’m trying to shove them back to in the meantime? How does this interact with ground detection and listing contacts, which would rightly ignore a trigger as non-blocking?

I beat my head against this for a while, but the inability to respond to collision conditionally was a huge roadblock. It’s all the more infuriating a problem, because Unity ships with a one-way platform modifier thing. Unfortunately, it seems to have been implemented by someone who has never played a platformer. It’s literally one-way — the player is only allowed to move straight upwards through it, not in from the sides. It also tries to block the player if they’re moving downwards while inside the platform, which invokes clumsy rejection behavior. And this all seems to be built into the physics engine itself somehow, so I can’t simply copy whatever they did.

Eventually, I settled on the following. After calculating attempted movement (including sliding), just at the end of FixedUpdate, I do a Cast along the movement vector. I’m not thrilled about having to duplicate the physics engine’s own work, but I do filter to only things on a “one-way platform” physics layer, which should at least help. For each object the cast hits, I use Physics2D.IgnoreCollision to either ignore or un-ignore the collision between the player and the platform, depending on whether the collision was ground-like or not.

(A lot of people suggested turning off collision between layers, but that can’t possibly work — the player might be standing on one platform while inside another, and anyway, this should work for all actors!)

Again, wrinkles! But fewer this time. Actually, maybe just one: handling the case where the player already overlaps the platform. I can’t just check for that with e.g. OverlapCollider, because that doesn’t distinguish between overlapping and merely touching.

I came up with a fairly simple fix: if I was going to un-ignore the collision (i.e. make the platform block), and the cast distance is reported as zero (either already touching or overlapping), I simply do nothing instead. If I’m standing on the platform, I must have already set it blocking when I was approaching it from the top anyway; if I’m overlapping it, I must have already set it non-blocking to get here in the first place.

I can imagine a few cases where this might go wrong. Moving platforms, especially, are going to cause some interesting issues. But this is the best I can do with what I know, and it seems to work well enough so far.

Oh, and our player can deliberately drop down through platforms, which was easy enough to implement; I just decide the platform is always passable while some button is held down.

Problem 5: Pushers and carriers

I haven’t gotten to this yet! Oh boy, can’t wait. I implemented it in LÖVE, but my way was hilariously invasive; I’m hoping that having a physics engine that supports a handwaved “this pushes that” will help. Of course, you also have to worry about sticking to platforms, for which the recommended solution is apparently to parent the cargo to the platform, which sounds goofy to me? I guess I’ll find out when I throw myself at it later.

Overall result

I ended up with a fairly pleasant-feeling system that supports slopes and one-way platforms and whatnot, with all the same pieces as I came up with for LÖVE. The code somehow ended up as less of a mess, too, but it probably helps that I’ve been down this rabbit hole once before and kinda knew what I was aiming for this time.

Animation of a character running smoothly along the top of an irregular dinosaur skeleton

Sorry that I don’t have a big block of code for you to copy-paste into your project. I don’t think there are nearly enough narrative discussions of these fundamentals, though, so hopefully this is useful to someone. If not, well, look forward to ✨ my book, that I am writing ✨!

RaspiReader: build your own fingerprint reader

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/raspireader-fingerprint-scanner/

Three researchers from Michigan State University have developed a low-cost, open-source fingerprint reader which can detect fake prints. They call it RaspiReader, and they’ve built it using a Raspberry Pi 3 and two Camera Modules. Joshua Engelsma and his colleagues have just uploaded all the info you need to build your own version — let’s go!

GIF of fingerprint match points being aligned on fingerprint, not real output of RaspiReader software

Sadly not the real output of the RaspiReader

Falsified fingerprints

We’ve probably all seen a movie in which a burglar crosses a room full of laser tripwires and then enters the safe full of loot by tricking the fingerprint-secured lock with a fake print. Turns out, the second part is not that unrealistic: you can fake fingerprints using a range of materials, such as glue or latex.

Examples of live and fake fingerprints collected by the RaspiReader team

The RaspiReader team collected live and fake fingerprints to test the device

If the spoof print layer capping the spoofer’s finger is thin enough, it can even fool readers that detect blood flow, pulse, or temperature. This is becoming a significant security risk, not least for anyone who unlocks their smartphone using a fingerprint.

The RaspiReader

This is where Anil K. Jain comes in: Professor Jain leads a biometrics research group. Under his guidance, Joshua J. Engelsma and Kai Cao set out to develop a fingerprint reader with improved spoof-print detection. Ultimately, they aim to help the development of more secure commercial technologies. With their project, the team has also created an amazing resource for anyone who wants to build their own fingerprint reader.

So that replicating their device would be easy, they wanted to make it using inexpensive, readily available components, which is why they turned to Raspberry Pi technology.

RaspiReader fingerprint scanner by PRIP lab

The RaspiReader and its output

Inside the RaspiReader’s 3D-printed housing, LEDs shine light through an acrylic prism, on top of which the user rests their finger. The prism refracts the light so that the two Camera Modules can take images from different angles. The Pi receives these images via a Multi Camera Adapter Module feeding into the CSI port. Collecting two images means the researchers’ spoof detection algorithm has more information to work with.

Comparison of live and spoof fingerprints

Real on the left, fake on the right

RaspiReader software

The Camera Adaptor uses the RPi.GPIO Python package. The RaspiReader performs image processing, and its spoof detection takes image colour and 3D friction ridge patterns into account. The detection algorithm extracts colour local binary patterns … please don’t ask me to explain! You can have a look at the researchers’ manuscript if you want to get stuck into the fine details of their project.

Build your own fingerprint reader

I’ve had my eyes glued to my inbox waiting for Josh to send me links to instructions and files for this build, and here they are (thanks, Josh)! Check out the video tutorial, which walks you through how to assemble the RaspiReader:

RaspiReader: Cost-Effective Open-Source Fingerprint Reader

Building a cost-effective, open-source, and spoof-resilient fingerprint reader for around $160 in under an hour. Code: https://github.com/engelsjo/RaspiReader Whitepaper: https://arxiv.org/abs/1708.07887 (A parts list with supplier links is in the video description; prices can vary.)

You can find a parts list with links to suppliers in the video description — the whole build costs around $160. All the STL files for the housing and the Python scripts you need to run on the Pi are available on Josh’s GitHub.

Enhance your home security

The RaspiReader is a great resource for researchers, and it would also be a terrific project to build at home! Is there a more impressive way to protect a treasured possession, or secure access to your computer, than with a DIY fingerprint scanner?

Check out this James-Bond-themed blog post for Raspberry Pi resources to help you build a high-security lair. If you want even more inspiration, watch this video about a laser-secured cookie jar which Estefannie made for us. And be sure to share your successful fingerprint scanner builds with us via social media!


MagPi 62: become a LEGO master builder

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/magpi-62-lego-raspberry-pi/

Hi folks, Rob here from The MagPi. I’m excited to introduce to you all issue 62 of The MagPi, in which we go block crazy with LEGO! This month’s magazine is brimming with 14 pages of magnificent Raspberry Pi projects using these ubiquitous building blocks.

LEGO of everything and get one from the shops right now!

LEGO + Raspberry Pi

In our cover feature you’ll find fun tutorials from our friends at Dexter Industries, such as a Rubik’s cube-solving robot and a special automaton that balances on two wheels. We also show you how to build a retro console case for your Pi out of LEGO, and we have eight other projects to inspire you to make your own incredible brick creations.

Weekend fun

Back at school and looking for a weekend distraction? Check out our weekend projects feature, and build yourself a smart fridge or a door trigger that plays your theme song as you enter the room! Mine is You’re Welcome from Moana. What’s yours?

We have a ton of other wonderful projects, tutorials, and reviews in this issue as well, including a GIF camera, a hydroponic garden, and a Halloween game!

MagPi 62 Halloween game article

You can’t escape our annual spooktacular puns. That would be impossi-ghoul.

Get The MagPi 62

Grab the latest issue of The MagPi from WH Smith, Tesco, Sainsbury’s, and Asda. If you live in the US, check out your local Barnes & Noble or Micro Center over the next few days. You can also get the new issue online from our store, or digitally via our Android or iOS app. And don’t forget, there’s always the free PDF as well.

Subscribe for free goodies

Some of you have asked me about the goodies that we give out to subscribers. This is how it works: if you take out a twelve-month print subscription to The MagPi, you’ll get a Pi Zero W, Pi Zero case, and adapter cables absolutely free! This offer does not currently have an end date.

Pre-order AIY Projects kits

We have news about the AIY Projects voice kit! Micro Center has opened pre-orders for the kits in the US, and Pi Hut will soon be accepting pre-orders in the UK. Pimoroni has set up a notification service in case you want to know when you can pre-order more stock from them.

Now go enjoy building some fun LEGO Pi projects, and we’ll see you next month!


Samba 4.7.0 released

Post Syndicated from corbet original https://lwn.net/Articles/734639/rss

The Samba 4.7.0 release is out. New features include whole DB read locks (a reliability improvement), Active Directory with Kerberos support, detailed audit trails for authentication and authorization activities, a multi-process LDAP server, better read-only domain controller support, and more. See the release notes for details.

Block The Pirate Bay Within 10 Days, Dutch Court Tells ISPs

Post Syndicated from Andy original https://torrentfreak.com/block-the-pirate-bay-within-10-days-dutch-court-tells-isps-170922/

Three years ago, in 2014, the Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

Ruling against local anti-piracy outfit BREIN, which brought the case, the Court decided that a blockade would be ineffective and also restrict the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked while BREIN took its case to the Supreme Court, which in turn referred the matter to the EU Court of Justice for clarification. This June, the ECJ ruled that as a platform effectively communicating copyright works to the public, The Pirate Bay can indeed be blocked.

The ruling meant there were no major obstacles preventing the Dutch Supreme Court from ordering a future ISP blockade. Clearly, however, BREIN wanted a blocking decision more quickly. A decision handed down today means the anti-piracy group will achieve that in just a few days’ time.

The Hague Court of Appeal today ruled (Dutch) that the 2014 decision, which lifted the blockade against The Pirate Bay, is now largely obsolete.

“According to the Court of Appeal, the Hague Court did not give sufficient weight to the interests of the beneficiaries represented by BREIN,” BREIN said in a statement.

“The Court also wrongly looked at whether torrent traffic had been reduced by the blockade. It should have also considered whether visits to the website of The Pirate Bay itself decreased with a blockade, which speaks for itself.”

As a result, an IP address and DNS blockade of The Pirate Bay, similar to those already in place in the UK and other EU countries, will soon be put in place. BREIN says that four IP addresses will be affected along with hundreds of domain names through which the torrent platform can be reached.

The ISPs have been given just 10 days to put the blocks in place and if they fail there are fines of 2,000 euros per day, up to a maximum of one million euros.

“It is nice that obviously harmful and illegal sites like The Pirate Bay will be blocked again in the Netherlands,” says BREIN chief Tim Kuik.

“A very bad time for our culture, which was free to access via these sites, is now happily behind us.”

Today’s interim decision by the Court of Appeal will stand until the Supreme Court hands down its decision in the main case between BREIN and Ziggo / XS4ALL.

Looking forward, it seems extremely unlikely that the Supreme Court will hand down a conflicting decision, so we’re probably already looking at the beginning of the end for direct accessibility of The Pirate Bay in the Netherlands.


Catching Up on Some Recent AWS Launches and Publications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/catching-up-on-some-recent-aws-launches-and-publications/

As I have noted in the past, the AWS Blog Team is working hard to make sure that you know about as many AWS launches and publications as possible, without totally burying you in content! As part of our balancing act, we will occasionally publish catch-up posts to clear our queues and to bring more information to your attention. Here’s what I have in store for you today:

  • Monitoring for Cross-Region Replication of S3 Objects
  • Tags for Spot Fleet Instances
  • PCI DSS Compliance for 12 More Services
  • HIPAA Eligibility for WorkDocs
  • VPC Resizing
  • AppStream 2.0 Graphics Design Instances
  • AMS Connector App for ServiceNow
  • Regtech in the Cloud
  • New & Revised Quick Starts

Let’s jump right in!

Monitoring for Cross-Region Replication of S3 Objects
I told you about cross-region replication for S3 a couple of years ago. As I showed you at the time, you simply enable versioning for the source bucket and then choose a destination region and bucket. You can check the replication status manually, or you can create an inventory (daily or weekly) of the source and destination buckets.

The Cross-Region Replication Monitor (CRR Monitor for short) solution checks the replication status of objects across regions and gives you metrics and failure notifications in near real-time.

To learn more, read the CRR Monitor Implementation Guide and then use the AWS CloudFormation template to Deploy the CRR Monitor.

Tags for Spot Fleet Instances
Spot Instances and Spot Fleets (collections of Spot Instances) give you access to spare compute capacity. We recently gave you the ability to enter tags (key/value pairs) as part of your Spot requests and to have those tags applied to the EC2 instances launched to fulfill the request.

To learn more, read Tag Your Spot Fleet EC2 Instances.

PCI DSS Compliance for 12 More Services
As first announced on the AWS Security Blog, we recently added 12 more services to our PCI DSS compliance program, raising the total number of in-scope services to 42. To learn more, check out our Compliance Resources.

HIPAA Eligibility for WorkDocs
In other compliance news, we announced that Amazon WorkDocs has achieved HIPAA eligibility and PCI DSS compliance in all AWS Regions where WorkDocs is available.

VPC Resizing
This feature allows you to extend an existing Virtual Private Cloud (VPC) by adding additional blocks of addresses. This gives you more flexibility and should help you to deal with growth. You can add up to four secondary /16 CIDRs per VPC. You can also edit the secondary CIDRs by deleting them and adding new ones. Simply select the VPC and choose Edit CIDRs from the menu.

Then add or remove CIDR blocks as desired.

To learn more, read about VPCs and Subnets.

AppStream 2.0 Graphics Design Instances
Powered by AMD FirePro S7150x2 Server GPUs and equipped with AMD Multiuser GPU technology, the new Graphics Design instances for Amazon AppStream 2.0 will let you run and stream graphics applications more cost-effectively than ever. The instances are available in four sizes, with 2-16 vCPUs and 7.5 GB to 61 GB of memory.

To learn more, read Introducing Amazon AppStream 2.0 Graphics Design, a New Lower Costs Instance Type for Streaming Graphics Applications.

AMS Connector App for ServiceNow
AWS Managed Services (AMS) provides Infrastructure Operations Management for the Enterprise. Designed to accelerate cloud adoption, it automates common operations such as change requests, patch management, security and backup.

The new AMS integration App for ServiceNow lets you interact with AMS from within ServiceNow, with no need for any custom development or API integration.

To learn more, read Cloud Management Made Easier: AWS Managed Services Now Integrates with ServiceNow.

Regtech in the Cloud
Regtech (as I learned while writing this) is short for regulatory technology, and is all about using innovative technology such as cloud computing, analytics, and machine learning to address regulatory challenges.

Working together with APN Consulting Partner Cognizant, TABB Group recently published a thought leadership paper that explains why regulations and compliance pose huge challenges for our customers in financial services, and shows how AWS can help!

New & Revised Quick Starts
Our Quick Starts team has been cranking out new solutions and making significant updates to the existing ones. Here’s a roster:

  • Alfresco Content Services (v2)
  • Atlassian Confluence
  • Confluent Platform
  • Data Lake
  • Datastax Enterprise
  • GitHub Enterprise
  • Hashicorp Nomad
  • HIPAA
  • Hybrid Data Lake with Wandisco Fusion
  • IBM MQ
  • IBM Spectrum Scale
  • Informatica EIC
  • Magento (v2)
  • Linux Bastion (v2)
  • Modern Data Warehouse with Tableau
  • MongoDB (v2)
  • NetApp ONTAP
  • NGINX (v2)
  • RD Gateway
  • Red Hat Openshift
  • SAS Grid
  • SIOS Datakeeper
  • StorReduce
  • SQL Server (v2)

And that’s all I have for today!

Jeff;

New UK IP Crime Report Reveals Continued Focus on ‘Pirate’ Kodi Boxes

Post Syndicated from Andy original https://torrentfreak.com/new-uk-ip-crime-report-reveals-continued-focus-on-pirate-kodi-boxes-170908/

The UK’s Intellectual Property Office has published its annual IP Crime Report, spanning the period 2016 to 2017.

It covers key events in the copyright and trademark arenas and is presented with input from the police and trading standards, plus private entities such as the BPI, Premier League, and Federation Against Copyright Theft, to name a few.

The report begins with an interesting statistic. Despite claims that many millions of UK citizens regularly engage in some kind of infringement, figures from the Ministry of Justice indicate that just 47 people were found guilty of offenses under the Copyright, Designs and Patents Act during 2016. That’s down from the 69 found guilty in the previous year.

Despite this low conviction rate, 15% of all internet users aged 12+ are reported to have consumed at least one item of illegal content between March and May 2017. Figures supplied by the Industry Trust for IP indicate that 19% of adults watch content via various IPTV devices – often referred to as set-top, streaming, Android, or Kodi boxes.

“At its cutting edge IP crime is innovative. It exploits technological loopholes before they become apparent. IP crime involves sophisticated hackers, criminal financial experts, international gangs and service delivery networks. Keeping pace with criminal innovation places a burden on IP crime prevention resources,” the report notes.

The report covers a broad range of IP crime, from counterfeit sportswear to foodstuffs, but our focus is obviously on Internet-based infringement. Various contributors cover various aspects of online activity as it affects them, including music industry group BPI.

“The main online piracy threats to the UK recorded music industry at present are from BitTorrent networks, linking/aggregator sites, stream-ripping sites, unauthorized streaming sites and cyberlockers,” the BPI notes.

The BPI’s website blocking efforts have been closely reported, with 63 infringing sites blocked to date via various court orders. However, the BPI reports that more than 700 related URLs, IP addresses, and proxy sites/proxy aggregators have also been rendered inaccessible as part of the same action.

“Site blocking has proven to be a successful strategy as the longer the blocks are in place, the more effective they are. We have seen traffic to these sites reduce by an average of 70% or more,” the BPI reports.

While prosecutions against music pirates are a fairly rare event in the UK, the Crown Prosecution Service (CPS) Specialist Fraud Division highlights that their most significant prosecution of the past 12 months involved a prolific music uploader.

As first revealed here on TF, Wayne Evans was an uploader not only on KickassTorrents and The Pirate Bay, but also on some of his own sites. Known online as OldSkoolScouse, Evans reportedly cost the UK’s Performing Rights Society more than £1m in a single year. He was sentenced in December 2016 to 12 months in prison.

While Evans has been free for some time already, the CPS places particular emphasis on the importance of the case, “since it provided sentencing guidance for the Copyright, Designs and Patents Act 1988, where before there was no definitive guideline.”

The CPS says the case was useful on a number of fronts. Despite illegal distribution of content being difficult to investigate and piracy losses proving tricky to quantify, the court found that deterrent sentences are appropriate for the kinds of offenses Evans was accused of.

The CPS notes that various factors affect the severity of such sentences, not least the length of time the unlawful activity has persisted and particularly if it has done so after the service of a cease and desist notice. Other factors include the profit made by defendants and/or the loss caused to copyright holders “so far as it can accurately be calculated.”

Importantly, however, the CPS says that beyond issues of personal mitigation and timely guilty pleas, a jail sentence is probably going to be the outcome for others engaging in this kind of activity in future. That’s something for torrent and streaming site operators and their content uploaders to consider.

“[U]nless the unlawful activity of this kind is very amateur, minor or short-lived, or in the absence of particularly compelling mitigation or other exceptional circumstances, an immediate custodial sentence is likely to be appropriate in cases of illegal distribution of copyright infringing articles,” the CPS concludes.

But while a music-related trial provided the highlight of the year for the CPS, the online infringement world is still dominated by the rise of streaming sites and the now omnipresent “fully-loaded Kodi Box” – set-top devices configured to receive copyright-infringing live TV and VOD.

In the IP Crime Report, the Intellectual Property Office references a former US Secretary of Defense to describe the emergence of the threat.

“The echoes of Donald Rumsfeld’s famous aphorism concerning ‘known knowns’ and ‘known unknowns’ reverberate across our landscape perhaps more than any other. The certainty we all share is that we must be ready to confront both ‘known unknowns’ and ‘unknown unknowns’,” the IPO writes.

“Not long ago illegal streaming through Kodi Boxes was an ‘unknown’. Now, this technology updates copyright infringement by empowering TV viewers with the technology they need to subvert copyright law at the flick of a remote control.”

While the set-top box threat has grown in recent times, the report highlights the important legal clarifications that emerged from the BREIN v Filmspeler case, which found itself before the European Court of Justice.

As widely reported, the ECJ determined that the selling of piracy-configured devices amounts to a communication to the public, something which renders their sale illegal. However, in a submission by PIPCU, the Police Intellectual Property Crime Unit, box sellers are said to cast a keen eye on the legal situation.

“Organised criminals, especially those in the UK who distribute set-top boxes, are aware of recent developments in the law and routinely exploit loopholes in it,” PIPCU reports.

“Given recent judgments on the sale of pre-programmed set-top boxes, it is now unlikely criminals would advertise the devices in a way which is clearly infringing by offering them pre-loaded or ‘fully loaded’ with apps and addons specifically designed to access subscription services for free.”

With sellers beginning to clean up their advertising, it seems likely that detection will become more difficult than when selling was considered a gray area. While that will present its own issues, PIPCU still sees problems on two fronts – a lack of clear legislation and a perception of support for ‘pirate’ devices among the public.

“There is no specific legislation currently in place for the prosecution of end users or sellers of set-top boxes. Indeed, the general public do not see the usage of these devices as potentially breaking the law,” the unit reports.

“PIPCU are currently having to try and ‘shoehorn’ existing legislation to fit the type of criminality being observed, such as conspiracy to defraud (common law) to tackle this problem. Cases are yet to be charged and results will be known by late 2017.”

Whether these prosecutions will be effective remains to be seen, but PIPCU’s comments suggest an air of caution set to a backdrop of box-sellers’ tendency to adapt to legal challenges.

“Due to the complexity of these cases it is difficult to substantiate charges under the Fraud Act (2006). PIPCU have convicted one person under the Serious Crime Act (2015) (encouraging or assisting s11 of the Fraud Act). However, this would not be applicable unless the suspect had made obvious attempts to encourage users to use the boxes to watch subscription only content,” PIPCU notes, adding:

“The selling community is close knit and adapts constantly to allow itself to operate in the gray area where current legislation is unclear and where they feel they can continue to sell ‘under the radar’.”

More generally, pirate sites as a whole are still seen as a threat. As reported last month, the current anti-piracy narrative is that pirate sites represent a danger to their users. As a result, efforts are underway to paint torrent and streaming sites as risky places to visit, with users allegedly exposed to malware and other malicious content. The scare strategy is supported by PIPCU.

“Unlike the purchase of counterfeit physical goods, consumers who buy unlicensed content online are not taking a risk. Faulty copyright doesn’t explode, burn or break. For this reason the message as to why the public should avoid copyright fraud needs to be re-focused.

“A more concerted attempt to push out a message relating to malware on pirate websites, the clear criminality and the links to organized crime of those behind the sites are crucial if public opinion is to be changed,” the unit advises.

But while the changing of attitudes is desirable for pro-copyright entities, PIPCU says that winning over the public may not prove to be an easy battle. It was given a small taste of backlash itself, after taking action against the operator of a pirate site.

“The scale of the problem regarding public opinion of online copyright crime is evidenced by our own experience. After PIPCU executed a warrant against the owner of a streaming website, a tweet about the event (read by 200,000 people) produced a reaction heavily weighted against PIPCU’s legitimate enforcement action,” PIPCU concludes.

In summary, it seems likely that more effort will be expended during the next 12 months to target the set-top box threat, but there doesn’t appear to be an abundance of confidence in existing legislation to tackle all but the most egregious offenders. That being said, a line has now been drawn in the sand – if the public is prepared to respect it.

The full IP Crime Report 2016-2017 is available here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Russia Blocks 4,000 Pirate Sites Plus 41,000 Innocent as Collateral Damage

Post Syndicated from Andy original https://torrentfreak.com/russia-blocks-4000-pirate-sites-plus-41000-innocent-as-collateral-damage-170905/

After years of criticism from both international and local rightsholders, in 2013 the Russian government decided to get tough on Internet piracy.

Under new legislation, sites engaged in Internet piracy could find themselves blocked by ISPs, rendering them inaccessible to local citizens and solving the piracy problem. Well, that was the theory, at least.

More than four years on, Russia is still grappling with a huge piracy problem that refuses to go away. It has been blocking thousands of sites at a steady rate, including RuTracker, the country’s largest torrent platform, but still the problem persists.

Now, a new report produced by Roskomsvoboda, the Center for the Protection of Digital Rights, and the Pirate Party of Russia, reveals a system that has not only failed to reach its stated aims but is also having a negative effect on the broader Internet.

“It’s already been four years since the creation of this ‘anti-piracy machine’ in Russia. The first amendments related to the fight against ‘piracy’ in the network came into force on August 1, 2013, and since then this mechanism has been twice revised,” Roskomsvoboda said in a statement.

“[These include] the emergence of additional responsibilities to restrict access to network resources and increase the number of subjects who are responsible for removing and blocking content. Since that time, several ‘purely Russian’ trends in ‘anti-piracy’ and trade in rights have also emerged.”

These revisions, which include the permanent blocking of persistently infringing sites and the planned blocking of mirror sites and anonymizers, have been widely documented. However, the researchers say that they want to shine a light on the effects of blocking procedures and subsequent actions that are causing significant issues for third-parties.

As part of the study, the authors collected data on the cases presented to the Moscow City Court by the most active plaintiffs in anti-piracy actions (mainly TV show distributors and music outfits including Sony Music Entertainment and Universal Music). They describe the court process and system overall as lacking.

“The court does not conduct a ‘triple test’ and ignores the position, rights and interests of respondents and third parties. It does not check the availability of illegal information on sites and appeals against decisions of the Moscow City Court do not bring any results,” the researchers write.

“Furthermore, the cancellation of the unlimited blocking of a site is simply impossible and in respect of hosting providers and security services, those web services are charged with all the legal costs of the case.”

The main reason behind this situation is that ‘pirate’ site operators rarely (if ever) turn up to defend themselves. If at some point they are found guilty of infringement under the Criminal Code, they can face up to six years in prison, hardly an incentive to enter into a copyright process voluntarily. As a result, hosts and other providers act as respondents.

This means that these third-party companies appear as defendants in the majority of cases, a position they find both “unfair and illogical.” They’re also said to be confused about how they are supposed to fulfill the blocking demands placed upon them by the Court.

“About 90% of court cases take place without the involvement of the site owner, since the requirements are imposed on the hosting provider, who is not responsible for the content of the site,” the report says.

Nevertheless, hosts and other providers have been ordered to block huge numbers of pirate sites.

According to the researchers, the total has now gone beyond 4,000 domains, but the knock-on effect is much more expansive. Due to the legal requirement to block sites by both IP address and other means, third-party sites with shared IP addresses get caught up as collateral damage. The report states that more than 41,000 innocent sites have been blocked as the result of supposedly targeted court orders.

But with collateral damage mounting, the main issue as far as copyright holders are concerned is whether piracy is decreasing as a result. The report draws few conclusions on that front but notes that blocks are a blunt instrument. While they may succeed in stopping some people from accessing ‘pirate’ domains, the underlying infringement carries on regardless.

“Blocks create restrictions only for Internet users who are denied access to sites, but do not lead to the removal of illegal information or prevent intellectual property violations,” the researchers add.

With no sign of the system being overhauled to tackle the issues raised in the study (pdf, Russian), Russia is now set to introduce yet more anti-piracy measures.

As recently reported, new laws requiring search engines to remove listings for ‘pirate’ mirror sites come into effect October 1. Exactly a month later, on November 1, VPNs and anonymization tools will have to be removed too, if they fail to meet the standards required under state regulation.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Datavalet Wi-Fi Blocks TorrentFreak Over ‘Criminal Hacking Skills’

Post Syndicated from Ernesto original https://torrentfreak.com/datavalet-wi-fi-blocks-torrentfreak-over-criminal-hacking-skills-170903/

At TorrentFreak we regularly write about website blocking efforts around the globe, usually related to well-known pirate sites.

Unfortunately, our own news site is not immune to access restrictions either. While no court has ordered ISPs to block access to our articles, some are doing this voluntarily.

This is especially true for companies that provide Wi-Fi hotspots, such as Datavalet. This wireless network provider works with various large organizations, including McDonald’s, Starbucks, and airports, to offer customers free Internet access.

Or rather to a part of the public Internet, we should say.

Over the past few months, we have received several reports from people who are unable to access TorrentFreak on Datavalet’s network. Users who load our website get an ominous warning instead, suggesting that we run some kind of criminal hacking operation.

“Access to TORRENTFREAK.COM is not permitted as it is classified as: CRIMINAL SKILLS / HACKING.”

Criminal Skills?

Although we see ourselves as skilled at writing news in our small niche, which incidentally covers crime and hacking, our own hacking skills are below par. Admittedly, mistakes are easily made, but Datavalet’s blocking efforts are rather persistent.

The same issue was brought to our attention several years ago. At the time, we reached out to Datavalet and a friendly senior network analyst promised that they would look into it.

“We have forwarded your concerns to the proper resources and as soon as we have an update we will let you know,” the response was. But a few years later the block is still active, or active again.

Datavalet is just one of the many networks where TorrentFreak is blocked. Often, we are categorized as a file-sharing site, probably due to the word “torrent” in our name. This recently happened at the NYC Brooklyn library, for example.

After a reader kindly informed the library that we’re a news site, we were suddenly transferred from the “Peer-to-Peer File Sharing” to the “Proxy Avoidance” category.

“It appears that the website you want to access falls under the category ‘Proxy Avoidance’. These are sites that provide information about how to bypass proxy server features or to gain access to URLs in any way that bypass the proxy server,” the library explained.

Still blocked of course.

At least we’re not the only site facing this censorship battle. Datavalet and others regularly engage in overblocking to keep their networks and customers safe. For example, Reddit was recently banned because it offered “nudity,” which is another no-go area.

Living up to our “proxy avoidance” reputation, we have to mention that people who regularly face these types of restrictions may want to invest in a VPN. These are generally quite good at bypassing such blockades. If they are not blocked themselves, that is.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How Much Does ‘Free’ Premier League Piracy Cost These Days?

Post Syndicated from Andy original https://torrentfreak.com/how-much-does-free-premier-league-piracy-cost-these-days-170902/

Right now, the English Premier League is engaged in perhaps the most aggressively innovative anti-piracy operation the Internet has ever seen. After obtaining a new High Court order, it now has the ability to block ‘pirate’ streams of matches, in real-time, with no immediate legal oversight.

If the Premier League believes a server is streaming one of its matches, it can ask ISPs in the UK to block it, immediately. That’s unprecedented anywhere on the planet.

As previously reported, this campaign caused a lot of problems for people trying to access free and premium streams at the start of the season. Many IPTV services were blocked in the UK within minutes of matches starting, with free streams also dropping like flies. According to information obtained by TF, more than 600 illicit streams were blocked during that weekend.

While some IPTV providers and free streams continued without problems, it seems likely that it’s only a matter of time before the EPL begins to pick off more and more suppliers. To be clear, the EPL isn’t taking services or streams down, it’s only blocking them, which means that people using circumvention technologies like VPNs can get around the problem.

However, this raises the big issue again – that of continuously increasing costs. While piracy is often painted as free, it is not, and as setups get fancier, costs increase too.

Below, we take a very general view of a handful of the many ‘pirate’ configurations currently available, to work out how much ‘free’ piracy costs these days. The list is not comprehensive by any means (and excludes more obscure methods such as streaming torrents, which are always free and rarely blocked), but it gives an idea of costs and how the balance of power might eventually tip.

Basic beginner setup

On a base level, people who pirate online need at least some equipment. That could be an Android smartphone and easily installed free software such as Mobdro or Kodi. An Internet connection is a necessity and if the EPL blocks those all important streams, a VPN provider is required to circumvent the bans.

Assuming people already have a phone and the Internet, a VPN can be bought for less than £5 per month. This basic setup is certainly cheap but overall it’s an entry level experience that provides quality equal to the effort and money expended.

Equipment: Phone, tablet, PC
Comms: Fast Internet connection, decent VPN provider
Overall performance: Low quality, unpredictable, often unreliable
Cost: £5pm approx for VPN, plus Internet costs

Big screen, basic

For those who like their matches on the big screen, stepping up the chain costs more money. People need a TV with an HDMI input and a fast Internet connection as a minimum, alongside some kind of set-top device to run the necessary software.

Android devices are the most popular and are roughly split into two groups – the small standalone box type and the plug-in ‘stick’ variant such as Amazon’s Firestick.

A cheap Android set-top box

These cost upwards of £30 to £40 but the software to install on them is free. Like the phone, Mobdro is an option, but most people look to a Kodi setup with third-party addons. That said, all streams received on these setups are now vulnerable to EPL blocking so in the long-term, users will need to run a paid VPN.

The problem here is that some devices (including the 1st gen Firestick) aren’t ideal for running a VPN on top of a stream, so people will need to dump their old device and buy something more capable. That could cost another £30 to £40 and more, depending on requirements.

Importantly, none of this investment guarantees a decent stream – that’s down to what’s available on the day – but invariably the quality is low and/or intermittent, at best.

Equipment: TV, decent Android set-top box or equivalent
Comms: Fast Internet connection, decent VPN provider
Overall performance: Low to acceptable quality, unpredictable, often unreliable
Cost: £30 to £50 for set-top box, £5pm approx for VPN, plus Internet

Premium IPTV – PC or Android based

At this point, premium IPTV services come into play. People have a choice of spending varying amounts of money, depending on the quality of experience they require.

First of all, a monthly IPTV subscription with an established provider that isn’t going to disappear overnight is required, which can be a challenge to find in itself. We’re not here to review or recommend services but needless to say, like official TV packages they come in different flavors to suit varying wallet sizes. Some stick around, many don’t.

A decent one with a Sky-like EPG costs between £7 and £15 per month, depending on the quality and depth of streams, and how far in advance users are prepared to commit.

Fairly typical IPTV with EPG (VOD shown)

Paying for a year in advance tends to yield better prices but with providers regularly disappearing and faltering in their service levels, people are often reluctant to do so. That said, some providers experience few problems so it’s a bit like gambling – research can improve the odds but there’s never a guarantee.

However, even when a provider, price, and payment period are decided upon, the process of paying for an IPTV service can be less than straightforward.

While some providers are happy to accept PayPal, many will only deal in credit cards, bitcoin, or other obscure payment methods. That sets up more barriers to entry that might deter the less determined customer. And, if time is indeed money, fussing around with new payment processors can be pricey, at least to begin with.

Once subscribed though, watching these streams is pretty straightforward. On a base level, people can use a phone, tablet, or set-top device to receive them, using software such as Perfect Player IPTV, for example. Currently available in free (ad-supported) and premium (£2) variants, this software can be set up in a few clicks and will provide a decent user experience, complete with EPG.

Perfect Player IPTV

Those wanting to go down the PC route have more options, but by far the most popular is receiving IPTV via a Kodi setup. For the complete novice, it’s not always easy to set up, but some IPTV providers supply their own free addons, which streamline the process massively. These can also be used on Android-based Kodi setups, of course.

Nevertheless, if the EPL blocks the provider, a VPN is still going to be needed to access the IPTV service.

An Android tablet running Kodi

So, even if we ignore the cost of the PC and Internet connection, users could still find themselves paying between £10 and £20 per month for an IPTV service and a decent VPN. While more channels than simply football will be available from most providers, this is getting dangerously close to the £18 Sky is asking for its latest football package.

Equipment: TV, PC, or decent Android set-top box or equivalent
Comms: Fast Internet connection, IPTV subscription, decent VPN provider
Overall performance: High quality, mostly reliable, user-friendly (once set up)
Cost: PC or £30/£50 for set-top box, IPTV subscription £7 to £15pm, £5pm approx for VPN, plus Internet, plus time and patience for obscure payment methods.
Note: There are zero refunds when IPTV providers disappoint or disappear

Premium IPTV – Deluxe setup

Moving up to the top of the range, things get even more costly. Those looking to give themselves the full home entertainment-like experience will often move away from the PC and into the living room in front of the TV, armed with a dedicated set-top box. Weapon of choice: the Mag254.

Like Amazon’s FireStick, PC or Android tablet, the Mag254 is an entirely legal, content agnostic device. However, enter the credentials provided by many illicit IPTV suppliers and users are presented with a slick Sky-like experience, far removed from anything available elsewhere. The device is operated by remote control and integrates seamlessly with any HDMI-capable TV.

Mag254 IPTV box

Something like this costs around £70 in the UK, plus the cost of a WiFi adaptor on top, if needed. The cost of the IPTV provider needs to be figured in too, plus a VPN subscription if the provider gets blocked by EPL, which is likely. However, in this respect the Mag254 has a problem – it can’t run a VPN natively. This means that if streams get blocked and people need to use a VPN, they’ll need to find an external solution.

Needless to say, this costs more money. People can either do all the necessary research and buy a VPN-capable router/modem that’s also compatible with their provider (this can stretch to a couple of hundred pounds) or they’ll need to invest in a small ‘travel’ router with VPN client features built in.

‘Travel’ router (with tablet running Mobdro for scale)

These devices are available on Amazon for around £25 and sit in between the Mag254 (or indeed any other wireless device) and the user’s own regular router. Once the details of the VPN subscription are entered into the router, all traffic passing through is encrypted and will tunnel through web blocking measures. They usually solve the problem (ymmv) but of course, this is another cost.

Equipment: Mag254 or similar, with WiFi
Comms: Fast Internet connection, IPTV subscription, decent VPN provider
Overall performance: High quality, mostly reliable, very user-friendly
Cost: Mag254 around £75 with WiFi, IPTV subscription £7 to £15pm, £5pm for VPN (plus £25 for mini router), plus Internet, plus patience for obscure payment methods.
Note: There are zero refunds when IPTV providers disappoint or disappear

Conclusion

On the whole, people who want a reliable and high-quality Premier League streaming experience cannot get one for free, no matter where they source the content. There are many costs involved, some of which cannot be avoided.

If people aren’t screwing around with annoying and unreliable Kodi streams, they’ll be paying for an IPTV provider, VPN and other equipment. Or, if they want an easy life, they’ll be paying Sky, BT or Virgin Media. That might sound harsh to many pirates but it’s the only truly reliable solution.

However, for those looking for something that’s merely adequate, costs drop significantly. Indeed, if people don’t mind the hassle of wondering whether a sub-VHS quality stream will appear before the big match and stay on throughout, it can all be done on a shoestring.

But perhaps the most important thing to note in respect of costs is the recent changes to the pricing of Premier League content in the UK. As mentioned earlier, Sky now delivers a sports package for £18pm, which sounds like the best deal offered to football fans in recent years. It will be tempting for sure and has all the hallmarks of a price point carefully calculated by Sky.

The big question is whether it will be low enough to tip significant numbers of people away from piracy. The reality is that if another couple of thousand streams get hit hard again this weekend – and the next – and the next – many pirating fans will be watching the season drift away for yet another month, unviewed. That’s got to be frustrating.

The bottom line is that high-quality streaming piracy is becoming a little bit pricey just for football, so if it becomes unreliable too – and that’s the Premier League’s goal – the balance of power could tip. At this point, the EPL will need to treat its new customers with respect, in order to keep them feeling both entertained and unexploited.

Fail on those counts – especially the latter – and the cycle will start again.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Analyzing Salesforce Data with Amazon QuickSight

Post Syndicated from David McAmis original https://aws.amazon.com/blogs/big-data/analyzing-salesforce-data-with-amazon-quicksight/

Salesforce Sales Cloud is a powerful platform for managing customer data. One of the key functions that the platform provides is the ability to track customer opportunities. Opportunities in Salesforce are used to track revenue, sales pipelines, and other activities from the very first contact with a potential customer to a closed sale.

Amazon QuickSight is a rich data visualization tool that provides the ability to connect to Salesforce data, use it as a data source for creating analyses, stories, and dashboards, and easily share them with others in the organization. This post focuses on how to connect to Salesforce as a data source and create a useful opportunity dashboard, incorporating Amazon QuickSight features like relative date filters, Key Performance Indicator (KPI) charts, and more.

Walkthrough

In this post, you walk through the following tasks:

  • Creating a new data set based on Salesforce data
  • Creating your analysis and adding visuals
  • Creating an Amazon QuickSight dashboard
  • Working with filters

Note: For this walkthrough, I am using my own Salesforce.com Developer Edition account. You can sign up for your own free developer account at https://developer.salesforce.com/.

Creating a new Amazon QuickSight data set based on Salesforce data

To start, you need to create a new Amazon QuickSight data set. Sign in to Amazon QuickSight at https://quicksight.aws using the link from the home page. Enter your Amazon QuickSight account name and choose Continue. Next, enter your Email address or user name and password, then choose Sign In.

On the Amazon QuickSight start page, choose Manage Data, which takes you to a list of your data sets. Choose New Data Set, and choose Salesforce as your data source. Enter a data source name—in this example, I called mine “SFDC Opportunity.” Choose Create Data Source to open the Salesforce authentication page, where you can enter your Salesforce user name and password.

After you are authenticated to Salesforce, you are presented with a drop-down list that lets you select data from Reports or Objects. For this tutorial, choose Object. Scroll down in the list to choose the Opportunity object, and then choose Select.

To finish creating your data set, choose Visualize to go to where you can create a new Amazon QuickSight analysis from this data.

Creating your analysis and adding visuals

Now that you have acquired your data, it’s time to start working with your analysis. In Amazon QuickSight, an analysis is a container for a set of related visual stories. When you chose Visualize, a new analysis was created for you. This is where you start to create the visuals (charts, graphs, etc.) that will be the building blocks for your dashboard.

In Amazon QuickSight, Salesforce objects look like database tables. In the analysis that you just created, you can see the columns in the Fields list for the Opportunity object.

The Opportunity object in Salesforce has a number of default fields. Salesforce administrators can extend this object by adding other custom fields as required—these custom fields are marked with a “__c” suffix.

In the Fields List, you can see that Amazon QuickSight has divided the fields into Dimensions and Measures.  You use these to create your visualizations and dashboard. For this particular dashboard, you create five different visuals to display the data in a few different ways.

Opportunity by Stage

For the first visualization, you create a horizontal bar chart showing “Opportunity by Stage”. In the Fields List, choose the StageName dimension and the ExpectedRevenue measure. By default, this should create a horizontal bar chart for you, as shown in the following image.

Notice that this chart includes the Closed Won category, which we aren’t interested in showing. Choose the bar for Closed Won, and in the pop-up menu, choose Exclude Closed Won. This filters the chart to show only opportunities that are in progress.

It’s important to note that for this dashboard, we only want to show the opportunities that are not Closed Won. So in the menu bar on the left side, choose Filter.

By default, the filter that you just created was only applied to a single visualization. To change this, choose the filter, and then choose All Visuals from the drop-down list. This applies the filter to all visuals in the analysis.

To finish, select the chart title and rename the chart to Opportunity by Stage.

Opportunity by Month

Next, you need to create a new visual to show “Opportunity by Month.” You use a vertical bar chart to display the data. On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. For this visual, choose CloseDate from the dimensions and ExpectedRevenue from the measures.

Using the Visual Types menu, change the chart type to a Vertical Bar Chart. By default, the chart displays the revenue by year, but we want to break it down a bit further. Choose Field Wells, and using the CloseDate drop-down menu, change the Aggregate to Month.

With the change to a monthly aggregate, your chart should look something like the following:

Select the chart title and rename the chart to Opportunity by Month.

Expected Revenue

When working with Salesforce opportunities, there are two measures that are important to most sales managers—the first is the total amount associated with the opportunity, and the second is what the actual expected revenue will be. For the next visual, you use the KPI chart to display these measures.

Choose Add on the Amazon QuickSight toolbar, and then choose Add visual. From the measures, choose ExpectedRevenue, and then Amount. To change your visualization, go to the Visual Types menu and choose the Key Performance Indicator (KPI). Your visualization should change and be similar to the following:

Select the chart title and rename the chart to Expected Revenue.

Opportunity by Lead Source

Next, you need to look at where the opportunity actually came from. This helps your dashboard users understand where the leads are being generated from and their value to the business. For this visual, you use a Horizontal Bar Chart.

On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. From the measures, choose Amount, and for the dimensions, choose LeadSource. To change your visualization, go to the Visual Types menu and choose the Horizontal Bar Chart. Your visualization should change and be similar to the following:

Note: If you can’t read the chart labels for the bars, grab the axis line and drag to resize.

Select the chart title and rename the chart to Opportunity by Lead Source.

Expected Revenue vs. Opportunity Amount

For the last visual, you look at the individual opportunities and how they contribute to the total pipeline. A tree map is a specialized chart type that lets your dashboard users see how each opportunity amount contributes to the whole. Additionally, you can highlight any difference between the Expected Revenue and the Amount by sizing the marks by the Amount and coloring them by the Expected Revenue.

On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. From the measures, choose ExpectedRevenue and Amount. From the dimensions, choose Name. To change your visualization, go to the Visual Types menu and choose the Tree Map. Your visualization should change and be similar to the following:

Select the chart title and rename the chart to Expected Revenue vs Opportunity Amount.

Creating an Amazon QuickSight dashboard

Now that your visuals are created, it’s time to do the fun part—actually putting your Amazon QuickSight dashboard together. First, resize and position your visuals on the page, using the following layout:

To resize a visual, grab the handle in the lower-right corner and drag it to the height and width that you want.

To move your visual, use the grab bar at the top of the visual, as shown here:

When you are done resizing your visuals, your canvas should look something like this:

To create a dashboard, choose Share in the Amazon QuickSight toolbar. Then choose Create Dashboard. For this dashboard, give it a name of SFDC Opportunity Dashboard, and choose Create Dashboard. You are prompted to enter the email address or user name of the users you want to share this dashboard with.

Because we are just concentrating on the design at the moment, you can choose Cancel and share your dashboard later using the Share button on the dashboard toolbar.

Working with filters

There is one more feature that you can use when viewing your dashboard to make it even more useful. Earlier, when you were working with the analysis, you added a filter to remove any opportunities tagged as Closed Won. Now, as you view the dashboard, you can add a filter on a relative date.

This feature in Amazon QuickSight allows you to choose a time period (years, quarters, months, weeks, etc.) and then select from a list of relative time periods. For example, if you choose Year, you could set the filter options to Previous Year, This Year, Year to Date, or Last N Years.

This is especially handy for a Salesforce Opportunity dashboard, as you might want to filter the data using the Close Date field to see when the opportunity is actually set to close.

To create a relative date filter, choose Filter on the toolbar. Choose the filter icon, and then choose CloseDate, as shown in the following image:

At the top of the Edit Filter pane, change the drop-down list to apply the filter to All Visuals. The default filter type is Time Range, so use the drop-down list to change the filter type to Relative Dates.  For the time period, choose Quarters. To view all the current opportunities in your dashboard, choose the option for This Quarter, and choose Apply.

With the date filter in place, you have the final component for your dashboard, which should look something like the following example:

It’s important to note that at this point, you have added the filter when viewing the dashboard. If you think this is something that other users might want to do, you can go back to your Amazon QuickSight Analysis and add the filter there—that way it will be available for all dashboard users.

Summary

In this post, you learned how to connect to Salesforce data and create a basic dashboard. You can apply the same techniques to create analyses and dashboards from all different types of Salesforce data and objects. Whether you want to analyze your Salesforce account demographics or where your leads are coming from, or evaluate any other data stored in Salesforce, Amazon QuickSight helps you quickly connect to and visualize your data with only a few clicks.

Additional Reading

Learn how to visualize Amazon S3 analytics data with Amazon QuickSight!


About the Author

David McAmis is a Big Data & Analytics Consultant with Amazon Web Services. He works with customers to develop scalable platforms to gather, process and analyze data on AWS.

Pirates Leak Copy of Kim Dotcom Documentary Online

Post Syndicated from Ernesto original https://torrentfreak.com/pirates-leak-copy-of-kim-dotcom-documentary-online-170824/

In recent years, we have written dozens of articles on Kim Dotcom, Megaupload’s shutdown, and all the intrigue surrounding the case.

It’s a story worth documenting and not just in writing. This is what the people behind the documentary Kim Dotcom: Caught in the Web realized as well.

With cooperation from the mastermind behind the defunct file-sharing site, they made a thrilling documentary that captures the essence of the story, which is far from over.

This week the film was released to the wider public, made available for sale on various online platforms including iTunes and Amazon Prime. Thus far things are going well, with the movie making its way into various top charts, including a second place in the iTunes documentary category.

However, if we believe entertainment industry rhetoric, this meteoric rise will soon be all over.

Earlier today the first pirated copies of “Caught in the Web” started to appear online. It is widely available on The Pirate Bay, for example, and shows up on various other “pirate” download and streaming sites as well.

The leaked documentary

Leaks happen every day, and this one’s not any different. That being said, people who followed the Dotcom saga may appreciate the irony, since Megaupload was a popular destination for pirates as well. So, a chunk of the site’s former users probably prefers to grab a free version. To sample, of course.

This is especially true for those who hit several roadblocks in trying to access the film from official outlets. Over the past few days, some people complained that “Caught in the Web” isn’t available through their preferred legal channel due to geographical restrictions.

Dotcom, still accused by the US Government of depriving copyright holders of $500 million in one of the country’s largest copyright infringement cases, responded appropriately when a Twitter follower pointed this out.

Not available

“They are wondering why people are pirating? If you’re willing to pay but you can’t find it legally, why is it your or my fault?” he wrote.

“If the Megaupload documentary is only available in the US iTunes store then I totally understand if you download or stream it elsewhere,” Dotcom added in another tweet.

The documentary is available in more countries, but not in all Amazon or iTunes stores. So, with the sympathy of the documentary’s main subject, people with no legal alternatives don’t have to feel as bad when they choose to pirate it instead.

That doesn’t make it less illegal, of course, but we doubt that the makers will actively pursue people for it.

Meanwhile, the people who were tasked with distributing the film may want to have another chat with Kim Dotcom. In recent years he has repeatedly sent out a concise list of tips on how to stop piracy.

Worth a read.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

On ISO standardization of blockchains

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/on-iso-standardization-of-blockchains.html

So ISO, the primary international standards organization, is seeking to standardize blockchain technologies. On the surface, this seems a reasonable idea, creating a common standard that everyone can interoperate with.

But it can be a silly idea in practice. I mean, it should not be assumed that this is a good thing to do.

The value of official standards

You don’t need the official imprimatur of a government committee for something to be a “standard”. The Internet itself is a prime example of that.

In the 1980s, the ISO and the IETF (Internet Engineering Task Force) pursued competing standards for creating a world-wide “internet”. The IETF was an informal group of technologists that had essentially no official standing.

The ISO version of the Internet failed. Their process was to bring multiple stakeholders from business, government, and universities together in committees to debate competing interests. The result was something so horrible that it could never work in practice.

The IETF succeeded. It consisted of engineers just building things. Rather than officially “standardized”, these things were “described”, so that others knew enough to build their own version that interoperated. Once lots of different people built interoperating versions of something, then it became a “standard”.

In other words, the way the Internet came to be, standardization followed interoperability — it didn’t create interoperability.

In the end, the ISO gave up on their standards and adopted the IETF standards. The ISO brought no value to the development of Internet standards. Whether they ratified the Internet’s “TCP/IP” standard, ignored it, or condemned it, the Internet would exist today anyway, and a competing ISO-blessed internetwork would not.

The same question exists for blockchain technologies. Groups are off busy innovating quickly, creating their own standards. If the ISO blesses one, or creates its own, it’s unlikely to have any impact on interoperability.

Blockchain vs. chaining blocks

The excitement over blockchains is largely driven by people who don’t know the details, who don’t understand the difference between a blockchain like Bitcoin and the problem they are trying to solve.

Consider a record keeping system, especially public records. Storing them in a blockchain seems like a natural idea.

But in fact, it’s a terrible idea. A Bitcoin-style blockchain has a lot of features you don’t want, like “proof-of-work” signing. It is also missing necessary features, like bulk storage with redundancy (backups). Sure, Bitcoin has redundancy, but by brute force, storing the blockchain in thousands of places around the Internet. This is far from what a public records system would need, which would store a lot more data with far fewer backup copies (fewer than 10).

The only real overlap between Bitcoin and a public records system is a “signing chain”. But this is something that already existed before Bitcoin. It’s what Bitcoin blockchain was built on top of — it’s not the blockchain itself.
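
To make that concrete, here is a minimal sketch of a signing chain in Python. It is purely illustrative: each record commits to the hash of its predecessor, so tampering with any earlier record invalidates every hash that follows. Note what is absent, namely proof-of-work and distributed consensus; this is just the chained hashing that predates Bitcoin.

import hashlib
import json

def add_record(chain, payload):
    # Each record commits to the previous record's hash, forming
    # a simple signing chain.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    # Recompute every hash; altering an earlier record breaks the
    # chain from that point onward.
    prev_hash = "0" * 64
    for record in chain:
        body = {"payload": record["payload"], "prev_hash": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["hash"] != digest:
            return False
        prev_hash = digest
    return True

records = []
add_record(records, "land title: parcel 42 to Alice")
add_record(records, "land title: parcel 42 to Bob")
print(verify(records))            # True
records[0]["payload"] = "tampered"
print(verify(records))            # False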

It’s like people discovering “cryptography” for the first time when they looked at Bitcoin, ignoring the thousand-year history of crypto, and now every time they see a need for “crypto” they think “Bitcoin blockchain”.

Consensus and forking

The entire point of Bitcoin, the reason it was created, was as the antithesis to centralized standardization like ISO. Standardizing blockchains misses the entire point of their existence. The Bitcoin manifesto is that standardization comes from acclamation not proclamation, and that many different standards are preferable to a single one.

This is not just a theoretical idea but one built into Bitcoin’s blockchain technology. “Consensus” is achieved by the proof-of-work mechanism, so that those who do the most work are the ones that drive the consensus. When irreconcilable differences arise, the blockchain “forks”, with each side continuing on with their now non-interoperable blockchains. Such forks are not a sin, but part of the natural evolution.

We saw this with the recent fork of Bitcoin. There are now so many transactions that they exceed the size of blocks. One group chose a change to make transactions smaller. Another group chose a change to make block sizes larger.

It is this problem, of consensus, that is the innovation that Bitcoin created with blockchains, not the chain signing of public transaction records.

Ethereum

What “blockchain standardization” is going to mean in practice is not the blockchain itself, but trying to standardize the Ethereum version. What makes Ethereum different is the “smart contracts” programming language, which has financial institutions excited.

This is a bad idea because from a cybersecurity perspective, Ethereum’s programming language is flawed. Different bugs in “smart contracts” have led to multiple $100-million hacks, such as the infamous “DAO collapse”.

While it has interesting possibilities, we should be scared of standardizing Ethereum’s language before it works.

Conclusion

People who matter are too busy innovating, creating their own blockchain standards. There is little that the ISO can do to improve this. Their official imprimatur is not needed to foster innovation and interoperability — if they are consequential at anything, it’ll just be interfering.

Court Orders Aussie ISPs to Block Dozens of Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/court-orders-aussie-isps-to-block-dozens-of-pirate-sites-170818/

Rather than taking site operators to court, copyright holders increasingly demand that Internet providers should block access to ‘pirate’ domains.

As a result, courts all around the world have ordered ISPs to block subscriber access to various pirate sites.

This is also happening in Australia where the first blockades were issued late last year. In December, the Federal Court ordered ISPs to block The Pirate Bay and several other sites, which happened soon after.

However, as is often the case with website blocking, one order is not enough as there are still plenty of pirate sites and proxies readily available. So, several rightsholders including movie studio Village Roadshow and local broadcaster Foxtel went back to court.

Today the Federal Court ruled on two applications that cover 59 pirate sites in total, including many popular torrent and streaming portals.

The first order was issued by Justice John Nicholas, who directed several Internet providers including iiNet, Telstra, and TPG to block access to several pirate sites. The request came from Village Roadshow, which was backed by several major Hollywood studios.

The order directs the ISPs to stop passing on traffic to 41 torrent and streaming platforms including Demonoid, RARBG, EZTV, YTS, Gomovies, and Fmovies. The full list of blocked domains is even longer, as it also covers several proxies.

“The infringement or facilitation of infringement by the Online Locations is flagrant and reflect a blatant disregard for the rights of copyright owners,” the order reads.

“By way of illustration, one of the Online Locations is accessible via the domain name ‘istole.it’ and it and many others include notices encouraging users to implement technology to frustrate any legal action that might be taken by copyright owners.”

In a separate order handed down by Federal Court Judge Stephen Burley, another 17 sites are ordered blocked following a request from Foxtel. This includes popular pirate sites such as 1337x, Torlock, Putlocker, YesMovies, Vumoo, and LosMovies.

The second order also includes a wide variety of alternative locations, including proxies, which brings the total number of targeted domain names to more than 160.

As highlighted by SMH, the orders coincide with the launch of a new anti-piracy campaign dubbed “The Price of Piracy,” which is organized by Creative Content Australia. Lori Flekser, executive director of the non-profit organization, believes that the blockades will help to significantly deter piracy.

“Not only is there decreasing traffic to pirate sites but there is a subsequent increase in traffic to legal sites,” she said.

At the same time, she warns people not to visit proxy and mirror sites, as these could be dangerous. This message is also repeated by her organization’s campaign, which warns that pirate sites can be filled with ransomware, spyware, trojans, viruses, bots, rootkits and worms.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Analyzing AWS Cost and Usage Reports with Looker and Amazon Athena

Post Syndicated from Dillon Morrison original https://aws.amazon.com/blogs/big-data/analyzing-aws-cost-and-usage-reports-with-looker-and-amazon-athena/

This is a guest post by Dillon Morrison at Looker. Looker is, in their own words, “a new kind of analytics platform–letting everyone in your business make better decisions by getting reliable answers from a tool they can use.” 

As the breadth of AWS products and services continues to grow, customers are able to more easily move their technology stack and core infrastructure to AWS. One of the attractive benefits of AWS is the cost savings. Rather than paying upfront capital expenses for large on-premises systems, customers can instead pay variable expenses for on-demand services. To further reduce expenses, AWS users can reserve resources for specific periods of time, and automatically scale resources as needed.

The AWS Cost Explorer is great for aggregated reporting. However, conducting analysis on the raw data using the flexibility and power of SQL allows for much richer detail and insight, and can be the better choice for the long term. Thankfully, with the introduction of Amazon Athena, monitoring and managing these costs is now easier than ever.

In this post, I walk through setting up the data pipeline for cost and usage reports, Amazon S3, and Athena, and discuss some of the most common levers for cost savings. I surface tables through Looker, which comes with a host of pre-built data models and dashboards to make analysis of your cost and usage data simple and intuitive.

Analysis with Athena

With Athena, there’s no need to create hundreds of Excel reports, move data around, or deploy clusters to house and process data. Athena uses Apache Hive’s DDL to create tables, and the Presto query engine to process queries. Analysis can be performed directly on raw data in S3. Conveniently, AWS exports raw cost and usage data directly into a user-specified S3 bucket, making it simple to start querying with Athena right away. This makes continuous monitoring of costs virtually seamless, since there is no infrastructure to manage. Instead, users can leverage the power of the Athena SQL engine to easily perform ad-hoc analysis and data discovery without needing to set up a data warehouse.

After the data pipeline is established, the cost and usage data (the recommended billing data, per AWS documentation) provides a wealth of information about usage of AWS services and the associated costs. Whether you need the report segmented by product type, user identity, or region, it can be cut and sliced any number of ways to properly allocate costs for any of your business needs. You can then drill into any specific line item to see even further detail, such as the selected operating system, tenancy, purchase option (on-demand, spot, or reserved), and so on.
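
As an illustration of that kind of slicing, here is a sketch in Python using the boto3 Athena client. It runs against the cost_and_usage table defined later in this walkthrough; the database name and the results bucket are assumptions to adapt to your own setup.

import boto3

athena = boto3.client("athena")

# Monthly unblended cost per product. Every CUR column is declared
# as a string in the table definition below, hence the cast.
QUERY = """
SELECT lineItem_ProductCode,
       substr(lineItem_UsageStartDate, 1, 7) AS usage_month,
       sum(cast(lineItem_UnblendedCost AS double)) AS unblended_cost
FROM cost_and_usage
WHERE lineItem_LineItemType = 'Usage'
GROUP BY 1, 2
ORDER BY 3 DESC
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    # Assumed to match the database created in the Athena setup step.
    QueryExecutionContext={"Database": "aws_optimizer"},
    # Placeholder bucket for Athena's query output.
    ResultConfiguration={"OutputLocation": "s3://your-athena-results/"},
)
print(response["QueryExecutionId"])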

Walkthrough

By default, the Cost and Usage report exports CSV files, which you can compress using gzip (recommended for performance). There are some additional configuration options for tuning performance further, which are discussed below.

Prerequisites

If you want to follow along, you need the following resources:

Enable the cost and usage reports

First, enable the Cost and Usage report. For Time unit, select Hourly. For Include, select Resource IDs. All options are prompted in the report-creation window.

The Cost and Usage report dumps CSV files into the specified S3 bucket. Please note that it can take up to 24 hours for the first file to be delivered after enabling the report.

Configure the S3 bucket and files for Athena querying

In addition to the CSV file, AWS also creates a JSON manifest file for each cost and usage report. Athena requires that all of the files in the S3 bucket are in the same format, so we need to get rid of all these manifest files. If you’re looking to get started with Athena quickly, you can simply go into your S3 bucket and delete the manifest file manually, skip the automation described below, and move on to the next section.

To automate the process of removing the manifest file each time a new report is dumped into S3, which I recommend as you scale, there are a few additional steps. The folks at Concurrency Labs wrote a great overview and set of scripts for this, which you can find in their GitHub repo.

These scripts take the data from an input bucket, remove anything unnecessary, and dump it into a new output bucket. We can utilize AWS Lambda to trigger this process whenever new data is dropped into S3, on a nightly basis, or on whatever schedule makes the most sense for your use case, depending on how often you’re querying the data. Please note that enabling the “hourly” report means that data is reported at the hour level of granularity, not that a new file is generated every hour.
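
As a rough illustration only (this is not the Concurrency Labs code, and the bucket name and the .csv.gz filter are assumptions), a Lambda handler wired to S3 PUT events could perform the copy-and-filter step like this:

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "your-clean-cur-bucket"  # placeholder, not from the post

def handler(event, context):
    # Copy compressed report files to the clean output bucket,
    # skipping the JSON manifests that Athena cannot parse.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]  # note: URL-encoded in S3 events
        if not key.endswith(".csv.gz"):
            continue
        s3.copy_object(
            Bucket=OUTPUT_BUCKET,
            Key=key,
            CopySource={"Bucket": bucket, "Key": key},
        )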

Following these scripts, you’ll notice that we’re adding a date partition field, which isn’t necessary but improves query performance. In addition, converting data from CSV to a columnar format like ORC or Parquet also improves performance. We can automate this process using Lambda whenever new data is dropped in our S3 bucket. Amazon Web Services discusses columnar conversion at length, and provides walkthrough examples, in their documentation.

As a long-term solution, best practice is to use compression, partitioning, and conversion. However, for purposes of this walkthrough, we’re not going to worry about them so we can get up-and-running quicker.

Set up the Athena query engine

In your AWS console, navigate to the Athena service, and click “Get Started”. Follow the tutorial and set up a new database (we’ve called ours “AWS Optimizer” in this example). Don’t worry about configuring your initial table, per the tutorial instructions. We’ll be creating a new table for cost and usage analysis. Once you’ve walked through the tutorial steps, you’ll be able to access the Athena interface, and can begin running Hive DDL statements to create new tables.

One thing that’s important to note is that the Cost and Usage CSVs also contain the column headers in their first row, meaning that the column headers would be included in the dataset and any queries. For testing and quick set-up, you can remove this line manually from your first few CSV files. Long-term, you’ll want to use a script to programmatically remove this row each time a new file is dropped in S3 (typically every few hours). We’ve drafted up a sample script for ease of reference, which we run on Lambda, utilizing Lambda’s native ability to invoke the script whenever a new object is dropped in S3.
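
That script isn’t reproduced in this post, but a minimal Lambda handler in the same spirit might look like the sketch below. As before, the output bucket name is a placeholder, and the logic assumes the recommended gzip compression is enabled.

import gzip

import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "your-clean-cur-bucket"  # placeholder, not from the post

def handler(event, context):
    # Strip the first (header) row from each gzipped CSV as it lands
    # in S3, then write the result to the output bucket.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        if not key.endswith(".csv.gz"):
            continue
        raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        parts = gzip.decompress(raw).split(b"\n", 1)
        data = parts[1] if len(parts) > 1 else b""
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=gzip.compress(data))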

For cost and usage, we recommend the DDL statement below. Since our data is in CSV format, we don't need a custom SerDe; we can simply declare the field delimiter and escape characters and the file structure (TEXTFILE) directly in the DDL. Note that AWS also offers an OpenCSV SerDe, which takes separatorChar, quoteChar, and escapeChar properties, if you prefer to use that (a sketch of the alternative row format follows the DDL).

CREATE EXTERNAL TABLE IF NOT EXISTS cost_and_usage (
identity_LineItemId String,
identity_TimeInterval String,
bill_InvoiceId String,
bill_BillingEntity String,
bill_BillType String,
bill_PayerAccountId String,
bill_BillingPeriodStartDate String,
bill_BillingPeriodEndDate String,
lineItem_UsageAccountId String,
lineItem_LineItemType String,
lineItem_UsageStartDate String,
lineItem_UsageEndDate String,
lineItem_ProductCode String,
lineItem_UsageType String,
lineItem_Operation String,
lineItem_AvailabilityZone String,
lineItem_ResourceId String,
lineItem_UsageAmount String,
lineItem_NormalizationFactor String,
lineItem_NormalizedUsageAmount String,
lineItem_CurrencyCode String,
lineItem_UnblendedRate String,
lineItem_UnblendedCost String,
lineItem_BlendedRate String,
lineItem_BlendedCost String,
lineItem_LineItemDescription String,
lineItem_TaxType String,
product_ProductName String,
product_accountAssistance String,
product_architecturalReview String,
product_architectureSupport String,
product_availability String,
product_bestPractices String,
product_cacheEngine String,
product_caseSeverityresponseTimes String,
product_clockSpeed String,
product_currentGeneration String,
product_customerServiceAndCommunities String,
product_databaseEdition String,
product_databaseEngine String,
product_dedicatedEbsThroughput String,
product_deploymentOption String,
product_description String,
product_durability String,
product_ebsOptimized String,
product_ecu String,
product_endpointType String,
product_engineCode String,
product_enhancedNetworkingSupported String,
product_executionFrequency String,
product_executionLocation String,
product_feeCode String,
product_feeDescription String,
product_freeQueryTypes String,
product_freeTrial String,
product_frequencyMode String,
product_fromLocation String,
product_fromLocationType String,
product_group String,
product_groupDescription String,
product_includedServices String,
product_instanceFamily String,
product_instanceType String,
product_io String,
product_launchSupport String,
product_licenseModel String,
product_location String,
product_locationType String,
product_maxIopsBurstPerformance String,
product_maxIopsvolume String,
product_maxThroughputvolume String,
product_maxVolumeSize String,
product_maximumStorageVolume String,
product_memory String,
product_messageDeliveryFrequency String,
product_messageDeliveryOrder String,
product_minVolumeSize String,
product_minimumStorageVolume String,
product_networkPerformance String,
product_operatingSystem String,
product_operation String,
product_operationsSupport String,
product_physicalProcessor String,
product_preInstalledSw String,
product_proactiveGuidance String,
product_processorArchitecture String,
product_processorFeatures String,
product_productFamily String,
product_programmaticCaseManagement String,
product_provisioned String,
product_queueType String,
product_requestDescription String,
product_requestType String,
product_routingTarget String,
product_routingType String,
product_servicecode String,
product_sku String,
product_softwareType String,
product_storage String,
product_storageClass String,
product_storageMedia String,
product_technicalSupport String,
product_tenancy String,
product_thirdpartySoftwareSupport String,
product_toLocation String,
product_toLocationType String,
product_training String,
product_transferType String,
product_usageFamily String,
product_usagetype String,
product_vcpu String,
product_version String,
product_volumeType String,
product_whoCanOpenCases String,
pricing_LeaseContractLength String,
pricing_OfferingClass String,
pricing_PurchaseOption String,
pricing_publicOnDemandCost String,
pricing_publicOnDemandRate String,
pricing_term String,
pricing_unit String,
reservation_AvailabilityZone String,
reservation_NormalizedUnitsPerReservation String,
reservation_NumberOfReservations String,
reservation_ReservationARN String,
reservation_TotalReservedNormalizedUnits String,
reservation_TotalReservedUnits String,
reservation_UnitsPerReservation String,
resourceTags_userName String,
resourceTags_usercostcategory String
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY ','
  ESCAPED BY '\\'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION 's3://<<your bucket name>>';
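
For reference, if you'd rather use the OpenCSV SerDe mentioned above, only the row-format portion of the DDL changes; with the same column list, it would look something like:

ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = ',',
  'quoteChar' = '"',
  'escapeChar' = '\\'
)
STORED AS TEXTFILE
LOCATION 's3://<<your bucket name>>';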

Once you've successfully executed the statement, you should see a new table named "cost_and_usage" in your database. Now we're ready to start executing queries and running analysis!

Start with Looker and connect to Athena

Setting up Looker is a quick process, and you can try it out for free (or download it from the Amazon Marketplace). It takes just a few seconds to connect Looker to your Athena database, and Looker comes with a host of pre-built data models and dashboards to make analysis of your cost and usage data simple and intuitive. After you're connected, you can use the Looker UI to run whatever analysis you'd like. Looker translates this UI to optimized SQL, so any user can execute and visualize queries for true self-service analytics.

Major cost saving levers

Now that the data pipeline is configured, you can dive into the most popular use cases for cost savings. In this post, I focus on:

  • Purchasing Reserved Instances vs. On-Demand Instances
  • Data transfer costs
  • Allocating costs over users or other attributes (denoted with resource tags)

On-Demand, Spot, and Reserved Instances

Purchasing Reserved Instances rather than On-Demand Instances is arguably the biggest cost lever for heavy AWS users (Reserved Instances can run up to 75% cheaper!). AWS offers three options for purchasing instances:

  • On-Demand—Pay as you use.
  • Spot (variable cost)—Bid on spare Amazon EC2 computing capacity.
  • Reserved Instances—Pay for an instance for a specific, allotted period of time.

When purchasing a Reserved Instance, you can also choose to pay all-upfront, partial-upfront, or monthly. The more you pay upfront, the greater the discount.

If your company has been using AWS for some time now, you should have a good sense of your overall instance usage on a per-month or per-day basis. Rather than paying for these instances On-Demand, you should try to forecast the number of instances you’ll need, and reserve them with upfront payments.

The ratio of usage covered by Reserved Instances to your overall instance usage is called your coverage ratio; for example, if 600 of 1,000 instance-hours in a month ran on Reserved Instances, your coverage is 60%. It's important not to confuse your coverage ratio with your Reserved Instance utilization, which represents the share of reserved hours that were actually used. Don't worry about exceeding capacity: you can still set up Auto Scaling preferences so that more instances get added whenever your coverage or utilization crosses a certain threshold (we often see a target of 80% for both coverage and utilization among savvy customers).

Calculating the reserved costs and coverage can be a bit tricky at the level of granularity provided by the cost and usage report. The following query shows your total cost over the last six months, broken out by Reserved Instance versus other instance usage. You can substitute the cost fields for usage fields if you'd prefer. Please note that you'll only have data for the period after the cost and usage report was enabled (though you can opt for up to 3 months of historical data by contacting your AWS Account Executive). If you're just getting started, this query will only show a few days.

SELECT 
	DATE_FORMAT(from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate),'%Y-%m') AS "cost_and_usage.usage_start_month",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0) AS "cost_and_usage.total_unblended_cost",
	COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_reserved_unblended_cost",
	1.0 * (COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.percent_spend_on_ris",
	COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'Non RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_non_reserved_unblended_cost",
	1.0 * (COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'Non RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.percent_spend_on_non_ris"
FROM aws_optimizer.cost_and_usage  AS cost_and_usage

WHERE 
	(((from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) >= ((DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))) AND (from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) < ((DATE_ADD('month', 6, DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))))))
GROUP BY 1
ORDER BY 2 DESC
LIMIT 500

The resulting table shows monthly totals split into Reserved and non-Reserved spend (I'm surfacing tables through Looker, though the same results come back when querying via the command line or any other interface).

With a BI tool, you can create dashboards for easy reference and monitoring. New data is dumped into S3 every few hours, so your dashboards can update several times per day.

It's an iterative process to work out the number of Reserved Instances needed to meet your business needs, but after you've properly integrated them into your purchasing patterns, the savings can be significant. If your coverage is consistently below 70%, you should seriously consider adjusting your purchase types and opting for more Reserved Instances.

Data transfer costs

One of the great things about AWS data storage is that it's incredibly cheap; most charges come from moving and processing that data. There are several different prices for transferring data, broken out largely by whether the transfer crosses regions or Availability Zones. Transfers between regions are the most costly, followed by transfers between Availability Zones. Transfers within the same region and the same Availability Zone are free unless you use Elastic or public IP addresses, in which case there is a cost. You can find more detailed information in the AWS pricing documentation. With this in mind, there are several simple strategies for reducing costs.

First, since costs increase when transferring data between regions, it’s wise to ensure that as many services as possible reside within the same region. The more you can localize services to one specific region, the lower your costs will be.

Second, route as much data as possible directly between AWS services and internal IP addresses. Transfers out to the open internet are the most costly and least performant mechanism of data transfer, so it's best to keep traffic within AWS wherever you can.

Lastly, data transfers between private IP addresses are cheaper than between elastic or public IP addresses, so utilizing private IP addresses as much as possible is the most cost-effective strategy.

The following query produces a table of total costs for each AWS product, broken out by transfer cost type. Substitute the lineitem_productcode field in the query to segment the costs by any other attribute. If you notice any unusually high spikes in cost, you'll need to dig deeper to understand what's driving that spike: location, volume, and so on. Drill down into specific costs by including product_usagetype and product_transfertype in your query to identify the types of transfer costs that are driving up your bill.

SELECT 
	cost_and_usage.lineitem_productcode  AS "cost_and_usage.product_code",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost), 0) AS "cost_and_usage.total_unblended_cost",
	COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_data_transfer_cost",
	COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer-In')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_inbound_data_transfer_cost",
	COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer-Out')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0) AS "cost_and_usage.total_outbound_data_transfer_cost"
FROM aws_optimizer.cost_and_usage  AS cost_and_usage

WHERE 
	(((from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) >= ((DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))) AND (from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) < ((DATE_ADD('month', 6, DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))))))
GROUP BY 1
ORDER BY 2 DESC
LIMIT 500
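
For example, a hand-written drill-down that adds the usage type as a second grouping dimension, restricted to data transfer line items, might look like this:

SELECT
	cost_and_usage.lineitem_productcode AS "cost_and_usage.product_code",
	cost_and_usage.product_usagetype AS "cost_and_usage.usage_type",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost), 0) AS "cost_and_usage.total_unblended_cost"
FROM aws_optimizer.cost_and_usage AS cost_and_usage
WHERE REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer')
GROUP BY 1, 2
ORDER BY 3 DESC
LIMIT 500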

When data moves between regions or over the open internet, the report also records the origin and destination of the movement. Using a BI tool with mapping capabilities, you can get a nice visual of these data flows; a point at the center of the map can be used to represent external flows over the open internet.

Analysis by tags

AWS provides the option to apply custom tags to individual resources, so you can allocate costs over whatever customized segment makes the most sense for your business. For a SaaS company that hosts software for customers on AWS, for example, you might tag resources by customer. The following query uses custom tags to display the reserved, data transfer, and total cost for each AWS service, broken out by tag categories, over the last six months. Replace cost_and_usage.resourcetags_customersegment and cost_and_usage.customer_segment with the name of your own tag field.

SELECT * FROM (
SELECT *, DENSE_RANK() OVER (ORDER BY z___min_rank) as z___pivot_row_rank, RANK() OVER (PARTITION BY z__pivot_col_rank ORDER BY z___min_rank) as z__pivot_col_ordering FROM (
SELECT *, MIN(z___rank) OVER (PARTITION BY "cost_and_usage.product_code") as z___min_rank FROM (
SELECT *, RANK() OVER (ORDER BY CASE WHEN z__pivot_col_rank=1 THEN (CASE WHEN "cost_and_usage.total_unblended_cost" IS NOT NULL THEN 0 ELSE 1 END) ELSE 2 END, CASE WHEN z__pivot_col_rank=1 THEN "cost_and_usage.total_unblended_cost" ELSE NULL END DESC, "cost_and_usage.total_unblended_cost" DESC, z__pivot_col_rank, "cost_and_usage.product_code") AS z___rank FROM (
SELECT *, DENSE_RANK() OVER (ORDER BY CASE WHEN "cost_and_usage.customer_segment" IS NULL THEN 1 ELSE 0 END, "cost_and_usage.customer_segment") AS z__pivot_col_rank FROM (
SELECT 
	cost_and_usage.lineitem_productcode  AS "cost_and_usage.product_code",
	cost_and_usage.resourcetags_customersegment  AS "cost_and_usage.customer_segment",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0) AS "cost_and_usage.total_unblended_cost",
	1.0 * (COALESCE(SUM(CASE WHEN REGEXP_LIKE(cost_and_usage.product_usagetype, 'DataTransfer')    THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.percent_spend_data_transfers_unblended",
	1.0 * (COALESCE(SUM(CASE WHEN (CASE
         WHEN cost_and_usage.lineitem_lineitemtype = 'DiscountedUsage' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'RIFee' THEN 'RI Line Item'
         WHEN cost_and_usage.lineitem_lineitemtype = 'Fee' THEN 'RI Line Item'
         ELSE 'Non RI Line Item'
        END = 'Non RI Line Item') THEN cost_and_usage.lineitem_unblendedcost  ELSE NULL END), 0)) / NULLIF((COALESCE(SUM(cost_and_usage.lineitem_unblendedcost ), 0)),0)  AS "cost_and_usage.unblended_percent_spend_on_ris"
FROM aws_optimizer.cost_and_usage  AS cost_and_usage

WHERE 
	(((from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) >= ((DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))) AND (from_iso8601_timestamp(cost_and_usage.lineitem_usagestartdate)) < ((DATE_ADD('month', 6, DATE_ADD('month', -5, DATE_TRUNC('MONTH', CAST(NOW() AS DATE))))))))
GROUP BY 1,2) ww
) bb WHERE z__pivot_col_rank <= 16384
) aa
) xx
) zz
 WHERE z___pivot_row_rank <= 500 OR z__pivot_col_ordering = 1 ORDER BY z___pivot_row_rank
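
Most of that query is pivot scaffolding generated by Looker. If you're writing the SQL by hand, a stripped-down equivalent of the core aggregation (again substituting your own tag column for resourcetags_customersegment) looks like this:

SELECT
	cost_and_usage.lineitem_productcode AS "cost_and_usage.product_code",
	cost_and_usage.resourcetags_customersegment AS "cost_and_usage.customer_segment",
	COALESCE(SUM(cost_and_usage.lineitem_unblendedcost), 0) AS "cost_and_usage.total_unblended_cost"
FROM aws_optimizer.cost_and_usage AS cost_and_usage
GROUP BY 1, 2
ORDER BY 3 DESC
LIMIT 500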

In the resulting table for this example, you can tell that we're making poor use of Reserved Instances: they represent only a small portion of our overall costs.

Again, using a BI tool to visualize these costs and trends over time makes the analysis much easier to consume and take action on.

Summary

Saving costs on your AWS spend is an iterative, ongoing process. Hopefully, with these queries alone you can start to understand your spending patterns and identify opportunities for savings. However, this is just a peek into the many opportunities available through analysis of the Cost and Usage report. Each company is different, with unique needs and usage patterns. To achieve maximum cost savings, we encourage you to set up an analytics environment that enables your team to explore all potential cuts and slices of your usage data whenever necessary. Exploring different trends and spikes across regions, services, user types, and so on helps you gain a comprehensive understanding of your major cost levers and consistently implement new cost-reduction strategies.

Note that all of the queries and analysis provided in this post were generated using the Looker data platform. If you’re already a Looker customer, you can get all of this analysis, additional pre-configured dashboards, and much more using Looker Blocks for AWS.


About the Author

Dillon Morrison leads the Platform Ecosystem at Looker. He enjoys exploring new technologies and architecting the most efficient data solutions for the business needs of his company and their customers. In his spare time, you’ll find Dillon rock climbing in the Bay Area or nose deep in the docs of the latest AWS product release at his favorite cafe (“Arlequin in SF is unbeatable!”).

Raspbian Stretch has arrived for Raspberry Pi

Post Syndicated from Simon Long original https://www.raspberrypi.org/blog/raspbian-stretch/

It’s now just under two years since we released the Jessie version of Raspbian. Those of you who know that Debian run their releases on a two-year cycle will therefore have been wondering when we might be releasing the next version, codenamed Stretch. Well, wonder no longer – Raspbian Stretch is available for download today!


Debian releases are named after characters from Disney Pixar’s Toy Story trilogy. In case, like me, you were wondering: Stretch is a purple octopus from Toy Story 3. Hi, Stretch!

The differences between Jessie and Stretch are mostly under-the-hood optimisations, and you really shouldn’t notice any differences in day-to-day use of the desktop and applications. (If you’re really interested, the technical details are in the Debian release notes here.)

However, we’ve made a few small changes to our image that are worth mentioning.

New versions of applications

Version 3.0.1 of Sonic Pi is included – this includes a lot of new functionality in terms of input/output. See the Sonic Pi release notes for more details of exactly what has changed.


The Chromium web browser has been updated to version 60, the most recent stable release. This offers improved memory usage and more efficient code, so you may notice it running slightly faster than before. The visual appearance has also been changed very slightly.


Bluetooth audio

In Jessie, we used PulseAudio to provide support for audio over Bluetooth, but integrating this with the ALSA architecture used for other audio sources was clumsy. For Stretch, we are using the bluez-alsa package to make Bluetooth audio work with ALSA itself. PulseAudio is therefore no longer installed by default, and the volume plugin on the taskbar will no longer start and stop PulseAudio. From a user point of view, everything should still work exactly as before – the only change is that if you still wish to use PulseAudio for some other reason, you will need to install it yourself.
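
Reinstalling it, should you need it, is a single command:

sudo apt-get install pulseaudio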

Better handling of other usernames

The default user account in Raspbian has always been called ‘pi’, and a lot of the desktop applications used to assume that this is the current user. This has been changed for Stretch, so applications like Raspberry Pi Configuration no longer make that assumption. This means, for example, that the option to log in automatically now logs in the current user rather than specifically the ‘pi’ user.

One other change is how sudo is handled. By default, the ‘pi’ user is set up with passwordless sudo access. We no longer assume this to be the case, so desktop applications that require sudo access now prompt for the password rather than simply failing to work for a user without passwordless sudo.

Scratch 2 SenseHAT extension

In the last Jessie release, we added the offline version of Scratch 2. While Scratch 2 itself hasn’t changed for this release, we have added a new extension to allow the SenseHAT to be used with Scratch 2. Look under ‘More Blocks’ and choose ‘Add an Extension’ to load the extension.

This works with either a physical SenseHAT or with the SenseHAT emulator. If a SenseHAT is connected, the extension will control that in preference to the emulator.


Fix for Broadpwn exploit

A couple of months ago, a vulnerability was discovered in the firmware of the BCM43xx wireless chipset which is used on Pi 3 and Pi Zero W; this potentially allows an attacker to take over the chip and execute code on it. The Stretch release includes a patch that addresses this vulnerability.

There is also the usual set of minor bug fixes and UI improvements – I’ll leave you to spot those!

How to get Raspbian Stretch

As this is a major version upgrade, we recommend using a clean image; these are available from the Downloads page on our site as usual.

Upgrading an existing Jessie image is possible, but is not guaranteed to work in every circumstance. If you wish to try upgrading a Jessie image to Stretch, we strongly recommend taking a backup first – we can accept no responsibility for loss of data from a failed update.

To upgrade, first modify the files /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list. In both files, change every occurrence of the word ‘jessie’ to ‘stretch’. (Both files will require sudo to edit.)
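
If you’d rather make both edits from the command line, sed can handle it; for example:

sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list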

Then open a terminal window and execute

sudo apt-get update
sudo apt-get -y dist-upgrade

Answer ‘yes’ to any prompts. There may also be a point at which the install pauses while a page of information is shown on the screen – hold the ‘space’ key to scroll through all of this and then hit ‘q’ to continue.

Finally, if you are not using PulseAudio for anything other than Bluetooth audio, remove it from the image by entering

sudo apt-get -y purge pulseaudio*

The post Raspbian Stretch has arrived for Raspberry Pi appeared first on Raspberry Pi.