
User Authentication Best Practices Checklist

Post Syndicated from Bozho original https://techblog.bozho.net/user-authentication-best-practices-checklist/

User authentication is functionality that every web application shares. We should have perfected it a long time ago, having implemented it so many times. And yet there are so many mistakes made all the time.

Part of the reason for that is that the list of things that can go wrong is long. You can store passwords incorrectly, you can have vulnerable password reset functionality, you can expose your session to a CSRF attack, your session can be hijacked, etc. So I’ll try to compile a list of best practices regarding user authentication. The OWASP top 10 is always something you should read, every year. But that might not be enough.

So, let’s start. I’ll try to be concise, but I’ll include as many of the related pitfalls as I can cover – e.g. what could go wrong with the user session after they log in:

  • Store passwords with bcrypt/scrypt/PBKDF2. No MD5 or SHA, as they are not suited to password storage. A long per-user salt is mandatory (the aforementioned algorithms have it built in). If you don’t do this and someone gets hold of your database, they’ll be able to extract the passwords of all your users and then try those passwords on other websites. (A sketch of PBKDF2 hashing appears after this list.)
  • Use HTTPS. Period. (Otherwise user credentials can leak through unprotected networks). Force HTTPS if user opens a plain-text version.
  • Mark cookies as secure. Makes cookie theft harder.
  • Use CSRF protection (e.g. CSRF one-time tokens that are verified with each request). Frameworks have such functionality built-in.
  • Disallow framing (X-Frame-Options: DENY). Otherwise your website may be included in another website in a hidden iframe and “abused” through javascript.
  • Have a same-origin policy
  • Logout – let your users logout by deleting all cookies and invalidating the session. This makes usage of shared computers safer (yes, users should ideally use private browsing sessions, but not all of them are that savvy)
  • Session expiry – don’t have forever-lasting sessions. If the user closes your website, their session should expire after a while. “A while” may still be a big number depending on the service provided. For ajax-heavy website you can have regular ajax-polling that keeps the session alive while the page stays open.
  • Remember me – implementing “remember me” (on this machine) functionality is actually hard due to the risks of a stolen persistent cookie. Spring-security uses this approach, which I think should be followed if you wish to implement more persistent logins.
  • Forgotten password flow – the forgotten password flow should rely on sending a one-time (or expiring) link to the user and asking for a new password when it’s opened. Auth0 explain it in this post and Postmark gives some best practices. How the link is formed is a separate discussion and there are several approaches. Store a password-reset token in the user profile table and then send it as a parameter in the link. Or do not store anything in the database, but send a few params: userId:expiresTimestamp:hmac(userId+expiresTimestamp). That way you have expiring links (rather than one-time links). The HMAC relies on a secret key, so the links can’t be spoofed. It seems there’s no consensus, as the OWASP guide takes a slightly different approach. (A sketch of the stateless variant appears after this list.)
  • One-time login links – this is an option used by Slack, which sends one-time login links instead of asking users for passwords. It relies on the fact that your email is well guarded and you have access to it all the time. If your service is not accessed too often, you can use that approach instead of (rather than in addition to) passwords.
  • Limit login attempts – brute-force through the web UI should not be possible, so block login attempts once there are too many of them. One approach is to block based on IP, the other is to block based on the account being attempted. (Spring example here). Which one is better – I don’t know. The two can actually be combined. Instead of fully blocking the attempts, you may add a captcha after, say, the 5th attempt. But don’t add the captcha for the first attempt – it is bad user experience.
  • Don’t leak information through error messages – you shouldn’t allow attackers to figure out whether an email is registered or not. If an email is not found, report just “Incorrect credentials” upon login. On password reset, say something like “If your email is registered, you should have received a password reset email”. This is often at odds with usability – people don’t always remember the email they used to register, and the ability to check a number of them before getting in might be important. So this rule is not absolute, though it’s desirable, especially for more critical systems.
  • Make sure you use JWT only if it’s really necessary and be careful of the pitfalls.
  • Consider using a 3rd party authentication – OpenID Connect, OAuth by Google/Facebook/Twitter (but be careful with OAuth flaws as well). There’s an associated risk with relying on a 3rd party identity provider, and you still have to manage cookies, logout, etc., but some of the authentication aspects are simplified.
  • For high-risk or sensitive applications use 2-factor authentication. There’s a caveat with Google Authenticator though – if you lose your phone, you lose your accounts (unless there’s a manual process to restore it). That’s why Authy seems like a good solution for storing 2FA keys.
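
As a sketch of the password-storage item, here’s PBKDF2 with only Python’s standard library; the iteration count and salt size are illustrative, not recommendations:

import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)  # long per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # store both in the user record

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison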
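
And a sketch of the stateless reset-link idea from the forgotten-password item; the URL shape and the one-hour lifetime are my assumptions:

import hashlib
import hmac
import time

RESET_KEY = b"a-long-random-server-side-secret"  # hypothetical; keep it out of the codebase

def make_reset_link(user_id):
    expires = int(time.time()) + 3600  # assumed one-hour lifetime
    payload = f"{user_id}:{expires}".encode()
    sig = hmac.new(RESET_KEY, payload, hashlib.sha256).hexdigest()
    return f"https://example.com/reset?uid={user_id}&exp={expires}&sig={sig}"

def verify_reset_params(user_id, expires, sig):
    if int(expires) < time.time():
        return False  # link has expired
    payload = f"{user_id}:{expires}".encode()
    expected = hmac.new(RESET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)  # can't be spoofed without the key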

I’m sure I’m missing something. And you see it’s complicated. Sadly we’re still at the point where the most common functionality – authenticating users – is so tricky and cumbersome that you almost always get at least some of it wrong.


SUPER game night 3: GAMES MADE QUICK??? 2.0

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/23/super-game-night-3-games-made-quick-2-0/

Game night continues with a smorgasbord of games from my recent game jam, GAMES MADE QUICK??? 2.0!

The idea was to make a game in only a week while watching AGDQ, as an alternative to doing absolutely nothing for a week while watching AGDQ. (I didn’t submit a game myself; I was chugging along on my Anise game, which isn’t finished yet.)

I can’t very well run a game jam and not play any of the games, so here’s some of them in no particular order! Enjoy!

These are impressions, not reviews. I try to avoid major/ending spoilers, but big plot points do tend to leave impressions.

Weather Quest, by timlmul

short · rpg · jan 2018 · (lin)/mac/win · free on itch · jam entry

Weather Quest is its author’s first shipped game, written completely from scratch (the only vendored code is a micro OO base). It’s very short, but as someone who has also written LÖVE games completely from scratch, I can attest that producing something this game-like in a week is a fucking miracle. Bravo!

For reference, a week into my first foray, I think I was probably still writing my own Tiled importer like an idiot.

Only Mac and Windows builds are on itch, but it’s a LÖVE game, so Linux folks can just grab a zip from GitHub and throw that at love.

FINAL SCORE: ⛅☔☀

Pancake Numbers Simulator, by AnorakThePrimordial

short · sim · jan 2018 · lin/mac/win · free on itch · jam entry

Given a stack of N pancakes (of all different sizes and in no particular order), the Nth pancake number is the most flips you could possibly need to sort the pancakes in order with the smallest on top. A “flip” is sticking a spatula under one of the pancakes and flipping the whole sub-stack over. There’s, ah, a video embedded on the game page with some visuals.

Anyway, this game lets you simulate sorting a stack via pancake flipping, which is surprisingly satisfying! I enjoy cleaning up little simulated messes, such as… incorrectly-sorted pancakes, I guess?

This probably doesn’t work too well as a simulator for solving the general problem — you’d have to find an optimal solution for every permutation of N pancakes to be sure you were right. But it’s a nice interactive illustration of the problem, and if you know the pancake number for your stack size of choice (which I wish the game told you — for seven pancakes, it’s 8), then trying to restore a stack in that many moves makes for a nice quick puzzle.

FINAL SCORE: \(\frac{18}{11}\)

Framed Animals, by chridd

short · metroidvania · jan 2018 · web/win · free on itch · jam entry

The concept here was to kill the frames, save the animals, which is a delightfully literal riff on a long-running AGDQ/SGDQ donation incentive — people vote with their dollars to decide whether Super Metroid speedrunners go out of their way to free the critters who show you how to walljump and shinespark. Super Metroid didn’t have a showing at this year’s AGDQ, and so we have this game instead.

It’s rough, but clever, and I got really into it pretty quickly — each animal you save gives you a new ability (in true Metroid style), and you get to test that ability out by playing as the animal, with only that ability and no others, to get yourself back to the most recent save point.

I did, tragically, manage to get myself stuck near what I think was about to be the end of the game, so some of the animals will remain framed forever. What an unsatisfying conclusion.

Gravity feels a little high given the size of the screen, and like most tile-less platformers, there’s not really any way to gauge how high or long your jump is before you leap. But I’m only even nitpicking because I think this is a great idea and I hope the author really does keep working on it.

FINAL SCORE: $136,596.69

Battle 4 Glory, by Storyteller Games

short · fighter · jan 2018 · win · free on itch · jam entry

This is a Smash Bros-style brawler, complete with the four players, the 2D play area in a 3D world, and the random stage obstacles showing up. I do like the Smash style, despite not otherwise being a fan of fighting games, so it’s nice to see another game chase that aesthetic.

Alas, that’s about as far as it got — which is pretty far for a week of work! I don’t know what more to say, though. The environments are neat, but unless I’m missing something, the only actions at your disposal are jumping and very weak melee attacks. I did have a good few minutes of fun fruitlessly mashing myself against the bumbling bots, as you can see.

FINAL SCORE: 300%

Icnaluferu Guild, Year Sixteen, by CHz

short · adventure · jan 2018 · web · free on itch · jam entry

Here we have the first of several games made with bitsy, a micro game making tool that basically only supports walking around, talking to people, and picking up items.

I tell you this because I think half of my appreciation for this game is in the ways it wriggled against those limits to emulate a Zelda-like dungeon crawler. Everything in here is totally fake, and you can’t really understand just how fake unless you’ve tried to make something complicated with bitsy.

It’s pretty good. The dialogue is entertaining (the rest of your party develops distinct personalities solely through oneliners, somehow), the riffs on standard dungeon fare are charming, and the Link’s Awakening-esque perspective walls around the edges of each room are fucking glorious.

FINAL SCORE: 2 bits

The Lonely Tapes, by JTHomeslice

short · rpg · jan 2018 · web · free on itch · jam entry

Another bitsy entry, this one sees you play as a Wal— sorry, a JogDawg, which has lost its cassette tapes and needs to go recover them!

(A cassette tape is like a VHS, but for music.)

(A VHS is—)

I have the sneaking suspicion that I missed out on some musical in-jokes, due to being uncultured swine. I still enjoyed the game — it’s always clear when someone is passionate about the thing they’re writing about, and I could tell I was awash in that aura even if some of it went over my head. You know you’ve done good if someone from way outside your sphere shows up and still has a good time.

FINAL SCORE: Nine… Inch Nails? They’re a band, right? God I don’t know write your own damn joke

Pirate Kitty-Quest, by TheKoolestKid

short · adventure · jan 2018 · win · free on itch · jam entry

I completely forgot I’d even given “my birthday” and “my cat” as mostly-joking jam themes until I stumbled upon this incredible gem. I don’t think — let me just check here and — yeah no this person doesn’t even follow me on Twitter. I have no idea who they are?

BUT THEY MADE A GAME ABOUT ANISE AS A PIRATE, LOOKING FOR TREASURE

PIRATE. ANISE

PIRATE ANISE!!!

This game wins the jam, hands down. 🏆

FINAL SCORE: Yarr, eight pieces o’ eight

CHIPS Mario, by NovaSquirrel

short · platformer · jan 2018 · (lin/mac)/win · free on itch · jam entry

You see this? This is fucking witchcraft.

This game is made with MegaZeux. MegaZeux games look like THIS. Text-mode, bound to a grid, with two colors per cell. That’s all you get.

Until now, apparently?? The game is a tech demo of “unbound” sprites, which can be drawn on top of the character grid without being aligned to it. And apparently have looser color restrictions.

The collision is a little glitchy, which isn’t surprising for a MegaZeux platformer; I had some fun interactions with platforms a couple times. But hey, goddamn, it’s free-moving Mario, in MegaZeux, what the hell.

(I’m looking at the most recently added games on DigitalMZX now, and I notice that not only is this game in the first slot, but NovaSquirrel’s MegaZeux entry for Strawberry Jam last February is still in the seventh slot. RIP, MegaZeux. I’m surprised a major feature like this was even added if the community has largely evaporated?)

FINAL SCORE: n/a, disqualified for being probably summoned from the depths of Hell

d!¢< pic, by 573 Games

short · story · jan 2018 · web · free on itch · jam entry

This is a short story about not sending dick pics. It’s very short, so I can’t say much without spoiling it, but: you are generally prompted to either text something reasonable, or send a dick pic. You should not send a dick pic.

It’s a fascinating artifact, not because of the work itself, but because it’s so terse that I genuinely can’t tell what the author was even going for. And this is the kind of subject where the author was, surely, going for something. Right? But was it genuinely intended to be educational, or was it tongue-in-cheek about how some dudes still don’t get it? Or is it side-eying the player who clicks the obviously wrong option just for kicks, which is the same reason people do it for real? Or is it commentary on how “send a dick pic” is a literal option for every response in a real conversation, too, and it’s not that hard to just not do it — unless you are one of the kinds of people who just feels a compulsion to try everything, anything, just because you can? Or is it just a quick Twine and I am way too deep in this? God, just play the thing, it’s shorter than this paragraph.

I’m also left wondering when it is appropriate to send a dick pic. Presumably there is a correct time? Hopefully the author will enter Strawberry Jam 2 to expound upon this.

FINAL SCORE: 3½” 😉

Marble maze, by Shtille

short · arcade · jan 2018 · win · free on itch · jam entry

Ah, hm. So this is a maze navigated by rolling a marble around. You use WASD to move the marble, and you can also turn the camera with the arrow keys.

The trouble is… the marble’s movement is always relative to the world, not the camera. That means if you turn the camera 30° and then try to move the marble, it’ll move at a 30° angle from your point of view.

That makes navigating a maze, er, difficult.

Camera-relative movement is the kind of thing I take so much for granted that I wouldn’t even think to do otherwise, and I think it’s valuable to look at surprising choices that violate fundamental conventions, so I’m trying to take this as a nudge out of my comfort zone. What could you design in an interesting way that used world-relative movement? Probably not the player, but maybe something else in the world, as long as you had strong landmarks? Hmm.

FINAL SCORE: ᘔ

Refactor: flight, by fluffy

short · arcade · jan 2018 · lin/mac/win · free on itch · jam entry

Refactor is a game album, which is rather a lot like what it sounds like, and Flight is one of the tracks. Which makes this a single, I suppose.

It’s one of those games where you move down an oddly-shaped tunnel trying not to hit the walls, but with some cute twists. Coins and gems hop up from the bottom of the screen in time with the music, and collecting them gives you points. Hitting a wall costs you some points and kills your momentum, but I don’t think outright losing is possible, which is great for me!

Also, the monk cycles through several animal faces. I don’t know why, and it’s very good. One of those odd but memorable details that sits squarely on the intersection of abstract, mysterious, and a bit weird, and refuses to budge from that spot.

The music is great too? Really chill all around.

FINAL SCORE: 🎵🎵🎵🎵

The Adventures of Klyde

short · adventure · jan 2018 · web · free on itch · jam entry

Another bitsy game, this one starring a pig (humorously symbolized by a giant pig nose with ears) who must collect fruit and solve some puzzles.

This is charmingly nostalgic for me — it reminds me of some standard fare in engines like MegaZeux, where the obvious things to do when presented with tiles and pickups were to make mazes. I don’t mean that in a bad way; the maze is the fundamental environmental obstacle.

A couple places in here felt like invisible teleport mazes I had to brute-force, but I might have been missing a hint somewhere. I did make it through with only a little trouble, but alas — I stepped in a bad warp somewhere and got sent to the upper left corner of the starting screen, which is surrounded by walls. So Klyde’s new life is being trapped eternally in a nowhere space.

FINAL SCORE: 19/20 apples

And more

That was only a third of the games, and I don’t think even half of the ones I’ve played. I’ll have to do a second post covering the rest of them? Maybe a third?

Or maybe this is a ludicrous format for commenting on several dozen games and I should try to narrow it down to the ones that resonated the most for Strawberry Jam 2? Maybe??

Physics cheats

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/06/physics-cheats/

Anonymous asks:

something about how we tweak physics to “work” better in games?

Ho ho! Work. Get it? Like in physics…?

Hitboxes

“Hitbox” is perhaps not the most accurate term, since the shape used for colliding with the environment and the shape used for detecting damage might be totally different. They’re usually the same in simple platformers, though, and that’s what most of my games have been.

The hitbox is the biggest physics fudge by far, and it exists because of a single massive approximation that (most) games make: you’re controlling a single entity in the abstract, not a physical body in great detail.

That is: when you walk with your real-world meat shell, you perform a complex dance of putting one foot in front of the other, a motion you spent years perfecting. When you walk in a video game, you press a single “walk” button. Your avatar may play an animation that moves its legs back and forth, but since you’re not actually controlling the legs independently (and since simulating them is way harder), the game just treats you like a simple shape. Fairly often, this is a box, or something very box-like.

An Eevee sprite standing on faux ground; the size of the underlying image and the hitbox are outlined

Since the player has no direct control over the exact placement of their limbs, it would be slightly frustrating to have them collide with the world. This is especially true in cases like the above, where the tail and left ear protrude significantly out from the main body. If that Eevee wanted to stand against a real-world wall, she would simply tilt her ear or tail out of the way, so there’s no reason for the ear to block her from standing against a game wall. To compensate for this, the ear and tail are left out of the collision box entirely and will simply jut into a wall if necessary — a goofy affordance that’s so common it doesn’t even register as unusual. As a bonus (assuming this same box is used for combat), she won’t take damage from projectiles that merely graze past an ear.

(One extra consideration for sprite games in particular: the hitbox ought to be horizontally symmetric around the sprite’s pivot — i.e. the point where the entity is truly considered to be standing — so that the hitbox doesn’t abruptly move when the entity turns around!)
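
A minimal illustration of that constraint; the class and field names are mine, not from any particular engine:

class Hitbox:
    """An axis-aligned box stored relative to the sprite's pivot (its 'feet')."""
    def __init__(self, half_width, height):
        self.half_width = half_width  # same extent to the left and right of the pivot
        self.height = height

    def world_rect(self, pivot_x, pivot_y):
        # The same rectangle comes out whether the sprite faces left or right,
        # so turning around never shifts the collision box.
        return (pivot_x - self.half_width, pivot_y - self.height,
                pivot_x + self.half_width, pivot_y)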

Corners

Treating the player (and indeed most objects) as a box has one annoying side effect: boxes have corners. Corners can catch on other corners, even by a single pixel. Real-world bodies tend to be a bit rounder and squishier and thus can tolerate grazing a corner; even real-world boxes will simply rotate a bit.

Ah, but in our faux physics world, we generally don’t want conscious actors (such as the player) to rotate, even with a realistic physics simulator! Real-world bodies are made of parts that will generally try to keep you upright, after all; you don’t tilt back and forth much.

One way to handle corners is to simply remove them from conscious actors. A hitbox doesn’t have to be a literal box, after all. A popular alternative — especially in Unity where it’s a standard asset — is the pill-shaped capsule: a rectangle with semicircles on the top and bottom, or in 3D, a cylinder capped with hemispheres. No corners, no problem.

Of course, that introduces a new problem: now the player can’t balance precariously on edges without their rounded bottom sliding them off. Alas.

If you’re stuck with corners, then, you may want to use a corner bump, a term I just made up. If the player would collide with a corner, but the collision is only by a few pixels, just nudge them to the side a bit and carry on.

An Eevee sprite trying to move sideways into a shallow ledge; the game bumps her upwards slightly, so she steps onto it instead

When the corner is horizontal, this creates stairs! This is, more or less kinda, how steps work in Doom: when the player tries to cross from one sector into another, if the height difference is 24 units or less, the game simply bumps them upwards to the height of the new floor and lets them continue on.
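
Here’s a sketch of that rule; the actor plumbing is made up, but 24 is Doom’s actual step height:

MAX_STEP = 24  # Doom's maximum step height, in map units

def try_cross_into_sector(actor, new_floor_height):
    """Allow the move if the ledge is low enough, bumping the actor up onto it."""
    step = new_floor_height - actor.z
    if step <= 0:
        return True  # level ground or a drop: always allowed
    if step <= MAX_STEP:
        actor.z = new_floor_height  # the "corner bump", vertically
        return True
    return False  # too tall, treat it as a wall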

Implementing this in a game without Doom’s notion of sectors is a little trickier. In fact, I still haven’t done it. Collision detection based on rejection gets it for free, kinda, but it’s not very deterministic and it breaks other things. But that’s a whole other post.

Gravity

Gravity is pretty easy. Everything accelerates downwards all the time. What’s interesting are the exceptions.

Jumping

Jumping is a giant hack.

Think about how actual jumping works: you tense your legs, which generally involves bending your knees first, and then spring upwards. In a platformer, you can just leap whenever you feel like it, which is nonsense. Also you go like twenty feet into the air?

Worse, most platformers allow variable-height jumping, where your jump is lower if you let go of the jump button while you’re in the air. Normally, one would expect to have to decide how much force to put into the jump beforehand.

But of course this is about convenience of controls: when jumping is your primary action, you want to be able to do it immediately, without any windup for how high you want to jump.
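
One common way to get the variable height is to cut upward momentum when the button is released; a sketch, with placeholder constants:

JUMP_SPEED = 300.0   # initial upward speed, pixels per second (placeholder)
JUMP_CUTOFF = 100.0  # upward speed kept after an early release (placeholder)

def on_jump_pressed(player):
    if player.on_ground:
        player.vy = -JUMP_SPEED  # y points down, so negative is up

def on_jump_released(player):
    # Still rising? Discard most of the upward momentum, shortening the jump.
    if player.vy < -JUMP_CUTOFF:
        player.vy = -JUMP_CUTOFF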

(And then there’s double jumping? Come on.)

Air control is a similar phenomenon: usually you’d jump in a particular direction by controlling how you push off the ground with your feet, but in a video game, you don’t have feet! You only have the box. The compromise is to let you control your horizontal movement to a limited degree in midair, even though that doesn’t make any sense. (It’s way more fun, though, and overall gives you more movement options, which are good to have in an interactive medium.)
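
In code, air control often reduces to the same walk acceleration scaled down in midair; a sketch, with made-up numbers:

WALK_ACCEL = 1200.0  # ground acceleration, pixels per second squared (made up)
AIR_CONTROL = 0.25   # fraction of that available in midair (made up)

def accelerate(player, direction, dt):
    # direction is -1, 0, or +1 from the left/right inputs
    accel = WALK_ACCEL * (1.0 if player.on_ground else AIR_CONTROL)
    player.vx += direction * accel * dt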

Air control also exposes an obvious place that game physics collide with the realistic model of serious physics engines. I’ve mentioned this before, but: if you use Real Physics™ and air control yourself into a wall, you might find that you’ll simply stick to the wall until you let go of the movement buttons. Why? Remember, player movement acts as though an external force were pushing you around (and from the perspective of a Real™ physics engine, this is exactly how you’d implement it) — so air-controlling into a wall is equivalent to pushing a book against a wall with your hand, and the friction with the wall holds you in place. Oops.

Ground sticking

Another place game physics conflict with physics engines is with running to the top of a slope. On a real hill, of course, you land on top of the slope and are probably glad of it; slopes are hard to climb!

An Eevee moves to the top of a slope, and rather than step onto the flat top, she goes flying off into the air

In a video game, you go flying. Because you’re a box. With momentum. So you hit the peak and keep going in the same direction. Which is diagonally upwards.

Projectiles

To make them more predictable, projectiles generally aren’t subject to gravity, at least as far as I’ve seen. The real world does not have such an exemption. The real world imposes gravity even on sniper rifles, which in a video game are often implemented as an instant trace that nothing in the world affects, because the bullet never actually exists in the world.

Resistance

Ah. Welcome to hell.

Water

Water is an interesting case, and offhand I don’t know the gritty details of how games implement it. In the real world, water applies a drag force that resists movement — and that force is proportional to the square of velocity, which I’d completely forgotten until right now. I am almost positive that no game handles that correctly. But then, in real-world water, you can push against the water itself for movement, and games don’t simulate that either. What’s the rough equivalent?

The Sonic Physics Guide suggests that Sonic handles it by basically halving everything: acceleration, max speed, friction, etc. When Sonic enters water, his speed is cut; when Sonic exits water, his speed is increased.

That last bit feels validating — I could swear Metroid Prime did the same thing, and built my own solution around it, but couldn’t remember for sure. It makes no sense, of course, for a jump to become faster just because you happened to break the surface of the water, but it feels fantastic.

The thing I did was similar, except that I didn’t want to add a multiplier in a dozen places when you happen to be underwater (and remember which ones need it to be squared, etc.). So instead, I calculate everything completely as normal, so velocity is exactly the same as it would be on dry land — but the distance you would move gets halved. The effect seems to be pretty similar to most platformers with water, at least as far as I can tell. It hasn’t shown up in a published game and I only added this fairly recently, so I might be overlooking some reason this is a bad idea.

(One reason that comes to mind is that velocity is now a little white lie while underwater, so anything relying on velocity for interesting effects might be thrown off. Or maybe that’s correct, because velocity thresholds should be halved underwater too? Hm!)
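
A sketch of that approach: velocity integrates exactly as on dry land, and only the resulting displacement gets scaled.

GRAVITY = 2000.0   # pixels per second squared (made up)
WATER_SCALE = 0.5  # fraction of normal movement while submerged

def update_position(entity, dt):
    entity.vy += GRAVITY * dt  # velocity is computed completely as normal
    scale = WATER_SCALE if entity.underwater else 1.0
    entity.x += entity.vx * dt * scale  # ...but the distance moved is halved
    entity.y += entity.vy * dt * scale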

Notably, air is also a fluid, so it should behave the same way (just with different constants). I definitely don’t think any games apply air drag that’s proportional to the square of velocity.

Friction

Friction is, in my experience, a little handwaved. Probably because real-world friction is so darn complicated.

Consider that in the real world, we want very high friction on the surfaces we walk on — shoes and tires are explicitly designed to increase it, even. We move by bracing a back foot against the ground and using that to push ourselves forward, so we want the ground to resist our push as much as possible.

In a game world, we are a box. We move by being pushed by some invisible outside force, so if the friction between ourselves and the ground is too high, we won’t be able to move at all! That’s complete nonsense physically, but it turns out to be handy in some cases — for example, highish friction can simulate walking through deep mud, which should be difficult due to fluid drag and low friction.

But the best-known example of the fakeness of game friction is video game ice. Walking on real-world ice is difficult because the low friction means low grip; your feet are likely to slip out from under you, and you’ll simply fall down and have trouble moving at all. In a video game, you can’t fall down, so you have the opposite experience: you spend most of your time sliding around uncontrollably. Yet ice is so common in video games (and perhaps so uncommon in places I’ve lived) that I, at least, had never really thought about this disparity until an hour or so ago.

Game friction vs real-world friction

Real-world friction is a force. It’s the normal force (the force pressing the object and the surface together, which for something resting on the ground is its weight) times some constant that depends on how the two materials interact.

Force is mass times acceleration, and platformers often ignore mass, so friction ought to be an acceleration — applied against the object’s movement, but never enough to push it backwards.

I haven’t made any games where variable friction plays a significant role, but my gut instinct is that low friction should mean the player accelerates more slowly but has a higher max speed, and high friction should mean the opposite. I see from my own source code that I didn’t even do what I just said, so let’s defer to some better-made and well-documented games: Sonic and Doom.

In Sonic, friction is a fixed value subtracted from the player’s velocity (regardless of direction) each tic. Sonic has a fixed framerate, so the units are really pixels per tic squared (i.e. acceleration), multiplied by an implicit 1 tic per tic. So far, so good.

But Sonic’s friction only applies if the player isn’t pressing ← or →. Hang on, that isn’t friction at all; that’s just deceleration! That’s equivalent to jogging to a stop. If friction were lower, Sonic would take longer to stop, but otherwise this is only tangentially related to friction.

(In fairness, this approach would decently emulate friction for non-conscious sliding objects, which are never going to be pressing movement buttons. Also, we don’t have the Sonic source code, and the name “friction” is a fan invention; the Sonic Physics Guide already uses “deceleration” to describe the player’s acceleration when turning around.)

Okay, let’s try Doom. In Doom, the default friction is 90.625%.

Hang on, what?

Yes, in Doom, friction is a multiplier applied every tic. Doom runs at 35 tics per second, so this is a multiplier of 0.032 per second. Yikes!

This isn’t anything remotely like real friction, but it’s much easier to implement. With friction as acceleration, the game has to know both the direction of movement (so it can apply friction in the opposite direction) and the magnitude (so it doesn’t overshoot and launch the object in the other direction). That means taking a semi-costly square root and also writing extra code to cap the amount of friction. With a multiplier, neither is necessary; just multiply the whole velocity vector and you’re done.
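
Side by side, the two approaches look something like this sketch (2D, ignoring mass):

import math

def friction_as_acceleration(vx, vy, decel, dt):
    """Oppose the motion, capped so friction can never reverse it."""
    speed = math.hypot(vx, vy)  # the semi-costly square root
    if speed == 0.0:
        return 0.0, 0.0
    drop = min(decel * dt, speed)  # the cap: don't overshoot past zero
    scale = (speed - drop) / speed
    return vx * scale, vy * scale

def friction_as_multiplier(vx, vy, friction):
    """Doom-style: one multiply per axis; no direction, magnitude, or cap needed."""
    return vx * friction, vy * friction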

There are some downsides. One is that objects will never actually stop, since multiplying by 3% repeatedly will never produce a result of zero — though eventually the speed will become small enough to either slip below a “minimum speed” threshold or simply no longer fit in a float representation. Another is that the units are fairly meaningless: with Doom’s default friction of 90.625%, about how long does it take for the player to stop? I have no idea, partly because “stop” is ambiguous here! If friction were an acceleration, I could divide it into the player’s max speed to get a time.

All that aside, what are the actual effects of changing Doom’s friction? What an excellent question that’s surprisingly tricky to answer. (Note that friction can’t be changed in original Doom, only in the Boom port and its derivatives.) Here’s what I’ve pieced together.

Doom’s “friction” is really two values. “Friction” itself is a multiplier applied to moving objects on every tic, but there’s also a move factor which defaults to \(\frac{1}{32} = 0.03125\) and is derived from friction for custom values.

Every tic, the player’s velocity is multiplied by friction, and then increased by their speed times the move factor.

$$
v(n) = v(n-1) \times friction + speed \times movefactor
$$

Eventually, the reduction from friction will balance out the speed boost. That happens when \(v(n) = v(n-1)\), so we can rearrange it to find the player’s effective max speed:

$$
v = v \times friction + speed \times movefactor \\
v - v \times friction = speed \times movefactor \\
v = speed \times \frac{movefactor}{1 - friction}
$$

For vanilla Doom’s move factor of 0.03125 and friction of 0.90625, that becomes:

$$
v = speed \times \frac{\frac{1}{32}}{1 - \frac{29}{32}} = speed \times \frac{\frac{1}{32}}{\frac{3}{32}} = \frac{1}{3} \times speed
$$

Curiously, “speed” is three times the maximum speed an actor can actually move. Doomguy’s run speed is 50, so in practice he moves a third of that, or 16⅔ units per tic. (Of course, this isn’t counting SR40, a bug that lets Doomguy run ~40% faster than intended diagonally.)
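
That stable velocity is easy to check numerically:

friction = 0.90625   # vanilla Doom friction
movefactor = 1 / 32  # vanilla Doom move factor
speed = 50           # Doomguy's run speed

v = 0.0
for tic in range(200):  # a few seconds' worth, at 35 tics per second
    v = v * friction + speed * movefactor

print(v)  # 16.666..., i.e. a third of "speed"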

So now, what if you change friction? Even more curiously, the move factor is calculated completely differently depending on whether friction is higher or lower than the default Doom amount:

$$
movefactor = \begin{cases}
\frac{133 - 128 \times friction}{544} &\approx 0.244 - 0.235 \times friction & \text{ if } friction \ge \frac{29}{32} \\
\frac{81920 \times friction - 70145}{1048576} &\approx 0.078 \times friction - 0.067 & \text{ otherwise }
\end{cases}
$$

That’s pretty weird? Complicating things further is that low friction (which means muddy terrain, remember) has an extra multiplier on its move factor, depending on how fast you’re already going — the idea is apparently that you have a hard time getting going, but it gets easier as you find your footing. The extra multiplier maxes out at 8, which makes the two halves of that function meet at the vanilla Doom value.

A graph of the relationship between friction and move factor

That very top point corresponds to the move factor from the original game. So no matter what you do to friction, the move factor becomes lower. At 0.85 and change, you can no longer move at all; below that, you move backwards.

From the formula above, it’s easy to see what changes to friction and move factor will do to Doomguy’s stable velocity. Move factor is in the numerator, so increasing it will increase stable velocity — but it can’t increase, so stable velocity can only ever decrease. Friction is in the denominator, but it’s subtracted from 1, so increasing friction will make the denominator a smaller value less than 1, i.e. increase stable velocity. Combined, we get this relationship between friction and stable velocity.

A graph showing stable velocity shooting up dramatically as friction increases

As friction approaches 1, stable velocity grows without bound. This makes sense, given the definition of \(v(n)\) — if friction is 1, the velocity from the previous tic isn’t reduced at all, so we just keep accelerating freely.

All of this is why I’m wary of using multipliers.

Anyway, this leaves me with one last question about the effects of Doom’s friction: how long does it take to reach stable velocity? Barring precision errors, we’ll never truly reach stable velocity, but let’s say within 5%. First we need a closed formula for the velocity after some number of tics. This is a simple recurrence relation, and you can write a few terms out yourself if you want to be sure this is right.

$$
v(n) = v_0 \times friction^n + speed \times movefactor \times \frac{friction^n - 1}{friction - 1}
$$

Our initial velocity is zero, so the first term disappears. Set this equal to the stable formula and solve for n:

$$
speed \times movefactor \times \frac{friction^n - 1}{friction - 1} = (1 - 5\%) \times speed \times \frac{movefactor}{1 - friction} \\
friction^n - 1 = -(1 - 5\%) \\
n = \frac{\ln 5\%}{\ln friction}
$$

“Speed” and move factor disappear entirely, which makes sense, and this is purely a function of friction (and how close we want to get). For vanilla Doom, that comes out to 30.4, which is a little less than a second. For other values of friction:

A graph of time to stability which leaps upwards dramatically towards the right

As friction increases (which in Doom terms means the surface is more slippery), it takes longer and longer to reach stable speed, which is in turn greater and greater. For lesser friction (i.e. mud), stable speed is lower, but reached fairly quickly. (Of course, the extra “getting going” multiplier while in mud adds some extra time here, but including that in the graph is a bit more complicated.)

I think this matches with my instincts above. How fascinating!

What’s that? This is way too much math and you hate it? Then don’t use multipliers in game physics.

Uh

That was a hell of a diversion!

I guess the goofiest stuff in basic game physics is really just about mapping player controls to in-game actions like jumping and deceleration; the rest consists of hacks to compensate for representing everything as a box.

Random with care

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/02/random-with-care/

Hi! Here are a few loose thoughts about picking random numbers.

A word about crypto

DON’T ROLL YOUR OWN CRYPTO

This is all aimed at frivolous pursuits like video games. Hell, even video games where money is at stake should be deferring to someone who knows way more than I do. Otherwise you might find out that your deck shuffles in your poker game are woefully inadequate and some smartass is cheating you out of millions. (If your random number generator has fewer than 226 bits of state, it can’t even generate every possible shuffling of a deck of cards!)
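
(That 226, by the way, is just \(\lceil \log_2 52! \rceil\): a deck of 52 cards has \(52! \approx 2^{225.6}\) possible orderings, so a generator with fewer than 226 bits of state can’t even name them all, let alone produce each one.)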

Use the right distribution

Most languages have a random number primitive that spits out a number uniformly in the range [0, 1), and you can go pretty far with just that. But beware a few traps!

Random pitches

Say you want to pitch up a sound by a random amount, perhaps up to an octave. Your audio API probably has a way to do this that takes a pitch multiplier, where I say “probably” because that’s how the only audio API I’ve used works.

Easy peasy. If 1 is unchanged and 2 is pitched up by an octave, then all you need is rand() + 1. Right?

No! Pitch is exponential — within the same octave, the “gap” between C and C♯ is about half as big as the gap between B and the following C. If you pick a pitch multiplier uniformly, you’ll have a noticeable bias towards the higher pitches.

One octave corresponds to a doubling of pitch, so if you want to pick a random note, you want 2 ** rand().

Random directions

For two dimensions, you can just pick a random angle with rand() * TAU.

If you want a vector rather than an angle, or if you want a random direction in three dimensions, it’s a little trickier. You might be tempted to just pick a random point where each component is rand() * 2 - 1 (ranging from −1 to 1), but that’s not quite right. A direction is a point on the surface (or, equivalently, within the volume) of a sphere, and picking each component independently produces a point within the volume of a cube; the result will be a bias towards the corners of the cube, where there’s much more extra volume beyond the sphere.

No? Well, just trust me. I don’t know how to make a diagram for this.

Anyway, you could use the Pythagorean theorem a few times and make a huge mess of things, or it turns out there’s a really easy way that even works for two or four or any number of dimensions. You pick each coordinate from a Gaussian (normal) distribution, then normalize the resulting vector. In other words, using Python’s random module:

import math
import random

def random_direction():
    # Sample each coordinate from a Gaussian, then normalize to unit length.
    x = random.gauss(0, 1)
    y = random.gauss(0, 1)
    z = random.gauss(0, 1)
    r = math.sqrt(x*x + y*y + z*z)
    return x/r, y/r, z/r

Why does this work? I have no idea!

Note that it is possible to get zero (or close to it) for every component, in which case the result is nonsense. You can re-roll all the components if necessary; just check that the magnitude (or its square) is less than some epsilon, which is equivalent to throwing away a tiny sphere at the center and shouldn’t affect the distribution.

Beware Gauss

Since I brought it up: the Gaussian distribution is a pretty nice one for choosing things in some range, where the middle is the common case and should appear more frequently.

That said, I never use it, because it has one annoying drawback: the Gaussian distribution has no minimum or maximum value, so you can’t really scale it down to the range you want. In theory, you might get any value out of it, with no limit on scale.

In practice, it’s astronomically rare to actually get such a value out. I did a hundred million trials just to see what would happen, and the largest value produced was 5.8.

But, still, I’d rather not knowingly put extremely rare corner cases in my code if I can at all avoid it. I could clamp the ends, but that would cause unnatural bunching at the endpoints. I could reroll if I got a value outside some desired range, but I prefer to avoid rerolling when I can, too; after all, it’s still (astronomically) possible to have to reroll for an indefinite amount of time. (Okay, it’s really not, since you’ll eventually hit the period of your PRNG. Still, though.) I don’t bend over backwards here — I did just say to reroll when picking a random direction, after all — but when there’s a nicer alternative I’ll gladly use it.

And lo, there is a nicer alternative! Enter the beta distribution. It always spits out a number in [0, 1], so you can easily swap it in for the standard uniform roll, but it takes two “shape” parameters α and β that alter its behavior fairly dramatically.

With α = β = 1, the beta distribution is uniform, i.e. no different from rand(). As α increases, the distribution skews towards the right, and as β increases, the distribution skews towards the left. If α = β, the whole thing is symmetric with a hump in the middle. The higher either one gets, the more extreme the hump (meaning that value is far more common than any other). With a little fiddling, you can get a number of interesting curves.

Screenshots don’t really do it justice, so here’s a little Wolfram widget that lets you play with α and β live:

Note that if α = 1, then 1 is a possible value; if β = 1, then 0 is a possible value. You probably want them both greater than 1, which clamps the endpoints to zero.

Also, it’s possible to have either α or β or both be less than 1, but this creates very different behavior: the corresponding endpoints become poles.

Anyway, something like α = β = 3 is probably close enough to normal for most purposes but already clamped for you. And you could easily replicate something like, say, NetHack’s incredibly bizarre rnz function.
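
Conveniently, Python ships this in the standard library as random.betavariate; the little range helper here is my own:

import random

def humped_random(lo, hi, alpha=3.0, beta=3.0):
    """A roll biased towards the middle, strictly inside [lo, hi]."""
    return lo + random.betavariate(alpha, beta) * (hi - lo)

# e.g. a pitch multiplier centered on 1.0 that never strays outside [0.5, 1.5]
pitch = humped_random(0.5, 1.5)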

Random frequency

Say you want some event to have an 80% chance to happen every second. You (who am I kidding, I) might be tempted to do something like this:

if random() < 0.8 * dt:
    do_thing()

In an ideal world, dt is always the same and is equal to 1 / f, where f is the framerate. Replace that 80% with a variable, say P, and every tic you have a P / f chance to do the… whatever it is.

Each second, f tics pass, so you’ll make this check f times. The chance that any check succeeds is the complement of the chance that every check fails, which is \(1 - \left(1 - \frac{P}{f}\right)^f\).

For P of 80% and a framerate of 60, that’s a total probability of 55.3%. Wait, what?

Consider what happens if the framerate is 2. On the first tic, you roll 0.4 twice — but probabilities are combined by multiplying, and splitting work up by dt only works for additive quantities. You lose some accuracy along the way. If you’re dealing with something that multiplies, you need an exponent somewhere.
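
If you do want a framerate-independent per-tic roll, the exponent version is a sketch away:

import random

def roll_per_tic(p_per_second, dt):
    """True with overall probability p_per_second across one second of tics."""
    # (1 - p) ** dt is the failure chance for this slice of the second;
    # the per-slice failures multiply back up to exactly (1 - p).
    return random.random() < 1 - (1 - p_per_second) ** dt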

But in this case, maybe you don’t want that at all. Each separate roll you make might independently succeed, so it’s possible (but very unlikely) that the event will happen 60 times within a single second! Or 200 times, if that’s someone’s framerate.

If you explicitly want something to have a chance to happen on a specific interval, you have to check on that interval. If you don’t have a gizmo handy to run code on an interval, it’s easy to do yourself with a time buffer:

timer += dt
# here, 1 is the "every 1 seconds"
while timer > 1:
    timer -= 1
    if random() < 0.8:
        do_thing()

Using while means rolls still happen even if you somehow skipped over an entire second.

(For the curious, and the nerds who already noticed: the expression \(1 - \left(1 - \frac{P}{f}\right)^f\) converges to a specific value! As the framerate increases, it becomes a better and better approximation for \(1 - e^{-P}\), which for the example above is 0.551. Hey, 60 fps is pretty accurate — it’s just accurately representing something nowhere near what I wanted. Er, you wanted.)

Rolling your own

Of course, you can fuss with the classic [0, 1] uniform value however you want. If I want a bias towards zero, I’ll often just square it, or multiply two of them together. If I want a bias towards one, I’ll take a square root. If I want something like a Gaussian/normal distribution, but with clearly-defined endpoints, I might add together n rolls and divide by n. (The normal distribution is just what you get if you roll infinite dice and divide by infinity!)
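
In code, those tweaks are all one-liners; the choice of four dice is arbitrary:

import random

biased_low = random.random() ** 2     # bias towards 0
biased_high = random.random() ** 0.5  # bias towards 1
bell_ish = sum(random.random() for _ in range(4)) / 4  # hump in the middle, hard ends at 0 and 1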

It’d be nice to be able to understand exactly what this will do to the distribution. Unfortunately, that requires some calculus, which this post is too small to contain, and which I didn’t even know much about myself until I went down a deep rabbit hole while writing, and which in many cases is straight up impossible to express directly.

Here’s the non-calculus bit. A source of randomness is often graphed as a PDF — a probability density function. You’ve almost certainly seen a bell curve graphed, and that’s a PDF. They’re pretty nice, since they do exactly what they look like: they show the relative chance that any given value will pop out. On a bog standard bell curve, there’s a peak at zero, and of course zero is the most common result from a normal distribution.

(Okay, actually, since the results are continuous, it’s vanishingly unlikely that you’ll get exactly zero — but you’re much more likely to get a value near zero than near any other number.)

For the uniform distribution, which is what a classic rand() gives you, the PDF is just a straight horizontal line — every result is equally likely.


If there were a calculus bit, it would go here! Instead, we can cheat. Sometimes. Mathematica knows how to work with probability distributions in the abstract, and there’s a free web version you can use. For the example of squaring a uniform variable, try this out:

PDF[TransformedDistribution[u^2, u \[Distributed] UniformDistribution[{0, 1}]], u]

(The \[Distributed] is a funny tilde that doesn’t exist in Unicode, but which Mathematica uses as a first-class operator. Also, press Shift+Enter to evaluate the line.)

This will tell you that the distribution is… \(\frac{1}{2\sqrt{u}}\). Weird! You can plot it:

Plot[%, {u, 0, 1}]

(The % refers to the result of the last thing you did, so if you want to try several of these, you can just do Plot[PDF[…], u] directly.)

The resulting graph shows that numbers around zero are, in fact, vastly — infinitely — more likely than anything else.

What about multiplying two together? I can’t figure out how to get Mathematica to understand this, but a great amount of digging revealed that the answer is -ln x, and from there you can plot them both on Wolfram Alpha. They’re similar, though squaring has a much better chance of giving you high numbers than multiplying two separate rolls — which makes some sense, since if either of two rolls is a low number, the product will be even lower.

What if you know the graph you want, and you want to figure out how to play with a uniform roll to get it? Good news! That’s a whole thing called inverse transform sampling. All you have to do is take an integral. Good luck!


This is all extremely ridiculous. New tactic: Just Simulate The Damn Thing. You already have the code; run it a million times, make a histogram, and tada, there’s your PDF. That’s one of the great things about computers! Brute-force numerical answers are easy to come by, so there’s no excuse for producing something like rnz. (Though, be sure your histogram has sufficiently narrow buckets — I tried plotting one for rnz once and the weird stuff on the left side didn’t show up at all!)
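
For example, simulating the squared roll from earlier and eyeballing its PDF:

import random

BUCKETS = 20
TRIALS = 1000000
histogram = [0] * BUCKETS
for _ in range(TRIALS):
    u = random.random() ** 2
    histogram[int(u * BUCKETS)] += 1

for i, count in enumerate(histogram):
    print(f"{i / BUCKETS:.2f} | {'#' * (count * 100 // TRIALS)}")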

By the way, I learned something from futzing with Mathematica here! Taking the square root (to bias towards 1) gives a PDF that’s a straight diagonal line, nothing like the hyperbola you get from squaring (to bias towards 0). How do you get a straight line the other way? Surprise: \(1 - \sqrt{1 - u}\).

Okay, okay, here’s the actual math

I don’t claim to have a very firm grasp on this, but I had a hell of a time finding it written out clearly, so I might as well write it down as best I can. This was a great excuse to finally set up MathJax, too.

Say \(u(x)\) is the PDF of the original distribution and \(u\) is a representative number you plucked from that distribution. For the uniform distribution, \(u(x) = 1\). Or, more accurately,

$$
u(x) = \begin{cases}
1 & \text{ if } 0 \le x \lt 1 \\
0 & \text{ otherwise }
\end{cases}
$$

Remember that \(x\) here is a possible outcome you want to know about, and the PDF tells you the relative probability that a roll will be near it. This PDF spits out 1 for every \(x\), meaning every number between 0 and 1 is equally likely to appear.

We want to do something to that PDF, which creates a new distribution, whose PDF we want to know. I’ll use my original example of \(f(u) = u^2\), which creates a new PDF \(v(x)\).

The trick is that we need to work in terms of the cumulative distribution function for \(u\). Where the PDF gives the relative chance that a roll will be (“near”) a specific value, the CDF gives the relative chance that a roll will be less than a specific value.

The conventions for this seem to be a bit fuzzy, and nobody bothers to explain which ones they’re using, which makes this all the more confusing to read about… but let’s write the CDF with a capital letter, so we have \(U(x)\). In this case, \(U(x) = x\), a straight 45° line (at least between 0 and 1). With the definition I gave, this should make sense. At some arbitrary point like 0.4, the value of the PDF is 1 (0.4 is just as likely as anything else), and the value of the CDF is 0.4 (you have a 40% chance of getting a number from 0 to 0.4).

Calculus ahoy: the PDF is the derivative of the CDF, which means it measures the slope of the CDF at any point. For \(U(x) = x\), the slope is always 1, and indeed \(u(x) = 1\). See, calculus is easy.

Okay, so, now we’re getting somewhere. What we want is the CDF of our new distribution, \(V(x)\). The CDF is defined as the probability that a roll \(v\) will be less than \(x\), so we can literally write:

$$V(x) = P(v \le x)$$

(This is why we have to work with CDFs, rather than PDFs — a PDF gives the chance that a roll will be “nearby,” whatever that means. A CDF is much more concrete.)

What is \(v\), exactly? We defined it ourselves; it’s the do something applied to a roll from the original distribution, or \(f(u)\).

$$V(x) = P\!\left(f(u) \le x\right)$$

Now the first tricky part: we have to solve that inequality for \(u\), which means we have to do something, backwards to \(x\).

$$V(x) = P\!\left(u \le f^{-1}(x)\right)$$

Almost there! We now have a probability that \(u\) is less than some value, and that’s the definition of a CDF!

$$V(x) = U\!\left(f^{-1}(x)\right)$$

Hooray! Now to turn these CDFs back into PDFs, all we need to do is differentiate both sides and use the chain rule. If you never took calculus, don’t worry too much about what that means!

$$v(x) = u\!\left(f^{-1}(x)\right)\left|\frac{d}{dx}f^{-1}(x)\right|$$

Wait! Where did that absolute value come from? It takes care of whether \(f(x)\) increases or decreases. It’s the least interesting part here by far, so, whatever.

There’s one more magical part here when using the uniform distribution — \(u(\dots)\) is always equal to 1, so that entire term disappears! (Note that this only works for a uniform distribution with a width of 1; PDFs are scaled so the entire area under them sums to 1, so if you had a rand() that could spit out a number between 0 and 2, the PDF would be \(u(x) = \frac{1}{2}\).)

$$v(x) = \left|\frac{d}{dx}f^{-1}(x)\right|$$

So for the specific case of modifying the output of rand(), all we have to do is invert, then differentiate. The inverse of \(f(u) = u^2\) is \(f^{-1}(x) = \sqrt{x}\) (no need for a ± since we’re only dealing with positive numbers), and differentiating that gives \(v(x) = \frac{1}{2\sqrt{x}}\). Done! This is also why square root comes out nicer; inverting it gives \(x^2\), and differentiating that gives \(2x\), a straight line.

Incidentally, that method for turning a uniform distribution into any distribution — inverse transform sampling — is pretty much the same thing in reverse: integrate, then invert. For example, when I saw that taking the square root gave \(v(x) = 2x\), I naturally wondered how to get a straight line going the other way, \(v(x) = 2 - 2x\). Integrating that gives \(2x - x^2\), and then you can use the quadratic formula (or just ask Wolfram Alpha) to solve \(2x - x^2 = u\) for \(x\) and get \(f(u) = 1 - \sqrt{1 - u}\).

Multiplying two rolls is a bit more complicated; you have to write out the CDF as an integral and you end up doing a double integral and wow it’s a mess. The only thing I’ve retained is that you do a division somewhere, which then gets integrated, and that’s why it ends up as \(-\ln x\).

And that’s quite enough of that! (Okay but having math in my blog is pretty cool and I will definitely be doing more of this, sorry, not sorry.)

Random vs varied

Sometimes, random isn’t actually what you want. We tend to use the word “random” casually to mean something more like chaotic, i.e., with no discernible pattern. But that’s not really random. In fact, given how good humans can be at finding incidental patterns, such patterns aren’t all that unlikely! Consider that when you roll two dice, they’ll come up either the same or only one apart almost half the time. Coincidence? Well, yes.

If you ask for randomness, you’re saying that any outcome — or series of outcomes — is acceptable, including five heads in a row or five tails in a row. Most of the time, that’s fine. Some of the time, it’s less fine, and what you really want is variety. Here are a couple examples and some fairly easy workarounds.

NPC quips

The nature of games is such that NPCs will eventually run out of things to say, at which point further conversation will give the player a short brush-off quip — a slight nod from the designer to the player that, hey, you hit the end of the script.

Some NPCs have multiple possible quips and will give one at random. The trouble with this is that it’s very possible for an NPC to repeat the same quip several times in a row before abruptly switching to another one. With only a few options to choose from, getting the same option twice or thrice (especially across an entire game, which may have numerous NPCs) isn’t all that unlikely. The notion of an NPC quip isn’t very realistic to start with, but having someone repeat themselves and then abruptly switch to something else is especially jarring.

The easy fix is to show the quips in order! Paradoxically, this is more consistently varied than choosing at random — the original “order” is likely to be meaningless anyway, and it already has the property that the same quip can never appear twice in a row.

If you like, you can shuffle the list of quips every time you reach the end, but take care here — it’s possible that the last quip in the old order will be the same as the first quip in the new order, so you may still get a repeat. (Of course, you can just check for this case and swap the first quip somewhere else if it bothers you.)

That last behavior is, in fact, the canonical way that Tetris chooses pieces — the game simply shuffles a list of all 7 pieces, gives those to you in shuffled order, then shuffles them again to make a new list once it’s exhausted. There’s no avoidance of duplicates, though, so you can still get two S blocks in a row, or even two S and two Z all clumped together, but no more than that. Some Tetris variants take other approaches, such as actively avoiding repeats even several pieces apart or deliberately giving you the worst piece possible.
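
Here’s a minimal Python sketch of such a shuffle bag, including the optional swap mentioned above to avoid a repeat across the reshuffle boundary (a generator is just one convenient way to package it):

import random

def shuffle_bag(items):
    """Yield items endlessly: shuffle, deal them all out, reshuffle."""
    last = None
    while True:
        bag = list(items)
        random.shuffle(bag)
        # Avoid repeating the last item of the previous pass.
        if len(bag) > 1 and bag[0] == last:
            j = random.randrange(1, len(bag))
            bag[0], bag[j] = bag[j], bag[0]
        yield from bag
        last = bag[-1]

quips = shuffle_bag(["Hello.", "Hi again.", "Still here?"])  # hypothetical quips
print(next(quips))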

Random drops

Random drops are often implemented as a flat chance each time. Maybe enemies have a 5% chance to drop health when they die. Statistically speaking, over the long term, a player will see health drops for about 5% of enemy kills.

Over the short term, they may be desperate for health and not survive to see the long term. So you may want to put a thumb on the scale sometimes. Games in the Metroid series, for example, have a somewhat infamous bias towards whatever kind of drop they think you need — health if your health is low, missiles if your missiles are low.

I can’t give you an exact approach to use, since it depends on the game and the feeling you’re going for and the variables at your disposal. In extreme cases, you might want to guarantee a health drop from a tough enemy when the player is critically low on health. (Or if you’re feeling particularly evil, you could go the other way and deny the player health when they most need it…)

The problem becomes a little different, and worse, when the event that triggers the drop is relatively rare. The pathological case here would be something like a raid boss in World of Warcraft, which requires hours of effort from a coordinated group of people to defeat, and which has some tiny chance of dropping a good item that will go to only one of those people. This is why I stopped playing World of Warcraft at 60.

Dialing it back a little bit gives us Enter the Gungeon, a roguelike where each room is a set of encounters and each floor only has a dozen or so rooms. Initially, you have a 1% chance of getting a reward after completing a room — but every time you complete a room and don’t get a reward, the chance increases by 9%, up to a cap of 80%. Once you get a reward, the chance resets to 1%.

The natural question is: how frequently, exactly, can a player expect to get a reward? We could do math, or we could Just Simulate The Damn Thing.

from collections import Counter
import random

histogram = Counter()

TRIALS = 1000000
chance = 1          # percent chance that the next cleared room gives a reward
rooms_cleared = 0   # rooms cleared since the last reward
rewards_found = 0
while rewards_found < TRIALS:
    rooms_cleared += 1
    if random.random() * 100 < chance:
        # Reward!  Record the length of the dry spell, then reset.
        rewards_found += 1
        histogram[rooms_cleared] += 1
        rooms_cleared = 0
        chance = 1
    else:
        # No reward; ratchet the chance up, capped at 80%.
        chance = min(80, chance + 9)

for gaps, count in sorted(histogram.items()):
    print(f"{gaps:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))
  1 |   0.98%
  2 |   9.91% #########
  3 |  17.00% ################
  4 |  20.23% ####################
  5 |  19.21% ###################
  6 |  15.05% ###############
  7 |   9.69% #########
  8 |   5.07% #####
  9 |   2.09% ##
 10 |   0.63%
 11 |   0.12%
 12 |   0.03%
 13 |   0.00%
 14 |   0.00%
 15 |   0.00%

We’ve got kind of a hilly distribution, skewed to the left, which is up in this histogram. Most of the time, a player should see a reward every three to six rooms, which is maybe twice per floor. It’s vanishingly unlikely to go through a dozen rooms without ever seeing a reward, so a player should see at least one per floor.

Of course, this simulated a single continuous playthrough; when starting the game from scratch, your chance at a reward always starts fresh at 1%, the worst it can be. If you want to know about how many rewards a player will get on the first floor, hey, Just Simulate The Damn Thing.
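
The code for that isn’t shown here, but a sketch of it might look something like this, assuming exactly a dozen rooms per floor:

from collections import Counter
import random

histogram = Counter()

TRIALS = 1000000
FLOOR_ROOMS = 12  # assumption: exactly a dozen rooms on the floor

for _ in range(TRIALS):
    chance = 1
    rewards = 0
    for _ in range(FLOOR_ROOMS):
        if random.random() * 100 < chance:
            rewards += 1
            chance = 1
        else:
            chance = min(80, chance + 9)
    histogram[rewards] += 1

for rewards, count in sorted(histogram.items()):
    print(f"{rewards:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))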

  0 |   0.01%
  1 |  13.01% #############
  2 |  56.28% ########################################################
  3 |  27.49% ###########################
  4 |   3.10% ###
  5 |   0.11%
  6 |   0.00%

Cool. Though, that’s assuming exactly 12 rooms; it might be worth changing that to pick at random in a way that matches the level generator.

(Enter the Gungeon does some other things to skew probability, which is very nice in a roguelike where blind luck can make or break you. For example, if you kill a boss without having gotten a new gun anywhere else on the floor, the boss is guaranteed to drop a gun.)

Critical hits

I suppose this is the same problem as random drops, but backwards.

Say you have a battle sim where every attack has a 6% chance to land a devastating critical hit. Presumably the same rules apply to both the player and the AI opponents.

Consider, then, that the AI opponents have exactly the same 6% chance to ruin the player’s day. Consider also that this gives them about a 0.4% chance (6% × 6% = 0.36%) to critical hit twice in a row. 0.4% doesn’t sound like much, but across an entire playthrough, it’s not unlikely that a player might see it happen and find it incredibly annoying.

Perhaps it would be worthwhile to explicitly forbid AI opponents from getting consecutive critical hits.
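
Here’s a sketch of one way to do that. Note that suppressing back-to-back crits also nudges the effective crit rate down slightly, to roughly p/(1 + p), or about 5.7% here, so it’s worth simulating if the exact number matters:

import random

def crit_roll(state, chance=0.06):
    """Roll for a critical hit, but never allow two in a row.
    `state` is any mutable dict tracking this attacker's last roll."""
    if state.get("last_was_crit"):
        state["last_was_crit"] = False
        return False
    state["last_was_crit"] = random.random() < chance
    return state["last_was_crit"]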

In conclusion

An emerging theme here has been to Just Simulate The Damn Thing. So consider Just Simulating The Damn Thing. Even a simple change to a random value can do surprising things to the resulting distribution, so unless you feel like differentiating the inverse function of your code, maybe test out any non-trivial behavior and make sure it’s what you wanted. Probability is hard to reason about.

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post written by: Magnus Bjorkman – Solutions Architect

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to use an active-active setup and failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could do a simple ping of your actual Rest API methods, but it’s better to provide a dedicated method on your Rest API that does a deep ping: a Lambda function that checks the status of all of the API’s dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external services. Route 53 health checks themselves cannot use your custom domain name endpoint’s DNS address, so you are going to directly call the API endpoints via their region-unique endpoint’s DNS address.

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). The original post links to CloudFormation templates that create such buckets in us-east-1 and us-west-2.
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: " + "Hello from " + region)
    
    return {
		"message": "Hello from " + region
    }

Also create a Lambda function for doing a health check. It returns a value based on an environment variable, STATUS (either “ok” or “fail”), to allow for easy testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])
    
    return {
		"status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack from the CloudFormation template mentioned earlier, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, helloworldapi.replacewithyourcompanyname.com).

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier map as follows:

  • The domain names from the API Gateway prod stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The target domain names from the custom domain names go into Region1Endpoint and Region2Endpoint.

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients by using curl from the command line:


> curl https://helloworldapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this html file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</div>
</body>
</html>

The html file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js to match the API Gateway endpoints for the DNS proxy and the multi-regional endpoint for the Hello World API: var helloworldMultiregionendpoint = "https://helloworldapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable
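
If you would rather script the failure injection than click through the console, the same change can be made with the AWS SDK. Here is a sketch using boto3 (the function name is a placeholder; use yours):

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")
lambda_client.update_function_configuration(
    FunctionName="your-health-check-function",  # placeholder name
    Environment={"Variables": {"STATUS": "fail"}},
)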

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate fail-over: chrome://net-internals/#sockets

Summary

You have implemented a simple way to do multi-regional serverless applications that fail over seamlessly between regions, either being accessed from the browser or from other applications/services. You achieved this by using the capabilities of Amazon Route 53 to do latency based routing and health checks for fail-over. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

Enabling Two-Factor Authentication For Your Web Application

Post Syndicated from Bozho original https://techblog.bozho.net/enabling-two-factor-authentication-web-application/

It’s almost always a good idea to support two-factor authentication (2FA), especially for back-office systems. 2FA comes in many different forms, including SMS, TOTP, and hardware tokens.

Enabling them requires a similar flow:

  • The user goes to their profile page (skip this if you want to force 2fa upon registration)
  • Clicks “Enable two-factor authentication”
  • Enters some data to enable the particular 2FA method (phone number, TOTP verification code, etc.)
  • Next time they login, in addition to the username and password, the login form requests the 2nd factor (verification code) and sends that along with the credentials

I will focus on Google Authenticator, which uses a TOTP (time-based one-time password) to generate a sequence of verification codes. The idea is that the server and the client application share a secret key. Based on that key and on the current time, both come up with the same code. Of course, clocks are not perfectly synced, so there’s a window of a few codes that the server accepts as valid.
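
For intuition, here is roughly what both sides compute. This is a simplified RFC 6238 sketch in Python; the GoogleAuth library used below does this for you, including the tolerance window:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, step=30, digits=6):
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_base32, casefold=True)
    counter = struct.pack(">Q", int(time.time() // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)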

How to implement that with Java (on the server)? Using the GoogleAuth library. The flow is as follows:

  • The user goes to their profile page
  • Clicks “Enable two-factor authentication”
  • The server generates a secret key, stores it as part of the user profile and returns a URL to a QR code
  • The user scans the QR code with their Google Authenticator app thus creating a new profile in the app
  • The user enters the verification code shown in the app in a field that has appeared together with the QR code and clicks “confirm”
  • The server marks the 2FA as enabled in the user profile
  • If the user doesn’t scan the code or doesn’t complete the verification, the user profile will contain just an orphaned secret key, but 2FA won’t be marked as enabled
  • There should be an option to later disable the 2FA from their user profile page

The most important bit from a theoretical point of view here is the sharing of the secret key. The crypto is symmetric, so both sides (the authenticator app and the server) have the same key. It is shared via a QR code that the user scans. If an attacker has control of the user’s machine at that point, the secret can be leaked and the 2FA abused by the attacker as well. But that’s not in the threat model – in other words, if the attacker has access to the user’s machine, the damage is already done anyway.

Upon login, the flow is as follows:

  • The user enters username and password and clicks “Login”
  • Using an AJAX request the page asks the server whether this email has 2FA enabled
  • If 2FA is not enabled, just submit the username & password form
  • If 2FA is enabled, the login form is not submitted, but instead an additional field is shown to let the user input the verification code from the authenticator app
  • After the user enters the code and presses login, the form can be submitted. Either using the same login button, or a new “verify” button, or the verification input + button could be an entirely new screen (hiding the username/password inputs).
  • The server then checks again if the user has 2FA enabled and if yes, verifies the verification code. If it matches, login is successful. If not, login fails and the user is allowed to reenter the credentials and the verification code. Note here that you can have different responses depending on whether the username/password pair is wrong or the code is wrong. You can also attempt the login prior to even showing the verification code input; that’s arguably better, because it doesn’t reveal to a potential attacker that the user uses 2FA.

While I’m speaking of username and password, this can apply to any other authentication method. After you get a success confirmation from an OAuth / OpenID Connect / SAML provider, or after you get a token from SecureLogin, you can request the second factor (code).

In code, the above processes look as follows (using Spring MVC; I’ve merged the controller and service layer for brevity. You can replace the @AuthenticationPrincipal bit with your way of supplying the currently logged-in user details to the controllers). Assuming the methods are in a controller mapped to “/user/”:

@RequestMapping(value = "/init2fa", method = RequestMethod.POST)
@ResponseBody
public String initTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    GoogleAuthenticatorKey googleAuthenticatorKey = googleAuthenticator.createCredentials();
    user.setTwoFactorAuthKey(googleAuthenticatorKey.getKey());
    dao.update(user);
    return GoogleAuthenticatorQRGenerator.getOtpAuthURL(GOOGLE_AUTH_ISSUER, user.getEmail(), googleAuthenticatorKey); // assuming the User object exposes the email
}

@RequestMapping(value = "/confirm2fa", method = RequestMethod.POST)
@ResponseBody
public boolean confirmTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token, @RequestParam("code") int code) {
    User user = getLoggedInUser(token);
    boolean result = googleAuthenticator.authorize(user.getTwoFactorAuthKey(), code);
    user.setTwoFactorAuthEnabled(result);
    dao.update(user);
    return result;
}

@RequestMapping(value = "/disable2fa", method = RequestMethod.GET)
@ResponseBody
public void disableTwoFactorAuth(@AuthenticationPrincipal LoginAuthenticationToken token) {
    User user = getLoggedInUser(token);
    user.setTwoFactorAuthKey(null);
    user.setTwoFactorAuthEnabled(false);
    dao.update(user);
}

@RequestMapping(value = "/requires2fa", method = RequestMethod.POST)
@ResponseBody
public boolean login(@RequestParam("email") String email) {
    // TODO consider verifying the password here in order not to reveal that a given user uses 2FA
    return userService.getUserDetailsByEmail(email).isTwoFactorAuthEnabled();
}

On the client side it’s simple AJAX requests to the above methods (sidenote: I kind of feel the term AJAX is no longer trendy, but I don’t know what else to call them. Async? Background? JavaScript?).

$("#two-fa-init").click(function() {
    $.post("/user/init2fa", function(qrImage) {
	$("#two-fa-verification").show();
	$("#two-fa-qr").prepend($('<img>',{id:'qr',src:qrImage}));
	$("#two-fa-init").hide();
    });
});

$("#two-fa-confirm").click(function() {
    var verificationCode = $("#verificationCode").val().replace(/ /g,'')
    $.post("/user/confirm2fa?code=" + verificationCode, function() {
       $("#two-fa-verification").hide();
       $("#two-fa-qr").hide();
       $.notify("Successfully enabled two-factor authentication", "success");
       $("#two-fa-message").html("Successfully enabled");
    });
});

$("#two-fa-disable").click(function() {
    $.post("/user/disable2fa", function(qrImage) {
       window.location.reload();
    });
});

The login form code depends very much on the existing login form you are using, but the point is to call the /requires2fa with the email (and password) to check if 2FA is enabled and then show a verification code input.

Overall, implementing two-factor authentication is simple, and I’d recommend it for most systems where security is more important than simplicity of the user experience.

The post Enabling Two-Factor Authentication For Your Web Application appeared first on Bozho's tech blog.

Wanted: Front End Developer

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-front-end-developer/

Want to work at a company that helps customers in over 150 countries around the world protect the memories they hold dear? Do you want to challenge yourself with a business that serves consumers, SMBs, Enterprise, and developers? If all that sounds interesting, you might be interested to know that Backblaze is looking for a Front End Developer​!

Backblaze is a 10 year old company. Providing great customer experiences is the “secret sauce” that enables us to successfully compete against some of technology’s giants. We’ll finish the year at ~$20MM ARR and are a profitable business. This is an opportunity to have your work shine at scale in one of the fastest growing verticals in tech – Cloud Storage.

You will utilize HTML, ReactJS, CSS and jQuery to develop intuitive, elegant user experiences. As a member of our Front End Dev team, you will work closely with our web development, software design, and marketing teams.

On a day to day basis, you must be able to convert image mockups to HTML or ReactJS – there’s some production work that needs to get done. But you will also be responsible for helping build out new features, rethinking old processes, and enabling third-party systems to empower our marketing, sales, and support teams.

Our Front End Developer must be proficient in:

  • HTML, ReactJS
  • UTF-8, Java Properties, and Localized HTML (Backblaze runs in 11 languages!)
  • JavaScript, CSS, Ajax
  • jQuery, Bootstrap
  • JSON, XML
  • Understanding of cross-browser compatibility issues and ways to work around them
  • Basic SEO principles and ensuring that applications will adhere to them
  • Learning about third party marketing and sales tools through reading documentation. Our systems include Google Tag Manager, Google Analytics, Salesforce, and Hubspot

Struts, Java, JSP, Servlet and Apache Tomcat are a plus, but not required.

We’re looking for someone that is:

  • Passionate about building friendly, easy to use Interfaces and APIs.
  • Likes to work closely with other engineers, support, and marketing to help customers.
  • Is comfortable working independently on a mutually agreed upon prioritization queue (we don’t micromanage, we do make sure tasks are reasonably defined and scoped).
  • Diligent with quality control. Backblaze prides itself on giving our team autonomy to get work done, do the right thing for our customers, and keep a pace that is sustainable over the long run. As such, we expect everyone to check in code that is stable. We also have a small QA team that operates as a secondary check when needed.

Backblaze Employees Have:

  • Good attitude and willingness to do whatever it takes to get the job done
  • Strong desire to work for a small, fast-paced company
  • Desire to learn and adapt to rapidly changing technologies and work environment
  • Comfort with well behaved pets in the office

This position is located in San Mateo, California. Regular attendance in the office is expected. Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our “no policy” vacation policy.

If this sounds like you
Send an email to [email protected] with:

  1. Front End Dev​ in the subject line
  2. Your resume attached
  3. An overview of your relevant experience

The post Wanted: Front End Developer appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

timeShift(GrafanaBuzz, 1w) Issue 3

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/07/07/timeshiftgrafanabuzz-1w-issue-3/

Many in the US were on holiday for Independence Day earlier this week, but that didn’t slow us down: team Stockholm even shipped a new Grafana release. This issue of timeShift has plenty of great articles to highlight. If you know of a recent article about Grafana, or are writing one yourself, please get in touch, we’d be happy to feature it here.


Grafana 4.4 Released

Grafana v4.4 is now Available for download

Dashboard history and version control is here! A big thanks to Walmart Labs for their massive code contribution.

Check out what’s new in Grafana 4.4 in the release announcement.


From the Blogosphere

Plugins and Dashboards

We are excited that there have been over 100,000 plugin installations since we launched the new pluggable architecture in Grafana v3. You can discover and install plugins in your own on-premises or Hosted Grafana instance from our website. Below are some recent additions and updates.

Zabbix Updated to v3.5.0 CHANGELOG.md

  • rate() function, which calculates per-second rate for growing counters.
  • Template query format. The new format is {group}{host}{app}{item}, which allows the use of names containing dots.
  • Improved performance of the groupBy() functions (6-10x faster than before).
  • lots of bug fixes and more

In addition to the plugins available for download, there are hundreds of pre-made dashboards ready for you to import into Grafana to get up and running quickly. Check out some of the popular dashboards.

Server Metrics (Collectd) Collectd/Graphite server metrics dashboard (Load, CPU, Memory, Temp, etc.).

Data Source: Graphite | Collector: Collectd

Apache Overview System stats for uptime, CPU count, RAM, and free memory %, plus panels for load, I/O, and network traffic, Apache workers and scoreboard panels, and uptime and CPU load single stats.

Data Source: InfluxDB | Collector: Telegraf

Node Exporter Server Metrics A simple dashboard configured to be able to view multiple servers side by side.

Data Source: Prometheus | Collector: Nodeexporter

This week’s MVC (Most Valuable Contributor)

Each week we’ll recognize a Grafana contributor and thank them for all of their PRs, bug reports and feedback. Many of the fixes and improvements come from our fantastic community!

ryantxu (Ryan McKinley)

Ryan has contributed PR’s to Grafana as well as being the author of 4 well-maintained plugins (Ajax Panel, Discrete Panel, Plotly Panel and Influx Admin plugins). Thank you for all your hard work!

What do you think?

Anything in particular you’d like to see in this series of posts? Too long? Too short? Boring? Let us know. Comment on this article below, or post something at our community forum. With your help, we can make this a worthwhile resource.

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Raspberry Pi as retail product display

Post Syndicated from Matt Richardson original https://www.raspberrypi.org/blog/raspberry-pi-as-retail-product-display/

Digitec Galaxus is an electronics retailer in Switzerland. Among other things, they sell Raspberry Pis and related accessories, including our official 7” Touch Display. Many of their customers likely noticed that they haven’t had the Touch Display in stock recently, but there’s an interesting reason.


The retailer wanted to replace their tablet-based digital product labels with something more robust, so they turned to Raspberry Pi 2 with the 7” Touch Display. Each store has 105 screens, which means that the staff of Digitec Galaxus assembled 840 custom Pi-based digital product labels. The screens enable their customers to view up-to-date product information, price, and product ratings from their community as they look at the product up-close.

To pull this off, the engineering team used Raspbian Jessie Lite and installed Chromium. They wrote a startup script which launches Chromium in kiosk mode and handles adjusting the display’s backlight. The browser loads a local HTML page and uses JavaScript to download the most up-to-date content using an AJAX call. When a keyboard is connected, the staff can set the parameters for the display, which are stored as cookies in the browser. For good measure, the team also introduced many levels of fault tolerance into their design. Just as one example, the boot script starts Chromium in a loop to ensure that it will be relaunched automatically if it crashes. It can also handle sudden loss of power and network connectivity issues.
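
The startup script itself isn’t published here, but the relaunch loop could be as simple as this sketch (the browser flags and the local page URL are placeholders):

import subprocess
import time

# Relaunch Chromium in kiosk mode whenever it exits or crashes.
while True:
    subprocess.run([
        "chromium-browser", "--kiosk", "--incognito",
        "file:///home/pi/display/index.html",  # placeholder local page
    ])
    time.sleep(2)  # brief pause before relaunching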


Whether it’s a young person’s learning computer, the brains of a DIY home automation project, or a node in a factory sensor network, we beam with pride when we see our little computer being used in so many different ways. This project in particular is a great example of how those that sell Raspberry Pi products can harness the Pi’s power for their own operations.

The post Raspberry Pi as retail product display appeared first on Raspberry Pi.

How to turn Node.js projects into AWS Lambda microservices easily with ClaudiaJS

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/how-to-turn-node-js-projects-into-aws-lambda-microservices-easily-with-claudiajs/

This is a guest post by Gojko Adzic, creator of ClaudiaJS

 

While working on MindMup 2.0, we started moving parts of our API and back-end infrastructure from Heroku to AWS Lambda. The first Lambda function we created required a shell script of about 120 lines of AWS command-line calls to properly set up, and the second one had a similar number with just minor tweaks. Instead of duplicating this work for each service, we decided to create an open-source tool that can handle the deployment process for us.

Enter Claudia.js: an open-source deployment tool for Node.js microservices that makes getting started with AWS Lambda and Amazon API Gateway very easy for JavaScript developers.

Claudia takes care of AWS deployment workflows, simplifying and automating many error-prone tasks, so that you can focus on solving important business problems rather than worrying about infrastructure code. Claudia sets everything up the way JavaScript developers expect out of the box, and significantly shortens the learning curve required to get Node.js projects running inside Lambda.

Hello World

Here’s a quick ‘hello world’ example.

Create a directory, and initialize a new NPM project

npm init

Next, create app.js with the following code:

var ApiBuilder = require('claudia-api-builder'),
  api = new ApiBuilder();
module.exports = api;

api.get('/hello', function () {
  return 'hello world';
});

Add the Claudia API Builder as a project dependency:

npm install claudia-api-builder --save

Finally, install Claudia.js in your global path:

npm install -g claudia

That’s pretty much it. You can now install your new microservice in AWS by running the following command:

claudia create --region us-east-1 --api-module app

In a few moments, Claudia will respond with the details of the newly-installed Lambda function and REST API.

{
  "lambda": {
    "role": "test-executor",
    "name": "test",
    "region": "us-east-1"
  },
  "api": {
    "id": "8x7uh8ho5k",
    "module": "app",
    "url": "https://8x7uh8ho5k.execute-api.us-east-1.amazonaws.com/latest"
  }
}

The result contains the root URL of your new API Gateway resource. Claudia automatically created an endpoint resource for /hello, so just add /hello to the URL, and try it out in a browser or from the console. You should see the ‘hello world’ response.

That’s it! Your first Claudia-deployed Lambda function and API Gateway endpoint is now live on AWS!

What happened in the background?

In the background, Claudia.js executed the following steps:

  • Created a copy of the project.
  • Packaged all the NPM dependencies.
  • Tested that the API is deployable.
  • Zipped up your application and deployed it to Lambda.
  • Created the correct IAM access privileges.
  • Configured an API Gateway endpoint with the /hello resource.
  • Linked the new resource to the previously-deployed Lambda function.
  • Installed the correct API Gateway transformation templates.

Finally, it saved the resulting configuration into a local file (claudia.json), so that you can easily update the function without remembering any of those details.

Try this next:

Install the superb module as a project dependency:

npm install superb --save

Add a new endpoint to the API by appending these lines to app.js:

api.get('/greet', function (request) {
  var superb = require('superb');
  return request.queryString.name + ' is ' + superb();
});

You can now update your existing deployed application by executing the following command:

claudia update

When the deployment completes, try out the new endpoint by adding /greet?name= followed by your name.

Benefits of using Claudia

Claudia significantly reduces the learning curve for deploying and managing serverless style applications, REST API, and event-driven microservices. Developers can use Lambda and API Gateway in a way that is similar to popular lightweight JavaScript web frameworks.

All the query string arguments are immediately available to your function in the request.queryString object. HTTP Form POST variables are in request.post, and any JSON, XML, or text content posted in as raw body text are in request.body.

Asynchronous processes are also easy; just return a Promise from the API endpoint handler, and Claudia waits until the promise resolves before responding to the caller. You can use any Promises/A+-compliant library, including the promises supported out of the box by the new AWS Lambda Node.js 4.3.2 runtime.

To make serverless-style applications easier to set up, Claudia automatically enables cross-origin resource sharing (CORS), so a client browser can call your new API directly even from a different domain. By default, all errors trigger the HTTP 500 code, so your API works well with most AJAX libraries. You can, of course, easily customize the API endpoints to return a different content type or HTTP response code, or include additional headers. For more information, see the Claudia API Builder documentation.

Conclusion

Claudia helps people get started quickly, and easily migrate existing, self-hosted or third-party-hosted APIs into Lambda. Because it’s not opinionated and does not require a particular structure or way of working, teams can easily start chopping off pieces of existing infrastructure and gradually moving them over. For more information, visit the git repository for Sample ClaudiaJS Projects.

Maybe we could tone down the JavaScript

Post Syndicated from Eevee original https://eev.ee/blog/2016/03/06/maybe-we-could-tone-down-the-javascript/

I’m having a really weird browser issue, where scripts on some pages just won’t run until about 20 seconds have passed.

Whatever you’re about to suggest, yes, I’ve thought of it, and no, it’s not the problem. I mention this not in the hope that someone will help me debug it, but because it’s made me acutely aware of a few… quirks… of frontend Web development.

(No, really, do not try to diagnose this problem from one sentence, I have heard and tried almost everything you could imagine.)

Useless pages

See, here is a screenshot of a tweet, with all of the parts that do not work without JavaScript highlighted in red. I know this because I keep spending 20 seconds staring at a page that has not yet executed the bulk of its code.

Screenshot of a tweet, with almost everything except the tweet text and author name highlighted red

Some of this I can understand. The reply button, for example, focuses and expands the textbox below. You can’t do that without some scripting. The … button opens a popup menu, which is iffy, since you could fake it with CSS too. Similarly, the ♥ button does an action behind the scenes, which is iffy since you could replicate it with a full page load. But those are both non-trivial changes that would work significantly differently with script vs without.

On the other hand…

That × button at the top right, and all the empty surrounding space? All they do is take you to my profile, which is shown in a skeletal form behind the tweet. They could just as well be regular links, like the “previous” and “next” links on the sides. But they’re not, so they don’t work without JavaScript.

That little graph button, for analytics? All it does is load another page in a faux popup with an iframe. It could just as well be a regular link that gets turned into a popup by script. But it’s not, so it doesn’t work without JavaScript.

The text box? Surely, that’s just a text box. But if you click in it before the JavaScript runs, the box is still awkwardly populated with “Reply to @eevee”. And when the script does run, it erases anything you’ve typed and replaces it with “Reply to @eevee” again, except now the “@eevee” is blue instead of gray.

That happens on Twitter’s search page, too, which is extra weird because there’s no text in the search box! If you start typing before scripts have finished running, they’ll just erase whatever you typed. Not even to replace it with homegrown placeholder text or apply custom styling. For no apparent reason at all.

I also use NoScript, so I’ve seen some other bizarre decisions leak through on sites I’ve visited for the first time. Blank white pages are common, of course. For quite a while, articles on Time’s site loaded perfectly fine without script, except that they wouldn’t scroll — the entire page had an overflow: hidden; that was removed by script for reasons I can’t begin to fathom. Vox articles also load fine, except that every image is preceded by an entire screen height’s worth of empty space. Some particularly bad enterprise sites are a mess of overlapping blocks of text; I guess they gave up on CSS and implemented their layout in JavaScript.

There’s no good reason for any of this. These aren’t cutting-edge interactive applications; they’re pages with text on them. We used to print those on paper, but as soon as we made the leap to computers, it became impossible to put words on a screen without executing several megabytes of custom junk?

I can almost hear the Hacker News comments now, about what a luddite I am for not thinking five paragraphs of static text need to be infested with a thousand lines of script. Well, let me say proactively: fuck all y’all. I think the Web is great, I think interactive dynamic stuff is great, and I think the progress we’ve made in the last decade is great. I also think it’s great that the Web is and always has been inherently customizable by users, and that I can use an extension that lets me decide ahead of time what an arbitrary site can run on my computer.

What’s less great is a team of highly-paid and highly-skilled people all using Chrome on a recent Mac Pro, developing in an office half a mile from almost every server they hit, then turning around and scoffing at people who don’t have exactly the same setup. Consider that any of the following might cause your JavaScript to not work:

  • Someone is on a slow computer.
  • Someone is on a slow connection.
  • Someone is on a phone, i.e. a slow computer with a slow connection.
  • Someone is stuck with an old browser on a computer they don’t control — at work, at school, in a library, etc.
  • Someone is trying to write a small program that interacts with your site, which doesn’t have an API.
  • Someone is trying to download a copy of your site to read while away from an Internet connection.
  • Someone is Google’s cache or the Internet Archive.
  • Someone broke their graphical environment and is trying to figure out how to fix it by reading your site from elinks in the Linux framebuffer.
  • Someone has made a tweak to your site with a user script, and it interferes with your own code.
  • Someone is using NoScript and visits your site, only to be greeted by a blank screen. They’re annoyed enough that they just leave instead of whitelisting you.
  • Someone is using NoScript and whitelists you, but not one of the two dozen tracking gizmos you use. Later, you inadvertently make your script rely on the presence of a tracker, and it mysteriously no longer works for them.
  • You name a critical .js bundle something related to ads, and it doesn’t load for the tens of millions of people using ad blockers.
  • Your CDN goes down.
  • Your CDN has an IPv6 address, but it doesn’t actually work. (Yes, I have seen this happen, from both billion-dollar companies and the federal government.) Someone with IPv6 support visits, and the page loads, but the JS times out.
  • Your deploy goes a little funny and the JavaScript is corrupted.
  • You accidentally used a new feature that doesn’t work in the second-most-recent release of a major browser. It registers as a syntax error, and none of your script runs.
  • You outright introduce a syntax error, and nobody notices until it hits production.

I’m not saying that genuine web apps like Google Maps shouldn’t exist — although even Google Maps had a script-free fallback for many years, until the current WebGL version! I’m saying that something has gone very wrong when basic features that already work in plain HTML suddenly no longer work without JavaScript. 40MB of JavaScript, in fact, according to about:memory — that’s live data, not download size. That might not sound like a lot (for a page dedicated to showing a 140-character message?), but it’s not uncommon for me to accumulate a dozen open Twitter tabs, and now I have half a gig dedicated solely to, at worst, 6KB of text.

Reinventing the square wheel

You really have to go out of your way to do this. I mean, if you want a link, you just do <a href="somewhere">label</a> and you are done. If you go reinvent that with JavaScript, you need a click handler, and you need it to run at the right time so you know the link actually exists, and maybe you have to do some work to add click handlers to new faux links added by ajax. Right?

Wrong! That will get you a pale, shoddy imitation of a link. Consider all these features that native links have:

  • I can tab to a link.
  • I can open a link in a new tab or window with some combination of ctrl, shift, and middle-clicking.
  • I can copy a link’s address and paste it somewhere, or open it in another browser, or whatever.
  • I can use ' in Firefox to search only for links.
  • Some browsers — I want to say Opera, Konqueror, uzbl, Firefox with vimperator? — have a hotkey that shows a number or letter next to every link on the page, so you can very quickly “click” a link visually without ever touching the mouse.
  • I believe screenreaders treat links specially.
  • Simple crawlers rely on links to discover the rest of the site.
  • Browsers are starting to experiment with prefetching prominent links, so that the next page load is instant if the user actually clicks a prefetched link.

The common thread here is that <a href=...> means something. It says “this is a place you can go”. Tons of tools that want to know what places you can go rely on that information. If you replace it with a <div onclick>, then yes, clicking on the div will do something, but all the meaning has been completely lost. Conversely, if you use <a href="javascript:void(0);">, then you’re effectively lying to those tools; you’re invoking meaning but providing meaningless information.

This is what people mean when they harp on about “semantics” — that there’s useful information to be gleaned. Figuring out what “counts” as a link when you’ve reinvented it yourself would require either speculatively executing a lot of arbitrary code, writing an extremely clever static analyzer, or just being a trained human programmer. Declaring your intentions is much more powerful and flexible than just doing the work, because generic tools can do useful things with the former almost trivially.

Another good example is dropdown boxes — <select> — which I sometimes see reimplemented entirely out of non-native widgets. I guess to make them prettier? A noble goal. But did you know that in native dropdowns, you can start typing to jump to the first choice with a matching label? Apparently most of the people who make these reimplementations didn’t, and then they use them for long lists like states and countries. The end result looks better (or, well, different) but is functionally much worse.

Twitter’s custom text box is not a text box at all, but a contenteditable <div>. At least contenteditable means most native controls work fairly well, but it still has some bizarre behavior at times, like moving the cursor to the start of the text when I tab away. Or sometimes the script will have trouble keeping up with my typing for whatever reason, and i t w i l l v i s i b l y l a g. The only reason to have this at all instead of a regular <textarea> seems to be to turn @handles and links blue? The custom textbox doesn’t truncate links or substitute Twitter’s emoji font, so it’s not really a preview of how your tweet will look.

You know how some sites have keyboard shortcuts? Cute, right? Well, / is actually a built-in Firefox shortcut key — it opens a quick-find bar. Apparently nobody at Twitter or GitHub or BitBucket or Tumblr or half a dozen other places are aware of this, because they all bound it to move the focus to their whole-site search bar. Which is completely different from searching the current page. (To GitHub’s credit, they fixed this after I complained about it on Twitter.) For the longest time, Google+ disabled spacebar to scroll down. How did no one at these huge companies stop and say “hey, wait, this is already a thing and we’re breaking it”? Do web developers actually use web browsers?

Which reminds me: every Twitter page silently consumes all keyboard events and mouse clicks until all its script has finished running. That means that I can’t even tab away while waiting those 20 seconds for a page to load; ctrl-t, ctrl-w, ctrl-tab, ctrl-pgup, and ctrl-pgdn are all keyboard events and all swallowed indiscriminately. The gizmo responsible is called the “swift action queue”, which makes it sound like it’s supposed to replay the events once the page is ready, but (a) you can’t replay a browser shortcut and (b) it doesn’t seem to do that anyway. I had to write a user script to block a script tag with a specific id to fix this. I think it’s still a problem even now.

I don’t think I’m complaining about anything wildly unreasonable here. This is basic browser stuff, and you’re breaking it, often for no good reason. I don’t expect you to make Google Docs work without JavaScript. I just expect you to not break my goddamn keyboard.

If I may offer some advice

Accept that sometimes, or for some people, your JavaScript will not work. Put some thought into what that means. Err on the side of basing your work on existing HTML mechanisms whenever you can. Maybe one day a year, get your whole dev team to disable JavaScript and try using your site. Commence weeping.

If you’re going to override or reimplement something that already exists, do some research on what the existing thing does first. You cannot possibly craft a good replacement without understanding the original. Ask around. Hell, just try pressing / before deciding to make it a shortcut.

Remember that for all the power the web affords you, the control is still ultimately in end user’s hands. The web is not a video game console; act accordingly. Keep your stuff modular. Design proactively around likely or common customizations. Maybe scale it down a bit once you hit 40MB of loaded script per page.

Thanks. ♥

Magento a Perfect Fit for Princess Polly

Post Syndicated from Jonathan Crossfield original http://www.anchor.com.au/blog/2015/11/magento-a-perfect-fit-for-princess-polly/

Hand-tailored by Anchor and Acidgreen
Anchor Hosting Back in Fashion
Thwarted by the limitations of its existing CMS, Princess Polly chose Magento as the flexible foundation for the fashion retailer’s future ecommerce plans. However, for the project to succeed, Princess Polly also needed a digital agency to perform the necessary alterations to create a personalised designer platform. Plus, they needed a hosting partner to tailor the bespoke hosting ensemble to Magento’s particular shape, for a perfect fit with plenty of room in all the right places.
“Working with Anchor is like having an extra developer in house. We consider them to be part of our development team”
James Lowe: Senior Technical Consultant, Acidgreen
GROWING PAINS
Princess Polly started as a fashion boutique in Surfers Paradise back in 2005, launching an online store in 2010. “Within a year we were one of Hitwise’s Top 20 Power Retailers,” says Wez Bryett, General Manager at Princess Polly. “We experienced rapid growth for the first three years. Now the business has matured so we’re looking for improved stability while planning the future.”
Those plans include a global expansion, starting with the US and then moving into parts of Asia. Unfortunately, Princess Polly’s previous ecommerce platform was no longer supported.
“The existing system was pretty limiting,” explains James Lowe, Senior Technical Consultant for Acidgreen, the digital agency that took on the Princess Polly project. “Wez and the team at Princess Polly have big ideas. But they were constantly being told it couldn’t be done or that it was going to cost a considerable amount of money to make even just minor changes.”
“We needed an ecommerce platform we could stay on for good,” continues Bryett. “Easy to change, and with plenty of plugins available so you don’t have to custom build everything yourself.”
Bryett’s research reached one clear conclusion. “Magento offers enterprise-level features without the enterprise-level pricing. It’s also geared towards international ecommerce, with multi-site options, an integrated currency converter, and so on.”
Bryett was adamant that performance should not be compromised to achieve this increased functionality. Page load times had to be fast and the infrastructure robust and scalable enough to handle the busiest periods with ease. However, Bryett knew his limitations. “I have an IT management background, but I wouldn’t be comfortable managing the platform and the hosting myself. Instead of learning how to configure, maintain and fix everything in house, making mistakes along the way, we wanted to work with a team that has dealt with these problems before.”
“On a normal hosting setup, I found Magento to be quite slow. Extremely slow, even. But with the tweaks made by Anchor and Acidgreen, it’s become much faster.”
Wez Bryett: General Manager, Princess Polly
IN SEARCH OF THE PERFECT FIT
Acidgreen services a wide clientele, from start-ups to major brands. Now an official Magento partner, Acidgreen specialises in ecommerce websites for some of Australia’s biggest online retailers. The digital agency already worked closely with Anchor, having adopted managed hosting for earlier projects. However, Princess Polly initially chose a different hosting provider.
Unfortunately, not every hosting provider can adapt, customise and manage a complex hosting infrastructure without hitting some obstacles. “They just couldn’t get the infrastructure right. They had a lot of problems configuring everything to be fast enough,” says Lowe. “It was quite a costly decision for the client to switch hosting providers a second time, but for the project to succeed we had to transfer to Anchor.”
Anchor tailored a bespoke hosting stack to fit the new Magento website and worked closely with Acidgreen to plan the migration. “Our first objective was to replicate what we already had. No redesign, new features or anything,” says Bryett.
Lowe agrees. “We didn’t want to freak out regular customers with a new user interface as well as a new platform. So we took baby steps, migrating the existing website across with only minor changes. Then we could compare apples with apples, giving us a benchmark and making any improvements measurable. Then we identified each piece of functionality and customisation that was needed, making the changes while keeping the front end user experience pretty similar.”
MAGENTO MADE TO MEASURE
Anchor is also an official Magento partner, specialising in custom hosting environments. The team unstitched the various layers of Magento to understand how it works at every level, before rebuilding the hosting stack as a perfect fit for the resource-hungry CMS.
The result is a fully managed clustered environment, with load balancers and three front-end machines backed onto a high-availability back-end. The made-to-measure application stack eliminates performance bottlenecks anywhere in the hosting infrastructure.
The stack uses Varnish and Turpentine to cache over 90 per cent of the website content, with New Relic tracking performance, dramatically improving page load speeds while reducing database requests. Meanwhile, load times for assets such as CSS, images and JavaScript improved with the addition of a PageSpeed optimisation module to Nginx, automatically applying web performance best practice across the site. Even the shopping cart was optimised and cached where possible, by rebuilding it in AJAX.
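
The pattern behind that last point is worth spelling out. In modern terms (the endpoint and element ids here are hypothetical, not Princess Polly’s actual code), an AJAX cart lets the whole page come out of the Varnish cache while only the small per-user fragment hits the backend:

```javascript
// Serve the whole page from the cache, then fetch only the per-user
// cart fragment over AJAX. Endpoint and element ids are hypothetical.
async function loadCart() {
  try {
    const res = await fetch('/ajax/cart', { credentials: 'same-origin' });
    if (!res.ok) throw new Error(`cart request failed: ${res.status}`);
    const { itemCount, subtotal } = await res.json();
    document.querySelector('#cart-count').textContent = itemCount;
    document.querySelector('#cart-subtotal').textContent = subtotal;
  } catch (err) {
    console.error(err); // degrade gracefully: keep the static placeholder
  }
}
document.addEventListener('DOMContentLoaded', loadCart);
```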
Anchor provided the necessary DevOps expertise, allowing Acidgreen to deliver a lightning fast website without relying on less-experienced internal resources. “Just the way Anchor designed the infrastructure made the difference between a four second and two second page load,” said Lowe.
STEPPING OUT IN STYLE
“On a normal hosting setup, I found Magento to be quite slow,” says Bryett. “Extremely slow, even. But with the tweaks made by Anchor and Acidgreen, it’s much faster. So stage one was the successful backend migration.
“Now that’s done, we’re making enhancements and adding new features all the time. For example, we just put in a Google Shopping Feed, which was a Magento extension. That would have cost us $15,000 to custom build in the old platform.”
Lowe says that the global expansion plans are now a lot easier to implement. “Create a new store in Magento with the multi-site feature, configure it with Varnish and CloudFlare, and bang — you’ve got a new US site. Simple.” Princess Polly was a tricky project, says Lowe, but he credits its success to the close working relationship between Acidgreen and Anchor. “We consider Anchor to be part of our development team. At any time, there’s always someone we can call. Whatever the task or issue, we both look at it and collaborate on sorting it out. The end result is a happy client that will stick with us.”
ANCHOR: Managed Operations
An Anchor hosting infrastructure represents the very latest in hosting couture – fresh, clean and bang up to date.
Stop patching your servers like an old pair of jeans because you lack the time, resources or expertise to update or replace them. Managed Operations means never putting up with unfashionable hosting ever again.
“Managed hosting frees up time and provides us with expertise we would otherwise have to hire in. It’s a huge competitive advantage.”
James Lowe: Senior Technical Consultant, Acidgreen
WHICH SIDE DOES YOUR WEBSITE DRESS?
An off-the-peg hosting plan is often too tight where you need freedom to move and baggy where you’ll never grow into it. If you’ve invested in a top quality website or application, you need a top quality hosting provider to tailor your environment to the perfect fit, and perform seamless repairs when necessary.
Your website or application can strut its stuff, confident that a wrong move won’t have you busting the seams of your bandwidth.
NO MORE MAKE DO AND MEND
Anchor is responsible for your entire hosting environment, right up to your code. We’ll fix it so you’re never caught in public in threadbare hosting.
We take care of the operating system and application stack, security hardening, performance optimisation, patches and security updates, configuration changes, backups, monitoring, auto-scaling, troubleshooting and emergency response.
All your code needs to do is wear it well.
Don't get stitched up by off-the-peg hosting plans

Why I'm Switching Banks

The last time I wrote in this blog was May 4, 2011 – so consider how shocked, appalled and outraged I must be to come back after more than two years!
It all started innocently enough – yesterday the Heineken game took half a year off my life, and their site's support took another half. All of which reminded me that two weeks ago, UBB asked me why I was closing my accounts at their bank.
I decided to send them a letter, but that turned out to be harder than taking out a multi-million loan – downright impossible. Maybe I should have written a speech instead of a letter and recited it to the customer service line (actually, not a bad idea!?).
Since I'd written it anyway (the letter, not the speech), and since it's so complicated to send that they'd probably never receive it, I might as well publish it as an open letter.
What follows is the open letter itself, addressed to UBB:
First, I don't understand why it is MANDATORY to fill in NINE fields just to submit a suggestion. I'm sure the bank has plenty of distinguished professional experts on the matter, and if they haven't yet done their job of writing a 500-page report justifying the necessity of this, they haven't earned their salaries and should do so immediately. The report should then, of course, be published on the bank's website – it's full of useless information anyway: you can never find what you need, but it's stuffed with things you don't.
Now, to the point
I was a UBB customer for 10 years. For better or worse, I was a customer not by my own choice but by my employer's. Since the employer FINALLY realised, after 10 years, that it is NOT NORMAL to force your bank on your employees, I now receive my salary at another bank – one I have also been a customer of for over 10 years, and with which I have been satisfied every single day of those ten-plus years; I still am, and I hope it stays that way.
When I was closing all my accounts, profiles, deposits, cards and so on at UBB, the staff at the branch were kind enough to ask why I no longer wished to use UBB's services. I couldn't phrase my thoughts precisely enough on the spot, so I simply told them I was changing employers (truth be told, I'm not).
Later, though, I thought it over and decided that if there is any hope of one day seeing decent service in Bulgaria – not just at a bank, but anywhere at all – maybe I should try telling them where their problem lies. I'm fairly pessimistic: so far, wherever I've offered my opinion, I've mostly just wasted my time. And my time is valuable, so I hope you will appreciate what I'm about to write below.
First, the website and the online banking. I cannot understand how, in 10 years, things can either not move a millimetre or only get worse. Especially the online banking, since I loathe trekking to branches located somewhere on Machu Picchu in the middle of a cornfield – places you can't reach by public transport and can't properly park a car at either. Apparently you either have a lot of customers with plenty of free time, or every last one of them is your customer because their employer chose UBB.
So: the online banking site is, first of all, overloaded with information that is completely irrelevant to the actual process of online banking. For starters, what on earth are advertisements doing there? Probably the work of those aforementioned specialists who know it all from the textbook but have never built anything live. Then the home page is three screens tall and you scroll up and down like mad – why? Is there nowhere to organise all that information? I'm opening a site for a service, for heaven's sake! I expect to see one single thing and absolutely nothing else – a big, beautiful login screen. No such luck: the one thing anyone could possibly come to this site for (logging into their account) is the smallest thing on the screen.
The navigation is an utter misery and a round zero… I won't even comment on it, because whatever is there should simply be burned and forgotten, so that future generations never think of it again!
Who, I would love to know who, was the "specialist" who recommended that a system be colourful and fancy rather than working? With those dynamic loaders on every corner after you log in, something is constantly broken. Understand this and remember it well – and if you can't, write it on a big sheet of paper, frame it and hang it on every door in the bank: before a system is beautiful, it must work; this is not a beauty contest. I will not be the least bit impressed by rounded corners, AJAX loading, progress bars and so on if, in the end, something doesn't work and I don't even know why it failed! For example, a progress bar stays on screen spinning for hours. Apparently one of those specialists decided that error messages were unnecessary… why trouble the poor user, better to let them wait a little longer?!
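If I were one of those "specialists", even a minimal sketch would surface the error instead of spinning forever. Markup, endpoint and timeout are hypothetical, but the principle is not:

```javascript
// An AJAX panel should either render, or say what went wrong.
async function loadPanel(url, panel) {
  const spinner = panel.querySelector('.spinner');
  spinner.hidden = false;
  try {
    // Give up after 10 seconds instead of spinning for hours.
    const res = await fetch(url, { signal: AbortSignal.timeout(10_000) });
    if (!res.ok) throw new Error(`server returned ${res.status}`);
    panel.querySelector('.content').innerHTML = await res.text();
  } catch (err) {
    // The step the "specialists" skipped: tell the user what happened.
    panel.querySelector('.content').textContent =
      `Could not load this section (${err.message}). Please try again.`;
  } finally {
    spinner.hidden = true; // never leave the progress bar running
  }
}
```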
Moving on – those certificates UBB issues: what an ordeal it was to get one… I thought I was taking out a 100-million mortgage. In the beginning (10 years ago), for unknown reasons, the system worked only with certificates issued by the bank. If you have a "Universal Electronic Signature", which even by law is UNIVERSAL, it turns out it can't be used with UBB of all places. The most ossified administration in the world, one that has never aimed to help its victims – the tax office – even they handle certificates in a way that is convenient for people. On top of that, the username is a client number… but of course, why not… people love memorising long numbers. And if they can't memorise it, why not write it on a slip of paper and stick it to the monitor – more convenient and more secure at the same time. I don't know, maybe the option to pick a normal username appeared later, but I'm ready to bet that if I had one of the old number-names, there would be no way to change it?! And looking at the registration form, I see no field for choosing a name, so apparently I win the bet?!!
Of course, every screen has its own look and its own take on the matter… the registration screen, for example, resembles anything but the rest of the online banking screens. Naturally, it also differs from the design of the main site. An ancient military tactic for confusing the enemy – pardon, the customer. And on top of that, there are endless explanations at every step: "press this button to do…"; "fill in this field with the details of your…"; "enter a 600-line description of…". If the interface of this thing isn't clear enough for the user, delete it, punish whoever built it severely (sorry for the cruelty, but they must learn never to do this again) and start over – preferably with a different "specialist".
Enough about online banking. Let's say that, since the online presence is good for nothing, I decide to come to a branch and be "served" in person. Great idea – I'm positively thrilled at the thought. Here, at least, I'll give you a little credit: the branches look like branches, thank God. And most of the staff I've dealt with are polite and friendly; they generally help, or if they can't, they at least do everything in their power to. Which is genuinely nice… however:
First, the head office – that ticket machine: take it, find whoever sold you on the idea that it would improve customer service, and shove it – you know where. The WHOLE thing! There's no way that guy was a specialist. I walk into a branch of the bank. There are 20 counters. At least 10 of them are staffed. Three people are waiting in the "queue", sitting on the chairs along the windows. You go to the machine and, naturally, you want to do something for which there is no obvious option listed. You think, you ponder, you finally pull a ticket you reckon is as close as possible to what you actually need, and you expect your turn to come within 5 minutes. After all, there are 10 clerks and three customers. No such thing – you hang around and wait like an idiot. The traffic police's ticket system, the one that doesn't work, works better. Probably only DSK is worse – there, for the very same service, a customer who walked in and pulled a ticket after you gets served before you. But they can be forgiven: anyone unfortunate enough to become a DSK customer deserves to queue until the grave. And why – because DSK Direct is fully comparable in uselessness to UBB's online banking. In the end it turns out, of course, that the service you came for is not the one you pulled a ticket for – never mind that your service has no option on the ticket machine at all… And God forbid you have more than one thing to do at the bank… either you pull two tickets on the way in and cross your fingers that both numbers don't come up at once, or, to be safe, you pull one and then, when you're done, pull a second. At the end of the day, what's so bad about waiting in a queue for a bit? There are chairs, after all; you can sit down so your little legs don't get tired – which alone should tell you how long you're in for.
Let's set aside the tickets and the waiting. Say we go to a smaller branch instead… unexpectedly, the service there is better than at the head office. You walk in: three clerks, not a single customer. You ask where to go for service X and an unprecedented miracle occurs – it turns out all three can serve you. Not like the head office, where there are 20 counters but your business can be done at only one of them and, on top of everything, you don't know which one! You sit down… they hand you a pen and padS of forms – plural. And the great writing begins. You write out "Under the Yoke", tear it off the pad and hand it to the polite clerk, who passes you the next form; you write out "War and Peace", sign with an artistic pseudonym, and onwards… a third form, a fourth form… and finally just one more little slip. Let me just remind you that I came in for service X, not for serviceS X, Y and Z! On top of that, I am a complete stranger to the bank! Nobody knows who I am… I am the Phantom of the Opera! I have to fill in all my details as though I were entering this branch for the first time. And on every single form, no less – am I a fool? Write first name, middle name, surname; write EGN (personal ID number), identity card, issued on, issued by the Ministry of the Interior, valid until, number, photo in profile, full face and from above; write address, parents, relatives, policemen, account number (sure, I remember them – 8 IBANs of 22 characters each, I know them like a priest knows his prayers)… BIC codes, SWIFT codes, your grandmother's codes. But of course, of course I have to fill them in – how could the bank possibly know all these things? It's not as if they use computers there… nooo, they don't have systems either. It would be a herculean effort to print me a pre-filled form – that's an investment in printers, which the branches perhaps don't have?! And an investment in "specialists" to devise the impossible algorithm for making that happen… more investment in other specialists to build a new system… and more investment in something else I can't think of. And let me ask – why do you even use carbon paper on SOME (not all) forms… when I fill in documents like these, I absolutely insist on completing each of the 100 copies by hand. I have serious suspicions that the "specialists" have already developed a special carbon paper that alters the figures on each copy! Moreover, the customer must fill in paper until they faint, then be revived to continue. And you must take particular care not to make any mistake while filling things in by hand… especially in the last fields, because otherwise… you start over.
So what's the upshot? I want to be served over the Internet – no go, because the service is worthless. I decide to be served in person – no go again, because they make a monkey out of you… So wha' do we do? Easy – we switch banks. Still, let me reassure you: UBB is not the greatest evil on the banking market… let's not forget the existence of DSK and Unicredit. As the poet said – things can always get even… worse!
P.S. How did I not see it coming: if I want to contact the bank, I have to provide my client number – so predictable that, with a little more analysis, I could foretell the future 50 years ahead!

Webloz 11

Enough people have already asked me why I haven't blogged yet about this year's edition of the Webloz competition… maybe it's time I did?
I'll start with a short backstory, skipping last year. In February, Tihomil called me and said I had received the highest rating among last year's jury members and that he insisted I take part again. And since I wanted to take part again too… I agreed.
First, what's new: this year there was an additional category for the "juniors" – grades 5 to 8. We didn't know what to expect from it. The other interesting novelty, which I liked a lot, is that each jury included one winner from a previous year. On my jury – the programming one – that was Nikolay Stoitsev, last year's winner in the category.
The third person on my jury this year was Atanas Georgiev from FMI. When Tihomil told me, I admit my first thought was: great, some guy from FMI, bound to be a bore… I really don't like those types, but I'll survive. For better or worse, Nasko and I only met at the Ostrovche hotel when I arrived for the event. Even though I have a colleague in my department who has worked with him, I knew nothing about Nasko. That was one of the first WOWs… he turned out to be a seriously cool guy. By the way, I hadn't shared that detail with anyone – not Tihomil, not Nikolay, not Nasko himself… Nothing against Stoyan and Geno, last year's jury members, but to my mind they came across as a bit more serious. With Nasko and Niki the atmosphere turned out much more relaxed, which later helped me keep my sanity.
The projects presented this year were, again, WOW. That surprised me less than last year, because I was mentally prepared to see very good work once more. Besides, let's not forget these are the 20 projects sifted from over 100 – the best among all the applicants.
Tihomil had warned me even before the competition programme itself that the "juniors" category contained projects stronger than those of the "seniors". So I was prepared for that phenomenon too, though what I saw exceeded even my prepared expectations.
The winner in the "juniors" category, Atanas Gospodinov, had built a school grade-book system. At first glance, nothing special. But the system was written in JAVA – a rather unconventional technology choice. Not least, the whole project was very polished. It had a clear goal, a vision of exactly how to achieve it, and everything was implemented with fine attention to detail… I'll say it again: the system itself is nothing extraordinary, but it's more pleasant to see a finished system than a grand project implemented halfway, or implemented fully but with a careless attitude to detail. Nasko from the jury grilled the kid harder than I was ever examined at university. The whole time he was convinced someone else had written the system and the boy was merely presenting it. And it truly did seem incredible that such an enterprise technology was being applied by an eighth-grader. But everything he showed only confirmed that he not only understands the technology but carries in his head a vision of how everything works inside – convincing enough proof that he really is the author of the project, and that even if he received help from others, he is certainly fully on top of all of it.
I'll reveal a few secrets from the kitchen: the jury had a dispute over this project, because against the backdrop of everything presented it was Rocket Science – like showing up to a car rally in a rocket. Dimitar Vuldzhev was perhaps the other participant attempting advanced aerobatics, developing things even the jury doesn't know the inner workings of. If he takes a somewhat more serious attitude both to what he builds and to how he presents it, and if his projects reach a finished state, he has a real chance next year of finally not coming second (as he did this year and last!). But back to the JAVA project: my argument was that although competing against such an opponent is an unequal battle, the effort and the investment in knowledge and skills that Atanas Gospodinov had made were incomparable to those of the other participants, and it would be unjust to him not to give him the prize.
What I did not expect: after we praised (one might even say encouraged) the use of frameworks in general and Code Igniter in particular last year, this year Code Igniter projects were everywhere. That is exactly why the jury left unanswered a teacher's question, "What would impress you in next year's competition?". If we name something specific, we risk everyone doing exactly that next year – and the competition's strength lies in the diversity of the projects.
What I strongly hope to see next year is more finished projects with polished detail – systems that, beyond quality code, have the other components that are not the core of the programming category but are an extremely important addition to a project. Good design, a well-chosen domain name, combined technologies, the small extras that make the user experience of a site or system comfortable – AJAX and visual effects, a fast and responsive architecture. I'm not saying we didn't see such things in some of the projects, but I'd like to see more.
What I liked: first of all, I'm impressed by how a large share of the projects were presented. I noticed a considerable improvement in how the participants communicated with the jury and the audience in the hall. It's far from the best examples out there, but let's not forget these are school students who, as a rule, have no experience speaking in public. I've seen plenty of university students nervous about speaking before an examination board after six years of study. So bravo to the kids.
A little criticism: quite a few badly designed databases. Of the 20 projects in the seniors' category, only 3 or 4 had a decently designed database. Since this was one of my main remarks in the jury discussion, I hope to see more and better-designed databases next year.
A word on the winner in the "seniors" category: the project is Slides.bg by Georgi Angelov – a site for "sharing ideas". My first impression was that this, too, is a very finished project – although the competition primarily judges programming, neither design nor attention to detail is missing here; even the .bg domain is a big enough signal that the project was built with real care. But the most impressive thing is actually the technology behind the site… or rather the technologies… or, more precisely, the skilful combination of many different technologies to achieve an optimal result. I would call this a professional approach to a project: unconventional solutions were used, ones unavailable on most of the shared hosting plans sold en masse on the Internet, but everything applied was put there to serve the goal. The presentation of the project was also at quite a good level, and although we didn't dive very deep into the code and technologies, even the slides made it clear the boy knows what he's presenting.
The other projects that made a special impression on me: one is SlideMate by Georgi Kostadinov, and the main thing that won my points was that he presented the system using the system itself. It hadn't occurred to me, though it's perfectly logical, since the system is an online tool for making presentations. Because the presentation came after one of the breaks, I noticed that the slides on screen were smaller than the previous ones. I assumed someone had fiddled with the projector during the break – played with the zoom wheel or something. Only when the presentation ended and I saw it had all been in the browser did I realise what had happened, and I loved it. I personally wanted the project in the top three, but unfortunately the competition was quite strong. I had marked this project as one of the favourites back in the first round. If it weren't for the cool, artistic performance by the boy from the National Trade High School, this is the project that would have taken the prize for best presentation.
Another project that made a great impression on me (on us) is the piratology site and the two "little pirates", Yancho Yanchev and Ivelin Todorov, who presented it. The site is actually plain HTML, hosted on a hit.bg server – it had every prerequisite not even to reach the final ranking… but again we had an example of a finished project and a very good presentation; moreover, the participants were sixth-graders (practically the youngest participants), and what they'd made was genuinely impressive. Again, unfortunately, the competition was too strong and they couldn't fight their way any higher, but fifth place in the final ranking is quite a good showing, and if they keep their interest in the subject (I mean the web and programming, not the pirates), I'm sure that in a few years we may see them in much higher positions in this or other similar competitions.
What was hard: as incredible as it sounds, the hard part was deciding which projects would end up last. Much harder than deciding which would take the top places. Not only because it's hard to tell someone they did worse than the rest, but because the favourites emerge quickly and clearly – strong projects leap out at you. The question is how to say, among the rest, which is better and which is not; how to sort them precisely enough. All the participants know their material well… this is a final round – nobody got here by accident; it's not an exam where you can "catch" someone who hasn't learned the lesson. The task is not merely unpleasant but hard and complicated.
So that the participants don't think the jury are nasty types who come, "dispense justice", then go home and go to bed, I'll reveal one more secret from the kitchen, which I think had already leaked by Sunday: after dinner, the jury shut itself in one of the halls and we finished writing up points, grading, discussing and commenting on the projects, the presentations and the project documentation until 4:30 in the morning. I personally had the good intention of writing the points down while a presentation was running. But the grading covers 24 criteria, and I had to choose: concentrate on filling in the table, or concentrate on the participants' presentations. I judged that, first, it would be disrespectful to bury my nose in the tables instead of listening to the presentation, and second, if I don't listen to the presentation, how can I possibly grade it? So during each presentation I took brief notes – what I liked and what I didn't – and we moved on to writing up the points later. And the task truly turned out to be quite heavy. The next day I got up at 6:30 so I could shower, have breakfast and rejoin the group. I tried to look fresh, but on the way back I drank two energy drinks so I could drive to Sofia. I got home and fell asleep the moment I walked in.
What else: besides serving on the jury, this year the company I work for managed to set aside some money and support the "young web" by becoming a sponsor of the event. I'm glad sponsors can be found even for competitions like this, far from the commercialism of the big contests that for years have been occupied by various studios using them mainly for PR and advertising. I don't want to point at specific names – of competitions, of participating firms, or of the vicious practices that have flourished there for years. Since Webloz has no such commercial orientation, things here are considerably cleaner and easier on the eye from a professional standpoint. That's why I say it's commendable that sponsors can be found to finance such an activity, even though it can't compare with the advertising power of the commercial contests. And of course, we should thank Tihomil and the Technology for the Young foundation for everything they do.
P.S. Thanks to Webloz, I post to this blog at least once a year… expect me to eventually get around to uploading photos from the venue.