# Weekly roundup: Lost time

Post Syndicated from Eevee original https://eev.ee/dev/2018/02/13/weekly-roundup-lost-time/

I ran out of brain pills near the end of January due to some regulatory kerfuffle, and spent something like a week and a half basically in a daze. I have incredibly a lot of stuff to do right now, too, so not great timing… but, well, I guess no time would be especially good. Oh well. I got a forced vacation and played some Avernum.

Anyway, in the last three weeks, the longest span I’ve ever gone without writing one of these:

• anise: I added a ✨ completely new menu feature ✨ that looks super cool and amazing and will vastly improve the game.

• blog: I wrote SUPER game night 3, featuring a bunch of games from GAMES MADE QUICK??? 2.0! It’s only a third of them though, oh my god, there were just so many.

I also backfilled some release posts, including one for Strawberry Jam 2 — more on that momentarily.

• ???: Figured out a little roadmap and started on an ???.

• idchoppers: Went down a whole rabbit hole trying to port some academic C++ to Rust, ultimately so I could intersect arbitrary shapes, all so I could try out this ridiculous idea to infer the progression through a Doom map. This was kind of painful, and is basically the only useful thing I did while unmedicated. I might write about it.

• misc: I threw together a little PICO-8 prime sieve inspired by this video. It’s surprisingly satisfying.

(Hmm, does this deserve a release post? Where should its permanent home be? Argh.)

• art: I started to draw my Avernum party but only finished one of them. I did finish a comic celebrating the return of my brain pills.

• neon vn: I contributed some UI and bugfixing to a visual novel that’ll be released on Floraverse tomorrow.

• alice vn: For Strawberry Jam 2, glip and I are making a ludicrously ambitious horny visual novel in Ren’Py. Turns out Ren’Py is impressively powerful, and I’ve been having a blast messing with it. But also our idea requires me to write about sixty zillion words by the end of the month. I guess we’ll see how that goes.

I have a (NSFW) progress thread going on my smut alt, but honestly, most of the progress for the next week will be “did more writing”.

I’m behind again! Sorry. I still owe a blog post for last month, and a small project for last month, and now blog posts for this month, and Anise game is kinda in limbo, and I don’t know how any of this will happen with this huge jam game taking priority over basically everything else. I’ll see if I can squeeze other stuff in here and there. I intended to draw more regularly this month, too, but wow I don’t think I can even spare an hour a day.

The jam game is forcing me to do a lot of writing that I’d usually dance around and avoid, though, so I think I’ll come out the other side of it much better and faster and more confident.

Welp. Back to writing!

# Blizzard Targets Fan-Created ‘World of Warcraft’ Legacy Server

Post Syndicated from Ernesto original https://torrentfreak.com/blizzard-targets-fan-created-world-of-warcraft-legacy-server-180203/

Over the years video game developer Blizzard Entertainment has published many popular game titles, including World of Warcraft (WoW).

First released in 2004, the multiplayer online role-playing game has been a massive success. It holds the record for the most popular MMORPG in history, with over 100 million subscribers.

While the current game looks entirely different from its first release, there are many nostalgic gamers who still enjoy the earlier editions. Unfortunately, however, they can’t play them. At least not legally.

The only option WoW fans have is to go to unauthorized fan projects which recreate the early gaming experience, such as Light’s Hope.

“We are what’s known as a ‘Legacy Server’ project for World of Warcraft, which seeks to emulate the experience of playing the game in its earliest iterations, including advancing through early expansions,” the project explains.

“If you’ve ever wanted to see what World of Warcraft was like back in 2004 then this is the place to be. Our goal is to maintain the same feel and structure as the realms back then while maintaining an open platform for development and operation.”

In recent years the project has captured the hearts of tens of thousands of die-hard WoW fans. At the time of writing, the most popular realm has more than 6,000 people playing from all over the world. Blizzard, however, is less excited.

The company has asked the developer platform GitHub to remove the code repository published by Light’s Hope. Blizzard’s notice targets several SQL databases, stating that their layout and structure are nearly identical to the early WoW databases.

“The LightsHope spell table has identical layout and typically identical field names as the table from early WoW. We use database tables to represent game data, like spells, in WoW,” Blizzard writes.

“In our code, we use .sql files to represent the data layout of each table […]. MaNGOS, the platform off of which Light’s Hope appears to be built, uses a similar structure. The LightsHope spell_template table matches almost exactly the layout and field names of early WoW client database tables.”

This takedown notice had some effect, as people now see a “repository unavailable due to DMCA takedown” message when they access it in their browser.

While this may slow down development temporarily, it appears that the server itself is still running just fine. There were some downtime reports earlier this week, but it’s unknown whether that was related.

In addition to the GitHub repository, the official Twitter account was also suspended recently.

TorrentFreak contacted both Blizzard and Light’s Hope earlier this week for a comment on the situation. At the time of publication, we haven’t heard back.

Blizzard’s takedown notice comes just weeks after several organizations and gaming fans asked the US Copyright Office to make a DMCA circumvention exemption for “abandoned” games, including older versions of popular MMORPGs.

While it’s possible that such an exemption will be granted in the future, it’s unlikely to apply to the public at large. The more likely scenario is that it would permit libraries, researchers, and museums to operate servers for these abandoned games.


# Random with care

Post Syndicated from Eevee original https://eev.ee/blog/2018/01/02/random-with-care/

Hi! Here are a few loose thoughts about picking random numbers.

This is all aimed at frivolous pursuits like video games. Hell, even video games where money is at stake should be deferring to someone who knows way more than I do. Otherwise you might find out that your deck shuffles in your poker game are woefully inadequate and some smartass is cheating you out of millions. (If your random number generator has fewer than 226 bits of state, it can’t even generate every possible shuffling of a deck of cards!)

## Use the right distribution

Most languages have a random number primitive that spits out a number uniformly in the range [0, 1), and you can go pretty far with just that. But beware a few traps!

### Random pitches

Say you want to pitch up a sound by a random amount, perhaps up to an octave. Your audio API probably has a way to do this that takes a pitch multiplier, where I say “probably” because that’s how the only audio API I’ve used works.

Easy peasy. If 1 is unchanged and 2 is pitched up by an octave, then all you need is rand() + 1. Right?

No! Pitch is exponential — within the same octave, the “gap” between C and C♯ is about half as big as the gap between B and the following C. If you pick a pitch multiplier uniformly, you’ll have a noticeable bias towards the higher pitches.

One octave corresponds to a doubling of pitch, so if you want to pick a random note, you want 2 ** rand().
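A quick sketch of the difference in plain Python, with `random.random()` standing in for rand():

```python
import random

# Biased: a uniform multiplier in [1, 2) crowds the top of the octave,
# since equal steps in multiplier are not equal steps in pitch
biased = random.random() + 1

# Uniform in pitch: pick uniformly in octave-space, then convert
unbiased = 2 ** random.random()
```

Both values land in [1, 2), but only the second is evenly spread across the musical notes in that octave.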

### Random directions

For two dimensions, you can just pick a random angle with rand() * TAU.

If you want a vector rather than an angle, or if you want a random direction in three dimensions, it’s a little trickier. You might be tempted to just pick a random point where each component is rand() * 2 - 1 (ranging from −1 to 1), but that’s not quite right. A direction is a point on the surface (or, equivalently, within the volume) of a sphere, and picking each component independently produces a point within the volume of a cube; the result will be a bias towards the corners of the cube, where there’s much more extra volume beyond the sphere.

No? Well, just trust me. I don’t know how to make a diagram for this.

Anyway, you could use the Pythagorean theorem a few times and make a huge mess of things, or it turns out there’s a really easy way that even works for two or four or any number of dimensions. You pick each coordinate from a Gaussian (normal) distribution, then normalize the resulting vector. In other words, using Python’s random module:

```python
def random_direction():
    x = random.gauss(0, 1)
    y = random.gauss(0, 1)
    z = random.gauss(0, 1)
    r = math.sqrt(x*x + y*y + z*z)
    return x/r, y/r, z/r
```

Why does this work? I have no idea!

Note that it is possible to get zero (or close to it) for every component, in which case the result is nonsense. You can re-roll all the components if necessary; just check that the magnitude (or its square) is less than some epsilon, which is equivalent to throwing away a tiny sphere at the center and shouldn’t affect the distribution.
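Folding the re-roll into the function might look like this (the 1e-6 epsilon is an arbitrary choice):

```python
import math
import random

def random_direction():
    # Re-roll until the magnitude is safely away from zero.
    while True:
        x = random.gauss(0, 1)
        y = random.gauss(0, 1)
        z = random.gauss(0, 1)
        r = math.sqrt(x*x + y*y + z*z)
        if r > 1e-6:
            return x/r, y/r, z/r
```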

### Beware Gauss

Since I brought it up: the Gaussian distribution is a pretty nice one for choosing things in some range, where the middle is the common case and should appear more frequently.

That said, I never use it, because it has one annoying drawback: the Gaussian distribution has no minimum or maximum value, so you can’t really scale it down to the range you want. In theory, you might get any value out of it, with no limit on scale.

In practice, it’s astronomically rare to actually get such a value out. I did a hundred million trials just to see what would happen, and the largest value produced was 5.8.

But, still, I’d rather not knowingly put extremely rare corner cases in my code if I can at all avoid it. I could clamp the ends, but that would cause unnatural bunching at the endpoints. I could reroll if I got a value outside some desired range, but I prefer to avoid rerolling when I can, too; after all, it’s still (astronomically) possible to have to reroll for an indefinite amount of time. (Okay, it’s really not, since you’ll eventually hit the period of your PRNG. Still, though.) I don’t bend over backwards here — I did just say to reroll when picking a random direction, after all — but when there’s a nicer alternative I’ll gladly use it.

And lo, there is a nicer alternative! Enter the beta distribution. It always spits out a number in [0, 1], so you can easily swap it in for the standard normal function, but it takes two “shape” parameters α and β that alter its behavior fairly dramatically.

With α = β = 1, the beta distribution is uniform, i.e. no different from rand(). As α increases, the distribution skews towards the right, and as β increases, the distribution skews towards the left. If α = β, the whole thing is symmetric with a hump in the middle. The higher either one gets, the more extreme the hump (meaning that value is far more common than any other). With a little fiddling, you can get a number of interesting curves.

Screenshots don’t really do it justice, so here’s a little Wolfram widget that lets you play with α and β live:

Note that if α = 1, then 1 is a possible value; if β = 1, then 0 is a possible value. You probably want them both greater than 1, which clamps the endpoints to zero.

Also, it’s possible to have either α or β or both be less than 1, but this creates very different behavior: the corresponding endpoints become poles.

Anyway, something like α = β = 3 is probably close enough to normal for most purposes but already clamped for you. And you could easily replicate something like, say, NetHack’s incredibly bizarre rnz function.
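Python happens to ship this in the random module as betavariate, so trying it out is a one-liner (using α = β = 3, as suggested):

```python
import random

def clamped_bell():
    # Symmetric hump in the middle, always within [0, 1], no infinite tails.
    return random.betavariate(3, 3)

# Scale and shift to taste, e.g. a damage multiplier between 0.5 and 1.5:
multiplier = 0.5 + clamped_bell()
```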

### Random frequency

Say you want some event to have an 80% chance to happen every second. You (who am I kidding, I) might be tempted to do something like this:

```python
if random() < 0.8 * dt:
    do_thing()
```

In an ideal world, dt is always the same and is equal to 1 / f, where f is the framerate. Replace that 80% with a variable, say P, and every tic you have a P / f chance to do the… whatever it is.

Each second, f tics pass, so you’ll make this check f times. The chance that any check succeeds is the complement of the chance that every check fails, which is $$1 - \left(1 - \frac{P}{f}\right)^f$$.

For P of 80% and a framerate of 60, that’s a total probability of 55.3%. Wait, what?

Consider what happens if the framerate is 2. On the first tic, you roll 0.4 twice — but probabilities are combined by multiplying, and splitting work up by dt only works for additive quantities. You lose some accuracy along the way. If you’re dealing with something that multiplies, you need an exponent somewhere.
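If you do want to split a per-second probability across tics, the failure chances multiply, so the exponent goes on the per-tic roll. A sketch:

```python
def per_tic_chance(p_per_second, dt):
    # Probability of at least one success during a slice dt of a second.
    # Over a full second, the failure chances multiply back out:
    # ((1 - p) ** dt) ** (1 / dt) == 1 - p, exactly as intended.
    return 1 - (1 - p_per_second) ** dt
```

With P = 0.8 and dt = 1/60, each tic’s chance is about 2.6%, and sixty such checks combine back to exactly 80%.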

But in this case, maybe you don’t want that at all. Each separate roll you make might independently succeed, so it’s possible (but very unlikely) that the event will happen 60 times within a single second! Or 200 times, if that’s someone’s framerate.

If you explicitly want something to have a chance to happen on a specific interval, you have to check on that interval. If you don’t have a gizmo handy to run code on an interval, it’s easy to do yourself with a time buffer:

```python
timer += dt
# here, 1 is the "every 1 seconds"
while timer > 1:
    timer -= 1
    if random() < 0.8:
        do_thing()
```

Using while means rolls still happen even if you somehow skipped over an entire second.

(For the curious, and the nerds who already noticed: the expression $$1 - \left(1 - \frac{P}{f}\right)^f$$ converges to a specific value! As the framerate increases, it becomes a better and better approximation for $$1 - e^{-P}$$, which for the example above is 0.551. Hey, 60 fps is pretty accurate — it’s just accurately representing something nowhere near what I wanted. Er, you wanted.)

### Rolling your own

Of course, you can fuss with the classic [0, 1] uniform value however you want. If I want a bias towards zero, I’ll often just square it, or multiply two of them together. If I want a bias towards one, I’ll take a square root. If I want something like a Gaussian/normal distribution, but with clearly-defined endpoints, I might add together n rolls and divide by n. (The normal distribution is just what you get if you roll infinite dice and divide by infinity!)

It’d be nice to be able to understand exactly what this will do to the distribution. Unfortunately, that requires some calculus, which this post is too small to contain, and which I didn’t even know much about myself until I went down a deep rabbit hole while writing, and which in many cases is straight up impossible to express directly.

Here’s the non-calculus bit. A source of randomness is often graphed as a PDF — a probability density function. You’ve almost certainly seen a bell curve graphed, and that’s a PDF. They’re pretty nice, since they do exactly what they look like: they show the relative chance that any given value will pop out. On a bog standard bell curve, there’s a peak at zero, and of course zero is the most common result from a normal distribution.

(Okay, actually, since the results are continuous, it’s vanishingly unlikely that you’ll get exactly zero — but you’re much more likely to get a value near zero than near any other number.)

For the uniform distribution, which is what a classic rand() gives you, the PDF is just a straight horizontal line — every result is equally likely.

If there were a calculus bit, it would go here! Instead, we can cheat. Sometimes. Mathematica knows how to work with probability distributions in the abstract, and there’s a free web version you can use. For the example of squaring a uniform variable, try this out:

```
PDF[TransformedDistribution[u^2, u \[Distributed] UniformDistribution[{0, 1}]], u]
```

(The \[Distributed] is a funny tilde that doesn’t exist in Unicode, but which Mathematica uses as a first-class operator. Also, press Shift-Enter to evaluate the line.)

This will tell you that the distribution is… $$\frac{1}{2\sqrt{u}}$$. Weird! You can plot it:

```
Plot[%, {u, 0, 1}]
```

(The % refers to the result of the last thing you did, so if you want to try several of these, you can just do Plot[PDF[…], u] directly.)

The resulting graph shows that numbers around zero are, in fact, vastly — infinitely — more likely than anything else.

What about multiplying two together? I can’t figure out how to get Mathematica to understand this, but a great amount of digging revealed that the answer is $$-\ln x$$, and from there you can plot them both on Wolfram Alpha. They’re similar, though squaring has a much better chance of giving you high numbers than multiplying two separate rolls — which makes some sense, since if either of two rolls is a low number, the product will be even lower.

What if you know the graph you want, and you want to figure out how to play with a uniform roll to get it? Good news! That’s a whole thing called inverse transform sampling. All you have to do is take an integral. Good luck!

This is all extremely ridiculous. New tactic: Just Simulate The Damn Thing. You already have the code; run it a million times, make a histogram, and tada, there’s your PDF. That’s one of the great things about computers! Brute-force numerical answers are easy to come by, so there’s no excuse for producing something like rnz. (Though, be sure your histogram has sufficiently narrow buckets — I tried plotting one for rnz once and the weird stuff on the left side didn’t show up at all!)
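The simulation really is only a few lines; here’s one possible sketch for the squared roll, using collections.Counter as the histogram:

```python
import random
from collections import Counter

def simulate_pdf(transform, trials=100_000, buckets=20):
    # Roll a lot, bucket the results, and print a sideways histogram.
    counts = Counter()
    for _ in range(trials):
        value = transform(random.random())
        counts[min(int(value * buckets), buckets - 1)] += 1
    for b in range(buckets):
        share = counts[b] / trials
        print(f"{b / buckets:4.2f} | {'#' * round(share * 200)}")

simulate_pdf(lambda u: u * u)
```

The tall bars pile up near zero, just like the $$\frac{1}{2\sqrt{u}}$$ plot predicts.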

By the way, I learned something from futzing with Mathematica here! Taking the square root (to bias towards 1) gives a PDF that’s a straight diagonal line, nothing like the hyperbola you get from squaring (to bias towards 0). How do you get a straight line the other way? Surprise: $$1 - \sqrt{1 - u}$$.

### Okay, okay, here’s the actual math

I don’t claim to have a very firm grasp on this, but I had a hell of a time finding it written out clearly, so I might as well write it down as best I can. This was a great excuse to finally set up MathJax, too.

Say $$u(x)$$ is the PDF of the original distribution and $$u$$ is a representative number you plucked from that distribution. For the uniform distribution, $$u(x) = 1$$. Or, more accurately,

$$u(x) = \begin{cases} 1 & \text{ if } 0 \le x \lt 1 \\ 0 & \text{ otherwise } \end{cases}$$

Remember that $$x$$ here is a possible outcome you want to know about, and the PDF tells you the relative probability that a roll will be near it. This PDF spits out 1 for every $$x$$, meaning every number between 0 and 1 is equally likely to appear.

We want to do something to that PDF, which creates a new distribution, whose PDF we want to know. I’ll use my original example of $$f(u) = u^2$$, which creates a new PDF $$v(x)$$.

The trick is that we need to work in terms of the cumulative distribution function for $$u$$. Where the PDF gives the relative chance that a roll will be (“near”) a specific value, the CDF gives the relative chance that a roll will be less than a specific value.

The conventions for this seem to be a bit fuzzy, and nobody bothers to explain which ones they’re using, which makes this all the more confusing to read about… but let’s write the CDF with a capital letter, so we have $$U(x)$$. In this case, $$U(x) = x$$, a straight 45° line (at least between 0 and 1). With the definition I gave, this should make sense. At some arbitrary point like 0.4, the value of the PDF is 1 (0.4 is just as likely as anything else), and the value of the CDF is 0.4 (you have a 40% chance of getting a number from 0 to 0.4).

Calculus ahoy: the PDF is the derivative of the CDF, which means it measures the slope of the CDF at any point. For $$U(x) = x$$, the slope is always 1, and indeed $$u(x) = 1$$. See, calculus is easy.

Okay, so, now we’re getting somewhere. What we want is the CDF of our new distribution, $$V(x)$$. The CDF is defined as the probability that a roll $$v$$ will be less than $$x$$, so we can literally write:

$$V(x) = P(v \le x)$$

(This is why we have to work with CDFs, rather than PDFs — a PDF gives the chance that a roll will be “nearby,” whatever that means. A CDF is much more concrete.)

What is $$v$$, exactly? We defined it ourselves; it’s the do something applied to a roll from the original distribution, or $$f(u)$$.

$$V(x) = P\!\left(f(u) \le x\right)$$

Now the first tricky part: we have to solve that inequality for $$u$$, which means we have to do something, backwards to $$x$$.

$$V(x) = P\!\left(u \le f^{-1}(x)\right)$$

Almost there! We now have a probability that $$u$$ is less than some value, and that’s the definition of a CDF!

$$V(x) = U\!\left(f^{-1}(x)\right)$$

Hooray! Now to turn these CDFs back into PDFs, all we need to do is differentiate both sides and use the chain rule. If you never took calculus, don’t worry too much about what that means!

$$v(x) = u\!\left(f^{-1}(x)\right)\left|\frac{d}{dx}f^{-1}(x)\right|$$

Wait! Where did that absolute value come from? It takes care of whether $$f(x)$$ increases or decreases. It’s the least interesting part here by far, so, whatever.

There’s one more magical part here when using the uniform distribution — $$u(\dots)$$ is always equal to 1, so that entire term disappears! (Note that this only works for a uniform distribution with a width of 1; PDFs are scaled so the entire area under them sums to 1, so if you had a rand() that could spit out a number between 0 and 2, the PDF would be $$u(x) = \frac{1}{2}$$.)

$$v(x) = \left|\frac{d}{dx}f^{-1}(x)\right|$$

So for the specific case of modifying the output of rand(), all we have to do is invert, then differentiate. The inverse of $$f(u) = u^2$$ is $$f^{-1}(x) = \sqrt{x}$$ (no need for a ± since we’re only dealing with positive numbers), and differentiating that gives $$v(x) = \frac{1}{2\sqrt{x}}$$. Done! This is also why square root comes out nicer; inverting it gives $$x^2$$, and differentiating that gives $$2x$$, a straight line.

Incidentally, that method for turning a uniform distribution into any distribution — inverse transform sampling — is pretty much the same thing in reverse: integrate, then invert. For example, when I saw that taking the square root gave $$v(x) = 2x$$, I naturally wondered how to get a straight line going the other way, $$v(x) = 2 - 2x$$. Integrating that gives $$2x - x^2$$, and then you can use the quadratic formula (or just ask Wolfram Alpha) to solve $$2x - x^2 = u$$ for $$x$$ and get $$f(u) = 1 - \sqrt{1 - u}$$.
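As a sanity check, that derived function does exactly what inverse transform sampling promises. A sketch:

```python
import math
import random

def sample_descending_line():
    # Inverse transform sampling for the PDF v(x) = 2 - 2x:
    # the CDF is V(x) = 2x - x^2, and solving V(x) = u for x
    # gives x = 1 - sqrt(1 - u).
    return 1 - math.sqrt(1 - random.random())
```

The mean of that distribution works out to 1/3 (it’s the integral of $$x(2 - 2x)$$ from 0 to 1), which is easy to confirm by averaging a pile of samples.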

Multiply two rolls is a bit more complicated; you have to write out the CDF as an integral and you end up doing a double integral and wow it’s a mess. The only thing I’ve retained is that you do a division somewhere, which then gets integrated, and that’s why it ends up as $$-\ln x$$.

And that’s quite enough of that! (Okay but having math in my blog is pretty cool and I will definitely be doing more of this, sorry, not sorry.)

## Random vs varied

Sometimes, random isn’t actually what you want. We tend to use the word “random” casually to mean something more like chaotic, i.e., with no discernible pattern. But that’s not really random. In fact, given how good humans can be at finding incidental patterns, they aren’t all that unlikely! Consider that when you roll two dice, they’ll come up either the same or only one apart almost half the time. Coincidence? Well, yes.

If you ask for randomness, you’re saying that any outcome — or series of outcomes — is acceptable, including five heads in a row or five tails in a row. Most of the time, that’s fine. Some of the time, it’s less fine, and what you really want is variety. Here are a couple examples and some fairly easy workarounds.

### NPC quips

The nature of games is such that NPCs will eventually run out of things to say, at which point further conversation will give the player a short brush-off quip — a slight nod from the designer to the player that, hey, you hit the end of the script.

Some NPCs have multiple possible quips and will give one at random. The trouble with this is that it’s very possible for an NPC to repeat the same quip several times in a row before abruptly switching to another one. With only a few options to choose from, getting the same option twice or thrice (especially across an entire game, which may have numerous NPCs) isn’t all that unlikely. The notion of an NPC quip isn’t very realistic to start with, but having someone repeat themselves and then abruptly switch to something else is especially jarring.

The easy fix is to show the quips in order! Paradoxically, this is more consistently varied than choosing at random — the original “order” is likely to be meaningless anyway, and it already has the property that the same quip can never appear twice in a row.

If you like, you can shuffle the list of quips every time you reach the end, but take care here — it’s possible that the last quip in the old order will be the same as the first quip in the new order, so you may still get a repeat. (Of course, you can just check for this case and swap the first quip somewhere else if it bothers you.)

That last behavior is, in fact, the canonical way that Tetris chooses pieces — the game simply shuffles a list of all 7 pieces, gives those to you in shuffled order, then shuffles them again to make a new list once it’s exhausted. There’s no avoidance of duplicates, though, so you can still get two S blocks in a row, or even two S and two Z all clumped together, but no more than that. Some Tetris variants take other approaches, such as actively avoiding repeats even several pieces apart or deliberately giving you the worst piece possible.
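A shuffle bag is only a few lines; here’s a sketch that also does the optional check against a repeat across the reshuffle boundary:

```python
import random

class ShuffleBag:
    """Serve items in shuffled order, reshuffling when exhausted and
    avoiding a back-to-back repeat across the reshuffle."""

    def __init__(self, items):
        self.items = list(items)
        self.queue = []
        self.last = None

    def next(self):
        if not self.queue:
            self.queue = self.items[:]
            random.shuffle(self.queue)
            # The next item served is queue[-1]; if it matches the last
            # item from the previous bag, swap it somewhere else.
            if len(self.queue) > 1 and self.queue[-1] == self.last:
                self.queue[0], self.queue[-1] = self.queue[-1], self.queue[0]
        self.last = self.queue.pop()
        return self.last
```

Feed it the seven Tetris pieces (or a list of quips) and no two consecutive draws will ever match, while every item still appears exactly once per cycle.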

### Random drops

Random drops are often implemented as a flat chance each time. Maybe enemies have a 5% chance to drop health when they die. Statistically speaking, over the long term, a player will see health drops for about 5% of enemy kills.

Over the short term, they may be desperate for health and not survive to see the long term. So you may want to put a thumb on the scale sometimes. Games in the Metroid series, for example, have a somewhat infamous bias towards whatever kind of drop they think you need — health if your health is low, missiles if your missiles are low.

I can’t give you an exact approach to use, since it depends on the game and the feeling you’re going for and the variables at your disposal. In extreme cases, you might want to guarantee a health drop from a tough enemy when the player is critically low on health. (Or if you’re feeling particularly evil, you could go the other way and deny the player health when they most need it…)

The problem becomes a little different, and worse, when the event that triggers the drop is relatively rare. The pathological case here would be something like a raid boss in World of Warcraft, which requires hours of effort from a coordinated group of people to defeat, and which has some tiny chance of dropping a good item that will go to only one of those people. This is why I stopped playing World of Warcraft at 60.

Dialing it back a little bit gives us Enter the Gungeon, a roguelike where each room is a set of encounters and each floor only has a dozen or so rooms. Initially, you have a 1% chance of getting a reward after completing a room — but every time you complete a room and don’t get a reward, the chance increases by 9%, up to a cap of 80%. Once you get a reward, the chance resets to 1%.

The natural question is: how frequently, exactly, can a player expect to get a reward? We could do math, or we could Just Simulate The Damn Thing.

```python
from collections import Counter
import random

histogram = Counter()

TRIALS = 1000000
chance = 1
rooms_cleared = 0
rewards_found = 0
while rewards_found < TRIALS:
    rooms_cleared += 1
    if random.random() * 100 < chance:
        # Reward!
        rewards_found += 1
        histogram[rooms_cleared] += 1
        rooms_cleared = 0
        chance = 1
    else:
        chance = min(80, chance + 9)

for gaps, count in sorted(histogram.items()):
    print(f"{gaps:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))
```
```
  1 |   0.98%
  2 |   9.91% #########
  3 |  17.00% ################
  4 |  20.23% ####################
  5 |  19.21% ###################
  6 |  15.05% ###############
  7 |   9.69% #########
  8 |   5.07% #####
  9 |   2.09% ##
 10 |   0.63%
 11 |   0.12%
 12 |   0.03%
 13 |   0.00%
 14 |   0.00%
 15 |   0.00%
```

We’ve got kind of a hilly distribution, skewed to the left, which is up in this histogram. Most of the time, a player should see a reward every three to six rooms, which is maybe twice per floor. It’s vanishingly unlikely to go through a dozen rooms without ever seeing a reward, so a player should see at least one per floor.

Of course, this simulated a single continuous playthrough; when starting the game from scratch, your chance at a reward always starts fresh at 1%, the worst it can be. If you want to know about how many rewards a player will get on the first floor, hey, Just Simulate The Damn Thing.
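Capping the same loop at twelve rooms per run gives a first-floor version; here’s a sketch (assuming exactly 12 rooms, and a smaller trial count for brevity):

```python
import random
from collections import Counter

TRIALS = 100_000
histogram = Counter()

for _ in range(TRIALS):
    chance = 1
    rewards = 0
    for _ in range(12):  # one floor's worth of rooms
        if random.random() * 100 < chance:
            rewards += 1
            chance = 1
        else:
            chance = min(80, chance + 9)
    histogram[rewards] += 1

for rewards, count in sorted(histogram.items()):
    print(f"{rewards:3d} | {count / TRIALS * 100:6.2f}%", '#' * (count // (TRIALS // 100)))
```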

```
  0 |   0.01%
  1 |  13.01% #############
  2 |  56.28% ########################################################
  3 |  27.49% ###########################
  4 |   3.10% ###
  5 |   0.11%
  6 |   0.00%
```

Cool. Though, that’s assuming exactly 12 rooms; it might be worth changing that to pick at random in a way that matches the level generator.

(Enter the Gungeon does some other things to skew probability, which is very nice in a roguelike where blind luck can make or break you. For example, if you kill a boss without having gotten a new gun anywhere else on the floor, the boss is guaranteed to drop a gun.)

### Critical hits

I suppose this is the same problem as random drops, but backwards.

Say you have a battle sim where every attack has a 6% chance to land a devastating critical hit. Presumably the same rules apply to both the player and the AI opponents.

Consider, then, that the AI opponents have exactly the same 6% chance to ruin the player’s day. Consider also that this gives them about a 0.36% chance to critical hit twice in a row. That doesn’t sound like much, but across an entire playthrough, it’s not unlikely that a player will see it happen and find it incredibly annoying.

Perhaps it would be worthwhile to explicitly forbid AI opponents from getting consecutive critical hits.
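One minimal way to do that, as a sketch (the per-attacker state dict is just illustrative):

```python
import random

CRIT_CHANCE = 0.06

def roll_crit(attacker):
    # Never allow two crits in a row from the same attacker.
    crit = not attacker['last_was_crit'] and random.random() < CRIT_CHANCE
    attacker['last_was_crit'] = crit
    return crit
```

Note that this slightly lowers the effective crit rate (to roughly 0.06/1.06 ≈ 5.7%), which you could compensate for by nudging the base chance up.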

## In conclusion

An emerging theme here has been to Just Simulate The Damn Thing. So consider Just Simulating The Damn Thing. Even a simple change to a random value can do surprising things to the resulting distribution, so unless you feel like differentiating the inverse function of your code, maybe test out any non-trivial behavior and make sure it’s what you wanted. Probability is hard to reason about.

# Weekly roundup: Anise’s very own video game

Post Syndicated from Eevee original https://eev.ee/dev/2018/01/01/weekly-roundup-anises-very-own-video-game/

Happy new year! 🎆

In an unprecedented move, I did one thing for an entire calendar week. I say “unprecedented” but I guess the same thing happened with fox flux. And NEON PHASE. Hmm. Sensing a pattern. See if you can guess what the one thing was!

• anise!!: Wow! It’s Anise! The game has come so far that I can’t even believe that any of this was a recent change. I made monster AI vastly more sensible, added a boatload of mechanics, fleshed out more than half the map (and sketched out the rest), and drew and implemented most of a menu with a number of excellent goodies. Also, FINALLY (after a full year of daydreaming about it), eliminated the terrible “clock” structure I invented for collision detection, as well as cut down on a huge source of completely pointless allocations, which sped physics up in general by at least 10% and cut GC churn significantly. Hooray! And I’ve done even more just in the last day and a half. Still a good bit of work left, but this game is gonna be fantastic.

• art: Oh right I tried drawing a picture but I didn’t like it so I stopped.

I have some writing to catch up on — I have several things 80% written, but had to stop because I was just starting to get a cold and couldn’t even tell if my own writing was sensible any more. And then I had to work on a video game about my cat. Sorry. Actually, not sorry, video games about my cat are always top priority. You knew what you were signing up for.

# All the lights, all of the twinkly lights

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/all-of-the-lights/

Twinkly lights are to Christmas what pumpkins are to Halloween. And when you add a Raspberry Pi to your light show, the result instantly goes from “Meh, yeah.” to “OMG, wow!”

Here are some cool light-based Christmas projects to inspire you this weekend.

## App-based light control

#### Christmas Tree Lights Demo

Project Code – https://github.com/eidolonFIRE/Christmas-Lights Raspberry Pi A+ ws2812b – https://smile.amazon.com/gp/product/B01H04YAIQ/ref=od_aui_detailpages00?ie=UTF8&psc=1 200w 5V supply – https://smile.amazon.com/gp/product/B01LZRIWZD/ref=od_aui_detailpages01?ie=UTF8&psc=1

In his Christmas lights project, Caleb Johnson uses an app as a control panel to switch between predefined displays. The full code is available on his GitHub, and it connects a Raspberry Pi A+ to a strip of programmable LEDs that change their pattern at the touch of a phone screen.

What’s great about this project, aside from the simplicity of its design, is the scope for extending it. Why not share the app with friends and family, allowing them to control your lights remotely? Or link the lights to social media so they are triggered by a specific hashtag, like in Alex Ellis’ #cheerlights project below.

## Outdoor decorations

#### DIY musical Xmas lights for beginners with raspberry pi

With just a few bucks of extra material, I walk you through converting your regular Christmas lights into a whole-house light show. The goal here is to go from scratch. Although this guide is intended for people who don’t know how to use linux at all and those who do alike, the focus is for people for whom linux and the raspberry pi are a complete mystery.

Looking to outdo your neighbours with your Christmas light show this year? YouTuber Makin’Things has created a beginner’s guide to setting up a Raspberry Pi–based musical light show for your facade, complete with information on soldering, wiring, and coding.

Once you’ve wrapped your house in metres and metres of lights and boosted your speakers so they can be heard for miles around, why not incorporate #cheerlights to make your outdoor decor interactive?

Still not enough? How about controlling your lights using a drum kit? Christian Kratky’s MIDI-Based Christmas Lights Animation system (or as I like to call it, House Rock) does exactly that.

#### Eye Of The Tiger (MIDI based christmas lights animation system prototype)

Project documentation and source code: https://www.hackster.io/cyborg-titanium-14/light-pi-1c88b0 The song is taken from: https://www.youtube.com/watch?v=G6r1dAire0Y

## Any more?

We know these projects are just the tip of the iceberg when it comes to the Raspberry Pi–powered Christmas projects out there, and as always, we’d love you to share yours with us. So post a link in the comments below, or tag us on social media when posting your build photos, videos, and/or blog links. ‘Tis the season for sharing after all.

The post All the lights, all of the twinkly lights appeared first on Raspberry Pi.

# Presenting AWS IoT Analytics: Delivering IoT Analytics at Scale and Faster than Ever Before

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/launch-presenting-aws-iot-analytics/

One of the technology areas I thoroughly enjoy is the Internet of Things (IoT). Even as a child I used to infuriate my parents by taking apart the toys they would purchase for me to see how they worked and if I could somehow put them back together. It seems somehow I was destined to end up in the tough and ever-changing world of technology. Therefore, it’s no wonder that I am really enjoying learning and tinkering with IoT devices and technologies. It combines my love of development and software engineering with my curiosity around circuits, controllers, and other facets of the electrical engineering discipline; even though an electrical engineer I cannot claim to be.

Despite all of the information that is collected by the deployment of IoT devices and solutions, I honestly never really thought about the need to analyze, search, and process this data until I came up against a scenario where it became of the utmost importance to be able to search and query through loads of sensor data for anomalies. Of course, I understood the importance of analytics for businesses to make accurate decisions and predictions to drive the organization’s direction. But it didn’t occur to me initially how important it was to make analytics an integral part of my IoT solutions. Well, I learned my lesson just in time, because this re:Invent, a service is launching to make it easier for anyone to process and analyze IoT messages and device data.

Hello, AWS IoT Analytics! AWS IoT Analytics is a fully managed service of AWS IoT that provides advanced analysis of the data collected from your IoT devices. With the AWS IoT Analytics service, you can process messages, gather and store large amounts of device data, as well as query your data. The new AWS IoT Analytics service also integrates with Amazon QuickSight for visualization of your data, and brings the power of machine learning through integration with Jupyter Notebooks.

Benefits of AWS IoT Analytics

• Helps with predictive analysis of data by providing access to pre-built analytical functions
• Provides the ability to visualize analytical output from the service
• Provides tools to clean up data
• Can help identify patterns in the gathered data

Be In the Know: IoT Analytics Concepts

• Channel: archives the raw, unprocessed messages and collects data from MQTT topics.
• Pipeline: consumes messages from channels and allows message processing.
• Activities: perform transformations on your messages, including filtering attributes and invoking Lambda functions for advanced processing.
• Data Store: Used as a queryable repository for processed messages. Provides the ability to have multiple data stores for messages coming from different devices or locations, or filtered by message attributes.
• Data Set: Data retrieval view from a data store, which can be generated on a recurring schedule.

Getting Started with AWS IoT Analytics

First, I’ll create a channel to receive incoming messages.  This channel can be used to ingest data sent to the channel via MQTT or messages directed from the Rules Engine. To create a channel, I’ll select the Channels menu option and then click the Create a channel button.

I’ll name my channel TaraIoTAnalyticsID and give the Channel an MQTT topic filter of Temperature. To complete the creation of my channel, I will click the Create Channel button.

Now that I have my Channel created, I need to create a Data Store to receive and store the messages received on the Channel from my IoT device. Remember, you can set up multiple Data Stores for more complex solution needs, but I’ll just create one Data Store for my example. I’ll select Data Stores from the menu panel and click Create a data store.

I’ll name my Data Store TaraDataStoreID, and once I click the Create data store button, I will have successfully set up a Data Store to house messages coming from my Channel.

Now that I have my Channel and my Data Store, I will need to connect the two using a Pipeline. I’ll create a simple pipeline that just connects my Channel and Data Store, but you can create a more robust pipeline to process and filter messages by adding Pipeline activities like a Lambda activity.

To create a pipeline, I’ll select the Pipelines menu option and then click the Create a pipeline button.

I will not add an Attribute for this pipeline, so I will click the Next button.

As we discussed, there are additional pipeline activities that I can add to my pipeline for the processing and transformation of messages, but I will keep my first pipeline simple and hit the Next button.

The final step in creating my pipeline is for me to select my previously created Data Store and click Create Pipeline.

All that is left for me to take advantage of the AWS IoT Analytics service is to create an IoT rule that sends data to an AWS IoT Analytics channel. Wow, setting up analytics for IoT devices was a super easy process.
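The console walkthrough above maps to roughly this sequence of CLI calls. This is a sketch only: the resource names reuse the ones from the post, and the exact shape of the `--pipeline-activities` JSON is an assumption worth checking against the AWS CLI reference for `iotanalytics`:

```shell
# Sketch: create a channel, a data store, and a pipeline joining them.
# Verify flag names and the activities JSON with `aws iotanalytics help`.
aws iotanalytics create-channel \
    --channel-name TaraIoTAnalyticsID

aws iotanalytics create-datastore \
    --datastore-name TaraDataStoreID

# A minimal pipeline: a channel activity feeding a datastore activity.
aws iotanalytics create-pipeline \
    --pipeline-name TaraPipeline \
    --pipeline-activities '[
      {"channel": {"name": "from_channel", "channelName": "TaraIoTAnalyticsID", "next": "to_store"}},
      {"datastore": {"name": "to_store", "datastoreName": "TaraDataStoreID"}}
    ]'
```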

If I wanted to create a Data Set as a result of queries run against my data for visualization with Amazon QuickSight, or integrate with Jupyter Notebooks to perform more advanced analytical functions, I can choose the Analyze menu option to bring up the screens to create data sets and access the Jupyter Notebook instances.

Summary

As you can see, it was a very simple process to set up advanced data analysis for AWS IoT. With AWS IoT Analytics, you have the ability to collect, visualize, process, query, and store large amounts of data generated by your AWS IoT connected devices. Additionally, you can access the AWS IoT Analytics service in a myriad of different ways: the AWS Command Line Interface (AWS CLI), the AWS IoT API, language-specific AWS SDKs, and AWS IoT Device SDKs.

AWS IoT Analytics is available today for you to dig into the analysis of your IoT data. To learn more about AWS IoT and AWS IoT Analytics go to the AWS IoT Analytics product page and/or the AWS IoT documentation.

Tara

# Weekly roundup: Pedal to the medal

Post Syndicated from Eevee original https://eev.ee/dev/2017/11/09/weekly-roundup-pedal-to-the-medal/

Hi! Sorry. I’m a bit late. I’ve actually been up to my eyeballs in doing stuff for a few days, which has been pretty cool.

• fox flux: Definitely been ramping up how much I’m working on this game. Finished another landing animation blah blah player sprites. Some more work on visual effects, this time a cool silhouette stencil effect thing.

• art: Drew a pic celebrating 1000 followers on my nsfw art Twitter, wow!

• blog: Wrote half of another cross-cutting programming languages post, for October. Then forgot about it for, uhhh, ten days. Whoops! Will definitely get back to that, um, soon.

• writing: Actually made some “good ass legit progress” (according to my notes) on the little Flora twine I’m writing, now including some actual prose instead of just JavaScript wankery.

• bots: I added a bunch more patterns to my Perlin noise Twitter bot and finally implemented a little “masking” thing that will let me make more complex patterns while still making it obvious what they’re supposed to be.

Alas, while Twitter recently bumped the character limit to 280, that doesn’t mean the bot’s output can now be twice as big — emoji now count as two characters. (No, not because of UTF-16; Twitter is deliberately restricting CJK to 140. It’s super weird.)

• cc: I got undo working with this accursèd sprite animation UI, and I fixed just a whole mess of bugs.

This week has been even more busy, which I think bodes well. I’m up to a lot of stuff, hope you’re looking forward to it!

# Welcome Carlo!

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-carlo/

As Backblaze continues to grow, we need to keep our web experience on point, so we put out a call for creative folks who can help us make the Backblaze experience all that it can be. We found Carlo! He’s a frontend web developer who used to work at Sea World. Let’s learn a bit more about Carlo, shall we?

Senior Frontend Developer

Where are you originally from?
I grew up in San Diego, California.

What attracted you to Backblaze?
I am excited that frontend architecture is approaching parity with the rest of the web services software development ecosystem. Most of my experience has been full stack development, but I have recently started focusing on the front end. Backblaze shares my goal of having a first class user experience using frameworks like React.

What do you expect to learn while being at Backblaze?
I’m interested in building solutions that help customers visualize and work with their data intuitively and efficiently.

Where else have you worked?
GoPro, Sungevity, and Sea World.

Hip Hop dressage choreographer.

Favorite place you’ve traveled?
The Arctic in Northern Finland, in a train in a boat sailing the gap between Germany and Denmark, and Vieques PR.

Favorite hobby?
Sketching, writing, and dressing up my hairless dogs.

Of what achievement are you most proud?
It’s either helping release a large SOA site, or orchestrating a Morrissey cover band flash mob #squadgoals. OK, maybe one of those things didn’t happen…

Star Trek or Star Wars?
Interstellar!

Favorite food?
Mexican food.

Coke or Pepsi?
Ginger beer.

Why do you like certain things?
Things that I like bring me joy a la Marie Kondo.

Anything else you’d like to tell us?
¯\_(ツ)_/¯

Wow, hip hop dressage choreographer — that is amazing. Welcome aboard Carlo!

The post Welcome Carlo! appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

# Hacker House’s gesture-controlled holographic visualiser

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/hacker-house-holographic-visualiser/

YouTube makers Hacker House are back with a beautiful Flick-controlled holographic music visualiser that we’d really like to have at Pi Towers, please and thank you.

#### Make a Holographic Audio Visualizer with Gesture Control

Find all the code and materials on: https://www.hackster.io/hackerhouse/holographic-audio-visualizer-with-motion-control-e72fee A 3D holographic audio visualizer with gesture control can definitely spice up your party and impress your friends. This display projects an image from a monitor down onto an acrylic pyramid, or “frustum”, which then creates a 3D effect.

You may have seen a similar trick for creating holograms in this tutorial by American Hacker:

#### How To Make 3D Hologram Projector – No Glasses

Who will know that from plastic cd case we can make mini 3d hologram generator and you can watch 3d videos without glasses.

The illusion works due to the way in which images reflect off a flat-topped pyramid or frustum, to use its proper name. In the wonderful way they always do, the residents of Hacker House have now taken this trick one step further.

Using an LCD monitor, 3D-printed parts, a Raspberry Pi, and a Flick board, the Hacker House team has produced a music visualiser truly worthy of being on display.

The Pi Supply Flick is a 3D-tracking and gesture board for your Raspberry Pi, enabling you to channel your inner Jedi and control devices with a mere swish of your hand. As the Hacker House makers explain, in this music player project, there are various ways in which you could control the playlist, visualisation, and volume. However, using the Flick adds a wow-factor that we highly approve of.

The music and visualisations are supplied by a Mac running node.js. As the Raspberry Pi is running on the same network as the Mac, it can communicate with it via HTTP requests.

The Pi processes incoming commands from the Flick board, and in response sends requests to the Mac. Swipe upward above the Flick board, for example, and the Raspberry Pi will request a change of visualisation. Swipe right, and the song will change.
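The gesture-to-request dispatch might look something like this on the Pi side. This is a hypothetical sketch: the endpoint paths, hostname, and port are invented for illustration, and the real code (including the actual Flick integration) lives on the hackster.io project page:

```python
# Hypothetical sketch of the Pi-side dispatch: each Flick gesture maps to an
# HTTP request against the Mac's node.js server. The endpoint paths, host,
# and port are invented; the real code is on hackster.io.
ENDPOINTS = {
    "swipe_up": "/visualisation/next",
    "swipe_right": "/song/next",
    "swipe_left": "/song/previous",
}

def handle_gesture(gesture, host="http://mac.local:3000"):
    """Return the URL to request for a gesture, or None to ignore it."""
    path = ENDPOINTS.get(gesture)
    if path is None:
        return None
    # The real project would fire the request here, e.g. with
    # urllib.request.urlopen(host + path); we just return the URL.
    return host + path
```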

As for the hologram itself, it is formed on an acrylic pyramid sitting below an LCD screen. Images on the screen reflect off the three sides of the pyramid, creating the illusion of a three-dimensional image within. Standard hocus pocus trickery.

Full details on the holographic visualiser, including the scripts, can be found on the hackster.io project page. And if you make your own, we’d love to see it.

Using ideas from this Hacker House build and the American Hacker tutorial, our maker community is bound to create amazing things with the Raspberry Pi, holograms, and tricks of the eye. We’re intrigued to see what you come up with!

For inspiration, another example of a Raspberry Pi optical illusion project is Brian Corteil’s Digital Zoetrope:

Are you up for the challenge of incorporating optical illusions into your Raspberry Pi builds? Share your project ideas and creations in the comments below!

The post Hacker House’s gesture-controlled holographic visualiser appeared first on Raspberry Pi.

# Some notes on the KRACK attack

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/10/some-notes-on-krack-attack.html

This is my interpretation of the KRACK attacks paper that describes a way of decrypting encrypted WiFi traffic with an active attack.

tl;dr: Wow. Everyone needs to be afraid. (Well, worried — not panicked.) It means in practice, attackers can decrypt a lot of wifi traffic, with varying levels of difficulty depending on your precise network setup. My post last July about the DEF CON network being safe was in error.

### Details

This is not a crypto bug but a protocol bug (a pretty obvious and trivial protocol bug).
When a client connects to the network, the access-point will at some point send random “key” data to use for encryption. Because this packet may be lost in transmission, it can be repeated many times.
What the hacker does is repeatedly resend this packet, potentially hours later. Each time it does so, it resets the “keystream” back to its starting conditions. The obvious patch that device vendors will make is to only accept the first such packet received and ignore all the duplicates.
At this point, the protocol bug becomes a crypto bug. We know how to break crypto when we have two keystreams from the same starting position. It’s not always reliable, but reliable enough that people need to be afraid.
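To see why two keystreams from the same starting position are so dangerous, here’s a toy Python demonstration of keystream reuse. This is a generic stream-cipher sketch, not WPA2’s actual cipher:

```python
import os

def xor(a, b):
    """XOR two byte strings, truncated to the shorter length."""
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(64)  # stand-in for the keystream the cipher generates
p1 = b"attack at dawn, channel nine"
p2 = b"PASSWORD=hunter2 DO NOT SHARE"

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# XORing two ciphertexts encrypted under the same keystream cancels the
# keystream entirely, leaving only the XOR of the two plaintexts:
leaked = xor(c1, c2)

# If the attacker knows or guesses one plaintext, the other falls out:
recovered = xor(leaked, p1)
```

No key material is needed at any point; that is why forcing the keystream back to its starting position turns a protocol bug into a practical decryption attack.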
Android, though, is the biggest danger. Rather than simply replaying the packet, a packet with key data of all zeroes can be sent. This allows attackers to set up a fake WiFi access-point and man-in-the-middle all traffic.
In a related case, the access-point/base-station can sometimes also be attacked, affecting the stream sent to the client.
Not only is sniffing possible, but in some limited cases, injection. This allows the traditional attack of adding bad code to the end of HTML pages in order to trick users into installing a virus.

This is an active attack, not a passive attack, so in theory, it’s detectable.

### Who is vulnerable?

Everyone, pretty much.
The hacker doesn’t need to be logged into your network.
It affects all WPA1/WPA2, the personal one with passwords that we use in home, and the enterprise version with certificates we use in enterprises.
It can’t defeat SSL/TLS or VPNs. Thus, if you feel your laptop is safe surfing the public WiFi at airports, then your laptop is still safe from this attack. With Android, it does allow running tools like sslstrip, which can fool many users.
Your home network is vulnerable. Many devices will be using SSL/TLS, so are fine, like your Amazon Echo, which you can continue to use without worrying about this attack. Other devices, like your Philips lightbulbs, may not be so protected.

### How can I defend myself?

Patch.
More to the point, measure your current vendors by how long it takes them to patch. Throw away gear by those vendors that took a long time to patch and replace it with vendors that took a short time.
High-end access-points that contain “WIPS” (WiFi Intrusion Prevention Systems) features should be able to detect this and block vulnerable clients from connecting to the network (once the vendor upgrades the systems, of course). Even low-end access-points, like the $30 ones you get for home, can easily be updated to prevent packet sequence numbers from going back to the start (i.e. from the keystream resetting back to the start).

At some point, you’ll need to run the attack against yourself, to make sure all your devices are secure. Since you’ll be constantly allowing random phones to connect to your network, you’ll need to check their vulnerability status before connecting them. You’ll need to continue doing this for several years.

Of course, if you are using SSL/TLS for everything, then your danger is mitigated. This is yet another reason why you should be using SSL/TLS for internal communications.

Most security vendors will add things to their products/services to defend you. While valuable in some cases, it’s not a defense. The defense is patching the devices you know about, and preventing vulnerable devices from attaching to your network.

If I remember correctly, DEF CON uses Aruba. Aruba contains WIPS functionality, which means by the time DEF CON rolls around again next year, they should have the feature to deny vulnerable devices from connecting, and specifically to detect an attack in progress and prevent further communication. However, for an attacker near an Android device using a low-powered WiFi, it’s likely they will be able to conduct a man-in-the-middle without any WIPS preventing them.

# Coaxing 2D platforming out of Unity

Post Syndicated from Eevee original https://eev.ee/blog/2017/10/13/coaxing-2d-platforming-out-of-unity/

An anonymous donor asked a question that I can’t even begin to figure out how to answer, but they also said anything else is fine, so here’s anything else.
I’ve been avoiding writing about game physics, since I want to save it for ✨ the book I’m writing ✨, but that book will almost certainly not touch on Unity. Here, then, is a brief run through some of the brick walls I ran into while trying to convince Unity to do 2D platforming.

This is fairly high-level — there are no blocks of code or helpful diagrams. I’m just getting this out of my head because it’s interesting. If you want more gritty details, I guess you’ll have to wait for ✨ the book ✨.

## The setup

I hadn’t used Unity before. I hadn’t even used a “real” physics engine before. My games so far have mostly used LÖVE, a Lua-based engine. LÖVE includes box2d bindings, but for various reasons (not all of them good), I opted to avoid them and instead write my own physics completely from scratch. (How, you ask? ✨ Book ✨!)

I was invited to work on a Unity project, Chaos Composer, that someone else had already started. It had basic movement already implemented; I taught myself Unity’s physics system by hacking on it. It’s entirely possible that none of this is actually the best way to do anything, since I was really trying to reproduce my own homegrown stuff in Unity, but it’s the best I’ve managed to come up with.

Two recurring snags were that you can’t ask Unity to do multiple physics updates in a row, and sometimes getting the information I wanted was difficult. Working with my own code spoiled me a little, since I could invoke it at any time and ask it anything I wanted; Unity, on the other hand, is someone else’s black box with a rigid interface on top.

Also, wow, Googling for a lot of this was not quite as helpful as expected. A lot of what’s out there is just the first thing that works, and often that’s pretty hacky and imposes severe limits on the game design (e.g., “this won’t work with slopes”). Basic movement and collision are the first thing you do, which seems to me like the worst time to be locking yourself out of a lot of design options.
I tried very (very, very, very) hard to minimize those kinds of constraints.

## Problem 1: Movement

When I showed up, movement was already working. Problem solved!

Like any good programmer, I immediately set out to un-solve it. Given a “real” physics engine like Unity prominently features, you have two options: ⓐ treat the player as a physics object, or ⓑ don’t. The existing code went with option ⓑ, like I’d done myself with LÖVE, and like I’d seen countless people advise. Using a physics sim makes for bad platforming.

But… why? I believed it, but I couldn’t concretely defend it. I had to know for myself. So I started a blank project, drew some physics boxes, and wrote a dozen-line player controller.

Ah! Immediate enlightenment.

If the player was sliding down a wall, and I tried to move them into the wall, they would simply freeze in midair until I let go of the movement key. The trouble is that the physics sim works in terms of forces — moving the player involves giving them a nudge in some direction, like a giant invisible hand pushing them around the level. Surprise! If you press a real object against a real wall with your real hand, you’ll see the same effect — friction will cancel out gravity, and the object will stay in midair.

Platformer movement, as it turns out, doesn’t make any goddamn physical sense. What is air control? What are you pushing against? Nothing, really; we just have it because it’s nice to play with, because not having it is a nightmare.

I looked to see if there were any common solutions to this, and I only really found one: make all your walls frictionless.

Game development is full of hacks like this, and I… don’t like them. I can accept that minor hacks are necessary sometimes, but this one makes an early and widespread change to a fundamental system to “fix” something that was wrong in the first place.
It also imposes an “invisible” requirement, something I try to avoid at all costs — if you forget to make a particular wall frictionless, you’ll never know unless you happen to try sliding down it.

And so, I swiftly returned to the existing code.

It wasn’t too different from what I’d come up with for LÖVE: it applied gravity by hand, tracked the player’s velocity, computed the intended movement each frame, and moved by that amount. The interesting thing was that it used MovePosition, which schedules a movement for the next physics update and stops the movement if the player hits something solid.

It’s kind of a nice hybrid approach, actually; all the “physics” for conscious actors is done by hand, but the physics engine is still used for collision detection. It’s also used for collision rejection — if the player manages to wedge themselves several pixels into a solid object, for example, the physics engine will try to gently nudge them back out of it with no extra effort required on my part. I still haven’t figured out how to get that to work with my homegrown stuff, which is built to prevent overlap rather than to jiggle things out of it.

## But wait, what about…

Our player is a dynamic body with rotation lock and no gravity. Why not just use a kinematic body?

I must be missing something, because I do not understand the point of kinematic bodies. I ran into this with Godot, too, which documented them the same way: as intended for use as players and other manually-moved objects. But by default, they don’t even collide with other kinematic bodies or static geometry. What?

There’s a checkbox to turn this on, which I enabled, but then I found out that MovePosition doesn’t stop kinematic bodies when they hit something, so I would’ve had to cast along the intended path of movement to figure out when to stop, thus duplicating the same work the physics engine was about to do.

But that’s impossible anyway! Static geometry generally wants to be made of edge colliders, right?
They don’t care about concave/convex. Imagine the player is standing on the ground near a wall and tries to move towards the wall. Both the ground and the wall are different edges from the same edge collider.

If you try to cast the player’s hitbox horizontally, parallel to the ground, you’ll only get one collision: the existing collision with the ground. Casting doesn’t distinguish between touching and hitting. And because Unity only reports one collision per collider, and because the ground will always show up first, you will never find out about the impending wall collision.

So you’re forced to either use raycasts for collision detection or decomposed polygons for world geometry, both of which are slightly worse tools for no real gain.

I ended up sticking with a dynamic body.

Oh, one other thing that doesn’t really fit anywhere else: keep track of units! If you’re adding something called “velocity” directly to something called “position”, something has gone very wrong. Acceleration is distance per time squared; velocity is distance per time; position is distance. You must multiply or divide by time to convert between them. I never even, say, add a constant directly to position every frame; I always phrase it as velocity and multiply by Δt. It keeps the units consistent: time is always in seconds, not in tics.

## Problem 2: Slopes

Ah, now we start to get off in the weeds.

A sort of pre-problem here was detecting whether we’re on a slope, which means detecting the ground. The codebase originally used a manual physics query of the area around the player’s feet to check for the ground, which seems to be somewhat common, but that can’t tell me the angle of the detected ground. (It’s also kind of error-prone, since “around the player’s feet” has to be specified by hand and may not stay correct through animations or changes in the hitbox.)
I replaced that with what I’d eventually settled on in LÖVE: detect the ground by detecting collisions, and looking at the normal of the collision. A normal is a vector that points straight out from a surface, so if you’re standing on the ground, the normal points straight up; if you’re on a 10° incline, the normal points 10° away from straight up.

Not all collisions are with the ground, of course, so I assumed something is ground if the normal pointed away from gravity. (I like this definition more than “points upwards”, because it avoids assuming anything about the direction of gravity, which leaves some interesting doors open for later on.) That’s easily detected by taking the dot product — if it’s negative, the collision was with the ground, and I now have the normal of the ground.

Actually doing this in practice was slightly tricky. With my LÖVE engine, I could cram this right into the middle of collision resolution. With Unity, not quite so much. I went through a couple iterations before I really grasped Unity’s execution order, which I guess I will have to briefly recap for this to make sense.

Unity essentially has two update cycles. It performs physics updates at fixed intervals for consistency, and updates everything else just before rendering. Within a single frame, Unity does as many fixed physics updates as it has spare time for (which might be zero, one, or more), then does a regular update, then renders. User code can implement either or both of Update, which runs during a regular update, and FixedUpdate, which runs just before Unity does a physics pass.

So my solution was:

• At the very end of FixedUpdate, clear the actor’s “on ground” flag and ground normal.

• During OnCollisionEnter2D and OnCollisionStay2D (which are called from within a physics pass), if there’s a collision that looks like it’s with the ground, set the “on ground” flag and ground normal.
(If there are multiple ground collisions, well, good luck figuring out the best way to resolve that! At the moment I’m just taking the first and hoping for the best.)

That means there’s a brief window between the end of FixedUpdate and Unity’s physics pass during which a grounded actor might mistakenly believe it’s not on the ground, which is a bit of a shame, but there are very few good reasons for anything to be happening in that window.

Okay! Now we can do slopes.

Just kidding! First we have to do sliding.

When I first looked at this code, it didn’t apply gravity while the player was on the ground. I think I may have had some problems with detecting the ground as a result, since the player was no longer pushing down against it? Either way, it seemed like a silly special case, so I made gravity always apply.

Lo! I was a fool. The player could no longer move.

Why? Because MovePosition does exactly what it promises. If the player collides with something, they’ll stop moving. Applying gravity means that the player is trying to move diagonally downwards into the ground, and so MovePosition stops them immediately.

Hence, sliding. I don’t want the player to actually try to move into the ground. I want them to move the unblocked part of that movement. For flat ground, that means the horizontal part, which is pretty much the same as discarding gravity. For sloped ground, it’s a bit more complicated!

Okay but actually it’s less complicated than you’d think. It can be done with some cross products fairly easily, but Unity makes it even easier with a couple casts. There’s a Vector3.ProjectOnPlane function that projects an arbitrary vector on a plane given by its normal — exactly the thing I want! So I apply that to the attempted movement before passing it along to MovePosition. I do the same thing with the current velocity, to prevent the player from accelerating infinitely downwards while standing on flat ground.
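Projection onto a plane is just subtracting the component along the normal. Here’s a tiny 2D sketch in Python of what Vector3.ProjectOnPlane computes (assuming a unit-length normal; Unity’s version is C# and 3D):

```python
def project_on_plane(v, normal):
    # Drop the component of v that lies along the (unit) plane normal.
    d = v[0] * normal[0] + v[1] * normal[1]
    return (v[0] - d * normal[0], v[1] - d * normal[1])

# Walking right while gravity pulls down, on flat ground:
attempt = (2.0, -1.0)
flat = (0.0, 1.0)  # ground normal pointing straight up
slide = project_on_plane(attempt, flat)  # the downward part is discarded
```

On flat ground the result is purely horizontal; on a slope, the leftover movement runs along the surface instead of into it.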
One other thing: I don’t actually use the detected ground normal for this. The player might be touching two ground surfaces at the same time, and I’d want to project on both of them. Instead, I use the player body’s GetContacts method, which returns contact points (and normals!) for everything the player is currently touching. I believe those contact points are tracked by the physics engine anyway, so asking for them doesn’t require any actual physics work.

(Looking at the code I have, I notice that I still only perform the slide for surfaces facing upwards — but I’d want to slide against sloped ceilings, too. Why did I do this? Maybe I should remove that.)

(Also, I’m pretty sure projecting a vector onto two planes in sequence is non-commutative, which raises the question of which order the projections should happen in and what difference it makes. I don’t have a good answer.)

(I note that my LÖVE setup does something slightly different: it just tries whatever the movement ought to be, and if there’s a collision, then it projects — and tries again with the remaining movement. But I can’t ask Unity to do multiple moves in one physics update, alas.)

Okay! Now, slopes. But actually, with the above work done, slopes are most of the way there already.

One obvious problem is that the player tries to move horizontally even when on a slope, and the easy fix is to change their movement from speed * Vector2.right to speed * new Vector2(ground.y, -ground.x) while on the ground. That’s the ground normal rotated a quarter-turn clockwise, so for flat ground it still points to the right, and in general it points rightwards along the ground. (Note that it assumes the ground normal is a unit vector, but as far as I’m aware, that’s true for all the normals Unity gives you.)
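In plain JavaScript, that quarter-turn is just a component swap with one sign flip (again, only sketching the math here, not Unity API):

```javascript
// The along-ground movement direction: the (unit) ground normal
// rotated a quarter-turn clockwise, matching the text's
// speed * new Vector2(ground.y, -ground.x).
function alongGround(normal) {
    return { x: normal.y, y: -normal.x };
}

// Flat ground: the normal points straight up, so movement still
// points straight right.
console.log(alongGround({ x: 0, y: 1 }).x);  // 1

// On a 45° slope descending to the right, "rightwards" now also
// points down along the surface.
const s = Math.SQRT1_2;
console.log(alongGround({ x: s, y: s }));
```

Because the normal is a unit vector, the rotated vector is too, so multiplying by speed gives the same movement magnitude on any incline.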
Another issue is that if the player stands motionless on a slope, gravity will cause them to slowly slide down it — because the movement from gravity will be projected onto the slope, and unlike on flat ground, the result is no longer zero. For conscious actors only, I counter this by adding the opposite factor to the player’s velocity as part of adding in their walking speed. This matches how the real world works, to some extent: when you’re standing on a hill, you’re exerting some small amount of effort just to stay in place.

(Note that slope resistance is not the same as friction. Okay, yes, in the real world, virtually all resistance to movement happens as a result of friction, but bracing yourself against the ground isn’t the same as being passively resisted.)

From here there are a lot of things you can do, depending on how you think slopes should be handled. You could make the player unable to walk up slopes that are too steep. You could make walking down a slope faster than walking up it. You could make jumping go along the ground normal, rather than straight up. You could raise the player’s max allowed speed while running downhill. Whatever you want, really — armed with a normal and an awareness of dot products, you can do just about anything.

But first you might want to fix a few aggravating side effects.

## Problem 3: Ground adherence

I don’t know if there’s a better name for this. I rarely even see anyone talk about it, which surprises me; it seems like it should be a very common problem.

The problem is: if the player runs up a slope which then abruptly changes to flat ground, their momentum will carry them into the air. For very fast players going off the top of very steep slopes, this makes sense, but it becomes visible even for relatively gentle slopes.
It was a mild nightmare in the original release of our game Lunar Depot 38, which has very “rough” ground made up of lots of shallow slopes — so the player is very frequently slightly off the ground, which meant they couldn’t jump, for seemingly no reason. (I even had code to fix this, but I disabled it because of a silly visual side effect that I never got around to fixing.)

Anyway! The reason this is a problem is that game protagonists are generally not boxes sliding around — they have legs. We don’t go flying off the top of real-world hilltops because we put our foot down until it touches the ground.

Simulating this footfall is surprisingly fiddly to get right, especially with someone else’s physics engine. It’s made somewhat easier by Cast, which casts the entire hitbox — no matter what shape it is — in a particular direction, as if it had moved, and tells you all the hypothetical collisions in order.

So I cast the player in the direction of gravity by some distance. If the cast hits something solid with a ground-like collision normal, then the player must be close to the ground, and I move them down to touch it (and set that ground as the new ground normal).

There are some wrinkles.

Wrinkle 1: I only want to do this if the player is off the ground now, but was on the ground last frame, and is not deliberately moving upwards. That latter condition means I want to skip this logic if the player jumps, for example, but also if the player is thrust upwards by a spring or abducted by a UFO or whatever. As long as external code goes through some interface and doesn’t mess with the player’s velocity directly, that shouldn’t be too hard to track.

Wrinkle 2: When does this logic run? It needs to happen after the player moves, which means after a Unity physics pass… but there’s no callback for that point in time. I ended up running it at the beginning of FixedUpdate and the beginning of Update — since I definitely want to do it before rendering happens!
That means it’ll sometimes happen twice between physics updates. (I could carefully juggle a flag to skip the second run, but I… didn’t do that. Yet?)

Wrinkle 3: I can’t move the player with MovePosition! Remember, MovePosition schedules a movement, it doesn’t actually perform one; that means if it’s called twice before the physics pass, the first call is effectively ignored. I can’t easily combine the drop with the player’s regular movement, for various fiddly reasons. I ended up doing it “by hand” using transform.Translate, which I think was the “old way” to do manual movement before MovePosition existed. I’m not totally sure if it activates triggers? For that matter, I’m not sure it even notices collisions — but since I did a full-body Cast, there shouldn’t be any anyway.

Wrinkle 4: What, exactly, is “some distance”? I’ve yet to find a satisfying answer for this. It seems like it ought to be based on the player’s current speed and the slope of the ground they’re moving along, but every time I’ve done that math, I’ve gotten totally ludicrous answers that sometimes exceed the size of a tile. But maybe that’s not wrong? Play around, I guess, and think about when the effect should “break” and the player should go flying off the top of a hill.

Wrinkle 5: It’s possible that the player will launch off a slope, hit something, and then be adhered to the ground where they otherwise wouldn’t have hit it. I don’t much like this edge case, but I don’t see a way around it either.

This problem is surprisingly awkward for how simple it sounds, and the solution isn’t entirely satisfying. Oh, well; the results are much nicer than the solution. As an added bonus, this also fixes occasional problems with running down a hill and becoming detached from the ground due to precision issues or whathaveyou.

## Problem 4: One-way platforms

Ah, what a nightmare.

It took me ages just to figure out how to define one-way platforms. Only block when the player is moving downwards? Nope.
Only block when the player is above the platform? Nuh-uh.

Well, okay, yes, those approaches might work for convex players and flat platforms. But what about… sloped, one-way platforms? There’s no reason you shouldn’t be able to have those. If Super Mario World can do it, surely Unity can do it almost 30 years later.

The trick is, again, to look at the collision normal. If it faces away from gravity, the player is hitting a ground-like surface, so the platform should block them. Otherwise (or if the player overlaps the platform), it shouldn’t.

Here’s the catch: Unity doesn’t have conditional collision. I can’t decide, on the fly, whether a collision should block or not. In fact, I think that by the time I get a callback like OnCollisionEnter2D, the physics pass is already over.

I could go the other way and use triggers (which are non-blocking), but then I have the opposite problem: I can’t stop the player on the fly. I could move them back to where they hit the trigger, but I envision all kinds of problems as a result. What if they were moving fast enough to activate something on the other side of the platform? What if something else moved to where I’m trying to shove them back to in the meantime? How does this interact with ground detection and listing contacts, which would rightly ignore a trigger as non-blocking?

I beat my head against this for a while, but the inability to respond to collision conditionally was a huge roadblock.

It’s all the more infuriating a problem, because Unity ships with a one-way platform modifier thing. Unfortunately, it seems to have been implemented by someone who has never played a platformer. It’s literally one-way — the player is only allowed to move straight upwards through it, not in from the sides. It also tries to block the player if they’re moving downwards while inside the platform, which invokes clumsy rejection behavior.
And this all seems to be built into the physics engine itself somehow, so I can’t simply copy whatever they did.

Eventually, I settled on the following. After calculating attempted movement (including sliding), just at the end of FixedUpdate, I do a Cast along the movement vector. I’m not thrilled about having to duplicate the physics engine’s own work, but I do filter to only things on a “one-way platform” physics layer, which should at least help.

For each object the cast hits, I use Physics2D.IgnoreCollision to either ignore or un-ignore the collision between the player and the platform, depending on whether the collision was ground-like or not.

(A lot of people suggested turning off collision between layers, but that can’t possibly work — the player might be standing on one platform while inside another, and anyway, this should work for all actors!)

Again, wrinkles! But fewer this time. Actually, maybe just one: handling the case where the player already overlaps the platform. I can’t just check for that with e.g. OverlapCollider, because that doesn’t distinguish between overlapping and merely touching.

I came up with a fairly simple fix: if I was going to un-ignore the collision (i.e. make the platform block), and the cast distance is reported as zero (either already touching or overlapping), I simply do nothing instead. If I’m standing on the platform, I must have already set it blocking when I was approaching it from the top anyway; if I’m overlapping it, I must have already set it non-blocking to get here in the first place.

I can imagine a few cases where this might go wrong. Moving platforms, especially, are going to cause some interesting issues. But this is the best I can do with what I know, and it seems to work well enough so far.

Oh, and our player can deliberately drop down through platforms, which was easy enough to implement; I just decide the platform is always passable while some button is held down.
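Putting the whole decision together, here’s an engine-agnostic sketch in JavaScript. `setIgnoreCollision` is a hypothetical stand-in for Physics2D.IgnoreCollision, and `hit` mimics the result of casting the player’s hitbox along its movement; none of this is actual Unity API:

```javascript
// Decide, per cast hit against a one-way platform, whether that
// platform should block the player this frame.
function dot(a, b) {
    return a.x * b.x + a.y * b.y;
}

const GRAVITY = { x: 0, y: -1 };

function updateOneWayPlatform(player, hit, dropThroughHeld, setIgnoreCollision) {
    // Deliberate drop-through: the platform is always passable.
    if (dropThroughHeld) {
        setIgnoreCollision(player, hit.collider, true);
        return;
    }
    // Ground-like contact: the hit normal points away from gravity.
    const groundLike = dot(hit.normal, GRAVITY) < 0;
    if (groundLike) {
        // Would block — but if the cast distance is zero (already
        // touching or overlapping), leave the current state alone.
        if (hit.distance > 0) {
            setIgnoreCollision(player, hit.collider, false);
        }
    } else {
        setIgnoreCollision(player, hit.collider, true);
    }
}
```

The zero-distance branch is the overlap fix from above: whatever state got the player into contact with the platform is, by construction, still the right state.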
## Problem 5: Pushers and carriers

I haven’t gotten to this yet! Oh boy, can’t wait. I implemented it in LÖVE, but my way was hilariously invasive; I’m hoping that having a physics engine that supports a handwaved “this pushes that” will help. Of course, you also have to worry about sticking to platforms, for which the recommended solution is apparently to parent the cargo to the platform, which sounds goofy to me? I guess I’ll find out when I throw myself at it later.

## Overall result

I ended up with a fairly pleasant-feeling system that supports slopes and one-way platforms and whatnot, with all the same pieces as I came up with for LÖVE. The code somehow ended up as less of a mess, too, but it probably helps that I’ve been down this rabbit hole once before and kinda knew what I was aiming for this time.

Sorry that I don’t have a big block of code for you to copy-paste into your project. I don’t think there are nearly enough narrative discussions of these fundamentals, though, so hopefully this is useful to someone. If not, well, look forward to ✨ my book, that I am writing ✨!

# JavaScript got better while I wasn’t looking

Post Syndicated from Eevee original https://eev.ee/blog/2017/10/07/javascript-got-better-while-i-wasnt-looking/

IndustrialRobot has generously donated in order to inquire:

> In the last few years there seems to have been a lot of activity with adding emojis to Unicode. Has there been an equal effort to add ‘real’ languages/glyph systems/etc?
>
> And as always, if you don’t have anything to say on that topic, feel free to choose your own. :p

Yes.

I mean, each release of Unicode lists major new additions right at the top — Unicode 10, Unicode 9, Unicode 8, etc. They also keep fastidious notes, so you can also dig into how and why these new scripts came to be added, by reading e.g. the proposal for the addition of Zanabazar Square. I don’t think I have much to add here; I’m not a real linguist, I only play one on TV.
So with that out of the way, here’s something completely different!

## A brief history of JavaScript

JavaScript was created in seven days, about eight thousand years ago. It was pretty rough, and it stayed rough for most of its life. But that was fine, because no one used it for anything besides having a trail of sparkles follow your mouse on their Xanga profile.

Then people discovered you could actually do a handful of useful things with JavaScript, and it saw a sharp uptick in usage. Alas, it stayed pretty rough. So we came up with polyfills and jQuerys and all kinds of miscellaneous things that tried to smooth over the rough parts, to varying degrees of success.

And… that’s it. That’s pretty much how things stayed for a while.

I have complicated feelings about JavaScript. I don’t hate it… but I certainly don’t enjoy it, either. It has some pretty neat ideas, like prototypical inheritance and “everything is a value”, but it buries them under a pile of annoying quirks and a woefully inadequate standard library. The DOM APIs don’t make things much better — they seem to be designed as though the target language were Java, rarely taking advantage of any interesting JavaScript features. And the places where the APIs overlap with the language are a hilarious mess: I have to check documentation every single time I use any API that returns a set of things, because there are at least three totally different conventions for handling that and I can’t keep them straight.

The funny thing is that I’ve been fairly happy to work with Lua, even though it shares most of the same obvious quirks as JavaScript. Both languages are weakly typed; both treat nonexistent variables and keys as simply false values, rather than errors; both have a single data structure that doubles as both a list and a map; both use 64-bit floating-point as their only numeric type (though Lua added integers very recently); both lack a standard object model; both have very tiny standard libraries.
Hell, Lua doesn’t even have exceptions, not really — you have to fake them in much the same style as Perl.

And yet none of this bothers me nearly as much in Lua. The differences between the languages are very subtle, but combined they make a huge impact.

• Lua has separate operators for addition and concatenation, so + is never ambiguous. It also has printf-style string formatting in the standard library.

• Lua’s method calls are syntactic sugar: foo:bar() just means foo.bar(foo). Lua doesn’t even have a special this or self value; the invocant just becomes the first argument. In contrast, JavaScript invokes some hand-waved magic to set its contextual this variable, which has led to no end of confusion.

• Lua has an iteration protocol, as well as built-in iterators for dealing with list-style or map-style data. JavaScript has a special dedicated Array type and clumsy built-in iteration syntax.

• Lua has operator overloading and (surprisingly flexible) module importing.

• Lua allows the keys of a map to be any value (though non-scalars are always compared by identity). JavaScript implicitly converts keys to strings — and since there’s no operator overloading, there’s no way to natively fix this.

These are fairly minor differences, in the grand scheme of language design. And almost every feature in Lua is implemented in a ridiculously simple way; in fact the entire language is described in complete detail in a single web page. So writing JavaScript is always frustrating for me: the language is so close to being much more ergonomic, and yet, it isn’t.

Or, so I thought. As it turns out, while I’ve been off doing other stuff for a few years, browser vendors have been implementing all this pie-in-the-sky stuff from “ES5” and “ES6”, whatever those are. People even upgrade their browsers now.
Lo and behold, the last time I went to write JavaScript, I found out that a number of papercuts had actually been solved, and the solutions were sufficiently widely available that I could actually use them in web code.

The weird thing is that I do hear a lot about JavaScript, but the feature I’ve seen raved about the most by far is probably… built-in types for working with arrays of bytes? That’s cool and all, but not exactly the most pressing concern for me.

Anyway, if you also haven’t been keeping tabs on the world of JavaScript, here are some things we missed.

## let

MDN docs — supported in Firefox 44, Chrome 41, IE 11, Safari 10

I’m pretty sure I first saw let over a decade ago. Firefox has supported it for ages, but you actually had to opt in by specifying JavaScript version 1.7. Remember JavaScript versions? You know, from back in the days when people actually suggested you declare a script version in your markup. Yikes.

Anyway, so, let declares a variable — but scoped to the immediately containing block, unlike var, which scopes to the innermost function. The trouble with var was that it was very easy to make misleading:

```javascript
// foo exists here
while (true) {
    var foo = ...;
    ...
}
// foo exists here too
```

If you reused the same temporary variable name in a different block, or if you expected to be shadowing an outer foo, or if you were trying to do something with creating closures in a loop, this would cause you some trouble.

But no more, because let actually scopes the way it looks like it should, the way variable declarations do in C and friends. As an added bonus, if you refer to a variable declared with let outside of where it’s valid, you’ll get a ReferenceError instead of a silent undefined value. Hooray!

There’s one other interesting quirk to let that I can’t find explicitly documented.
Consider:

```javascript
let closures = [];
for (let i = 0; i < 4; i++) {
    closures.push(function() { console.log(i); });
}
for (let j = 0; j < closures.length; j++) {
    closures[j]();
}
```

If this code had used var i, then it would print 4 four times, because the function-scoped var i means each closure is sharing the same i, whose final value is 4. With let, the output is 0 1 2 3, as you might expect, because each run through the loop gets its own i.

But wait, hang on. The semantics of a C-style for are that the first expression is only evaluated once, at the very beginning. So there’s only one let i. In fact, it makes no sense for each run through the loop to have a distinct i, because the whole idea of the loop is to modify i each time with i++.

I assume this is simply a special case, since it’s what everyone expects. We expect it so much that I can’t find anyone pointing out that the usual explanation for why it works makes no sense. It has the interesting side effect that for no longer de-sugars perfectly to a while, since this will print all 4s:

```javascript
closures = [];
let i = 0;
while (i < 4) {
    closures.push(function() { console.log(i); });
    i++;
}
for (let j = 0; j < closures.length; j++) {
    closures[j]();
}
```

This isn’t a problem — I’m glad let works this way! — it just stands out to me as interesting. Lua doesn’t need a special case here, since it uses an iterator protocol that produces values rather than mutating a visible state variable, so there’s no problem with having the loop variable be truly distinct on each run through the loop.

## Classes

MDN docs — supported in Firefox 45, Chrome 42, Safari 9, Edge 13

Prototypical inheritance is pretty cool. The way JavaScript presents it is a little bit opaque, unfortunately, which seems to confuse a lot of people.
JavaScript gives you enough functionality to make it work, and even makes it sound like a first-class feature with a property outright called prototype… but to actually use it, you have to do a bunch of weird stuff that doesn’t much look like constructing an object or type.

The funny thing is, people with almost any background get along with Python just fine, and Python uses prototypical inheritance! Nobody ever seems to notice this, because Python tucks it neatly behind a class block that works enough like a Java-style class. (Python also handles inheritance without using the prototype, so it’s a little different… but I digress. Maybe in another post.)

The point is, there’s nothing fundamentally wrong with how JavaScript handles objects; the ergonomics are just terrible.

Lo! They finally added a class keyword. Or, rather, they finally made the class keyword do something; it’s been reserved this entire time.

```javascript
class Vector {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }

    get magnitude() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    }

    dot(other) {
        return this.x * other.x + this.y * other.y;
    }
}
```

This is all just sugar for existing features: creating a Vector function to act as the constructor, assigning a function to Vector.prototype.dot, and whatever it is you do to make a property. (Oh, there are properties. I’ll get to that in a bit.)

The class block can be used as an expression, with or without a name. It also supports prototypical inheritance with an extends clause and has a super pseudo-value for superclass calls.

It’s a little weird that the inside of the class block has its own special syntax, with function omitted and whatnot, but honestly you’d have a hard time making a class block without special syntax.

One severe omission here is that you can’t declare values inside the block, i.e. you can’t just drop a bar = 3; in there if you want all your objects to share a default attribute.
The workaround is to just do this.bar = 3; inside the constructor, but I find that unsatisfying, since it defeats half the point of using prototypes.

## Properties

MDN docs — supported in Firefox 4, Chrome 5, IE 9, Safari 5.1

JavaScript historically didn’t have a way to intercept attribute access, which is a travesty. And by “intercept attribute access”, I mean that you couldn’t design a value foo such that evaluating foo.bar runs some code you wrote.

Exciting news: now it does. Or, rather, you can intercept specific attributes, like in the class example above. The above magnitude definition is equivalent to:

```javascript
Object.defineProperty(Vector.prototype, 'magnitude', {
    configurable: true,
    enumerable: true,
    get: function() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    },
});
```

Beautiful.

And what even are these configurable and enumerable things? It seems that every single key on every single object now has its own set of three Boolean twiddles:

• configurable means the property itself can be reconfigured with another call to Object.defineProperty.

• enumerable means the property appears in for..in or Object.keys().

• writable means the property value can be changed, which only applies to properties with real values rather than accessor functions.

The incredibly wild thing is that for properties defined by Object.defineProperty, configurable and enumerable default to false, meaning that by default accessor properties are immutable and invisible. Super weird. Nice to have, though.

And luckily, it turns out the same syntax as in class also works in object literals.

```javascript
Vector.prototype = {
    get magnitude() {
        return Math.sqrt(this.x * this.x + this.y * this.y);
    },
    ...
};
```

Alas, I’m not aware of a way to intercept arbitrary attribute access.

Another feature along the same lines is Object.seal(), which marks all of an object’s properties as non-configurable and prevents any new properties from being added to the object.
The object is still mutable, but its “shape” can’t be changed. And of course you can just make the object completely immutable if you want, via setting all its properties non-writable, or just using Object.freeze().

I have mixed feelings about the ability to irrevocably change something about a dynamic runtime. It would certainly solve some gripes of former Haskell-minded colleagues, and I don’t have any compelling argument against it, but it feels like it violates some unwritten contract about dynamic languages — surely any structural change made by user code should also be able to be undone by user code?

## Slurpy arguments

MDN docs — supported in Firefox 15, Chrome 47, Edge 12, Safari 10

Officially this feature is called “rest parameters”, but that’s a terrible name, no one cares about “arguments” vs “parameters”, and “slurpy” is a good word. Bless you, Perl.

```javascript
function foo(a, b, ...args) {
    // ...
}
```

Now you can call foo with as many arguments as you want, and every argument after the second will be collected in args as a regular array.

You can also do the reverse with the spread operator:

```javascript
let args = [];
args.push(1);
args.push(2);
args.push(3);
foo(...args);
```

It even works in array literals, even multiple times:

```javascript
let args2 = [...args, ...args];
console.log(args2);  // [1, 2, 3, 1, 2, 3]
```

Apparently there’s also a proposal for allowing the same thing with objects inside object literals.

## Default arguments

MDN docs — supported in Firefox 15, Chrome 49, Edge 14, Safari 10

Yes, arguments can have defaults now. It’s more like Sass than Python — default expressions are evaluated once per call, and later default expressions can refer to earlier arguments. I don’t know how I feel about that, but whatever.

```javascript
function foo(n = 1, m = n + 1, list = []) {
    ...
}
```

Also, unlike Python, you can have an argument with a default and follow it with an argument without a default, since the default default (!) is and always has been defined as undefined.
Er, let me just write it out.

```javascript
function bar(a = 5, b) {
    ...
}
```

## Arrow functions

MDN docs — supported in Firefox 22, Chrome 45, Edge 12, Safari 10

Perhaps the most humble improvement is the arrow function. It’s a slightly shorter way to write an anonymous function.

```javascript
(a, b, c) => { ... }
a => { ... }
() => { ... }
```

An arrow function does not set this or some other magical values, so you can safely use an arrow function as a quick closure inside a method without having to rebind this. Hooray!

Otherwise, arrow functions act pretty much like regular functions; you can even use all the features of regular function signatures.

Arrow functions are particularly nice in combination with all the combinator-style array functions that were added a while ago, like Array.forEach.

```javascript
[7, 8, 9].forEach(value => {
    console.log(value);
});
```

## Symbol

MDN docs — supported in Firefox 36, Chrome 38, Edge 12, Safari 9

This isn’t quite what I’d call an exciting feature, but it’s necessary for explaining the next one. It’s actually… extremely weird.

symbol is a new kind of primitive (like number and string), not an object (like, er, Number and String). A symbol is created with Symbol('foo'). No, not new Symbol('foo'); that throws a TypeError, for, uh, some reason.

The only point of a symbol is as a unique key. You see, symbols have one very special property: they can be used as object keys, and will not be stringified. Remember, only strings can be keys in JavaScript — even the indices of an array are, semantically speaking, still strings. Symbols are a new exception to this rule.

Also, like other objects, two symbols don’t compare equal to each other: Symbol('foo') != Symbol('foo').

The result is that symbols solve one of the problems that plagues most object systems, something I’ve talked about before: interfaces.
Since an interface might be implemented by any arbitrary type, and any arbitrary type might want to implement any number of arbitrary interfaces, all the method names on an interface are effectively part of a single global namespace.

I think I need to take a moment to justify that. If you have IFoo and IBar, both with a method called method, and you want to implement both on the same type… you have a problem. Because most object systems consider “interface” to mean “I have a method called method”, there’s no way to say which interface’s method you mean. This is a hard problem to avoid, because IFoo and IBar might not even come from the same library.

Occasionally languages offer a clumsy way to “rename” one method or the other, but the most common approach seems to be for interface designers to avoid names that sound “too common”. You end up with redundant mouthfuls like IFoo.foo_method.

This incredibly sucks, and the only languages I’m aware of that avoid the problem are the ML family and Rust. In Rust, you define all the methods for a particular trait (interface) in a separate block, away from the type’s “own” methods. It’s pretty slick. You can still do obj.method(), and as long as there’s only one method among all the available traits, you’ll get that one. If not, there’s syntax for explicitly saying which trait you mean, which I can’t remember because I’ve never had to use it.

Symbols are JavaScript’s answer to this problem. If you want to define some interface, you can name its methods with symbols, which are guaranteed to be unique. You just have to make sure you keep the symbol around somewhere accessible so other people can actually use it. (Or… not?)

The interesting thing is that JavaScript now has several of its own symbols built in, allowing user objects to implement features that were previously reserved for built-in types.
For example, you can use the Symbol.hasInstance symbol — which is simply where the language is storing an existing symbol and is not the same as Symbol('hasInstance')! — to override instanceof:

```javascript
// oh my god don't do this though
class EvenNumber {
    static [Symbol.hasInstance](obj) {
        return obj % 2 == 0;
    }
}
console.log(2 instanceof EvenNumber);  // true
console.log(3 instanceof EvenNumber);  // false
```

Oh, and those brackets around Symbol.hasInstance are a sort of reverse-quoting — they indicate an expression to use where the language would normally expect a literal identifier. I think they work as object keys, too, and maybe some other places.

The equivalent in Python is to implement a method called __instancecheck__, a name which is not special in any way except that Python has reserved all method names of the form __foo__. That’s great for Python, but doesn’t really help user code. JavaScript has actually outclassed (ho ho) Python here.

Of course, obj[BobNamespace.some_method]() is not the prettiest way to call an interface method, so it’s not perfect. I imagine this would be best implemented in user code by exposing a polymorphic function, similar to how Python’s len(obj) pretty much just calls obj.__len__().

I only bring this up because it’s the plumbing behind one of the most incredible things in JavaScript that I didn’t even know about until I started writing this post. I’m so excited oh my gosh. Are you ready? It’s:

## Iteration protocol

MDN docs — supported in Firefox 27, Chrome 39, Safari 10; still experimental in Edge

Yes! Amazing! JavaScript has first-class support for iteration! I can’t even believe this.

It works pretty much how you’d expect, or at least, how I’d expect. You give your object a method called Symbol.iterator, and that returns an iterator.

What’s an iterator? It’s an object with a next() method that returns the next value and whether the iterator is exhausted.

Wait, wait, wait a second. Hang on. The method is called next?
Really? You didn’t go for Symbol.next? Python 2 did exactly the same thing, then realized its mistake and changed it to __next__ in Python 3. Why did you do this?

Well, anyway. My go-to test of an iterator protocol is how hard it is to write an equivalent to Python’s enumerate(), which takes a list and iterates over its values and their indices. In Python it looks like this:

```python
for i, value in enumerate(['one', 'two', 'three']):
    print(i, value)
# 0 one
# 1 two
# 2 three
```

It’s super nice to have, and I’m always amazed when languages with “strong” “support” for iteration don’t have it. Like, C# doesn’t. So if you want to iterate over a list but also need indices, you need to fall back to a C-style for loop. And if you want to iterate over a lazy or arbitrary iterable but also need indices, you need to track it yourself with a counter. Ridiculous.

Here’s my attempt at building it in JavaScript.

```javascript
function enumerate(iterable) {
    // Return a new iter*able* object with a Symbol.iterator method that
    // returns an iterator.
    return {
        [Symbol.iterator]: function() {
            let iterator = iterable[Symbol.iterator]();
            let i = 0;
            return {
                next: function() {
                    let nextval = iterator.next();
                    if (! nextval.done) {
                        nextval.value = [i, nextval.value];
                        i++;
                    }
                    return nextval;
                },
            };
        },
    };
}
for (let [i, value] of enumerate(['one', 'two', 'three'])) {
    console.log(i, value);
}
// 0 one
// 1 two
// 2 three
```

Incidentally, for..of (which iterates over a sequence, unlike for..in which iterates over keys — obviously) is finally supported in Edge 12. Hallelujah.

Oh, and let [i, value] is destructuring assignment, which is also a thing now and works with objects as well. You can even use the splat operator with it! Like Python! (And you can use it in function signatures! Like Python!
Wait, no, Python decided that was terrible and removed it in 3…)

```javascript
let [x, y, ...others] = ['apple', 'orange', 'cherry', 'banana'];
```

It’s a Halloween miracle. 🎃

## Generators

MDN docs — supported in Firefox 26, Chrome 39, Edge 13, Safari 10

That’s right, JavaScript has goddamn generators now. It’s basically just copying Python and adding a lot of superfluous punctuation everywhere. Not that I’m complaining.

Also, generators are themselves iterable, so I’m going to cut to the chase and rewrite my enumerate() with a generator.

```javascript
function enumerate(iterable) {
    return {
        [Symbol.iterator]: function*() {
            let i = 0;
            for (let value of iterable) {
                yield [i, value];
                i++;
            }
        },
    };
}
for (let [i, value] of enumerate(['one', 'two', 'three'])) {
    console.log(i, value);
}
// 0 one
// 1 two
// 2 three
```

Amazing. function* is a pretty strange choice of syntax, but whatever? I guess it also lets them make yield only act as a keyword inside a generator, for ultimate backwards compatibility.

JavaScript generators support everything Python generators do: yield* yields every item from a subsequence, like Python’s yield from; generators can return final values; you can pass values back into the generator if you iterate it by hand. No, really, I wasn’t kidding, it’s basically just copying Python. It’s great.

You could now build asyncio in JavaScript! In fact, they did that! JavaScript now has async and await. An async function returns a Promise, which is also a built-in type now. Amazing.

## Sets and maps

MDN docs for Map · MDN docs for Set — supported in Firefox 13, Chrome 38, IE 11, Safari 7.1

I did not save the best for last. This is much less exciting than generators. But still exciting.

The only data structure in JavaScript is the object, a map where the keys are strings. (Or now, also symbols, I guess.) That means you can’t readily use custom values as keys, nor simulate a set of arbitrary objects.
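A quick sketch of that limitation: a plain object coerces every key to a string, so two distinct objects collide on the same key.

```javascript
// Two distinct objects used as keys in a plain object both stringify
// to '[object Object]', so the second assignment clobbers the first.
let a = { id: 1 };
let b = { id: 2 };

let table = {};
table[a] = 'first';
table[b] = 'second';

console.log(Object.keys(table));  // [ '[object Object]' ] — one key!
console.log(table[a]);            // second
```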
And you have to worry about people mucking with Object.prototype, yikes. But now, there’s Map and Set! Wow.

Unfortunately, because JavaScript, Map couldn’t use the indexing operators without losing the ability to have methods, so you have to use a boring old method-based API. But Map has convenient methods that plain objects don’t, like entries() to iterate over pairs of keys and values. In fact, you can use a map with for..of to get key/value pairs. So that’s nice.

Perhaps more interesting, there’s also now a WeakMap and WeakSet, where the keys are weak references. I don’t think JavaScript had any way to do weak references before this, so that’s pretty slick. There’s no obvious way to hold a weak value, but I guess you could substitute a WeakSet with only one item.

## Template literals

MDN docs — supported in Firefox 34, Chrome 41, Edge 12, Safari 9

Template literals are JavaScript’s answer to string interpolation, which has historically been a huge pain in the ass because it doesn’t even have string formatting in the standard library. They’re just strings delimited by backticks instead of quotes. They can span multiple lines and contain expressions.

```javascript
console.log(`one plus two is ${1 + 2}`);
// one plus two is 3
```
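And a slightly bigger sketch (the data is made up) showing multiple lines and an arbitrary expression in a single literal:

```javascript
// Made-up data, purely for illustration.
let user = { name: 'Pearl', unread: 3 };

// One literal spanning two lines, with a conditional expression inside.
let message = `Hello, ${user.name}!
You have ${user.unread} unread message${user.unread == 1 ? '' : 's'}.`;

console.log(message);
// Hello, Pearl!
// You have 3 unread messages.
```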

Someone decided it would be a good idea to allow nesting more sets of backticks inside a ${} expression, so, good luck to syntax highlighters.

However, someone also had the most incredible idea ever, which was to add syntax allowing user code to do the interpolation — so you can do custom escaping, when absolutely necessary, which is virtually never, because “escaping” means you’re building a structured format by slopping strings together willy-nilly instead of using some API that works with the structure.

```javascript
// OF COURSE, YOU SHOULDN'T BE DOING THIS ANYWAY; YOU SHOULD BUILD HTML WITH
// THE DOM API AND USE .textContent FOR LITERAL TEXT.  BUT AS AN EXAMPLE:
function html(literals, ...values) {
    let ret = [];
    literals.forEach((literal, i) => {
        if (i > 0) {
            // Is there seriously still not a built-in function for doing this?
            // Well, probably because you SHOULDN'T BE DOING IT
            ret.push(values[i - 1]
                .replace(/&/g, '&amp;')
                .replace(/</g, '&lt;')
                .replace(/>/g, '&gt;')
                .replace(/"/g, '&quot;')
                .replace(/'/g, '&#39;'));
        }
        ret.push(literal);
    });
    return ret.join('');
}
let username = 'Bob
```

It’s a shame this feature is in JavaScript, the language where you are least likely to need it.

## Trailing commas

Remember how you couldn’t do this for ages, because ass-old IE considered it a syntax error and would reject the entire script?

```javascript
{
    a: 'one',
    b: 'two',
    c: 'three',  // <- THIS GUY RIGHT HERE
}
```

Well now it’s part of the goddamn spec and if there’s anything in this post you can rely on, it’s this. In fact you can use AS MANY GODDAMN TRAILING COMMAS AS YOU WANT. But only in arrays.

```javascript
[1, 2, 3,,,,,,,,,,,,,,,,,,,,,,,,,]
```

Apparently that has the bizarre side effect of reserving extra space at the end of the array, without putting values there.

## And more, probably

Like strict mode, which makes a few silent “errors” be actual errors, forces you to declare variables (no implicit globals!), and forbids the completely bozotic with block.
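A tiny sketch of the no-implicit-globals part: in strict mode, assigning to a name that was never declared throws instead of silently creating a global.

```javascript
'use strict';

// In sloppy mode this assignment would quietly create a global
// variable; in strict mode it throws a ReferenceError instead.
function oops() {
    misspelledName = 1;  // never declared anywhere
}

let caught = false;
try {
    oops();
} catch (e) {
    caught = e instanceof ReferenceError;
}
console.log(caught);  // true
```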
Or String.trim(), which trims whitespace off of strings.

Or… Math.sign()? That’s new? Seriously? Well, okay.

Or the Proxy type, which lets you customize indexing and assignment and calling. Oh. I guess that is possible, though this is a pretty weird way to do it; why not just use symbol-named methods?

You can write Unicode escapes for astral plane characters in strings (or identifiers!), as \u{XXXXXXXX}.

There’s a const now? I extremely don’t care, just name it in all caps and don’t reassign it, come on.

There’s also a mountain of other minor things, which you can peruse at your leisure via MDN or the ECMAScript compatibility tables (note the links at the top, too).

That’s all I’ve got. I still wouldn’t say I’m a big fan of JavaScript, but it’s definitely making an effort to clean up some goofy inconsistencies and solve common problems. I think I could even write some without yelling on Twitter about it now. On the other hand, if you’re still stuck supporting IE 10 for some reason… well, er, my condolences.

# timeShift(GrafanaBuzz, 1w) Issue 16

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/10/06/timeshiftgrafanabuzz-1w-issue-16/

Welcome to another issue of TimeShift. In addition to the roundup of articles and plugin updates, we had a big announcement this week – Early Bird tickets to GrafanaCon EU are now available! We’re also accepting CFPs through the end of October, so if you have a topic in mind, don’t wait until the last minute, please send it our way. Speakers who are selected will receive a comped ticket to the conference.

#### Early Bird Tickets Now Available

We’ve released a limited number of Early Bird tickets before General Admission tickets are available. Take advantage of this discount before they’re sold out!

Interested in speaking at GrafanaCon? We’re looking for technical and non-technical talks of all sizes. Submit a CFP Now.
#### From the Blogosphere

• Get insights into your Azure Cosmos DB: partition heatmaps, OMS, and More: Microsoft recently announced the ability to access a subset of Azure Cosmos DB metrics via the Azure Monitor API. Grafana Labs built an Azure Monitor Plugin for Grafana 4.5 to visualize the data.

• How to monitor Docker for Mac/Windows: Brian was tired of guessing about the performance of his development machines and test environment. Here, he shows how to monitor Docker with Prometheus to get a better understanding of a dev environment in his quest to monitor all the things.

• Prometheus and Grafana to Monitor 10,000 servers: This article covers enokido’s process of choosing a monitoring platform. He identifies three possible solutions, outlines the pros and cons of each, and discusses why he chose Prometheus.

• GitLab Monitoring: It’s fascinating to see Grafana dashboards with production data from companies around the world. For instance, we’ve previously highlighted the huge number of dashboards Wikimedia publicly shares. This week, we found that GitLab also has public dashboards to explore.

• Monitoring a Docker Swarm Cluster with cAdvisor, InfluxDB and Grafana | The Laboratory: It’s important to know the state of your applications in a scalable environment such as Docker Swarm. This video covers an overview of Docker, VMs vs. containers, orchestration and how to monitor Docker Swarm.

• Introducing Telemetry: Actionable Time Series Data from Counters: Learn how to use counters from multiple disparate sources, devices, operating systems, and applications to generate actionable time series data.

• ofp_sniffer Branch 1.2 (docker/influxdb/grafana) Upcoming Features: This video demo shows off some of the upcoming features for OFP_Sniffer, an OpenFlow sniffer to help network troubleshooting in production networks.

#### Grafana Plugins

Plugin authors add new features and bugfixes all the time, so it’s important to always keep your plugins up to date.
To update plugins from on-prem Grafana, use the grafana-cli tool; if you are using Hosted Grafana, you can update with one click! If you have questions or need help, hit up our community site, where the Grafana team and members of the community are happy to help.

UPDATED PLUGIN: PNP for Nagios Data Source – The latest release for the PNP data source has some fixes and adds a mathematical factor option.

UPDATED PLUGIN: Google Calendar Data Source – This week, there was a small bug fix for the Google Calendar annotations data source.

UPDATED PLUGIN: BT Plugins – Our friends at BT have been busy. All of the BT plugins in our catalog received an update this week. The plugins are the Status Dot Panel, the Peak Report Panel, the Trend Box Panel and the Alarm Box Panel. Changes include:

• Custom dashboard links now work in Internet Explorer.
• The Peak Report panel no longer supports click-to-sort.
• The Status Dot panel tooltips now look like Grafana tooltips.

#### This week’s MVC (Most Valuable Contributor)

Each week we highlight some of the important contributions from our amazing open source community. This week, we’d like to recognize a contributor who did a lot of work to improve Prometheus support.

pdoan017

Thanks to Alin Sinpalean for his Prometheus PR – that aligns the step and interval parameters. Alin got a lot of feedback from the Prometheus community and spent a lot of time and energy explaining, debating and iterating before the PR was ready. Thank you!

#### Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING! Check out our

#### Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Wow – Excited to be a part of exploring data to find out how Mexico City is evolving.
#### We Need Your Help!

Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

#### What do you think?

That’s a wrap! How are we doing? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better! Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

# What should happen with e-government, according to the roadmap?

Post Syndicated from Bozho original https://blog.bozho.net/blog/2887

“Roadmap for the development of the e-government strategy” sounds like a heavily bureaucratic document. When we wrote it, I even allowed myself to poke fun at strategic documents. But such documents really are necessary, so that both the direction and the goals are known, as well as the concrete projects and interventions that need to happen. The roadmap captures its authors’ (and their superiors’) understanding of what should happen over the next few years. The media sporadically remember the roadmap — for example, when it was adopted they generated a mini-scandal about how the state was going to build an email system for 25 million, which was a completely wrong attempt at interpreting one paragraph of the document. Recently the roadmap was rediscovered, in the part about an entirely new address register and a new civil registration system (GRAO), this time with a more positive connotation. But the complete, easily understandable overview is still missing. I admit that, although it was well liked by people outside the administration (e.g. business), the roadmap is still a long document with a lot of text that is not necessarily interesting to read. So, a year and a half after its adoption, but before any of the projects in it has actually happened, I will try to explain it — a few sentences per project.
The roadmap itself contains a bit more information per project (which is actually not standard for roadmaps — they usually include rather general, stretchable measures. We decided to make them concrete projects with real goals and results). Some of them may be boring or too “internal”, but digitizing an entire state is a long and complicated process full of details. There are no “simple” solutions, and few things make you go “wow, that’s cool”. Still, here are a few words about each project:

• Electronic voting — I’m starting with this project, although it’s at the end of the table, because it’s popular right now: Maya Manolova has “raised” the topic after a letter she received. The project was added to the roadmap as a priority in the autumn (a few months after the amendments to the electoral code were adopted). A lot of time has passed since then, but the project is finally in some visible phase. The first step is to study good practices around the world and, based on that, to prepare a technical specification for the system. This can be seen as somewhat behind the original schedule, but then the other project key to electronic voting — electronic identification — is also behind. Still, the “May 2019” deadline is not unattainable, especially if one of the ready-made solutions on the market is used instead of building something from scratch. The Central Election Commission and the State e-Government Agency so far appear to intend to carry the project out. Is there a risk of a repeat of the voting machines, which the Commission failed to deliver on time? There is, but in my view it won’t be fatal — six months or a year later, electronic voting will be a fact, and a small delay is better than a rushed introduction of something of such importance. Because if we mess it up, there won’t be a second chance any time soon. (Why is the system called a “pilot”? Because there will first be a pilot rollout.
After that, however, no new system will be built — the pilot will simply be put into production.)

• Audit of the information and communication infrastructure (of the administration and of the justice sector). The problem right now is that nobody in the state knows what computers, what networks, what licenses, what software, and how many aging machines the administrations have. The idea is to collect this information once and then keep it up to date through a register built for the purpose. We created the legal obligation with an ordinance; now the project has to be carried out. Why is this needed at all? To plan anything at scale, you need information. To negotiate good license deals, for example, you need to know what is already available. In practice it will most likely turn out that the state has bought twice as many database licenses as it actually uses. Separately, this information will make it possible to consolidate part of these resources into a cloud infrastructure. But to plan anything at all, the information has to be available. The project documentation is available on the website of the State e-Government Agency.

• A state hybrid private cloud, or a series of data centers in which the state consolidates its hardware resources. At the moment, all sorts of desktop computers get called “servers”, sitting at the feet of some sysadmin or secretary, or in the better case in the cleaners’ closet. This won’t be the first attempt at a state data center, but the goal is to finally do it properly. Not just to dump hardware in a room, but to actually offer “infrastructure as a service”: administration X says “I need this much RAM and this much disk space” through a user interface… and gets it immediately, automatically. And it doesn’t worry about maintenance, because that is provided centrally, including backups, protection against attacks, redundancy, and so on.
• Development of the pilot system for electronic identification — one of the most important projects, based on an entire law. The goal is a software infrastructure for identifying citizens in an electronic environment. In practice this means that everyone will be able to get an electronic identity on a carrier of their choice (ID card, another card, a USB token with their electronic signature, a mobile phone) and use it to access all electronic services in the state. And not only in our state, but in all European countries, because electronic identification will be recognized across borders. This project is the furthest along and is at the stage of collecting offers. All other projects plan to integrate with it, so it’s important that it moves ahead of the rest (even though all of them are behind their planned deadlines). (Why a “pilot” system? Because one already exists, money has been spent on it, and it has to be used as a starting point.)

• A system for monitoring the performance indicators of all operational programmes — a relatively boring, internal project from the point of view of the overall picture, simply necessary for monitoring EU funds. There are two more projects, for the National Statistical Institute, that are “internal” and tied to mandatory requirements coming from the EU.

• Public registers for budget and project control and for the information resources of e-government — the E-Government Act created a state agency with considerable control powers. The reason is that things in the state happen in an uncoordinated, haphazard, piecemeal way, and for any sustainability this has to stop. To that end, the agency must exercise control over all IT projects, activities, and budgets. Rather than everyone scribbling their own technical specifications that contractors can interpret however is convenient, or spending money on licenses and computers of the kind they bought only a year earlier and don’t actually need.
At the moment the agency is starting to exercise this control without such registers (using its website instead), but these registers will allow the process to run more smoothly and more transparently.

• Implementation of the centralized “Judicial status” information system — this is the infamous “criminal record certificate”. A system for this exists today, and the electronic criminal record certificate is built on top of it, but it is decentralized and outdated, requires a lot of manual data entry, and won’t be able to serve a sufficiently growing number of automated requests. So an entirely new, centralized system has to be built, so that the judicial status of all citizens is kept in one place and, with appropriate access control, this information can be provided to administrations (and to citizens, e.g. if they want to present the certificate abroad).

• Implementation of a universal “Corruption risk analysis” system — whatever anti-corruption body gets created, it must have analytical capacity in order to detect conflicts of interest and corrupt practices. Doing this “by hand” is not impossible, but it is very unreliable, slow, and susceptible to “outside influence”. The goal of the corruption risk analysis system is to gather in one place information about persons holding public office from many sources and search it for potential conflicts of interest. To compare declarations filed with the Audit Office against data from the National Revenue Agency, to compare public procurements against companies registered by related persons, and so on. At one point I even proposed that the system’s data be made partially public — i.e. every month the system would publish how many potential problems it has found and how many of them led to inspections. That way things would be harder to sweep under the rug. This project will probably have to wait for the creation of the next anti-corruption body.
• Implementation of a centralized “Single entry point” system for filing annual financial statements — anyone who runs a business has probably cursed at the state asking for the same data several times over: the National Revenue Agency (NAP), the Commercial Register, the National Statistical Institute. The answer is: because nobody thought to unify them, and even if someone had, there was no technical AND legal way to do it. The laws actually don’t currently allow it, because the Accountancy Act requires an accountant’s stamp before the document is uploaded, which destroys its machine readability. So besides the project, legal amendments will be needed, but the idea is that when reporting time comes, the accountant exports a file from their accounting system and sends it to the manager, who uploads it to the single system (signed with an electronic signature)… and that’s it. No declarations and reports to several institutions — the system itself will decide which part of the data to send to which institution. Work on this project should start any moment now (if it hasn’t already).

• Upgrading the Commercial Register — the Commercial Register is one of the best things about our e-government in general. In practice there are many things that can be improved. First of all, the user interface. Within the project it should (finally) become more modern, but there is much more: data exchange with the European commercial registers, as well as with the property register. Also, the register of special pledges (which is 20 years old and begging for a complete overhaul) and the register of non-profit legal entities are moving to the Registry Agency, so the corresponding changes to the Commercial Register have to be implemented. This may sound boring, but it is actually crucial, since a great deal of material interest is concentrated in the Registry Agency — companies, properties, special pledges — and the systems need to be in good shape to prevent abuse.
• Upgrading the property register — the property register currently provides a small part of the services it actually could, and the process of buying a property involves quite a few “paper” steps, which in turn open doors to property fraud. Those holes need to be closed, and the property register’s administrative services need to be digitized to spare people the queues.

• A base register of subjects, objects, and events — the state keeps thousands of registers big and small. A large share are on paper; another large share are in Excel spreadsheets or (I’m not kidding) in Doc files. For many of them, building a separate information system would be too expensive and unnecessary an investment — they hold little data and are updated rarely. But the downsides of paper and spreadsheets (which nobody backs up) are many. So the goal of this project is a centralized state system for “register as a service”: someone from the relevant administration (including municipal ones) logs in, clicks “create register”, defines what fields it has and who has access to it, and done — you now have an electronic register, with backups, open data, access control, protection against attacks, and what not. This, of course, won’t apply to the big, important registers like the Commercial Register, the property register, or GRAO, because entries there go through complex processes, but the majority of registers can be digitized very easily through such a system. The idea for the project started from the so-called “professionally qualified persons”, since the Healthcare and Justice sectors have numerous registers of various persons — doctors, nurses, orderlies, pharmacists, and, respectively, expert witnesses, sworn translators, appraisers, insolvency trustees, and so on. But we decided that the step from “a register of people” to “a register of anything” is easy and delivers far more benefits.
• Implementation of centralized “Civil registration” and “Address register” systems — perhaps the most important databases in the state: those of natural persons (and addresses). These are really two projects merged into one, since they belong to the same institutions — GRAO and the State e-Government Agency. GRAO is currently decentralized, written on an archaic technology for which support is very hard to find, and it is unclear whether it would withstand thousands of real-time requests (which is the goal of e-government). An entirely new, centralized system, with web services and open data (while protecting personal data), will “unlock” e-government. An address register, however, has never existed. There is a nomenclature of addresses of natural persons, and there is a cadastre, but neither is sufficient — so separate address databases exist in GRAO, in the postal service, in the cadastre, and probably in private companies. The goal is for every address to have a single identifier that never changes (unlike the ones in the cadastre), through which any address can be referenced and found. That is, instead of recording the full address in a database, only an identifier is recorded, and the actual address is retrieved from the address register. The benefits are many, but one example is renaming a street. If a street gets renamed, every database that stores full addresses will have wrong addresses. If they use a reference to the address register, it won’t be a problem (and the register will show when and why the street in question was renamed).

• A register of annual vehicle inspections and of driving exams — essentially registers of the Road Transport Administration (known as DAI). Not much explanation is needed, but the existing systems (some of which are decentralized and/or rely on exchanging paper documents) need to be brought up to date. Why? Because many errors slip in today, the data is inconsistent, and the systems don’t talk to each other.
• A centralized vehicle register — a register exists today, but it is decentralized, old, not integrated with the European EUCARIS platform, and missing quite a bit of important functionality. Above all, the new register will allow faster and easier vehicle registration, since the registration processes will be part of the register and will be streamlined.

• A register of powers of attorney — notaries currently have something like a register of powers of attorney, but it is so feeble that it doesn’t really do the job. When you show up somewhere with a power of attorney (e.g. at a bank), they phone the notary to ask about it, because not all powers of attorney get entered; if they do, not immediately; and they are entered as free text, where mistakes creep in. Once, for example, a power of attorney of mine had my personal identification number wrong, which I discovered at the bank. If the power of attorney had been properly recorded, that check would have failed already at the notary’s office. But that is only the surface — the register will offer standard power-of-attorney templates to choose from, and citizens will be able to review and revoke powers of attorney electronically.

• A register of garnishments — besides the notaries, the enforcement agents (private and state bailiffs) also need a centralized system. Beyond the purely administrative convenience, such a system will also curb the abuses by some private bailiffs that happen today.

• Administrative-penal activity — at the moment, every state body with the power to “punish” records its penal acts however it sees fit, most often tearing them off a paper pad. The goal is for all administrative-penal activity in the state to be recorded in a centralized register. That will make it possible to track who issues what acts, on what grounds, what share of them get paid, and within what timeframes. It will be possible to identify legal provisions that are violated too often (or that the administration abuses too often by imposing “intimidation” fines), as well as provisions that are effectively dead — they exist, but nobody enforces them.
That, in turn, will allow reducing the administrative burden based on reality, rather than on “hey, it just occurred to us that we could fix something”. Traceability of receivables also matters, since administrations rarely know whether NAP has collected the fines they imposed.

• Implementation of a centralized “Public procurement” system — the topic was popular a month or two ago in connection with Auxionize. Unfortunately, the state’s centralized platform for fully electronic public procurement is still not a fact, mainly because of appeals and rulings of the Commission for Protection of Competition. But the goal is clear: all public procurements must be fully electronic, through a central system that is secure and transparent (i.e. one that automatically publishes open data about all procurements, instead of this being done manually as it is now).

• Implementation of a national health information system — this is a huge, multi-stage project, and the first pieces (which should have happened two years ago, but instead we got fingerprints) are the electronic prescription, the electronic referral, and the electronic health record. On top of that, many more systems will be built, so that doctors aren’t buried in bureaucracy and patients don’t carry paper back and forth and are better informed. Centralized (and correct) health statistics will enable real healthcare policy, instead of staring at the ceiling.

• Upgrading NAP’s core systems — NAP is the most electronic administration (together with the Registry Agency), but it too has plenty of flaws. Not all systems offer integration; where it exists, it works in an archaic way by copying files around; and the user interfaces are often terrible. These and other problems are to be addressed by this project. There is an analogous project for the Customs Agency, which I won’t dwell on, whose goal is improving the agency’s internal systems.
Still, within that project they should finally build a humane electronic way to obtain an EORI number, because what exists at the moment is insulting.

• A national portal for spatial data — there is a European directive, INSPIRE, under which we must have a national portal with open spatial data. There have been several attempts at one, all of them a disaster (however much the administration claims otherwise), so we are effectively in breach of the directive and liable to sanctions. The portal will be integrated with the open data portal and will include geographic data about the country — roads, administrative units, water sources, etc. Overall, the directive isn’t being implemented by quite a few other countries either, and the available data is of poor quality, so my personal opinion is that, given the results, it isn’t particularly sensible — but we’ll see.

• A portal for shared resources for developing e-government software systems — something like “Bg-mamma” for e-government. At the moment, developers of state systems reinvent the wheel every time: everyone writes their own integration components, electronic signature applets, integrations with document management systems, and what not. The idea is for everyone to share both code and knowledge through a common system, so that we stop wasting time rediscovering hot water.

• Digitizing cadastral maps — far from all cadastral information has been digitized, and it needs to be. In fact, digital cadastral maps currently exist for only 20% of the country. Electronic cadastre services actually do exist, but they have a terrible user interface and very few people actually know about them and can use them. That has to change.

• A register of immovable cultural heritage — remember when the tobacco warehouses in Plovdiv burned down? Or when yet another listed heritage house was demolished? Well, the fact that they are cultural monuments is actually hard to establish, since there is no centralized register, nor publicly accessible, systematized information.
Е, целта на този проект е да има такава. • Единна информационна точка – общо взето целта е да има информация за подземна инфраструктура (тръби, кабели), така че някой като копае някъде, да не вземе да скъса нещо. И да е ясно, къдет вече има прокопано, така че да не се копае излишно. По-сложно е, разбира се, и произтича от европейска директива, но накратко е това. Ако сте оцелели дотук – супер. Но всичките тези детайли вероятно не интересуват много хора. Което е и единият от проблемите на електронното управление – нито печелят избори, нито някой забелязва, когато нещо се случи (например последният проект и да стане, много малко хора ще разберат, но от друга страна ще бъдта спестени доста пари и доста главоболия на бизнеса; или пък това, че ще има ново ГРАО едва ли ще бъде усетено директно, но това ключово за развитието на … всичко). И се надявам да е видимо, че нещата са „мислени“. Дали ще бъдат реализирани както са замислени е друга тема – засега виждаме само забавяне (по редица причини, някои от които обективни, други – свързани със смяната на властта, трети – с мудност на част от администрацията). Пътната карта има и втора част със следващи проекти, но там съвсем ще ви отегча, та ще я оставя за момента, в който сме по-близо до нея – например след 2 години. А дотогава всички проекти по-горе трябва да са се случили, трябва да са с отворен код, за да можем да следим дали се случват както трябва, да предоставят отворени данни, за да следим дали оперират както трябва след като бъдат внедрени, и изобщо – да „станат както трябва, а не както обикновено“. И като казвам „трябва“ – имам предвид, че го пише в закон и наредба. Дали ще се спазват – ще видим. # Pirate Sites Ordered to Pay1 Million in Damages to ABS-CBN

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-sites-ordered-to-pay-1-million-in-damages-to-abs-cbn-170715/

ABS-CBN, the largest media and entertainment company in the Philippines, has booked another victory in the United States.

This week a federal court in Florida signed a default judgment against 19 websites that offered links to copyright infringing streams of ABS-CBN owned movies.

The lawsuit in question was filed in April and targets cinesilip.net, pinoychannel.co, pinoy-hd.com, and several other streaming portals that specialize in Philippine content. These sites also attract visitors from other countries, including the United States, where they target people of Philippine origin.

“Defendants’ entire Internet-based website businesses amount to nothing more than illegal operations established and operated in order to infringe the intellectual property rights of ABS-CBN and others,” the company wrote in its original complaint.

Despite facing hefty damages, none of the defendants turned up in court. This prompted ABS-CBN to file for a default judgment which was granted this week.

In his verdict, US District Judge Robert Scola Jr. orders the 19 websites to pay $1 million in damages each. These damages are not for copyright infringement, as one would expect, but for violating ABS-CBN’s trademark. In addition, four of the defendants were ordered to pay $30,000 in copyright infringement damages on top.

The media giant initially suggested that it would request the maximum of $2 million in trademark infringement damages per site, but has opted to go for “only” half.

Part of the order

ABS-CBN’s most recent win follows a pattern of similar verdicts in recent months. The company has managed to score tens of millions of dollars in damages from a wide variety of streaming sites with relative ease.

In addition to the millions of dollars that were awarded, Judge Scola also signed off on a permanent injunction to sign over the websites’ domain names to the media giant.

The question remains, of course, whether the company will ever see a penny in return. Most of the defendants remain unknown and even if they’re identified, most won’t have an extra million lying around.

To increase the chance of seeing something of monetary value in return, ABS-CBN also requested an injunction against the advertisers of several pirate sites in its latest lawsuit. If granted, this would allow the company to claim the pending advertising payouts. However, no such injunction was requested in the current case.

A copy of the default judgment (abs-default) is available, and a list of all the defendants follows below.

cinesilip.net
pinoychanneltv.me
pinoytambayantv.me
pinoytambayanreplay.net
drembed.com
embeds.me
fullpinoymovies.com
lambingan.ph
magtvna.com
pinoye.com
pinoyteleserye.org
pinoytvnetwork.net
pinoytopmovies.info
teleserye.me
watchpinaytv.com
wildpinoy.net
pinoy-hd.com
pinoytvreplay.ws
pinoychannel.co
wowpinoytambayan.ws
pinoytelebyuwers.se

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

# Some memorable levels

Post Syndicated from Eevee original https://eev.ee/blog/2017/07/01/some-memorable-levels/

Another Patreon request from Nova Dasterin:

Maybe something about level design. In relation to a vertical shmup since I’m working on one of those.

I’ve been thinking about level design a lot lately, seeing as how I’ve started… designing levels. Shmups are probably the genre I’m the worst at, but perhaps some general principles will apply universally.

And speaking of general principles, that’s something I’ve been thinking about too.

I’ve been struggling to create a more expansive tileset for a platformer, due to two general problems: figuring out what I want to show, and figuring out how to show it with a limited size and palette. I’ve been browsing through a lot of pixel art from games I remember fondly in the hopes of finding some inspiration, but so far all I’ve done is very nearly copy a dirt tile someone submitted to my potluck project.

Recently I realized that I might have been going about looking for inspiration all wrong. I’ve been sifting through stuff in the hopes of finding something that would create some flash of enlightenment, but so far that aimless tourism has only found me a thing or two to copy.

I don’t want to copy a small chunk of the final product; I want to understand the underlying ideas that led the artist to create what they did in the first place. Or, no, that’s not quite right either. I don’t want someone else’s ideas; I want to identify what I like, figure out why I like it, and turn that into some kind of general design idea. Find the underlying themes that appeal to me and figure out some principles that I could apply. You know, examine stuff critically.

I haven’t had time to take a deeper look at pixel art this way, so I’ll try it right now with level design. Here, then, are some levels from various games that stand out to me for whatever reason; the feelings they evoke when I think about them; and my best effort at unearthing some design principles from those feelings.

## Doom II: MAP10, Refueling Base

screenshots mine — map via doom wiki — see also textured perspective map (warning: large!) via ian albert — pistol start playthrough

I’m surprising myself by picking Refueling Base. I would’ve expected myself to pick MAP08, Tricks and Traps, for its collection of uniquely bizarre puzzles and mechanisms. Or MAP13, Downtown, the map that had me convinced (erroneously) that Doom levels supported multi-story structures. Or at least MAP09, The Pit, which stands out for the unique way it feels like a plunge into enemy territory.

(Curiously, those other three maps are all Sandy Petersen’s sole work. Refueling Base was started by Tom Hall in the original Doom days, then finished by Sandy for Doom II.)

But Refueling Base is the level I have the most visceral reaction to: it terrifies me.

See, I got into Doom II through my dad, who played it on and off sometimes. My dad wasn’t an expert gamer or anything, but as a ten-year-old, I assumed he was. I watched him play Refueling Base one night. He died. Again, and again, over and over. I don’t even have very strong memories of his particular attempts, but watching my parent be swiftly and repeatedly defeated — at a time when I still somewhat revered parents — left enough of an impression that hearing the level music still makes my skin crawl.

This may seem strange to bring up as a first example in a post about level design, but I don’t think it would have impressed on me quite so much if the level weren’t designed the way it is. (It’s just a video game, of course, and since then I’ve successfully beaten it from a pistol start myself. But wow, little kid fears sure do linger.)

The one thing that most defines the map has to be its interconnected layout. Almost every major area (of which there are at least half a dozen) has at least three exits. Not only are you rarely faced with a dead end, but you’ll almost always have a choice of where to go next, and that choice will lead into more choices.
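That “at least three exits” property is easy to check on any level graph. Here’s a tiny sketch — with made-up room names purely for illustration, not data from the actual map — that counts each room’s exits and flags dead ends:

```python
# Count exits per room in an undirected room graph and flag dead ends.
# Room names are hypothetical, for illustration only.

def exit_counts(connections):
    counts = {}
    for a, b in connections:
        counts[a] = counts.get(a, 0) + 1
        counts[b] = counts.get(b, 0) + 1
    return counts

connections = [
    ("entry", "yard"), ("entry", "barracks"), ("yard", "barracks"),
    ("yard", "slime pit"), ("barracks", "slime pit"), ("slime pit", "boss"),
]
counts = exit_counts(connections)
dead_ends = [room for room, n in counts.items() if n == 1]
# "yard", "barracks", and "slime pit" each have 3 exits; only "boss" is a dead end.
```

A map where almost every room scores 3 or higher is exactly the kind of space where monsters can swarm in from every direction.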

This hugely informs the early combat. Many areas near the beginning are simply adjacent with no doors between them, so it’s easy for monsters to start swarming in from all directions. It’s very easy to feel overwhelmed by an endless horde; no matter where you run, they just seem to keep coming. (In fact, Refueling Base has the most monsters of any map in the game by far: 279. The runner up is the preceding map at 238.) Compounding this effect is the relatively scant ammo and health in the early parts of the map; getting very far from a pistol start is an uphill battle.

The connections between rooms also yield numerous possible routes through the map, as well as several possible ways to approach any given room. Some of the connections are secrets, which usually connect the “backs” of two rooms. Clearing out one room thus rewards you with a sneaky way into another room that puts you behind all the monsters.

In fact, the map rewards you for exploring it in general.

Well, okay. It might be more accurate to say that that map punishes you for not exploring it. From a pistol start, the map is surprisingly difficult — the early areas offer rather little health and ammo, and your best chance of success is a very specific route that collects weapons as quickly as possible. Many of the most precious items are squirrelled away in (numerous!) secrets, and you’ll have an especially tough time if you don’t find any of them — though they tend to be telegraphed.

One particularly nasty surprise is in the area shown above, which has three small exits at the back. Entering or leaving via any of those exits will open one of the capsule-shaped pillars, revealing even more monsters. A couple of those are pain elementals, monsters which attack by spawning another monster and shooting it at you — not something you want to be facing with the starting pistol.

But nothing about the level indicates this, so you have to make the association the hard way, probably after making several mad dashes looking for cover. My successful attempt avoided this whole area entirely until I’d found some more impressive firepower. It’s fascinating to me, because it’s a fairly unique effect that doesn’t make any kind of realistic sense, yet it’s still built out of familiar level mechanics: walk through an area and something opens up. Almost like 2D sidescroller design logic applied to a 3D space. I really like it, and wish I saw more of it. So maybe that’s a more interesting design idea: don’t be afraid to do something weird only once, as long as it’s built out of familiar pieces so the player has a chance to make sense of it.

A similarly oddball effect is hidden in a “barracks” area, visible on the far right of the map. A secret door leads to a short U-shaped hallway to a marble skull door, which is themed nothing like the rest of the room. Opening it seems to lead back into the room you were just in, but walking through the doorway teleports you to a back entrance to the boss fight at the end of the level.

It sounds so bizarre, but the telegraphing makes it seem very natural; if anything, the “oh, I get it!” moment overrides the weirdness. It stops being something random and becomes something consciously designed. I believe that this might have been built by someone, even if there’s no sensible reason to have built it.

In fact, that single weird teleporter is exactly the kind of thing I’d like to be better at building. It could’ve been just a plain teleporter pad, but instead it’s a strange thing that adds a lot of texture to the level and makes it much more memorable. I don’t know how to even begin to have ideas like that. Maybe it’s as simple as looking at mundane parts of a level and wondering: what could I do with this instead?

I think a big problem I have is limiting myself to the expected and sensible, to the point that I don’t even consider more outlandish ideas. I can’t shake that habit simply by bolding some text in a blog post, but maybe it would help to keep this in mind: you can probably get away with anything, as long as you justify it somehow. Even “justify” here is too strong a word; it takes only the slightest nod to make an arbitrary behavior feel like part of a world. Why does picking up a tiny glowing knight helmet give you 1% armor in Doom? Does anyone care? Have you even thought about it before? It’s green and looks like armor; the bigger armor pickup is also green; yep, checks out.

On the other hand, the map as a whole ends up feeling very disorienting. There’s no shortage of landmarks, but every space is distinct in both texture and shape, so everything feels like a landmark. No one part of the map feels particularly central; there are a few candidates, but they neighbor other equally grand areas with just as many exits. It’s hard to get truly lost, but it’s also hard to feel like you have a solid grasp of where everything is. The space itself doesn’t make much sense, even though small chunks of it do. Of course, given that the Hellish parts of Doom were all just very weird overall, this is pretty fitting.

This sort of design fascinates me, because the way it feels to play is so different from the way it looks as a mapper with God Vision. Looking at the overhead map, I can identify all the familiar places easily enough, but I don’t know how to feel the way the map feels to play; it just looks like some rooms with doors between them. Yet I can see screenshots and have a sense of how “deep” in the level they are, how difficult they are to reach, whether I want to visit or avoid them. The lesson here might be that most of the interesting flavor of the map isn’t actually contained within the overhead view; it’s in the use of height and texture and interaction.

I realize as I describe all of this that I’m really just describing different kinds of contrast. If I know one thing about creative work (and I do, I only know one thing), it’s that effectively managing contrast is super duper important.

And it appears here in spades! A brightly-lit, outdoor, wide-open round room is only a short jog away from a dark, cramped room full of right angles and alcoves. A wide straight hallway near the beginning is directly across from a short, curvy, organic hallway. Most of the monsters in the map are small fry, but a couple stronger critters are sprinkled here and there, and then the exit is guarded by the toughest monster in the game. Some of the connections between rooms are simple doors; others are bizarre secret corridors or unnatural twisty passages.

You could even argue that the map has too much contrast, that it starts to lose cohesion. But if anything, I think this is one of the more cohesive maps in the first third of the game; many of the earlier maps aren’t so much places as they are concepts. This one feels distinctly like it could be something. The theming is all over the place, but enough of the parts seem deliberate.

I hadn’t even thought about it until I sat down to write this post, but since this is a “refueling base”, I suppose those outdoor capsules (which contain green slime, inset into the floor) could be the fuel tanks! I already referred to that dark techy area as “barracks”. Elsewhere is a rather large barren room, which might be where the vehicles in need of refueling are parked? Or is this just my imagination, and none of it was intended this way?

It doesn’t really matter either way, because even in this abstract world of ambiguity and vague hints, all of those rooms still feel like a place. I don’t have to know what the place is for it to look internally consistent.

I’m hesitant to say every game should have the loose design sense of Doom II, but it might be worth keeping in mind that anything can be a believable world as long as it looks consciously designed. And I’d say this applies even for natural spaces — we frequently treat real-world nature as though it were “designed”, just with a different aesthetic sense.

Okay, okay. I’m sure I could clumsily ramble about Doom forever, but I do that enough as it is. Other people have plenty to say if you’re interested.

I do want to stick in one final comment about MAP13, Downtown, while I’m talking about theming. I’ve seen a few people rag on it for being “just a box” with a lot of ideas sprinkled around — the map is basically a grid of skyscrapers, where each building has a different little mini encounter inside. And I think that’s really cool, because those encounters are arranged in a way that very strongly reinforces the theme of the level, of what this place is supposed to be. It doesn’t play quite like anything else in the game, simply because it was designed around a shape for flavor reasons. Weird physical constraints can do interesting things to level design.

## Braid: World 4-7, Fickle Companion

screenshots via StrategyWiki — playthrough — playthrough of secret area

I love Braid. If you’re not familiar (!), it’s a platformer where you have the ability to rewind time — whenever you want, for as long as you want, all the way back to when you entered the level.

The game starts in world 2, where you do fairly standard platforming and use the rewind ability to do some finnicky jumps with minimal frustration. It gets more interesting in world 3 with the addition of glowing green objects, which aren’t affected by the reversal of time.

And then there’s world 4, “Time and Place”. I love world 4, so much. It’s unlike anything I’ve ever seen in any other game, and it’s so simple yet so clever.

The premise is this: for everything except you, time moves forwards as you move right, and backwards as you move left.

This has some weird implications, which all come together in the final level of the world, Fickle Companion. It’s so named because you have to use one (single-use) key to open three doors, but that key is very easy to lose.

Say you pick up the key and walk to the right with it. Time continues forwards for the key, so it stays with you as expected. Now you climb a ladder. Time is frozen since you aren’t moving horizontally, but the key stays with you anyway. Now you walk to the left. Oops — the key follows its own path backwards in time, going down the ladder and back along the path you carried it in the first place. You can’t fix this by walking to the right again, because that will simply advance time normally for the key; since you’re no longer holding it, it will simply fall to the ground and stay there.
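The rule is simple enough to model in a few lines. Here’s a toy sketch of my own — nothing like Braid’s actual implementation — that ties a time-bound object’s state to the player’s horizontal movement:

```python
class PositionTime:
    """Toy model of world 4: world time is driven by the player's x movement."""

    def __init__(self, key_path):
        # Precomputed position of the key at each world-time step.
        self.key_path = key_path
        self.t = 0  # current world time

    def move_player(self, dx):
        # Moving right (dx > 0) advances time, moving left rewinds it;
        # climbing a ladder (dx == 0) leaves time frozen.
        self.t = max(0, min(self.t + dx, len(self.key_path) - 1))

    def key_pos(self):
        # A green-glowing (timeproof) object would keep its own clock instead.
        return self.key_path[self.t]

world = PositionTime(key_path=[0, 1, 2, 3])
world.move_player(2)   # walk right: the key follows its path forward
world.move_player(-1)  # walk left: the key retraces its path backwards
```

Walking left literally indexes backwards into the key’s history, which is exactly why the key slides back down the ladder behind you.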

You can see how this might be a problem in the screenshot above (where you get the key earlier in the level, to the left). You can climb the first ladder, but to get to the door, you have to walk left to get to the second ladder, which will reverse the key back down to the ground.

The solution is in the cannon in the upper right, which spits out a Goomba-like critter. It has the timeproof green glow, so the critters it spits out have the same green glow — making them immune to both your time reversal power and to the effect your movement has on time. What you have to do is get one of the critters to pick up the key and carry it leftwards for you. Once you have the puzzle piece, you have to rewind time and do it again elsewhere. (Or, more likely, the other way around; this next section acts as a decent hint for how to do the earlier section.)

It’s hard to convey how bizarre this is in just text. If you haven’t played Braid, it’s absolutely worth it just for this one world, this one level.

And it gets even better, slash more ridiculous: there’s a super duper secret hidden very cleverly in this level. Reaching it involves bouncing twice off of critters; solving the puzzle hidden there involves bouncing the critters off of you. It’s ludicrous and perhaps a bit too tricky, but very clever. Best of all, it’s something that an enterprising player might just think to do on a whim — hey, this is possible here, I wonder what happens if I try it. And the game rewards the player for trying something creative! (Ironically, it’s most rewarding to have a clever idea when it turns out the designer already had the same idea.)

What can I take away from this? Hm.

Well, the underlying idea of linking time with position is pretty novel, but getting to it may not be all that hard: just combine different concepts and see what happens.

A similar principle is to apply a general concept to everything and see what happens. This is the first sighting of a timeproof wandering critter; previously timeproofing had only been seen on keys, doors, puzzle pieces, and stationary monsters. Later it even applies to Tim himself in special circumstances.

The use of timeproofing on puzzle pieces is especially interesting, because the puzzle pieces — despite being collectibles that animate moving into the UI when you get them — are also affected by time. If the pieces in this level weren’t timeproof, then as soon as you collected one and moved left to leave its alcove, time would move backwards and the puzzle piece would reverse out of the UI and right back into the world.

Along similar lines, the music and animated background are also subject to the flow of time. It’s obvious enough that the music plays backwards when you rewind time, but in world 4, the music only plays at all while you’re moving. It’s a fantastic effect that makes the whole world feel as weird and jerky as it really is under these rules. It drives the concept home instantly, and it makes your weird influence over time feel all the more significant and far-reaching. I love when games weave all the elements of the game into the gameplay like this, even (especially?) for the sake of a single oddball level.

Admittedly, this is all about gameplay or puzzle mechanics, not so much level design. What I like about the level itself is how simple and straightforward it is: it contains exactly as much as it needs to, yet still invites trying the wrong thing first, which immediately teaches the player why it won’t work. And it’s something that feels like it ought to work, except that the rules of the game get in the way just enough. This makes for my favorite kind of puzzle, the type where you feel like you’ve tried everything and it must be impossible — until you realize the creative combination of things you haven’t tried yet. I’m talking about puzzles again, oops; I guess the general level design equivalent of this is that players tend to try the first thing they see first, so if you put required parts later, players will be more likely to see optional parts.

I think that’s all I’ve got for this one puzzle room. I do want to say (again) that I love both endings of Braid. The normal ending weaves together the game mechanics and (admittedly loose) plot in a way that gave me chills when I first saw it; the secret ending completely changes both how the ending plays and how you might interpret the finale, all by making only the slightest changes to the level.

## Portal: Testchamber 18

screenshot mine — playthrough of normal map — playthrough of advanced map

I love Portal. I blazed through the game in a couple hours the night it came out. I’d seen the trailer and instantly grasped the concept, so the very slow and gentle learning curve was actually a bit frustrating for me; I just wanted to portal around a big playground, and I finally got to do that in the six “serious” tests towards the end, 13 through 18.

Valve threw an interesting curveball with these six maps. As well as being more complete puzzles by themselves, Valve added “challenges” requiring that they be done with as few portals, time, or steps as possible. I only bothered with the portal challenges — time and steps seemed less about puzzle-solving and more about twitchy reflexes — and within them I found buried an extra layer of puzzles. All of the minimum portal requirements were only possible if you found an alternative solution to the map: skipping part of it, making do with only one cube instead of two, etc. But Valve offered no hints, only a target number. It was a clever way to make me think harder about familiar areas.

Alongside the challenges were “advanced” maps, and these blew me away. They were six maps identical in layout to the last six test chambers, but with a simple added twist that completely changed how you had to approach them. Test 13 has two buttons with two boxes to place on them; the advanced version removes a box and also changes the floor to lava. Test 14 is a live fire course with turrets you have to knock over; the advanced version puts them all in impenetrable cages. Test 17 is based around making extensive use of a single cube; the advanced version changes it to a ball.

But the one that sticks out the most to me is test 18, a potpourri of everything you’ve learned so far. The beginning part has you cross several large pits of toxic sludge by portaling from the ceilings; the advanced version simply changes the ceilings to unportalable metal. It seems you’re completely stuck after only the first jump, unless you happen to catch a glimpse of the portalable floor you pass over in mid-flight. Or you might remember from the regular version of the map that the floor was portalable there, since you used it to progress further. Either way, you have to fire a portal in midair in a way you’ve never had to do before, and the result feels very cool, like you’ve defeated a puzzle that was intended to be unsolvable. All in a level that was fairly easy the first time around, and has been modified only slightly.

I’m not sure where I’m going with this. I could say it’s good to make the player feel clever, but that feels wishy-washy. What I really appreciated about the advanced tests is that they exploited inklings of ideas I’d started to have when playing through the regular game; they encouraged me to take the spark of inspiration this game mechanic gave me and run with it.

So I suppose the better underlying principle here — the most important principle in level design, in any creative work — is to latch onto what gets you fired up and run with it. I am absolutely certain that the level designers for this game loved the portal concept as much as I do, they explored it thoroughly, and they felt compelled to fit their wilder puzzle ideas in somehow.

More of that. Find the stuff that feels like it’s going to burst out of your head, and let it burst.

## Chip’s Challenge: Level 122, Totally Fair and Level 131, Totally Unfair

screenshots mine — full maps of both levels — playthrough of Totally Fair — playthrough of Totally Unfair

I mention this because Portal reminded me of it. The regular and advanced maps in Portal are reminiscent of parallel worlds or duality or whatever you want to call the theme. I extremely dig that theme, and it shows up in Chip’s Challenge in an unexpected way.

Totally Fair is a wide open level with a little maze walled off in one corner. The maze contains a monster called a “teeth”, which follows Chip at a slightly slower speed. (The second teeth, here shown facing upwards, starts outside the maze but followed me into it when I took this screenshot.)

The goal is to lure the teeth into standing on the brown button on the right side. If anything moves into a “trap” tile (the larger brown recesses at the bottom), it cannot move out of that tile until/unless something steps on the corresponding brown button. So there’s not much room for error in maneuvering the teeth; if it falls in the water up top, it’ll die, and if it touches the traps at the bottom, it’ll be stuck permanently.
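The trap rule is a nice example of a tiny mechanic with big puzzle consequences. A minimal sketch — with hypothetical tile and button names, not the game’s actual logic — of the “held until the linked button is pressed” behavior:

```python
def can_leave(tile, buttons_down, trap_links):
    """True if an entity standing on `tile` is free to move off it."""
    if tile not in trap_links:
        return True  # ordinary tiles never hold you
    # A trap holds its occupant unless its linked brown button is pressed.
    return trap_links[tile] in buttons_down

# Hypothetical layout: one trap in the southwest, wired to the right-side button.
trap_links = {"trap_sw": "button_right"}
```

With nothing on the button, `can_leave("trap_sw", set(), trap_links)` is false — which is exactly the “stuck permanently” state the level threatens you with.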

The reason you need the brown button pressed is to acquire the chips on the far right edge of the level.

The gray recesses turn into walls after being stepped on, so once you grab a chip, the only way out is through the force floors and ice that will send you onto the trap. If you haven’t maneuvered the teeth onto the button beforehand, you’ll be trapped there.

Doesn’t seem like a huge deal, since you can go see exactly how the maze is shaped and move the teeth into position fairly easily. But you see, here is the beginning of Totally Fair.

The gray recess leads up into the maze area, so you can only enter it once. A force floor in the upper right lets you exit it.

Totally Unfair is exactly identical, except the second teeth has been removed, and the entrance to the maze looks like this.

You can’t get into the maze area. You can’t even see the maze; it’s too far away from the wall. You have to position the teeth completely blind. In fact, if you take a single step to the left from here, you’ll have already dumped the teeth into the water and rendered the level impossible.

The hint tile will tell you to “Remember sjum”, where SJUM is the password to get back to Totally Fair. So you have to learn that level well enough to recreate the same effect without being able to see your progress.

It’s not impossible, and it’s not a “make a map” faux puzzle. A few scattered wall blocks near the chips, outside the maze area, are arranged exactly where the edges of the maze are. Once you notice that, all you have to do is walk up and down a few times, waiting a moment each time to make sure the teeth has caught up with you.

So in a sense, Totally Unfair is the advanced chamber version of Totally Fair. It makes a very minor change that forces you to approach the whole level completely differently, using knowledge gleaned from your first attempt.

And crucially, it’s an actual puzzle! A lot of later Chip’s Challenge levels rely heavily on map-drawing, timing, tedium, or outright luck. (Consider, if you will, Blobdance.) The Totally Fair + Totally Unfair pairing requires a little ingenuity unlike anything else in the game, and the solution is something more than just combinations of existing game mechanics. There’s something very interesting about that hint in the walls, a hint you’d have no reason to pick up on when playing through the first level. I wish I knew how to verbalize it better.

Anyway, enough puzzle games; let’s get back to regular ol’ level design.

## Link’s Awakening: Level 7, Eagle’s Tower

maps via vgmaps and TCRF — playthrough with commentary

Link’s Awakening was my first Zelda (and only Zelda for a long time), which made for a slightly confusing introduction to the series — what on earth is a Zelda and why doesn’t it appear in the game?

The whole game is a blur of curiosities and interesting little special cases. It’s fabulously well put together, especially for a Game Boy game, and the dungeons in particular are fascinating microcosms of design. I never really appreciated it before, but looking at the full maps, I’m struck by how each dungeon has several large areas neatly sliced into individual screens.

Much like with Doom II, I surprise myself by picking Eagle’s Tower as the most notable part of the game. The dungeon isn’t that interesting within the overall context of the game; it gives you only the mirror shield, possibly the least interesting item in the game, second only to the power bracelet upgrade from the previous dungeon. The dungeon itself is fairly long, full of traps, and overflowing with crystal switches and toggle blocks, making it possibly the most frustrating of the set. Getting to it involves spending some excellent quality time with a flying rooster, but you don’t really do anything — mostly you just make your way through nondescript caves and mountaintops.

Having now thoroughly dunked on it, I’ll tell you what makes it stand out: the player changes the shape of the dungeon.

That’s something I like a lot about Doom, as well, but it’s much more dramatic in Eagle’s Tower. As you might expect, the dungeon is shaped like a tower, where each floor is on a 4×4 grid. The top floor, 4F, is a small 2×2 block of rooms in the middle — but one of those rooms is the boss door, and there’s no way to get to that floor.

(Well, sort of. The “down” stairs in the upper-right of 3F actually lead up to 4F, but the connection is bogus and puts you in a wall, and both of the upper middle rooms are unreachable during normal gameplay.)

The primary objective of the dungeon is to smash four support columns on 2F by throwing a huge iron ball at them, which causes 4F to crash down into the middle of 3F.

Even the map on the pause screen updates to reflect this. In every meaningful sense, you, the player, have fundamentally reconfigured the shape of this dungeon.

I love this. It feels like I have some impact on the world, that I came along and did something much more significant than mere game mechanics ought to allow. I saw that the tower was unsolvable as designed, so I fixed it.

It’s clear that the game engine supports rearranging screens arbitrarily — consider the Wind Fish’s Egg — but this is a wonderfully clever and subtle use of that. Let the player feel like they have an impact on the world.
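To make the idea concrete, the collapse can be modeled as a tiny operation on floor data. Everything here — the room layout, names, and functions — is invented for illustration; Link’s Awakening’s real map data works nothing like this. The point is just that when the map is a view over the same data the game plays on, the pause screen reflects the new shape for free:

```python
# Toy model of Eagle's Tower's collapsing top floor.
# Each floor is a dict mapping (col, row) -> room name.

def make_tower():
    floors = {
        "2F": {(c, r): "room" for c in range(4) for r in range(4)},
        "3F": {(c, r): "room" for c in range(4) for r in range(4)},
        # 4F is a small 2x2 block floating above the middle of 3F,
        # containing the boss door you can't initially reach.
        "4F": {(1, 1): "room", (2, 1): "boss door",
               (1, 2): "room", (2, 2): "room"},
    }
    return floors

def smash_columns(floors):
    # Smashing the four support columns drops 4F into the middle of 3F:
    # 4F's rooms replace the middle of 3F, and 4F ceases to exist.
    floors["3F"].update(floors.pop("4F"))
    return floors

tower = smash_columns(make_tower())
assert "4F" not in tower
assert tower["3F"][(2, 1)] == "boss door"
```

Because the pause-screen map would simply render these floor dicts, there is no separate “update the map” step — the player really has reshaped the dungeon.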

## The cutting room floor

This is getting excessively long so I’m gonna cut it here. Some other things I thought of but don’t know how to say more than a paragraph about:

• Super Mario Land 2: Six Golden Coins has a lot of levels with completely unique themes, backed by very simple tilesets but enhanced by interesting one-off obstacles and enemies. I don’t even know how to pick a most interesting one. Maybe just play the game, or at least peruse the maps.

• This post about density of detail in Team Fortress 2 is really good so just read that I guess. It’s really about careful balance of contrast again, but through the lens of using contrasting amounts of detail to draw the player’s attention, while still carrying a simple theme through less detailed areas.

• Metroid Prime is pretty interesting in a lot of ways, but I mostly laugh at how they spaced rooms out with long twisty hallways to improve load times — yet I never really thought about it because they all feel like they belong in the game.

One thing I really appreciate is level design that hints at a story, that shows me a world that exists persistently, that convinces me this space exists for some reason other than as a gauntlet for me as a player. But it seems what comes first to my mind is level design that’s clever or quirky, which probably says a lot about me. Maybe the original Fallouts are a good place to look for that sort of detail.

Conversely, it sticks out like a sore thumb when a game tries to railroad me into experiencing the game As The Designer Intended. Games are interactive, so the more input the player can give, the better — and this can be as simple as deciding to avoid rather than confront enemies, or deciding to run rather than walk.

I think that’s all I’ve got in me at the moment. Clearly I need to meditate on this a lot more, but I hope some of this was inspiring in some way!

# CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

## This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his brilliant project tutorial website codemakerbuddy.com, which he built with Python and Flask.

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Helen’s and Oly’s favourite project involved…live bees!

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

## Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

#### Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, shown at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

## Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

## Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Take a look at the gallery of all winners over on Flickr.

## The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

## Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

# Weekly roundup: Successful juggling

Post Syndicated from Eevee original https://eev.ee/dev/2017/06/19/weekly-roundup-successful-juggling/

Despite flipping my sleep, as I seem to end up doing every month now, I’ve had a pretty solid week. We finally got our hands on a Switch, so I just played Zelda to stay up a ridiculously long time and restore my schedule pretty quickly.

• potluck: I started building the potluck game in LÖVE, and it’s certainly come along much faster — I have map transitions, dialogue, and a couple moving platforms working. I still don’t quite know what this game is, but I’m starting to get some ideas.

I also launched GAMES MADE QUICK??? 1½, a game jam for making a game while watching GDQ, instead of just plain watching GDQ. I intend to spend the week working on the potluck game, though I’m not sure whether I’ll finish it then.

• fox flux: I started planning out a more interesting overworld and doodled a couple relevant tiles. Terrain is still hard. Also some more player frames.

• art: I finally finished a glorious new banner, which now hangs proudly above my Twitter and Patreon. I did a bedtime slate doodle. I made and animated a low-poly Yoshi. I sketched Styx based on a photo.

I keep wishing I had time to dedicate to painting experiments, but I guess this is pretty good output.

• veekun: Wow! I touched veekun on three separate occasions. I have basic item data actually physically dumping now, I fixed some stuff with Pokémon, and I got evolutions working. Progress! Getting there! So close!

• blog: Per request, I wrote about digital painting software, though it was hampered slightly by the fact that most of it doesn’t run on my operating system.

I seem to be maintaining tangible momentum on multiple big projects, which is fantastic. And there’s still 40% of the month left! I’m feeling pretty good about where I’m standing; if I can get potluck and veekun done soon, that’ll be a medium and a VERY LARGE weight off my shoulders.

# A rather dandy Pi-assisted Draisine

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/dandy-draisine/

It’s time to swap pedal power for relaxed strides with the Raspberry Pi-assisted Draisine from bicycle-modding pro Prof. Holger Hermanns.

So dandy…

## A Draisine…

If you have children yourself or have seen them in the wild on occasion, you may be aware of how much they like balance bikes – bicycle frames without pedals, propelled by striding while sitting on the seat. It’s a nice way for children to take the first steps (bah-dum tss) towards learning to ride a bicycle. However, between 1817, when the balance bike (also known as a draisine or Dandy Horse) was invented by Karl von Drais, and the introduction of the pedal bike around 1860, this vehicle was the new, fun, and exciting way to travel for everyone.

We can’t wait for the inevitable IKEA flatpack release

Having previously worked on wireless braking systems for bicycles, Prof. Hermanns is experienced in adding tech to two wheels. Now, he and his team of computer scientists at Germany’s Saarland University have updated the balance bike for the 21st century: they built the Draisine 200.0 to explore pedal-free, power-assisted movement as part of the European Research Council-funded POWVER project.

With this draisine, his team have created a beautiful, fully functional final build that would look rather fetching here on the bicycle-flooded streets of Cambridge.

The frame of the bike, except for the wheel bearings and the various screws, is made of Okoumé wood, which has a somewhat rosy colour and a fine grain (which means that it is easy to mill), and seems to have excellent weather resistance.

#### Draisine 200.0

Within the wooden body of the draisine lies an array of electrical components, including a 200-watt rear hub motor, a battery, an accelerometer, a magnetic sensor, and a Raspberry Pi. Checking the accelerometer and reading wheel-embedded sensors 150 times per second (wow!), the Pi activates the hub motor to assist the draisine, which allows it to reach speeds of up to 16mph (25km/h – wow again!).
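A control loop like that is conceptually simple: poll the sensors at a fixed rate and only power the motor below the speed cap. Here is a rough sketch — the polling rate and speed limit come from the post, but the taper curve, function names, and sensor/motor interfaces are all assumptions, since the actual Draisine 200.0 firmware isn’t public:

```python
# Sketch of a pedal-assist-style control loop: poll at 150 Hz,
# assist only while the rider is striding and under the 25 km/h cap.
# Hypothetical logic, not the project's real code.

SPEED_CAP_KMH = 25.0   # assist limit mentioned in the post
LOOP_HZ = 150          # sensor polling rate from the post

def assist_level(speed_kmh, rider_pushing):
    """Return motor power in [0, 1] for one control tick."""
    if not rider_pushing or speed_kmh >= SPEED_CAP_KMH:
        return 0.0
    # Taper assistance off linearly as the cap approaches, so the
    # motor doesn't cut out abruptly at exactly 25 km/h.
    return 1.0 - speed_kmh / SPEED_CAP_KMH

# One simulated second of riding: the rider strides, speed climbs.
speed = 0.0
for _ in range(LOOP_HZ):
    power = assist_level(speed, rider_pushing=True)
    speed += power * 0.05  # crude stand-in for motor thrust per tick

assert 0.0 < speed < SPEED_CAP_KMH
```

On real hardware, the loop body would read the accelerometer and wheel sensors and set the motor controller’s duty cycle instead of updating a simulated speed.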

The inner workings of the Draisine 200.0

More detailed information on the Draisine 200.0 build can be found here. Hermanns’s team also plan to release the code for the project once confirmation of no licence infringement has been given.