Tag Archives: weapons

Nazis are bad

Post Syndicated from Eevee original https://eev.ee/blog/2017/08/13/nazis-are-bad/

Anonymous asks:

Could you talk about something related to the management/moderation and growth of online communities? IOW your thoughts on online community management, if any.

I think you’ve tweeted about this stuff in the past so I suspect you have thoughts on this, but if not, again, feel free to just blog about … anything 🙂

Oh, I think I have some stuff to say about community management, in light of recent events. None of it is anything that hasn’t already been said elsewhere, but I have to get this out.

Hopefully the content warning is implicit in the title.


I am frustrated.

I’ve gone on before about a particularly bothersome phenomenon that hurts a lot of small online communities: often, people are willing to tolerate the misery of others in a community, but then get up in arms when someone pushes back. Someone makes a lot of off-hand, off-color comments about women? Uses a lot of dog-whistle terms? Eh, they’re not bothering anyone, or at least not bothering me. Someone else gets tired of it and tells them to knock it off? Whoa there! Now we have the appearance of conflict, which is unacceptable, and people will turn on the person who’s pissed off — even though they’ve been at the butt end of an invisible conflict for who knows how long. The appearance of peace is paramount, even if it means a large chunk of the population is quietly miserable.

Okay, so now, imagine that on a vastly larger scale, and also those annoying people who know how to skirt the rules are Nazis.


The label “Nazi” gets thrown around a lot lately, probably far too easily. But when I see a group of people doing the Hitler salute, waving large Nazi flags, wearing Nazi armbands styled after the SS, well… if the shoe fits, right? I suppose they might have flown across the country to join a torch-bearing mob ironically, but if so, the joke is going way over my head. (Was the murder ironic, too?) Maybe they’re not Nazis in the sense that the original party doesn’t exist any more, but for ease of writing, let’s refer to “someone who espouses Nazi ideology and deliberately bears a number of Nazi symbols” as, well, “a Nazi”.

This isn’t a new thing, either; I’ve stumbled upon any number of Twitter accounts that are decorated in Nazi regalia. I suppose the trouble arises when perfectly innocent members of the alt-right get unfairly labelled as Nazis.

But hang on; this march was called “Unite the Right” and was intended to bring together various far right sub-groups. So what does their choice of aesthetic say about those sub-groups? I haven’t heard, say, alt-right coiner Richard Spencer denounce the use of Nazi symbology — extra notable since he was fucking there and apparently didn’t care to discourage it.


And so begins the rule-skirting. “Nazi” is definitely overused, but even using it to describe white supremacists who make not-so-subtle nods to Hitler is likely to earn you some sarcastic derailment. A Nazi? Oh, so now everyone you don’t like — everyone who merely wants to establish a white ethnostate — is a Nazi?

Calling someone a Nazi — or even a white supremacist — is an attack, you see. Merely expressing the desire that people of color not exist is perfectly peaceful, but identifying the sentiment for what it is causes visible discord, which is unacceptable.

These clowns even know this sort of thing and strategize around it. Or try to, at least. Maybe it wasn’t that successful this weekend — though flicking through Charlottesville headlines now, they seem to be relatively tame in how they refer to the ralliers.

I’m reminded of a group of furries — the alt-furries — who have been espousing white supremacy and wearing red armbands with a white circle containing a black… pawprint. Ah, yes, that’s completely different.


So, what to do about this?

“Ignore them” is a popular option, often espoused to bullied children by parents who have never been bullied, shortly before they resume complaining about passive-aggressive office politics. The trouble with ignoring them is that, just like in smaller communities, they have a tendency to fester. They take over large chunks of influential Internet surface area like 4chan and Reddit; they help get an inept buffoon elected; and then they start to have torch-bearing rallies and run people over with cars.

4chan illustrates a kind of corollary here. Anyone who’s steeped in Internet Culture™ is surely familiar with 4chan; I was never a regular visitor, but it had enough influence that I was still aware of it and some of its culture. It was always thick with irony, which grew into a sort of ironic detachment — perhaps one of the major sources of the recurring online trope that having feelings is bad — which proceeded into ironic racism.

And now the ironic racism is indistinguishable from actual racism, as tends to be the case. Do they “actually” “mean it”, or are they just trying to get a rise out of people? What the hell is unironic racism if not trying to get a rise out of people? What difference is there to onlookers, especially as they move to become increasingly involved with politics?

“It’s just a joke” and “it was just a thoughtless comment” are exceptionally common defenses made by people desperate to preserve the illusion of harmony, but the strain of overt white supremacy currently running rampant through the US was built on those excuses.


The other favored option is to debate them, to defeat their ideas with better ideas.

Well, hang on. What are their ideas, again? I hear they were chanting stuff like “go back to Africa” and “fuck you, faggots”. Given that this was an overtly political rally (and again, the Nazi fucking regalia), I don’t think it’s a far cry to describe their ideas as “let’s get rid of black people and queer folks”.

This is an underlying proposition: that white supremacy is inherently violent. After all, if the alt-right seized total political power, what would they do with it? If I asked the same question of Democrats or Republicans, I’d imagine answers like “universal health care” or “screw over poor people”. But people whose primary goal is to have a country full of only white folks? What are they going to do, politely ask everyone else to leave? They’re invoking the memory of people who committed genocide and also tried to take over the fucking world. They are outright saying, these are the people we look up to, this is who we think had a great idea.

How, precisely, does one defeat these ideas with rational debate?

Because the underlying core philosophy beneath all this is: “it would be good for me if everything were about me”. And that’s true! (Well, it probably wouldn’t work out how they imagine in practice, but it’s true enough.) Consider that slavery is probably fantastic if you’re the one with the slaves; the issue is that it’s reprehensible, not that the very notion contains some kind of 101-level logical fallacy. That’s probably why we had a fucking war over it instead of hashing it out over brunch.

…except we did hash it out over brunch once, and the result was that slavery was still allowed but slaves only counted as 60% of a person for the sake of counting how much political power states got. So that’s how rational debate worked out. I’m sure the slaves were thrilled with that progress.


That really only leaves pushing back, which raises the question of how to push back.

And, I don’t know. Pushing back is much harder in spaces you don’t control, spaces you’re already struggling to justify your own presence in. For most people, that’s most spaces. It’s made all the harder by that tendency to preserve illusory peace; even the tamest request that someone knock off some odious behavior can be met by pushback, even by third parties.

At the same time, I’m aware that white supremacists prey on disillusioned young white dudes who feel like they don’t fit in, who were promised the world and inherited kind of a mess. Does criticism drive them further away? The alt-right also opposes “political correctness”, i.e. “not being a fucking asshole”.

God knows we all suck at this kind of behavior correction, even within our own in-groups. Fandoms have become almost ridiculously vicious as platforms like Twitter and Tumblr amplify individual anger to deafening levels. It probably doesn’t help that we’re all just exhausted, that every new fuck-up feels like it bears the same weight as the last hundred combined.

This is the part where I admit I don’t know anything about people and don’t have any easy answers. Surprise!


The other alternative is, well, punching Nazis.

That meme kind of haunts me. It raises really fucking complicated questions about when violence is acceptable, in a culture that’s completely incapable of answering them.

America’s relationship to violence is so bizarre and two-faced as to be almost incomprehensible. We worship it. We have the biggest military in the world by an almost comical margin. It’s fairly mainstream to own deadly weapons for the express stated purpose of armed revolution against the government, should that become necessary, where “necessary” is left ominously undefined. Our movies are about explosions and beating up bad guys; our video games are about explosions and shooting bad guys. We fantasize about solving foreign policy problems by nuking someone — hell, our talking heads are currently in polite discussion about whether we should nuke North Korea and annihilate up to twenty-five million people, as punishment for daring to have the bomb that only we’re allowed to have.

But… violence is bad.

That’s about as far as the other side of the coin gets. It’s bad. We condemn it in the strongest possible terms. Also, guess who we bombed today?

I observe that the one time Nazis were a serious threat, America was happy to let them try to take over the world until their allies finally showed up on our back porch.

Maybe I don’t understand what “violence” means. In a quest to find out why people are talking about “leftist violence” lately, I found a National Review article from May that twice suggests blocking traffic is a form of violence. Anarchists have smashed some windows and set a couple fires at protests this year — and, hey, please knock that crap off? — which is called violence against, I guess, Starbucks. Black Lives Matter could be throwing a birthday party and Twitter would still be abuzz with people calling them thugs.

Meanwhile, there’s a trend of murderers with increasingly overt links to the alt-right, and everyone is still handling them with kid gloves. First it was murders by people repeating their talking points; now it’s the culmination of a torches-and-pitchforks mob. (Ah, sorry, not pitchforks; assault rifles.) And we still get this incredibly bizarre both-sides-ism, a White House that refers to the people who didn’t murder anyone as “just as violent if not more so”.


Should you punch Nazis? I don’t know. All I know is that I’m extremely dissatisfied with discourse that’s extremely alarmed by hypothetical punches — far more mundane than what you’d see after a sporting event — but treats a push for ethnic cleansing as a mere difference of opinion.

The equivalent to a punch in an online space is probably banning, which is almost laughable in comparison. It doesn’t cause physical harm, but it is a use of concrete force. Doesn’t pose quite the same moral quandary, though.

Somewhere in the middle is the currently popular pastime of doxxing (doxxxxxxing) people spotted at the rally in an attempt to get them fired or whatever. Frankly, that skeeves me out, though apparently not enough that I’m directly chastising anyone for it.


We aren’t really equipped, as a society, to deal with memetic threats. We aren’t even equipped to determine what they are. We had a fucking world war over this, and now people are outright saying “hey I’m like those people we went and killed a lot in that world war” and we give them interviews and compliment their fashion sense.

A looming question is always, what if they then do it to you? What if people try to get you fired, to punch you for your beliefs?

I think about that a lot, and then I remember that it’s perfectly legal to fire someone for being gay in half the country. (Courts are currently wrangling whether Title VII forbids this, but with the current administration, I’m not optimistic.) I know people who’ve been fired for coming out as trans. I doubt I’d have to look very far to find someone who’s been punched for either reason.

And these aren’t even beliefs; they’re just properties of a person. You can stop being a white supremacist, one of those people yelling “fuck you, faggots”.

So I have to recuse myself from this asinine question, because I can’t fairly judge the risk of retaliation when it already happens to people I care about.

Meanwhile, if a white supremacist does get punched, I absolutely still want my tax dollars to pay for their universal healthcare.


The same wrinkle comes up with free speech, which is paramount.

The ACLU reminds us that the First Amendment “protects vile, hateful, and ignorant speech”. I think they’ve forgotten that that’s a side effect, not the goal. No one sat down and suggested that protecting vile speech was some kind of noble cause, yet that’s how we seem to be treating it.

The point was to avoid a situation where the government is arbitrarily deciding what qualifies as vile, hateful, and ignorant, and was using that power to eliminate ideas distasteful to politicians. You know, like, hypothetically, if they interrogated and jailed a bunch of people for supporting the wrong economic system. Or convicted someone under the Espionage Act for opposing the draft. (Hey, that’s where the “shouting fire in a crowded theater” line comes from.)

But these are ideas that are already in the government. Bannon, a man who was chair of a news organization he himself called “the platform for the alt-right”, has the President’s ear! How much more mainstream can you get?

So again I’m having a little trouble balancing “we need to defend the free speech of white supremacists or risk losing it for everyone” against “we fairly recently were ferreting out communists and the lingering public perception is that communists are scary, not that the government is”.


This isn’t to say that freedom of speech is bad, only that the way we talk about it has become fanatical to the point of absurdity. We love it so much that we turn around and try to apply it to corporations, to platforms, to communities, to interpersonal relationships.

Look at 4chan. It’s completely public and anonymous; you only get banned for putting the functioning of the site itself in jeopardy. Nothing is stopping a larger group of people from joining its politics board and tilting sentiment the other way — except that the current population is so odious that no one wants to be around them. Everyone else has evaporated away, as tends to happen.

Free speech is great for a government, to prevent quashing politics that threaten the status quo (except it’s a joke and they’ll do it anyway). People can’t very readily just bail when the government doesn’t like them, anyway. It’s also nice to keep in mind to some degree for ubiquitous platforms. But the smaller you go, the easier it is for people to evaporate away, and the faster pure free speech will turn the place to crap. You’ll be left only with people who care about nothing.


At the very least, it seems clear that the goal of white supremacists is some form of destabilization, of disruption to the fabric of a community for purely selfish purposes. And those are the kinds of people you want to get rid of as quickly as possible.

Usually this is hard, because they act just nicely enough to create some plausible deniability. But damn, if someone is outright telling you they love Hitler, maybe skip the principled hand-wringing and eject them.

Some memorable levels

Post Syndicated from Eevee original https://eev.ee/blog/2017/07/01/some-memorable-levels/

Another Patreon request from Nova Dasterin:

Maybe something about level design. In relation to a vertical shmup since I’m working on one of those.

I’ve been thinking about level design a lot lately, seeing as how I’ve started… designing levels. Shmups are probably the genre I’m the worst at, but perhaps some general principles will apply universally.

And speaking of general principles, that’s something I’ve been thinking about too.

I’ve been struggling to create a more expansive tileset for a platformer, due to two general problems: figuring out what I want to show, and figuring out how to show it with a limited size and palette. I’ve been browsing through a lot of pixel art from games I remember fondly in the hopes of finding some inspiration, but so far all I’ve done is very nearly copy a dirt tile someone submitted to my potluck project.

Recently I realized that I might have been going about looking for inspiration all wrong. I’ve been sifting through stuff in the hopes of finding something that would create some flash of enlightenment, but so far that aimless tourism has only found me a thing or two to copy.

I don’t want to copy a small chunk of the final product; I want to understand the underlying ideas that led the artist to create what they did in the first place. Or, no, that’s not quite right either. I don’t want someone else’s ideas; I want to identify what I like, figure out why I like it, and turn that into some kind of general design idea. Find the underlying themes that appeal to me and figure out some principles that I could apply. You know, examine stuff critically.

I haven’t had time to take a deeper look at pixel art this way, so I’ll try it right now with level design. Here, then, are some levels from various games that stand out to me for whatever reason; the feelings they evoke when I think about them; and my best effort at unearthing some design principles from those feelings.

Doom II: MAP10, Refueling Base

Opening view of Refueling Base, showing a descent down some stairs into a room not yet visible

screenshots mine — map via doom wiki — see also textured perspective map (warning: large!) via ian albert — pistol start playthrough

I’m surprising myself by picking Refueling Base. I would’ve expected myself to pick MAP08, Tricks and Traps, for its collection of uniquely bizarre puzzles and mechanisms. Or MAP13, Downtown, the map that had me convinced (erroneously) that Doom levels supported multi-story structures. Or at least MAP09, The Pit, which stands out for the unique way it feels like a plunge into enemy territory.

(Curiously, those other three maps are all Sandy Petersen’s sole work. Refueling Base was started by Tom Hall in the original Doom days, then finished by Sandy for Doom II.)

But Refueling Base is the level I have the most visceral reaction to: it terrifies me.

See, I got into Doom II through my dad, who played it on and off sometimes. My dad wasn’t an expert gamer or anything, but as a ten-year-old, I assumed he was. I watched him play Refueling Base one night. He died. Again, and again, over and over. I don’t even have very strong memories of his particular attempts, but watching my parent be swiftly and repeatedly defeated — at a time when I still somewhat revered parents — left enough of an impression that hearing the level music still makes my skin crawl.

This may seem strange to bring up as a first example in a post about level design, but I don’t think it would have impressed on me quite so much if the level weren’t designed the way it is. (It’s just a video game, of course, and since then I’ve successfully beaten it from a pistol start myself. But wow, little kid fears sure do linger.)

Map of Refueling Base, showing multiple large rooms and numerous connections between them

The one thing that most defines the map has to be its interconnected layout. Almost every major area (of which there are at least half a dozen) has at least three exits. Not only are you rarely faced with a dead end, but you’ll almost always have a choice of where to go next, and that choice will lead into more choices.

This hugely informs the early combat. Many areas near the beginning are simply adjacent with no doors between them, so it’s easy for monsters to start swarming in from all directions. It’s very easy to feel overwhelmed by an endless horde; no matter where you run, they just seem to keep coming. (In fact, Refueling Base has the most monsters of any map in the game by far: 279. The runner up is the preceding map at 238.) Compounding this effect is the relatively scant ammo and health in the early parts of the map; getting very far from a pistol start is an uphill battle.

The connections between rooms also yield numerous possible routes through the map, as well as several possible ways to approach any given room. Some of the connections are secrets, which usually connect the “backs” of two rooms. Clearing out one room thus rewards you with a sneaky way into another room that puts you behind all the monsters.
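As a rough illustration of why that interconnection matters, here’s a tiny Python sketch — the room graph is made up for the example, not Refueling Base’s actual layout — that counts distinct non-revisiting routes between two rooms. Even a handful of rooms with three-ish exits each yields a pile of possible routes:

```python
# Hypothetical room graph: each room lists its exits. Invented for
# illustration; this is not the actual layout of Refueling Base.
ROOMS = {
    'entrance':  ['yard', 'hall', 'slime pit'],
    'yard':      ['entrance', 'hall', 'barracks'],
    'hall':      ['entrance', 'yard', 'slime pit', 'exit'],
    'slime pit': ['entrance', 'hall', 'barracks'],
    'barracks':  ['yard', 'slime pit', 'exit'],
    'exit':      ['hall', 'barracks'],
}

def routes(graph, here, goal, seen=()):
    """Yield every path from here to goal that never revisits a room."""
    if here == goal:
        yield seen + (goal,)
        return
    for nxt in graph[here]:
        if nxt not in seen:
            yield from routes(graph, nxt, goal, seen + (here,))

print(sum(1 for _ in routes(ROOMS, 'entrance', 'exit')))
```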

Outdoor area shown from the back; a large number of monsters are lying in wait

In fact, the map rewards you for exploring it in general.

Well, okay. It might be more accurate to say that that map punishes you for not exploring it. From a pistol start, the map is surprisingly difficult — the early areas offer rather little health and ammo, and your best chance of success is a very specific route that collects weapons as quickly as possible. Many of the most precious items are squirrelled away in (numerous!) secrets, and you’ll have an especially tough time if you don’t find any of them — though they tend to be telegraphed.

One particularly nasty surprise is in the area shown above, which has three small exits at the back. Entering or leaving via any of those exits will open one of the capsule-shaped pillars, revealing even more monsters. A couple of those are pain elementals, monsters which attack by spawning another monster and shooting it at you — not something you want to be facing with the starting pistol.

But nothing about the level indicates this, so you have to make the association the hard way, probably after making several mad dashes looking for cover. My successful attempt avoided this whole area entirely until I’d found some more impressive firepower. It’s fascinating to me, because it’s a fairly unique effect that doesn’t make any kind of realistic sense, yet it’s still built out of familiar level mechanics: walk through an area and something opens up. Almost like 2D sidescroller design logic applied to a 3D space. I really like it, and wish I saw more of it. So maybe that’s a more interesting design idea: don’t be afraid to do something weird only once, as long as it’s built out of familiar pieces so the player has a chance to make sense of it.
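Since “walk through an area and something opens up” is the whole trick, here’s a minimal sketch of a walkover trigger. The names are invented for illustration; Doom’s real linedef/sector machinery is more elaborate, but the core idea really is this small:

```python
# A minimal walkover trigger: crossing an invisible line fires an action.
class WalkoverTrigger:
    def __init__(self, line_x, action, once=True):
        self.line_x = line_x    # a vertical line at this x coordinate
        self.action = action
        self.once = once
        self.fired = False

    def player_moved(self, old_x, new_x):
        crossed = min(old_x, new_x) <= self.line_x <= max(old_x, new_x)
        if crossed and not (self.once and self.fired):
            self.fired = True
            self.action()

def open_monster_closet():
    print("capsule pillar opens; pain elementals wake up")

trap = WalkoverTrigger(line_x=100, action=open_monster_closet)
trap.player_moved(old_x=90, new_x=112)   # stepping through one of the exits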

A similarly oddball effect is hidden in a “barracks” area, visible on the far right of the map. A secret door leads to a short U-shaped hallway to a marble skull door, which is themed nothing like the rest of the room. Opening it seems to lead back into the room you were just in, but walking through the doorway teleports you to a back entrance to the boss fight at the end of the level.

It sounds so bizarre, but the telegraphing makes it seem very natural; if anything, the “oh, I get it!” moment overrides the weirdness. It stops being something random and becomes something consciously designed. I believe that this might have been built by someone, even if there’s no sensible reason to have built it.

In fact, that single weird teleporter is exactly the kind of thing I’d like to be better at building. It could’ve been just a plain teleporter pad, but instead it’s a strange thing that adds a lot of texture to the level and makes it much more memorable. I don’t know how to even begin to have ideas like that. Maybe it’s as simple as looking at mundane parts of a level and wondering: what could I do with this instead?

I think a big problem I have is limiting myself to the expected and sensible, to the point that I don’t even consider more outlandish ideas. I can’t shake that habit simply by bolding some text in a blog post, but maybe it would help to keep this in mind: you can probably get away with anything, as long as you justify it somehow. Even “justify” here is too strong a word; it takes only the slightest nod to make an arbitrary behavior feel like part of a world. Why does picking up a tiny glowing knight helmet give you 1% armor in Doom? Does anyone care? Have you even thought about it before? It’s green and looks like armor; the bigger armor pickup is also green; yep, checks out.

A dark and dingy concrete room full of monsters; a couple are standing under light fixtures

On the other hand, the map as a whole ends up feeling very disorienting. There’s no shortage of landmarks, but every space is distinct in both texture and shape, so everything feels like a landmark. No one part of the map feels particularly central; there are a few candidates, but they neighbor other equally grand areas with just as many exits. It’s hard to get truly lost, but it’s also hard to feel like you have a solid grasp of where everything is. The space itself doesn’t make much sense, even though small chunks of it do. Of course, given that the Hellish parts of Doom were all just very weird overall, this is pretty fitting.

This sort of design fascinates me, because the way it feels to play is so different from the way it looks as a mapper with God Vision. Looking at the overhead map, I can identify all the familiar places easily enough, but I don’t know how to feel the way the map feels to play; it just looks like some rooms with doors between them. Yet I can see screenshots and have a sense of how “deep” in the level they are, how difficult they are to reach, whether I want to visit or avoid them. The lesson here might be that most of the interesting flavor of the map isn’t actually contained within the overhead view; it’s in the use of height and texture and interaction.

Dark room with numerous alcoves in the walls, all of them containing a hitscan monster

I realize as I describe all of this that I’m really just describing different kinds of contrast. If I know one thing about creative work (and I do, I only know one thing), it’s that effectively managing contrast is super duper important.

And it appears here in spades! A brightly-lit, outdoor, wide-open round room is only a short jog away from a dark, cramped room full of right angles and alcoves. A wide straight hallway near the beginning is directly across from a short, curvy, organic hallway. Most of the monsters in the map are small fry, but a couple stronger critters are sprinkled here and there, and then the exit is guarded by the toughest monster in the game. Some of the connections between rooms are simple doors; others are bizarre secret corridors or unnatural twisty passages.

You could even argue that the map has too much contrast, that it starts to lose cohesion. But if anything, I think this is one of the more cohesive maps in the first third of the game; many of the earlier maps aren’t so much places as they are concepts. This one feels distinctly like it could be something. The theming is all over the place, but enough of the parts seem deliberate.

I hadn’t even thought about it until I sat down to write this post, but since this is a “refueling base”, I suppose those outdoor capsules (which contain green slime, inset into the floor) could be the fuel tanks! I already referred to that dark techy area as “barracks”. Elsewhere is a rather large barren room, which might be where the vehicles in need of refueling are parked? Or is this just my imagination, and none of it was intended this way?

It doesn’t really matter either way, because even in this abstract world of ambiguity and vague hints, all of those rooms still feel like a place. I don’t have to know what the place is for it to look internally consistent.

I’m hesitant to say every game should have the loose design sense of Doom II, but it might be worth keeping in mind that anything can be a believable world as long as it looks consciously designed. And I’d say this applies even for natural spaces — we frequently treat real-world nature as though it were “designed”, just with a different aesthetic sense.

Okay, okay. I’m sure I could clumsily ramble about Doom forever, but I do that enough as it is. Other people have plenty to say if you’re interested.

I do want to stick in one final comment about MAP13, Downtown, while I’m talking about theming. I’ve seen a few people rag on it for being “just a box” with a lot of ideas sprinkled around — the map is basically a grid of skyscrapers, where each building has a different little mini encounter inside. And I think that’s really cool, because those encounters are arranged in a way that very strongly reinforces the theme of the level, of what this place is supposed to be. It doesn’t play quite like anything else in the game, simply because it was designed around a shape for flavor reasons. Weird physical constraints can do interesting things to level design.

Braid: World 4-7, Fickle Companion

Simple-looking platformer level with a few ladders, a switch, and a locked door

screenshots via StrategyWiki — playthrough — playthrough of secret area

I love Braid. If you’re not familiar (!), it’s a platformer where you have the ability to rewind time — whenever you want, for as long as you want, all the way back to when you entered the level.

The game starts in world 2, where you do fairly standard platforming and use the rewind ability to do some finicky jumps with minimal frustration. It gets more interesting in world 3 with the addition of glowing green objects, which aren’t affected by the reversal of time.

And then there’s world 4, “Time and Place”. I love world 4, so much. It’s unlike anything I’ve ever seen in any other game, and it’s so simple yet so clever.

The premise is this: for everything except you, time moves forwards as you move right, and backwards as you move left.

This has some weird implications, which all come together in the final level of the world, Fickle Companion. It’s so named because you have to use one (single-use) key to open three doors, but that key is very easy to lose.

Say you pick up the key and walk to the right with it. Time continues forwards for the key, so it stays with you as expected. Now you climb a ladder. Time is frozen since you aren’t moving horizontally, but the key stays with you anyway. Now you walk to the left. Oops — the key follows its own path backwards in time, going down the ladder and back along the path you carried it in the first place. You can’t fix this by walking to the right again, because that will simply advance time normally for the key; since you’re no longer holding it, it will simply fall to the ground and stay there.
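Here’s one way you might model that rule — a conceptual Python sketch with invented details, not Braid’s actual implementation. The world clock is simply a function of the player’s horizontal movement, and every non-green object replays its own recorded history when the clock runs backwards:

```python
# Conceptual sketch only: invented structure, not Braid's real code.
class Critter:
    def __init__(self, green=False):
        self.green = green      # green = timeproof
        self.state = 0          # stand-in for position, animation, etc.

    def step(self):
        self.state += 1

class World4:
    def __init__(self, objects):
        self.t = 0
        self.objects = objects
        self.history = {obj: [obj.state] for obj in objects}

    def player_moved(self, dx):
        for _ in range(abs(dx)):                    # one tick per unit moved
            self.t = max(0, self.t + (1 if dx > 0 else -1))
            for obj in self.objects:
                if obj.green:
                    obj.step()                      # immune to the rule
                    continue
                log = self.history[obj]
                if self.t < len(log):
                    obj.state = log[self.t]         # walking left: replay the past
                else:
                    obj.step()                      # walking right: new time
                    log.append(obj.state)

key, green_critter = Critter(), Critter(green=True)
world = World4([key, green_critter])
world.player_moved(+3)   # carry the key to the right
world.player_moved(-2)   # walk left: the key retraces its own path...
print(key.state, green_critter.state)   # 1 5 — ...the green critter doesn't
```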

You can see how this might be a problem in the screenshot above (where you get the key earlier in the level, to the left). You can climb the first ladder, but to get to the door, you have to walk left to get to the second ladder, which will reverse the key back down to the ground.

The solution is in the cannon in the upper right, which spits out a Goomba-like critter. It has the timeproof green glow, so the critters it spits out have the same green glow — making them immune to both your time reversal power and to the effect your movement has on time. What you have to do is get one of the critters to pick up the key and carry it leftwards for you. Once you have the puzzle piece, you have to rewind time and do it again elsewhere. (Or, more likely, the other way around; this next section acts as a decent hint for how to do the earlier section.)

A puzzle piece trapped behind two doors, in a level containing only one key

It’s hard to convey how bizarre this is in just text. If you haven’t played Braid, it’s absolutely worth it just for this one world, this one level.

And it gets even better, slash more ridiculous: there’s a super duper secret hidden very cleverly in this level. Reaching it involves bouncing twice off of critters; solving the puzzle hidden there involves bouncing the critters off of you. It’s ludicrous and perhaps a bit too tricky, but very clever. Best of all, it’s something that an enterprising player might just think to do on a whim — hey, this is possible here, I wonder what happens if I try it. And the game rewards the player for trying something creative! (Ironically, it’s most rewarding to have a clever idea when it turns out the designer already had the same idea.)

What can I take away from this? Hm.

Well, the underlying idea of linking time with position is pretty novel, but getting to it may not be all that hard: just combine different concepts and see what happens.

A similar principle is to apply a general concept to everything and see what happens. This is the first sighting of a timeproof wandering critter; previously timeproofing had only been seen on keys, doors, puzzle pieces, and stationary monsters. Later it even applies to Tim himself in special circumstances.

The use of timeproofing on puzzle pieces is especially interesting, because the puzzle pieces — despite being collectibles that animate moving into the UI when you get them — are also affected by time. If the pieces in this level weren’t timeproof, then as soon as you collected one and moved left to leave its alcove, time would move backwards and the puzzle piece would reverse out of the UI and right back into the world.

Along similar lines, the music and animated background are also subject to the flow of time. It’s obvious enough that the music plays backwards when you rewind time, but in world 4, the music only plays at all while you’re moving. It’s a fantastic effect that makes the whole world feel as weird and jerky as it really is under these rules. It drives the concept home instantly, and it makes your weird influence over time feel all the more significant and far-reaching. I love when games weave all the elements of the game into the gameplay like this, even (especially?) for the sake of a single oddball level.

Admittedly, this is all about gameplay or puzzle mechanics, not so much level design. What I like about the level itself is how simple and straightforward it is: it contains exactly as much as it needs to, yet still invites trying the wrong thing first, which immediately teaches the player why it won’t work. And it’s something that feels like it ought to work, except that the rules of the game get in the way just enough. This makes for my favorite kind of puzzle, the type where you feel like you’ve tried everything and it must be impossible — until you realize the creative combination of things you haven’t tried yet. I’m talking about puzzles again, oops; I guess the general level design equivalent of this is that players tend to try the first thing they see first, so if you put required parts later, players will be more likely to see optional parts.

I think that’s all I’ve got for this one puzzle room. I do want to say (again) that I love both endings of Braid. The normal ending weaves together the game mechanics and (admittedly loose) plot in a way that gave me chills when I first saw it; the secret ending completely changes both how the ending plays and how you might interpret the finale, all by making only the slightest changes to the level.

Portal: Testchamber 18 (advanced)

View into a Portal test chamber; the ceiling and most of the walls are covered in metal

screenshot mine — playthrough of normal map — playthrough of advanced map

I love Portal. I blazed through the game in a couple hours the night it came out. I’d seen the trailer and instantly grasped the concept, so the very slow and gentle learning curve was actually a bit frustrating for me; I just wanted to portal around a big playground, and I finally got to do that in the six “serious” tests towards the end, 13 through 18.

Valve threw an interesting curveball with these six maps. As well as being more complete puzzles by themselves, Valve added “challenges” requiring that they be done with as few portals, time, or steps as possible. I only bothered with the portal challenges — time and steps seemed less about puzzle-solving and more about twitchy reflexes — and within them I found buried an extra layer of puzzles. All of the minimum portal requirements were only possible if you found an alternative solution to the map: skipping part of it, making do with only one cube instead of two, etc. But Valve offered no hints, only a target number. It was a clever way to make me think harder about familiar areas.

Alongside the challenges were “advanced” maps, and these blew me away. They were six maps identical in layout to the last six test chambers, but with a simple added twist that completely changed how you had to approach them. Test 13 has two buttons with two boxes to place on them; the advanced version removes a box and also changes the floor to lava. Test 14 is a live fire course with turrets you have to knock over; the advanced version puts them all in impenetrable cages. Test 17 is based around making extensive use of a single cube; the advanced version changes it to a ball.

But the one that sticks out the most to me is test 18, a potpourri of everything you’ve learned so far. The beginning part has you cross several large pits of toxic sludge by portaling from the ceilings; the advanced version simply changes the ceilings to unportalable metal. It seems you’re completely stuck after only the first jump, unless you happen to catch a glimpse of the portalable floor you pass over in mid-flight. Or you might remember from the regular version of the map that the floor was portalable there, since you used it to progress further. Either way, you have to fire a portal in midair in a way you’ve never had to do before, and the result feels very cool, like you’ve defeated a puzzle that was intended to be unsolvable. All in a level that was fairly easy the first time around, and has been modified only slightly.

I’m not sure where I’m going with this. I could say it’s good to make the player feel clever, but that feels wishy-washy. What I really appreciated about the advanced tests is that they exploited inklings of ideas I’d started to have when playing through the regular game; they encouraged me to take the spark of inspiration this game mechanic gave me and run with it.

So I suppose the better underlying principle here — the most important principle in level design, in any creative work — is to latch onto what gets you fired up and run with it. I am absolutely certain that the level designers for this game loved the portal concept as much as I do, they explored it thoroughly, and they felt compelled to fit their wilder puzzle ideas in somehow.

More of that. Find the stuff that feels like it’s going to burst out of your head, and let it burst.

Chip’s Challenge: Level 122, Totally Fair and Level 131, Totally Unfair

A small maze containing a couple monsters and ending at a brown button

screenshots mine — full maps of both levels — playthrough of Totally Fair — playthrough of Totally Unfair

I mention this because Portal reminded me of it. The regular and advanced maps in Portal are reminiscent of parallel worlds or duality or whatever you want to call the theme. I extremely dig that theme, and it shows up in Chip’s Challenge in an unexpected way.

Totally Fair is a wide open level with a little maze walled off in one corner. The maze contains a monster called a “teeth”, which follows Chip at a slightly slower speed. (The second teeth, here shown facing upwards, starts outside the maze but followed me into it when I took this screenshot.)

The goal is to lure the teeth into standing on the brown button on the right side. If anything moves into a “trap” tile (the larger brown recesses at the bottom), it cannot move out of that tile until/unless something steps on the corresponding brown button. So there’s not much room for error in maneuvering the teeth; if it falls in the water up top, it’ll die, and if it touches the traps at the bottom, it’ll be stuck permanently.
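That trap-and-button rule is simple enough to state in code. A rough sketch, with invented structure rather than the actual Chip’s Challenge engine:

```python
# Anything may enter a trap tile, but nothing leaves unless something is
# standing on the linked brown button. Invented structure for illustration.
class Trap:
    def __init__(self):
        self.held = None             # whatever is currently stuck here
        self.button_pressed = False  # state of the linked brown button

    def enter(self, creature):
        self.held = creature         # entering is always allowed

    def try_leave(self):
        if self.button_pressed and self.held is not None:
            creature, self.held = self.held, None
            return creature          # released
        return None                  # stuck until the button is pressed

trap = Trap()
trap.enter("Chip")                  # Chip slides onto the trap tile
assert trap.try_leave() is None     # nothing on the button: stuck permanently
trap.button_pressed = True          # the teeth is standing on the brown button
assert trap.try_leave() == "Chip"   # now Chip can step off
```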

The reason you need the brown button pressed is to acquire the chips on the far right edge of the level.

Several chips that cannot be obtained without stepping on a trap

The gray recesses turn into walls after being stepped on, so once you grab a chip, the only way out is through the force floors and ice that will send you onto the trap. If you haven’t maneuvered the teeth onto the button beforehand, you’ll be trapped there.

Doesn’t seem like a huge deal, since you can go see exactly how the maze is shaped and move the teeth into position fairly easily. But you see, here is the beginning of Totally Fair.

A wall with a single recessed gray space in it

The gray recess leads up into the maze area, so you can only enter it once. A force floor in the upper right lets you exit it.

Totally Unfair is exactly identical, except the second teeth has been removed, and the entrance to the maze looks like this.

The same wall is now completely solid, and the recess has been replaced with a hint

You can’t get into the maze area. You can’t even see the maze; it’s too far away from the wall. You have to position the teeth completely blind. In fact, if you take a single step to the left from here, you’ll have already dumped the teeth into the water and rendered the level impossible.

The hint tile will tell you to “Remember sjum”, where SJUM is the password to get back to Totally Fair. So you have to learn that level well enough to recreate the same effect without being able to see your progress.

It’s not impossible, and it’s not a “make a map” faux puzzle. A few scattered wall blocks near the chips, outside the maze area, are arranged exactly where the edges of the maze are. Once you notice that, all you have to do is walk up and down a few times, waiting a moment each time to make sure the teeth has caught up with you.

So in a sense, Totally Unfair is the advanced chamber version of Totally Fair. It makes a very minor change that forces you to approach the whole level completely differently, using knowledge gleaned from your first attempt.

And crucially, it’s an actual puzzle! A lot of later Chip’s Challenge levels rely heavily on map-drawing, timing, tedium, or outright luck. (Consider, if you will, Blobdance.) The Totally Fair + Totally Unfair pairing requires a little ingenuity unlike anything else in the game, and the solution is something more than just combinations of existing game mechanics. There’s something very interesting about that hint in the walls, a hint you’d have no reason to pick up on when playing through the first level. I wish I knew how to verbalize it better.

Anyway, enough puzzle games; let’s get back to regular ol’ level design.

Link’s Awakening: Eagle’s Tower

A 4×4 arrangement of rooms with a conspicuous void in the middle

maps via vgmaps and TCRF — playthrough with commentary

Link’s Awakening was my first Zelda (and only Zelda for a long time), which made for a slightly confusing introduction to the series — what on earth is a Zelda and why doesn’t it appear in the game?

The whole game is a blur of curiosities and interesting little special cases. It’s fabulously well put together, especially for a Game Boy game, and the dungeons in particular are fascinating microcosms of design. I never really appreciated it before, but looking at the full maps, I’m struck by how each dungeon has several large areas neatly sliced into individual screens.

Much like with Doom II, I surprise myself by picking Eagle’s Tower as the most notable part of the game. The dungeon isn’t that interesting within the overall context of the game; it gives you only the mirror shield, possibly the least interesting item in the game, second only to the power bracelet upgrade from the previous dungeon. The dungeon itself is fairly long, full of traps, and overflowing with crystal switches and toggle blocks, making it possibly the most frustrating of the set. Getting to it involves spending some excellent quality time with a flying rooster, but you don’t really do anything — mostly you just make your way through nondescript caves and mountaintops.

Having now thoroughly dunked on it, I’ll tell you what makes it stand out: the player changes the shape of the dungeon.

That’s something I like a lot about Doom, as well, but it’s much more dramatic in Eagle’s Tower. As you might expect, the dungeon is shaped like a tower, where each floor is on a 4×4 grid. The top floor, 4F, is a small 2×2 block of rooms in the middle — but one of those rooms is the boss door, and there’s no way to get to that floor.

(Well, sort of. The “down” stairs in the upper-right of 3F actually lead up to 4F, but the connection is bogus and puts you in a wall, and both of the upper middle rooms are unreachable during normal gameplay.)

The primary objective of the dungeon is to smash four support columns on 2F by throwing a huge iron ball at them, which causes 4F to crash down into the middle of 3F.

The same arrangement of rooms, but the four in the middle have changed

Even the map on the pause screen updates to reflect this. In every meaningful sense, you, the player, have fundamentally reconfigured the shape of this dungeon.

I love this. It feels like I have some impact on the world, that I came along and did something much more significant than mere game mechanics ought to allow. I saw that the tower was unsolvable as designed, so I fixed it.

It’s clear that the game engine supports rearranging screens arbitrarily — consider the Wind Fish’s Egg — but this is a wonderfully clever and subtle use of that. Let the player feel like they have an impact on the world.
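Mechanically, that reshaping can be as cheap as rewriting a grid of screen IDs. Here’s a sketch with an invented layout, not the game’s real data:

```python
# Each floor is a grid of screen IDs; one scripted event rewrites the grid.
# Layout invented for illustration — not the actual Eagle's Tower data.
F3 = [['3a', '3b', '3c', '3d'],
      ['3e', '3f', '3g', '3h'],
      ['3i', '3j', '3k', '3l'],
      ['3m', '3n', '3o', '3p']]

F4 = [['4a', '4b'],          # the 2x2 top floor, boss door included
      ['4c', '4d']]

def collapse_tower(f3, f4):
    """Drop all of 4F into the middle 2x2 of 3F, in place."""
    for r in range(2):
        for c in range(2):
            f3[r + 1][c + 1] = f4[r][c]

collapse_tower(F3, F4)
# The pause-screen map re-reads the same grid, so it updates "for free".
print(F3[1][1], F3[2][2])    # 4a 4d
```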

The cutting room floor

This is getting excessively long so I’m gonna cut it here. Some other things I thought of but don’t know how to say more than a paragraph about:

  • Super Mario Land 2: Six Golden Coins has a lot of levels with completely unique themes, backed by very simple tilesets but enhanced by interesting one-off obstacles and enemies. I don’t even know how to pick a most interesting one. Maybe just play the game, or at least peruse the maps.

  • This post about density of detail in Team Fortress 2 is really good so just read that I guess. It’s really about careful balance of contrast again, but through the lens of using contrasting amounts of detail to draw the player’s attention, while still carrying a simple theme through less detailed areas.

  • Metroid Prime is pretty interesting in a lot of ways, but I mostly laugh at how they spaced rooms out with long twisty hallways to improve load times — yet I never really thought about it because they all feel like they belong in the game.

One thing I really appreciate is level design that hints at a story, that shows me a world that exists persistently, that convinces me this space exists for some reason other than as a gauntlet for me as a player. But it seems what comes first to my mind is level design that’s clever or quirky, which probably says a lot about me. Maybe the original Fallouts are a good place to look for that sort of detail.

Conversely, it sticks out like a sore thumb when a game tries to railroad me into experiencing the game As The Designer Intended. Games are interactive, so the more input the player can give, the better — and this can be as simple as deciding to avoid rather than confront enemies, or deciding to run rather than walk.

I think that’s all I’ve got in me at the moment. Clearly I need to meditate on this a lot more, but I hope some of this was inspiring in some way!

Ceramic Knife Used in Israel Stabbing

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/ceramic_knife_u.html

I have no comment on the politics of this stabbing attack, and only note that the attacker used a ceramic knife — that will go through metal detectors.

I have used a ceramic knife in the kitchen. It’s sharp.

EDITED TO ADD (6/22): It looks like the knife had nothing to do with the attack discussed in the article.

CIA’s Pandemic Toolkit

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/06/cias_pandemic_t.html

WikiLeaks is still dumping CIA cyberweapons on the Internet. Its latest dump is something called “Pandemic”:

The Pandemic leak does not explain what the CIA’s initial infection vector is, but does describe it as a persistent implant.

“As the name suggests, a single computer on a local network with shared drives that is infected with the ‘Pandemic’ implant will act like a ‘Patient Zero’ in the spread of a disease,” WikiLeaks said in its summary description. “‘Pandemic’ targets remote users by replacing application code on-the-fly with a Trojaned version if the program is retrieved from the infected machine.”

The key to evading detection is its ability to modify or replace requested files in transit, hiding its activity by never touching the original file. The new attack then executes only on the machine requesting the file.

Version 1.1 of Pandemic, according to the CIA’s documentation, can target and replace up to 20 different files with a maximum size of 800MB for a single replacement file.

“It will infect remote computers if the user executes programs stored on the pandemic file server,” WikiLeaks said. “Although not explicitly stated in the documents, it seems technically feasible that remote computers that provide file shares themselves become new pandemic file servers on the local network to reach new targets.”

The CIA describes Pandemic as a tool that runs as kernel shellcode that installs a file system filter driver. The driver is used to replace a file with a payload when a user on the local network accesses the file over SMB.

WikiLeaks page. News article.

Who Are the Shadow Brokers?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/who_are_the_sha.html

In 2013, a mysterious group of hackers that calls itself the Shadow Brokers stole a few disks full of NSA secrets. Since last summer, they’ve been dumping these secrets on the Internet. They have publicly embarrassed the NSA and damaged its intelligence-gathering capabilities, while at the same time have put sophisticated cyberweapons in the hands of anyone who wants them. They have exposed major vulnerabilities in Cisco routers, Microsoft Windows, and Linux mail servers, forcing those companies and their customers to scramble. And they gave the authors of the WannaCry ransomware the exploit they needed to infect hundreds of thousands of computers worldwide this month.

After the WannaCry outbreak, the Shadow Brokers threatened to release more NSA secrets every month, giving cybercriminals and other governments worldwide even more exploits and hacking tools.

Who are these guys? And how did they steal this information? The short answer is: we don’t know. But we can make some educated guesses based on the material they’ve published.

The Shadow Brokers suddenly appeared last August, when they published a series of hacking tools and computer exploits — vulnerabilities in common software — from the NSA. The material was from autumn 2013, and seems to have been collected from an external NSA staging server, a machine that is owned, leased, or otherwise controlled by the US, but with no connection to the agency. NSA hackers find obscure corners of the Internet to hide the tools they need as they go about their work, and it seems the Shadow Brokers successfully hacked one of those caches.

In total, the group has published four sets of NSA material: a set of exploits and hacking tools against routers, the devices that direct data throughout computer networks; a similar collection against mail servers; another collection against Microsoft Windows; and a working directory of an NSA analyst breaking into the SWIFT banking network. Looking at the time stamps on the files and other material, they all come from around 2013. The Windows attack tools, published last month, might be a year or so older, based on which versions of Windows the tools support.

The releases are so different that they’re almost certainly from multiple sources at the NSA. The SWIFT files seem to come from an internal NSA computer, albeit one connected to the Internet. The Microsoft files seem different, too; they don’t have the same identifying information that the router and mail server files do. The Shadow Brokers have released all the material unredacted, without the care journalists took with the Snowden documents or even the care WikiLeaks has taken with the CIA secrets it’s publishing. They also posted anonymous messages in bad English but with American cultural references.

Given all of this, I don’t think the agent responsible is a whistleblower. While possible, it seems like a whistleblower wouldn’t sit on attack tools for three years before publishing. They would act more like Edward Snowden or Chelsea Manning, collecting for a time and then publishing immediately — and publishing documents that discuss what the US is doing to whom. That’s not what we’re seeing here; it’s simply a bunch of exploit code, which doesn’t have the political or ethical implications that a whistleblower would want to highlight. The SWIFT documents are records of an NSA operation, and the other posted files demonstrate that the NSA is hoarding vulnerabilities for attack rather than helping fix them and improve all of our security.

I also don’t think that it’s random hackers who stumbled on these tools and are just trying to harm the NSA or the US. Again, the three-year wait makes no sense. These documents and tools are cyber-Kryptonite; anyone who is secretly hoarding them is in danger from half the intelligence agencies in the world. Additionally, the publication schedule doesn’t make sense for the leakers to be cybercriminals. Criminals would use the hacking tools for themselves, incorporating the exploits into worms and viruses, and generally profiting from the theft.

That leaves a nation state. Whoever got this information years before and is leaking it now has to be both capable of hacking the NSA and willing to publish it all. Countries like Israel and France are capable, but would never publish, because they wouldn’t want to incur the wrath of the US. Countries like North Korea or Iran probably aren’t capable. (Additionally, North Korea is suspected of being behind WannaCry, which was written after the Shadow Brokers released that vulnerability to the public.) As I’ve written previously, the obvious list of countries who fit my two criteria is small: Russia, China, and — I’m out of ideas. And China is currently trying to make nice with the US.

It was generally believed last August, when the first documents were released and before it became politically controversial to say so, that the Russians were behind the leak, and that it was a warning message to President Barack Obama not to retaliate for the Democratic National Committee hacks. Edward Snowden guessed Russia, too. But the problem with the Russia theory is, why? These leaked tools are much more valuable if kept secret. Russia could use the knowledge to detect NSA hacking in its own country and to attack other countries. By publishing the tools, the Shadow Brokers are signaling that they don’t care if the US knows the tools were stolen.

Sure, there’s a chance the attackers knew that the US knew that the attackers knew — ­and round and round we go. But the “we don’t give a damn” nature of the releases points to an attacker who isn’t thinking strategically: a lone hacker or hacking group, which clashes with the nation-state theory.

This is all speculation on my part, based on discussion with others who don’t have access to the classified forensic and intelligence analysis. Inside the NSA, they have a lot more information. Many of the files published include operational notes and identifying information. NSA researchers know exactly which servers were compromised, and through that know what other information the attackers would have access to. As with the Snowden documents, though, they only know what the attackers could have taken and not what they did take. But they did alert Microsoft about the Windows vulnerability the Shadow Brokers released months in advance. Did they have eavesdropping capability inside whoever stole the files, as they claimed to when the Russians attacked the State Department? We have no idea.

So, how did the Shadow Brokers do it? Did someone inside the NSA accidentally mount the wrong server on some external network? That’s possible, but it seems very unlikely that the organization would make that kind of rookie mistake. Did someone hack the NSA itself? Could there be a mole inside the NSA?

If it is a mole, my guess is that the person was arrested before the Shadow Brokers released anything. No country would burn a mole working for it by publishing what that person delivered while he or she was still in danger. Intelligence agencies know that if they betray a source this severely, they’ll never get another one.

That points to two possibilities. The first is that the files came from Hal Martin. He’s the NSA contractor who was arrested in August for hoarding agency secrets in his house for two years. He can’t be the publisher, because the Shadow Brokers are in business even though he is in prison. But maybe the leaker got the documents from his stash, either because Martin gave the documents to them or because he himself was hacked. The dates line up, so it’s theoretically possible. There’s nothing in the public indictment against Martin that speaks to his selling secrets to a foreign power, but that’s just the sort of thing that would be left out. It’s not needed for a conviction.

If the source of the documents is Hal Martin, then we can speculate that a random hacker did in fact stumble on it — no need for nation-state cyberattack skills.

The other option is a mysterious second NSA leaker of cyberattack tools. Could this be the person who stole the NSA documents and passed them on to someone else? The only time I have ever heard about this was from a Washington Post story about Martin:

There was a second, previously undisclosed breach of cybertools, discovered in the summer of 2015, which was also carried out by a TAO employee [a worker in the Office of Tailored Access Operations], one official said. That individual also has been arrested, but his case has not been made public. The individual is not thought to have shared the material with another country, the official said.

Of course, “not thought to have” is not the same as not having done so.

It is interesting that there have been no public arrests of anyone in connection with these hacks. If the NSA knows where the files came from, it knows who had access to them — and it has long since questioned everyone involved and should know if someone deliberately or accidentally lost control of them. I know that many people, both inside the government and out, think there is some sort of domestic involvement; things may be more complicated than I realize.

It’s also not over. Last week, the Shadow Brokers were back, with a rambling and taunting message announcing a “Data Dump of the Month” service. They’re offering to sell unreleased NSA attack tools — something they also tried last August — with the threat to publish them if no one pays. The group has made good on their previous boasts: In the coming months, we might see new exploits against web browsers, networking equipment, smartphones, and operating systems — Windows in particular. Even scarier, they’re threatening to release raw NSA intercepts: data from the SWIFT network and banks, and “compromised data from Russian, Chinese, Iranian, or North Korean nukes and missile programs.”

Whoever the Shadow Brokers are, however they stole these disks full of NSA secrets, and for whatever reason they’re releasing them, it’s going to be a long summer inside of Fort Meade — as it will be for the rest of us.

This essay previously appeared in the Atlantic, and is an update of this essay from Lawfare.

Some notes on Trump’s cybersecurity Executive Order

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/some-notes-on-trumps-cybersecurity.html

President Trump has finally signed an executive order on “cybersecurity”. The first draft, from his first weeks in power, was hilariously ignorant. The current draft, though, is pretty reasonable as such things go. I’m just reading the plain language of the draft as a cybersecurity expert, picking out the bits that interest me. In reality, there’s probably all sorts of politics in the background that I’m missing, so I may be wildly off-base.

Holding managers accountable

This is a great idea in theory. But government heads are rarely accountable for anything, so it’s hard to see if they’ll have the nerve to implement this in practice. When the next breach happens, we’ll see if anybody gets fired.

“antiquated and difficult to defend Information Technology”

The government uses laughably old computers sometimes. Forces in government want to upgrade them. This won’t work. Instead of replacing old computers, the budget will simply be used to add new computers. The old computers will still stick around.

“Legacy” is a problem that money can’t solve. Programmers know how to build small things, but not big things. Everything starts out small, then becomes big gradually over time through constant small additions. What you have now are big legacy systems. Attempts to replace a big system with a built-from-scratch big system will fail, because engineers don’t know how to build big systems. This will suck down any amount of budget you have with failed multi-million-dollar projects.

It’s not the antiquated systems that are usually the problem, but the more modern ones. Antiquated systems can usually be protected by simply sticking a firewall or proxy in front of them.

“address immediate unmet budgetary needs necessary to manage risk”

Nobody cares about cybersecurity. Instead, it’s a thing people exploit in order to increase their budgets: rather than doing the best security with the budget they have, they insist they can’t secure the network without more money.

An alternate way to address gaps in cybersecurity is instead to do less. Reduce exposure to the web, provide fewer services, reduce functionality of desktop computers, and so on. Insisting that more money is the only way to address unmet needs is the strategy of the incompetent.

Use the NIST framework

Probably the biggest thing in the EO is that it forces everyone to use the NIST cybersecurity framework.

The NIST Framework simply documents all the things that organizations commonly do to secure themselves, such as running intrusion-detection systems or imposing rules for good passwords.

There are two problems with the NIST Framework. The first is that no organization does all the things listed. The second is that many organizations don’t do the things well.

Password rules are a good example. Organizations typically had bad rules, such as frequent changes and complexity standards. So the NIST Framework documented them. But cybersecurity experts have long opposed those complex rules, and have been fighting NIST over them.

Another good example is intrusion-detection. These days, I scan the entire Internet, setting off everyone’s intrusion-detection systems. I can see firsthand that they are doing intrusion-detection wrong. The NIST Framework recommends intrusion-detection because many organizations do it, but it doesn’t demand they do it well.

When this EO forces everyone to follow the NIST Framework, then, it’s likely just going to increase the amount of money spent on cybersecurity without increasing effectiveness. That’s not necessarily a bad thing: while probably ineffective or counterproductive in the short run, there might be long-term benefit in aligning everyone toward thinking about the problem the same way.

Note that “following” the NIST Framework doesn’t mean “doing” everything. Instead, it means documenting how you do each thing, recording a reason why you aren’t doing a thing, or (most often) writing down your plan to eventually do it.
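
To make that concrete: “following” tends to produce a compliance ledger rather than working defenses. A minimal sketch of what such a profile might look like (the subcategory IDs are real NIST CSF identifiers; every status, plan, and piece of evidence here is hypothetical):

    # A minimal sketch of a NIST CSF "profile": every subcategory gets a
    # documented status, not necessarily an implementation. The subcategory
    # IDs are real CSF identifiers; statuses, plans, and evidence are
    # hypothetical.
    profile = {
        "PR.AC-1": {  # identities and credentials are managed
            "status": "implemented",
            "evidence": "AD password policy, last reviewed 2017-01",
        },
        "DE.CM-1": {  # the network is monitored for potential events
            "status": "planned",  # the most common status, as noted above
            "plan": "deploy IDS sensors; budget requested for FY2018",
        },
        "PR.DS-5": {  # protections against data leaks are implemented
            "status": "not-doing",
            "reason": "no regulated data on this network segment",
        },
    }

    def compliance_report(profile):
        """Tally statuses: an auditor checks documentation, not effectiveness."""
        counts = {}
        for entry in profile.values():
            counts[entry["status"]] = counts.get(entry["status"], 0) + 1
        return counts

    print(compliance_report(profile))
    # {'implemented': 1, 'planned': 1, 'not-doing': 1}

Note what the report counts: paperwork states, not security outcomes, which is exactly why following the framework can raise spending without raising effectiveness.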
Preference for shared IT services for email, cloud, and cybersecurity

Different departments are hostile toward each other, with each doing things their own way. Obviously, the thinking goes, if more departments shared resources, they could cut costs through economies of scale. Also obviously, it’ll stop the many home-grown wrong solutions that individual departments come up with.

In other words, there should be a single government GMail-type service that does e-mail both securely and reliably.

But it won’t turn out this way. Government does not have “economies of scale” but “incompetence at scale”. It means a single GMail-like service that is expensive, unreliable, and, in the end, probably insecure. It means we can look forward to government breaches that, instead of affecting one department, affect all departments.

Yes, you can point to individual organizations that do things poorly, but what you are ignoring are the organizations that do it well. When you make them all share a solution, it’s going to be the average of all these things — meaning those who do something well are going to move to a worse solution.

I suppose this was inserted in there so that big government cybersecurity companies can now walk into agencies, point to where they are deficient on the NIST Framework, and say “sign here to do this with our shared cybersecurity service”.
“identify authorities and capabilities that agencies could employ to support the cybersecurity efforts of critical infrastructure entities”

What this means is “how can we help secure the power grid?”.

What it means in practice is that fiasco in the Vermont power grid. The DHS produced a report containing IoCs (“indicators of compromise”) of Russian hackers in the DNC hack. Among the things it identified was that the hackers used Yahoo! email. They pushed these IoCs out as signatures in their “Einstein” intrusion-detection system located at many power grid locations. The next person who logged into their Yahoo! email was then flagged as a Russian hacker, causing all sorts of hilarity to ensue, such as still-uncorrected stories in the Washington Post about how the Russians hacked our power grid.

The upshot is that federal government help is also going to include much government hindrance. They really are this stupid sometimes, and there is no way to fix this stupid. (Seriously, the DHS still insists it did the right thing pushing out the Yahoo IoCs).
Resilience Against Botnets and Other Automated, Distributed Threats

The government wants to address botnets because it’s just the sort of problem they love: mass outages across the entire Internet caused by a million machines.

But frankly, botnets don’t even make the top 10 list of problems they should be addressing. Number 1 is clearly “phishing” — you know, the attack that’s been getting into the DNC and Podesta e-mails, influencing the election. You know, the attack that Gizmodo recently showed the Trump administration is partially vulnerable to. You know, the attack that most people blame as what probably led to that huge OPM hack. Replace the entire Executive Order with “stop phishing”, and you’d go further toward fixing federal government security.

But solving phishing is tough. To begin with, it requires rethinking how the government does email and how desktop systems should be managed. So the government avoids complex problems it can’t understand to focus on the simple things it can — botnets.

Dealing with “prolonged power outage associated with a significant cyber incident”

The government has had the hots for this since 2001, even though there’s really been no attack on the American grid. After the Russian attacks against the Ukrainian power grid, the issue is heating up.

Nationwide attacks aren’t really a threat in America, yet. We have 10,000 different companies involved with different systems throughout the country. Trying to hack them all at once is unlikely. What’s funny is that it’s the government’s attempts to standardize everything that are likely to be our downfall, such as sticking Einstein sensors everywhere.

Instead of trying to make the grid unhackable, the government should be trying to lessen our reliance upon the grid. It should be encouraging things like Tesla PowerWalls, solar panels on roofs, backup generators, and so on. Indeed, industrial backup power generation should be considered as a source of grid backup: factories and even ships were used to supplant the electric power grid in Japan after the 2011 tsunami, for example. The less we rely on the grid, the less a blackout will hurt us.

“cybersecurity risks facing the defense industrial base, including its supply chain”

So “supply chain” cybersecurity is increasingly becoming a thing. Almost anything electronic comes with millions of lines of code, silicon chips, and other things that affect the security of the system. In this context, they may be worried about intentional subversion of systems, such as the worry in that recent article about Kaspersky anti-virus in government systems. However, the bigger concern is the zillions of accidental vulnerabilities waiting to be discovered. It’s impractical for a vendor to secure a product, because it’s built from so many components the vendor doesn’t understand.

“strategic options for deterring adversaries and better protecting the American people from cyber threats”

Deterrence is a funny word.

Rumor has it that we forced China to back off on hacking by impressing them with our own hacking ability, such as reaching into China and blowing stuff up. This works because the Chinese government remains in power because things are going well in China. If there’s a hiccup in economic growth, there will be mass actions against the government.

But for our other cyber adversaries (Russia, Iran, North Korea), things already suck in their countries. It’s hard to see how we can make things worse by hacking them. They also have a stranglehold on the media, so hacking in and publicizing their leaders’ weird sex fetishes and offshore accounts isn’t going to work either.

Also, deterrence relies upon “attribution”, which is hard. While news stories claim last year’s expulsion of Russian diplomats was due to election hacking, that wasn’t the stated reason. Instead, the claimed reason was Russia’s interference with diplomats in Europe, such as breaking into diplomats’ homes and pooping on their dining room tables. We know it’s them when they are brazen (as was the case with Chinese hacking), but other hacks are harder to attribute.

Deterrence of nation states ignores the reality that much of the hacking against our government comes from non-state actors. It’s not clear how much of all this Russian hacking is actually directed by the government. Deterrence policies may be better directed at individuals, such as the recent arrest of a Russian hacker while he was traveling in Spain. We can’t get Russian or Chinese hackers in their own countries, so we have to wait until they leave.

Anyway, “deterrence” is one of those real-world concepts that are hard to shoehorn into a cyber (“cyber-deterrence”) equivalent. It encourages lots of bad thinking, such as export controls on “cyber-weapons” to deter foreign countries from using them.

“educate and train the American cybersecurity workforce of the future”

The problem isn’t that we lack CISSPs. Such blanket certifications devalue the technical expertise of the real experts. The solution is to empower the technical experts we already have.

In other words, mandate that whoever is the “cyberczar” is a technical expert, like how the Surgeon General must be a medical expert, or how an economic adviser must be an economic expert. For over 15 years, we’ve had a parade of non-technical people named “cyberczar” who haven’t been experts.

Once you tell people technical expertise is valued, then by nature more students will become technical experts.

BTW, the best technical experts are software engineers and sysadmins. The best cybersecurity for Windows is already built into Windows, and its sysadmins need to be empowered to use those solutions. Instead, they are often overridden by a clueless cybersecurity consultant who insists on making the organization buy a third-party product that does a poorer job. We need more technical expertise in our organizations, sure, but not necessarily more cybersecurity professionals.

Conclusion

This is really a government document, and government people will be able to explain it better than I can. This is just how I see it, as a technical expert and a government outsider.

My guess is the most lasting consequential thing will be making everyone follow the NIST Framework, and the rest will just be a lot of aspirational stuff that’ll be ignored.

"Fast and Furious 8: Fate of the Furious"

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/04/fast-and-furious-8-fate-of-furious.html

So “Fast and Furious 8” opened this weekend to worldwide box office totals of $500,000,000. I thought I’d write up some notes on the “hacking” in it. The tl;dr version is this: yes, while the hacking is a bit far-fetched, it’s actually more realistic than the car chase scenes, such as winning a race with the engine on fire while in reverse.

[SPOILERS]


Car hacking


The most innovative cyber-thing in the movie is the car hacking. In one scene, the hacker takes control of the cars in a parking structure and makes them rain onto the street. In another scene, the hacker takes control away from drivers, with some jumping out of their moving cars in fear.

How real is this?

Well, today, few cars have a mechanical link between the computer and the steering wheel. No amount of hacking will fix the fact that this component is missing.

With that said, most new cars have features that make hacking possible. I’m not sure, but I’d guess more than half of new cars have internet connections (via the mobile phone network), cameras (for backing up, but also looking forward for lane departure warnings), computer-controlled braking (for emergencies), and computer-controlled acceleration.

In other words, we are getting really close.

As this Wikipedia article describes, there are defined levels for autonomous cars. At level 2 or 3, cars get automated steering, either for parking or for staying in the lane. Level 3 autonomy is especially useful, as it means you can sit back and relax while your car is sitting in a traffic jam. Higher levels of autonomy are still decades away, but most new cars, even the cheapest low-end cars, will be level 3 within 5 years. That they make traffic jams bearable makes this an incredibly attractive feature.

Thus, while this scene is laughable today, it’ll be taken seriously in 10 years. People will look back on how smart this movie was at predicting the future.

Car hacking, part 2

Quite apart from the abilities of cars, let’s talk about the abilities of hackers.

The recent ShadowBrokers dump of NSA hacking tools shows that hackers simply don’t have a lot of range. Hacking one car is easy — hacking all different models, makes, and years of cars is far beyond the ability of any hacking group, even the NSA.

I mean, a single hack may span more than one car model, and even more than one manufacturer, because they buy such components from third-party suppliers. Most cars that have cameras buy them from Mobileye, which was recently acquired by Intel. As I blogged before, both my Parrot drone and Tesla car have the same WiFi stack, and both could potentially be hacked with the same vulnerability. So hacking many cars at once isn’t totally out of the question.

It’s just that hacking all the different cars in a garage is completely implausible.

God’s Eye

The plot of the last two movies has been about the “God’s Eye”, a device that hacks into every camera and satellite to view everything going on in the world.

First of all, all hacking is software. The idea of stealing a hardware device in order to enable hacking is therefore (almost) always fiction. There’s one corner case where a quantum chip factoring RSA would enable some previously impossible hacking, but it still can’t reach out and hack a camera behind a firewall.

Hacking security cameras around the world is indeed possible, though. The Mirai botnet of last year demonstrated this. It wormed its way from camera to camera, hacking hundreds of thousands of cameras that weren’t protected by firewalls. It used these devices simply as computers, to flood major websites, taking them offline. But it could’ve also used the camera features, to upload pictures and videos to the hacker controlling these cameras.
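
Mirai’s propagation logic was brutally simple: find devices with telnet reachable, then try a few dozen factory-default passwords. A minimal defensive sketch of the first half, which checks whether hosts on a network you own still expose telnet (the subnet is a placeholder, and you should only scan address space you’re authorized to scan):

    # Defensive sketch: find devices on your own network that still expose
    # telnet (TCP port 23), the precondition Mirai's worm exploited.
    # The subnet is a placeholder; only scan networks you are authorized to.
    import socket

    def telnet_open(host, timeout=0.5):
        """Return True if the host accepts a TCP connection on port 23."""
        try:
            with socket.create_connection((host, 23), timeout=timeout):
                return True
        except OSError:
            return False

    subnet = "192.168.1."  # hypothetical home network
    exposed = [subnet + str(i) for i in range(1, 255)
               if telnet_open(subnet + str(i))]
    print("Devices with telnet exposed:", exposed or "none")

Any device that shows up in that list is one default password away from being somebody’s botnet node.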

However, most security cameras are behind firewalls, and can’t be reached. Building a “God’s Eye” view of the world, to catch a target every time they passed in front of a camera, would therefore be unrealistic.

Moreover, the cameras have neither the processing power nor the bandwidth to work like that. It takes heavy number crunching to detect faces, or even simple things like license plates, within videos. The cameras don’t have that. Instead, cameras could upload the videos/pictures to supercomputers controlled by the hypothetical hacker, but the bandwidth doesn’t exist. The Internet is being rapidly upgraded, but still, Internet links are built for low-bandwidth webpages, not high-bandwidth streaming from millions of sources.
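
The back-of-the-envelope arithmetic makes the point; every figure below is an assumption chosen only for scale:

    # Back-of-the-envelope bandwidth for a global camera feed; every figure
    # is an assumption chosen for scale, not a measurement.
    cameras = 100_000_000    # rough guess at the world's security cameras
    bitrate_mbps = 1.0       # one low-quality stream per camera, in Mbit/s

    aggregate_tbps = cameras * bitrate_mbps / 1_000_000
    print(f"Aggregate upstream: {aggregate_tbps:,.0f} Tbit/s")  # 100 Tbit/s

    # For comparison, the Mirai attack on Dyn peaked at roughly 1 Tbit/s,
    # and that was enough to break parts of the internet.

A hundred terabits per second of sustained upstream, before any face detection has even begun, is not something today’s links can carry.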

This is rapidly changing, though. Cameras are being upgraded with “neural network” chips that will have some rudimentary capabilities to recognize things like license plates, or the outline of a face that could then be uploaded for more powerful number crunching elsewhere. Your car’s cameras already have this, for backup warnings and lane departure warnings; soon all security cameras will have something like it. Likewise, the Internet is steadily being upgraded to replace TV broadcast, where everyone can stream from Netflix all the time, so high-bandwidth streams from cameras will become more of the norm.

Even getting behind a firewall to the camera will change in the future, as owners will simply store surveillance video in the cloud instead of locally. Thus, the hypothetical hacker would only need to hack a small number of surveillance camera companies instead of a billion security cameras.

Evil villain lair: ghost airplane

The evil villain in the movie (named “Cipher”, of course) has her secret headquarters on an airplane that flies along satellite “blind spots” so that it can’t be tracked.

This is nonsense. Low-resolution satellites, like the NOAA satellites tracking the weather, cover the entire planet (well, as far as such airplanes are concerned, unless you are landing in Antarctica). While such satellites might not see the plane, they can track the contrail (I mean, chemtrail). Conversely, high-resolution satellites miss most of the planet. If they haven’t been tasked to aim at something, they won’t see it. And they can’t be aimed at you unless they already know where you are. Sure, there are moving blind spots where even tasked satellites can’t find you, but it’s unlikely they’d be tracking you anyway.

Since the supervillain was a hacker, the airplane was full of computers. This is nonsense. Any compute power I need as a hacker is better left on the Earth’s surface, either by hacking cloud providers (like Amazon AWS, Microsoft Azure, or Rackspace), or by hiding data centers in Siberia and Tibet. All I need is satellite communication to the Internet from my laptop to be a supervillain. Indeed, I’m unlikely to get the bandwidth I need to process things on the plane. Instead, I’ll need to process everything on the Earth anyway, and send the low-bandwidth results to the plane.

In any case, if I were writing fiction, I’d have nuclear-powered airplanes that stayed aloft for months, operating out of remote bases in the Himalayas or Antarctica.

EMP pulses

Small EMP weapons do exist; that part isn’t wholly fictional.

However, an EMP with the features, power, and effects in the movie is, of course, fictional. EMPs, even non-nuclear ones, are abused in films/TV so much that the Wikipedia pages on them spend a lot of time debunking them.

It would be cool if, one day, they used EMP realistically. In this movie, real missiles tipped with non-nuclear explosively pumped flux compression generators could’ve been used for the same effect. Of course, simple explosives that blow up electronics also work.

Since hacking is the go-to deus ex machina these days, they could’ve just had the hackers disable the power instead of using the EMP to do it.

Conclusion

In the movie, the hero uses his extraordinary driving skills to blow up a submarine. Given this level of willing suspension of disbelief, the exaggerated hacking is actually among the least implausible bits of the movie. Indeed, as technology changes, making some of this more possible, the movie might be seen as predicting the future.

Surveillance and our Insecure Infrastructure

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/04/surveillance_an_2.html

Since Edward Snowden revealed to the world the extent of the NSA’s global surveillance network, there has been a vigorous debate in the technological community about what its limits should be.

Less discussed is how many of these same surveillance techniques are used by other — smaller, poorer, and more totalitarian — countries to spy on political opponents, dissidents, human rights defenders, and the press; researchers in Toronto have documented some of the many abuses, by countries like Ethiopia, the UAE, Iran, Syria, Kazakhstan, Sudan, Ecuador, Malaysia, and China.

That these countries can use network surveillance technologies to violate human rights is a shame on the world, and there’s a lot of blame to go around.

We can point to the governments that are using surveillance against their own citizens.

We can certainly blame the cyberweapons arms manufacturers that are selling those systems, and the countries — mostly European — that allow those arms manufacturers to sell those systems.

There’s a lot more the global Internet community could do to limit the availability of sophisticated Internet and telephony surveillance equipment to totalitarian governments. But I want to focus on another contributing cause to this problem: the fundamental insecurity of our digital systems that makes this a problem in the first place.

IMSI catchers are fake mobile phone towers. They allow someone to impersonate a cell network and collect information about phones in the vicinity of the device, and they’re used to create lists of people who were at a particular event or near a particular location.

Fundamentally, the technology works because the phone in your pocket automatically trusts any cell tower to which it connects. There’s no security in the connection protocols between the phones and the towers.

IP intercept systems are used to eavesdrop on what people do on the Internet. Unlike the surveillance that happens at the sites you visit, by companies like Facebook and Google, this surveillance happens at the point where your computer connects to the Internet. Here, someone can eavesdrop on everything you do.

This system also exploits existing vulnerabilities in the underlying Internet communications protocols. Most of the traffic between your computer and the Internet is unencrypted, and what is encrypted is often vulnerable to man-in-the-middle attacks because of insecurities in both the Internet protocols and the encryption protocols that protect it.
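
Whether a connection resists this kind of interception mostly comes down to whether the client verifies the certificate it is shown. A short sketch using Python’s standard ssl module (example.com is only a placeholder host):

    # Whether a connection resists a man-in-the-middle mostly comes down to
    # certificate verification. example.com is a placeholder host.
    import socket
    import ssl

    host = "example.com"

    # Verified: the handshake fails if an interceptor presents a forged
    # certificate, because the chain and hostname are checked.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("verified peer:", tls.getpeercert()["subject"])

    # Unverified: the configuration an IP intercept system hopes for; any
    # certificate, including an attacker's, is accepted silently.
    weak = ssl.create_default_context()
    weak.check_hostname = False
    weak.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, 443)) as sock:
        with weak.wrap_socket(sock, server_hostname=host) as tls:
            print("unverified connection established; a MITM would succeed")

Everything that still runs over the second kind of connection, or over no encryption at all, is what these intercept systems harvest.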

There are many other examples. What they all have in common is that they are vulnerabilities in our underlying digital communications systems that allow someone — whether it’s a country’s secret police, a rival national intelligence organization, or a criminal group — to break or bypass what security there is and spy on the users of these systems.

These insecurities exist for two reasons. First, they were designed in an era when computer hardware was expensive and inaccessibility was a reasonable proxy for security. When the mobile phone network was designed, faking a cell tower was an incredibly difficult technical exercise, and it was reasonable to assume that only legitimate cell providers would go to the effort of creating such towers.

At the same time, computers were less powerful and software was much slower, so adding security into the system seemed like a waste of resources. Fast forward to today: computers are cheap and software is fast, and what was impossible only a few decades ago is now easy.

The second reason is that governments use these surveillance capabilities for their own purposes. The FBI has used IMSI-catchers for years to investigate crimes. The NSA uses IP interception systems to collect foreign intelligence. Both of these agencies, as well as their counterparts in other countries, have put pressure on the standards bodies that create these systems to not implement strong security.

Of course, technology isn’t static. With time, things become cheaper and easier. What was once a secret NSA interception program or a secret FBI investigative tool becomes usable by less-capable governments and cybercriminals.

Man-in-the-middle attacks against Internet connections are a common criminal tool to steal credentials from users and hack their accounts.

IMSI-catchers are used by criminals, too. Right now, you can go onto Alibaba.com and buy your own IMSI catcher for under $2,000.

Despite their uses by democratic governments for legitimate purposes, our security would be much better served by fixing these vulnerabilities in our infrastructures.

These systems are not only used by dissidents in totalitarian countries, they’re also used by legislators, corporate executives, critical infrastructure providers, and many others in the US and elsewhere.

That we allow people to remain insecure and vulnerable is both wrongheaded and dangerous.

Earlier this month, two American legislators — Senator Ron Wyden and Rep. Ted Lieu — sent a letter to the chairman of the Federal Communications Commission, demanding that he do something about the country’s insecure telecommunications infrastructure.

They pointed out that not only are insecurities rampant in the underlying protocols and systems of the telecommunications infrastructure, but also that the FCC knows about these vulnerabilities and isn’t doing anything to force the telcos to fix them.

Wyden and Lieu make the point that fixing these vulnerabilities is a matter of US national security, but it’s also a matter of international human rights. All modern communications technologies are global, and anything the US does to improve its own security will also improve security worldwide.

Yes, it means that the FBI and the NSA will have a harder job spying, but it also means that the world will be a safer and more secure place.

This essay previously appeared on AlJazeera.com.

Some notes on the RAND 0day report

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/some-notes-on-rand-0day-report.html

The RAND Corporation has a research report on the 0day market [*]. It’s pretty good. They talked to all the right people. It should be considered the seminal work on the issue. They’ve got the pricing about right ($1 million for a full-chain iPhone exploit, but closer to $100k for others). They’ve got the stats about right (5% chance somebody else will discover an exploit).

Yet, they’ve got some problems, namely phrasing the debate the way activists want, rather than presenting a neutral view of it.

The report frequently uses the word “stockpile”. This is a biased term used by activists. According to the dictionary, it means:

a large accumulated stock of goods or materials, especially one held in reserve for use at a time of shortage or other emergency.

Activists paint the picture that the government (NSA, CIA, DoD, FBI) buys 0day to hold in reserve in case they later need them. If that’s the case, then it seems reasonable that it’s better to disclose/patch the vuln than let it grow moldy in a cyberwarehouse somewhere.

But that’s not how things work. The government buys vulns it has immediate use for (primarily). Almost all vulns it buys are used within 6 months. Most vulns in its “stockpile” have been used in the previous year. These cyberweapons are not in a warehouse, but in active use on the front lines.

This is top secret, of course, so people assume it’s not happening. They hear about no cyber operations (except Stuxnet), so they assume such operations aren’t occurring. Thus, they build up the stockpiling assumption rather than the active use assumption.

If RAND wanted to create an even more useful survey, they should figure out how many thousands of times per day our government (NSA, CIA, DoD, FBI) exploits 0days. They should characterize who they target (e.g. terrorists, child pornographers), the success rate, and how many people they’ve killed based on 0days. It’s this data, not patching, that is at the root of the policy debate.

That 0days are actively used determines pricing. If the government doesn’t have immediate need for a vuln, it won’t pay much for it, if anything at all. Conversely, if the government has urgent need for a vuln, it’ll pay a lot.

Let’s say you have a remote vuln for Samsung TVs. You go to the NSA and offer it to them. They tell you they aren’t interested, because they see no near-term need for it. Then a year later, spies reveal ISIS has stolen a truckload of Samsung TVs, put them in all the meeting rooms, and hooked them to the Internet for video conferencing. The NSA then comes back to you and offers $500k for the vuln.

Likewise, the number of sellers affects the price. If you know they desperately need the Samsung TV 0day, but they are only offering $100k, then it likely means that there’s another seller also offering such a vuln.

That’s why iPhone vulns are worth $1 million for a full chain exploit, from browser to persistence. They use it a lot; it’s a major part of ongoing cyber operations. Each time Apple upgrades iOS, the change breaks part of the existing chain, and the government is keen on getting a new exploit to fix it. They’ll pay a lot to the first vuln seller who can give them a new exploit.

Thus, there are three prices the government is willing to pay for an 0day (the value it provides to the government):

  • the price for an 0day they will actively use right now (high)
  • the price for an 0day they’ll stockpile for possible use in the future (low)
  • the price for an 0day they’ll disclose to the vendor to patch (very low)

That these are different prices is important to the policy debate. When activists claim the government should disclose the 0day it acquires, they are ignoring the price the 0day was acquired for. Since the government actively uses the 0day, they are acquired at a high price, with their “use” value far higher than their “patch” value. It’s an absurd argument to make that the government should then immediately discard that money, to pay “use value” prices for “patch” results.
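
To see the asymmetry in numbers, combine the figures above: the $1 million full-chain price and the 5% chance that somebody else ever discovers the same exploit. Treating price as a crude proxy for value is a simplifying assumption, but the gap is the point:

    # The buy-to-patch asymmetry, using figures from this essay: a $1M
    # full-chain exploit and a 5% rediscovery rate. Treating price as a
    # proxy for value is a simplifying assumption.
    use_value = 1_000_000   # what the government pays when it intends to use
    p_rediscovery = 0.05    # chance someone else independently finds the bug

    # Patching only protects against the rediscovery case; 95% of the time
    # the bug would never have threatened anyone.
    expected_patch_value = p_rediscovery * use_value
    print(f"Use value:            ${use_value:>9,}")
    print(f"Expected patch value: ${expected_patch_value:>9,.0f}")  # $50,000

No agency will pay a $1 million use-value price for $50,000 of expected patch value, which is why the realistic alternatives are buy-to-use or don’t-buy-at-all.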

If the policy becomes that the NSA/CIA should disclose/patch the 0day they buy, it doesn’t mean business as usual acquiring vulns. It instead means they’ll stop buying 0day.

In other words, “patching 0day” is not an outcome on either side of the debate. Either the government buys 0day to use, or it stops buying 0day. In neither case does patching happen.

The real argument is whether the government (NSA, CIA, DoD, FBI) should be acquiring, weaponizing, and using 0day in the first place. Demanding disclosure amounts to demanding that we unilaterally disarm our military, intelligence, and law enforcement, preventing them from using 0days against our adversaries while our adversaries continue to use 0days against us.

That’s the gaping hole in both the RAND paper and most news reporting of this controversy. They characterize the debate the way activists want, as if the only question is the value of patching. They avoid talking about unilateral cyberdisarmament, even though that’s the consequence of the policy they are advocating. They avoid comparing the value of 0days to our country for active use (high) with their value to our country for patching (very low).

Conclusion

It’s nice that the RAND paper studied the value of patching and confirmed it’s low, that only around 5% of our cyber-arsenal is likely to be found by others. But it’d be nice if they also looked at the point of view of those actively using 0days on a daily basis, rather than phrasing the debate the way activists want.

Only lobbyists and politicians matter, not techies

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/03/only-lobbyist-and-politicians-matter.html

The NSA/CIA will only buy an 0day if they can use it. They can’t use it if they disclose the bug.

I point this out, yet again, because of this WaPo article [*] built on the premise that the NSA/CIA spend millions of dollars on 0day they don’t use, while unilaterally disarming themselves. Since that premise is false, the entire article is false. It’s the sort of article you get when all you interview are Washington D.C. lobbyists and Washington D.C. politicians — and no outside experts.

It quotes former cyberczar (under Obama) Michael Daniel explaining that the “default assumption” is to disclose 0days that the NSA/CIA get. This is a Sean Spicer style lie. He’s paid to say this, but it’s not true. The NSA/CIA only buy 0day if they can use it. They won’t buy 0day if the default assumption is that they will disclose it. QED: the default assumption of such 0day is they won’t disclose them.

The story quotes Ben Wizner of the ACLU saying that we should patch 0days instead of using them. Patching isn’t an option. If we aren’t using them, then we aren’t buying them, and hence, there are no 0days to patch. The two options are to not buy 0days at all (and not patch) or buy to use them (and not patch). Either way, patching doesn’t happen.

Wizner didn’t actually say “use them”. He said “stockpiling” them, a word that means “hold in reserve for use in the future”. That’s not what the NSA/CIA do. They buy 0days to use, now. They’ve got budgets and efficiency ratings. They don’t buy 0days they can’t use in the near future. In other words, Wizner paints the choice as between an 0day that has no particular value to the government, and one that would have value if patched.

The opposite picture is true. Almost all the 0days possessed by the NSA/CIA have value, being actively used against our adversaries right now. Conversely, patching an 0day provides little value for defense. Nobody else knew about the 0day anyway (that’s what 0day means), so nobody was in danger, so nobody was made safer by patching it.

Wizner and Snowden are quoted in the article that somehow the NSA/CIA is “maintaining vulnerabilities” and “keeping the holes open”. This phrasing is deliberately misleading. The NSA/CIA didn’t create the holes. They aren’t working to keep them open. If somebody else finds the same 0day hole and tells the vendor (like Apple), then the NSA/CIA will do nothing to stop them. They just won’t work to close the holes.

Activists like Wizner and Snowden deliberately mislead on the issue because they can’t possibly win a rational debate. The government is not going to continue to spend millions of dollars on buying 0days just to close them, because everyone agrees the value proposition is crap: the value of fixing yet another iPhone hole is not worth the $1 million it’ll cost, and would do little to stop the Russians from finding an unrelated hole. Likewise, while the peaceniks (rightfully, in many respects) hate the militarization of cyberspace, they aren’t going to win the argument that the NSA/CIA should unilaterally disarm themselves. So instead they’ve tried to morph the debate into some crazy argument that makes no sense.

This is the problem with Washington D.C. journalism. It presumes the only people who matter are those in Washington, either the lobbyists of one position, or government defenders of another position. At no point did they go out and talk to technical experts, such as somebody who has discovered, weaponized, and used an 0day exploit. So they write articles premised on the idea that the NSA/CIA, out of their offensive weapons budget, will continue to buy 0days that are immediately patched and fixed without ever being useful.

Adm. Rogers Talks about Buying Cyberweapons

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/adm_rogers_talk.html

At a talk last week, Adm. Mike Rogers, the head of US Cyber Command and the NSA, talked about the US buying cyberweapons from arms manufacturers.

“In the application of kinetic functionality — weapons — we go to the private sector and say, ‘Build this thing we call a [joint directed-attack munition], a [Tomahawk land-attack munition].’ Fill in the blank,” he said.

“On the offensive side, to date, we have done almost all of our weapons development internally. And part of me goes — five to ten years from now is that a long-term sustainable model? Does that enable you to access fully the capabilities resident in the private sector? I’m still trying to work my way through that, intellectually.”

Businesses already flog exploits, security vulnerability details, spyware, and similar stuff to US intelligence agencies, and Rogers is clearly considering stepping that trade up a notch.

Already, Third World countries are buying from cyberweapons arms manufacturers. My guess is that he’s right and the US will be doing that in the future, too.

Security and the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/02/security_and_th.html

Last year, on October 21, your digital video recorder — or at least a DVR like yours — knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices — and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, as well as building smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and — like everything else we’ve just listed — it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and — increasingly — in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.
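
The sense-think-act loop is easy to caricature in code. A toy sketch of the thermostat example, where every device and threshold is made up:

    # Toy sense-think-act loop: a sensor reads, the "smarts" decide, an
    # actuator changes the world. All devices and thresholds are made up.
    import random

    def read_temperature():                # the eyes and ears
        return random.uniform(15.0, 25.0)  # stand-in for real hardware

    def decide(temp, setpoint=20.0):       # the brain
        if temp < setpoint - 0.5:
            return "heat"
        if temp > setpoint + 0.5:
            return "cool"
        return "idle"

    def actuate(command):                  # the hands and feet
        print("furnace/AC command:", command)

    for _ in range(3):                     # looping is what makes it a robot
        actuate(decide(read_temperature()))

Subvert any one of the three stages and you control what the loop does to the physical world.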

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things — or, more precisely, cyber-physical systems — autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems ­- and solutions ­- of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered around confidentiality. We’re concerned about our data and who has access to it — the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because those companies make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability — one unsecured avenue for attack — and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are. For perspective, a modern car is often said to contain on the order of 100 million lines of code; at that defect rate, it ships with something like 100,000 bugs.

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. The law also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********
In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars — literally — in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********
Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult on the internet, where its permissionless nature is one of the best things about it and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate — although that’s often a big part of it — and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both organizations are dedicated to providing digital government services, both have collected significant expertise by bringing people in from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation — both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults — and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analog for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Government has the scope, scale, and balance of interests to address them. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley — the mistrust of governments by tech companies and the mistrust of tech companies by governments — is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law, one that ensures that even though people in this field won’t make as much as they would at a high-tech start-up, they will still have rewarding careers. The security of our computerized and networked future — meaning the security of ourselves, our families, homes, businesses, and communities — depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled — or killed — by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

Are We Becoming More Moral Faster Than We’re Becoming More Dangerous?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/01/are_we_becoming.html

In The Better Angels of Our Nature, Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, “the world continues to improve in just about every way.” He’s right, but there are two important caveats.

One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year — and we can’t make any predictions about whether this year will be better or worse than last year in any individual measurement.

The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious… and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

Pinker’s trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker’s trends are noisy, but over the long term they’re strongly linear. Mine seem to be exponential.

When Pinker expresses optimism that the overall trends he identifies will continue into the future, he’s making a bet. He’s betting that his trend lines and my trend lines won’t cross. That is, that our society’s gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:

But the problem isn’t that these security measures won’t work — even as they shred our freedoms and liberties — it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep ourselves safe; then someone goes off and destroys us anyway?
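A toy probability model makes the shape of that argument concrete. Suppose, purely for illustration (the independence assumption and the parameter are mine), that each of \(N\) people independently has some tiny probability \(p\) per year of destroying the group. Then

\[
P(\text{catastrophe in a given year}) = 1 - (1 - p)^N \to 1 \quad \text{as } N \to \infty,
\]

for any fixed \(p > 0\). Technology pushes \(p\) up for the most capable outliers; interconnectedness pushes \(N\) up to the population of the planet.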

Clearly we’re not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that’s the thing about exponential curves; it’s hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won’t cross.

Embedding Lua in ZDoom

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/26/embedding-lua-in-zdoom/

I’ve spent a little time trying to embed a Lua interpreter in ZDoom. I didn’t get too far yet; it’s just an experimental thing I poke at every once in a while. The existing pile of constraints makes it an interesting problem, though.

Background

ZDoom is a “source port” (read: fork) of the Doom engine, with all the changes from the commercial forks merged in (mostly Heretic, Hexen, Strife), and a lot of internal twiddles exposed. It has a variety of mechanisms for customizing game behavior; two are major standouts.

One is ACS, a vaguely C-ish language inherited from Hexen. It’s mostly used to automate level behavior — at the simplest, by having a single switch perform multiple actions. It supports the usual loops and conditionals, it can store data persistently, and ZDoom exposes a number of functions to it for inspecting and altering the state of the world, so it can do some neat tricks. Here’s an arbitrary script from my DUMP2 map.

script "open_church_door" (int tag)
{
    // Open the door more quickly on easier skill levels, so running from the
    // arch-vile is a more viable option
    int skill = GameSkill();
    int speed;
    if (skill < SKILL_NORMAL)
        speed = 64;  // blazing door speed
    else if (skill == SKILL_NORMAL)
        speed = 16;  // normal door speed
    else
        speed = 8;  // very dramatic door speed

    Door_Raise(tag, speed, 68);  // double usual delay
}

However, ZDoom doesn’t actually understand the language itself; ACS is compiled to bytecode. There’s even at least one alternative language that compiles to the same bytecode, which is interesting.

The other big feature is DECORATE, a mostly-declarative mostly-interpreted language for defining new kinds of objects. It’s a fairly direct reflection of how Doom actors are implemented, which is in terms of states. In Doom and the other commercial games, actor behavior was built into the engine, but this language has allowed almost all actors to be extracted as text files instead. For example, the imp is implemented partly as follows:

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  Melee:
  Missile:
    TROO EF 8 A_FaceTarget
    TROO G 6 A_TroopAttack
    Goto See
  ...
  }

TROO is the name of the imp’s sprite “family”. A, B, and so on are individual frames. The numbers are durations in tics (35 per second). All of the A_* things (which are optional) are action functions, behavioral functions (built into the engine) that run when the actor switches to that frame. An actor starts out at its Spawn state, so an imp behaves as follows:

  • Spawn. Render as TROO frame A. (By default, action functions don’t run on the very first frame they’re spawned.)
  • Wait 10 tics.
  • Change to TROO frame B. Run A_Look, which checks to see if a player is within line of sight, and if so jumps to the See state.
  • Wait 10 tics.
  • Repeat. (This time, frame A will also run A_Look, since the imp was no longer just spawned.)

All monster and item behavior is one big state table. Even the player’s own weapons work this way, which becomes very confusing — at some points a weapon can be running two states simultaneously. Oh, and there’s A_CustomMissile for monster attacks but A_FireCustomMissile for weapon attacks, and the arguments are different, and if you mix them up you’ll get extremely confusing parse errors.

It’s a little bit of a mess. It’s fairly flexible for what it is, and has come a long way — for example, even original Doom couldn’t pass arguments to action functions (since they were just function pointers), so it had separate functions like A_TroopAttack for every monster; now that same function can be written generically. People have done some very clever things with zero-delay frames (to run multiple action functions in a row) and storing state with dummy inventory items, too. Still, it’s not quite a programming language, and it’s easy to run into walls and bizarre quirks.

When DECORATE lets you down, you have one interesting recourse: to call an ACS script!

Unfortunately, ACS also has some old limitations. The only type it truly understands is int, so you can’t manipulate an actor directly or even store one in a variable. Instead, you have to work with TIDs (“thing IDs”). Every actor has a TID (zero is special-cased to mean “no TID”), and most ACS actor-related functions are expressed in terms of TIDs. For level automation, this is fine, and probably even what you want — you can dump a group of monsters in a map, give them all a TID, and then control them as a group fairly easily.

But if you want to use ACS to enhance DECORATE, you have a bit of a problem. DECORATE defines individual actor behavior. Also, many DECORATE actors are designed independently of a map and intended to be reusable anywhere. DECORATE should thus not touch TIDs at all, because they’re really the map's concern, and mucking with TIDs might break map behavior… but ACS can’t refer to actors any other way. A number of action functions can, but you can’t call action functions from ACS, only DECORATE. The workarounds for this are not pretty, especially for beginners, and they’re very easy to silently get wrong.

Also, ultimately, some parts of the engine are just not accessible to either ACS or DECORATE, and neither language is particularly amenable to having them exposed. Adding more native types to ACS is rather difficult without making significant changes to both the language and bytecode, and DECORATE is barely a language at all.

Some long-awaited work is finally being done on a “ZScript”, which purports to solve all of these problems by expanding DECORATE into an entire interpreted-C++-ish scripting language with access to tons of internals. I don’t know what I think of it, and it only seems to half-solve the problem, since it doesn’t replace ACS.

Trying out Lua

Lua is supposed to be easy to embed, right? That’s the one thing it’s famous for. Before ZScript actually started to materialize, I thought I’d take a little crack at embedding a Lua interpreter and exposing some API stuff to it.

It’s not very far along yet, but it can do one thing that’s always been completely impossible in both ACS and DECORATE: print out the player’s entire inventory. You can check how many of a given item the player has in either language, but neither has a way to iterate over a collection. In Lua, it’s pretty easy.

function lua_test_script(activator, ...)
    for item, amount in pairs(activator.inventory) do
        -- This is Lua's builtin print(), so it goes to stdout
        print(item.class.name, amount)
    end
end

I made a tiny test map with a switch that tries to run the ACS script named lua_test_script. I hacked the name lookup to first look for the name in Lua’s global scope; if the function exists, it’s called immediately, and ACS isn’t consulted at all. The code above is just a regular (global) function in a regular Lua file, embedded as a lump in the map. So that was a good start, and was pretty neat to see work.

Writing the bindings

I used the bare Lua API at first. While its API is definitely very simple, actually using it to define and expose a large API in practice is kind of repetitive and error-prone, and I was never confident I was doing it quite right. It’s plain C and it works entirely through stack manipulation and it relies on a lot of casting to/from void*, so virtually anything might go wrong at any time.

I was on the cusp of writing a bunch of gross macros to automate the boring parts, and then I found sol2, which is pretty great. It makes heavy use of basically every single C++11 feature, so it’s a nightmare when it breaks (and I’ve had to track down a few bugs), but it’s expressive as hell when it works:

lua.new_usertype<AActor>("zdoom.AActor",
    "__tostring", [](AActor& actor) { return "<actor>"; },
    // Pointer to an unbound method.  Sol automatically makes this an attribute
    // rather than a method because it takes no arguments, then wraps its
    // return value to pass it back to Lua, no manual wrapper code required.
    "class", &AActor::GetClass,
    "inventory", sol::property([](AActor& actor) -> ZLuaInventory { return ZLuaInventory(actor); }),
    // Pointers to unbound attributes.  Sol turns these into writable
    // attributes on the Lua side.
    "health", &AActor::health,
    "floorclip", &AActor::Floorclip,
    "weave_index_xy", &AActor::WeaveIndexXY,
    "weave_index_z", &AActor::WeaveIndexZ);

This is the type of the activator argument from the script above. It works via template shenanigans, so most of the work is done at compile time. AActor has a lot of properties of various types; wrapping them with the bare Lua API would’ve been awful, but wrapping them with Sol is fairly straightforward.
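For what it’s worth, the Lua side of that binding then reads naturally. A quick sketch, using only the attributes defined above (the healing amount is arbitrary):

function lua_heal_script(activator, ...)
    -- "class" and "health" come straight from the usertype definition
    print(activator.class.name, activator.health)
    -- health is bound as a writable attribute, so this mutates the engine object
    activator.health = activator.health + 25
end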

Lifetime

activator.inventory is a wrapper around a ZLuaInventory object, which I made up. It’s just a tiny proxy struct that tries to represent the inventory of a particular actor, because the engine itself doesn’t quite have such a concept — an actor’s “inventory” is a single item (itself an actor), and each item has a pointer to the next item in the inventory. Creating an intermediate type lets me hide that detail from Lua and pretend the inventory is a real container.

The inventory is thus not a real table; pairs() works on it because it provides the __pairs metamethod. It calls an iter method returning a closure, per Lua’s iteration API, which Sol makes just work:

struct ZLuaInventory {
    ...
    std::function<AInventory* ()> iter()
    {
        TObjPtr<AInventory> item = this->actor->Inventory;
        return [item]() mutable {
            AInventory* ret = item;
            if (ret)
                item = ret->NextInv();
            return ret;
        };
    }
};

C++’s closures are slightly goofy and it took me a few tries to land on this, but it works.

Well, sort of.

I don’t know how I got this idea in my head, but I was pretty sure that ZDoom’s TObjPtr did reference counting and would automatically handle the lifetime problems in the above code. Eventually Lua reaps the closure, then C++ reaps the closure, then the wrapped AInventorys refcount drops, and all is well.

Turns out TObjPtr doesn’t do reference counting. Rather, all the game objects participate in tracing garbage collection. The basic idea is to start from some root object and recursively traverse all the objects reachable from that root; whatever isn’t reached is garbage and can be deleted.

Unfortunately, the Lua interpreter is not reachable from ZDoom’s own object tree. If an object ends up only being held by Lua, ZDoom will think it’s garbage and delete it prematurely, leaving a dangling reference. Those are bad.

I think I can fix this without too much trouble. Sol allows customizing how it injects particular types, so I can use that for the type tree that participates in this GC scheme and keep an unordered_set of all objects that are alive in Lua. The Lua interpreter itself is already wrapped in an object that participates in the GC, so when the GC descends to the wrapper, it’s easy to tell it that that set of objects is alive. I’ll probably need to figure out read/write barriers, too, but I haven’t looked too closely at how ZDoom uses those yet. I don’t know whether it’s possible for an object to be “dead” (as in no longer usable, not just 0 health) before being reaped, but if so, I’ll need to figure out something there too.

It’s a little ironic that I have to do this weird workaround when ZDoom’s tracing garbage collector is based on… Lua’s.

ZDoom does have types I want to expose that aren’t garbage collected, but those are all map structures like sectors, which are never created or destroyed at runtime. I will have to be careful with the Lua interpreter itself to make sure those can’t live beyond the current map, but I haven’t really dealt with map changes at all yet. The ACS approach is that everything is map-local, and there’s some limited storage for preserving values across maps; I could do something similar, perhaps only allowing primitive scalars.
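If I go the primitive-scalars route, the cross-map stash could be as simple as a guarded table. A sketch, with every name invented:

-- Only primitive scalars survive map changes; everything else is refused,
-- so map-local objects can't leak across levels.
local stash = {}

function stash_set(key, value)
    local t = type(value)
    assert(t == "number" or t == "string" or t == "boolean",
        "only primitive scalars can be stored across maps")
    stash[key] = value
end

function stash_get(key)
    return stash[key]
end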

Asynchronicity

Another critical property of ACS scripts is that they can pause themselves. They can either wait for a set number of tics with delay(), or wait for map geometry to stop being busy with something like tagwait(). So you can raise up some stairs, wait for the stairs to finish appearing, and then open the door they lead to. Or you can simulate game rules by running a script in an infinite loop that waits for a few tics between iterations. It’s pretty handy. It’s incredibly handy. It’s non-negotiable.

Luckily, Lua can emulate this using coroutines. I implemented the delay case yesterday:

function lua_test_script(activator, ...)
    zprint("hey it's me what's up", ...)
    coroutine.yield("delay", 70)
    zprint("i'm back again")
end

When I press the switch, I see the first message, then there’s a two-second pause (Doom is 35fps), then I see the second message.

A lot more details need to be hammered out before this is really equivalent to what ACS can do, but the basic functionality is there. And since these are full-stack coroutines, I can trivially wrap that yield gunk in a delay(70) function, so you never have to know the difference.
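For the curious, here’s roughly what that wrapper and the host-side bookkeeping might look like, written in pure Lua as a stand-in for the C++ side. Only the yield protocol comes from the snippet above; the scheduler names are made up:

-- The friendly wrapper: scripts call delay(70) and never see the yield gunk.
function delay(tics)
    coroutine.yield("delay", tics)
end

local waiting = {}  -- coroutine -> tics left to wait

local function handle(co, ok, why, arg)
    -- A script that yields "delay" goes to sleep; errors and normal
    -- completion simply drop the coroutine.
    if ok and why == "delay" then
        waiting[co] = arg
    end
end

function start_script(fn, ...)
    local co = coroutine.create(fn)
    handle(co, coroutine.resume(co, ...))
end

-- Called by the engine once per tic (35 times a second).
function on_tic()
    -- Collect first, resume after: re-adding keys while iterating
    -- with pairs() is undefined behavior in Lua.
    local ready = {}
    for co, tics in pairs(waiting) do
        if tics > 1 then
            waiting[co] = tics - 1
        else
            waiting[co] = nil
            ready[#ready + 1] = co
        end
    end
    for _, co in ipairs(ready) do
        handle(co, coroutine.resume(co))
    end
end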

Determinism

ZDoom has demos and peer-to-peer multiplayer. Both features rely critically on the game state’s unfolding exactly the same way, given the same seed and sequence of inputs.

ACS goes to great lengths to preserve this. It executes deterministically. It has very, very few ways to make decisions based on anything but the current state of the game. Netplay and demos just work; modders and map authors never have to think about it.

I don’t know if I can guarantee the same about Lua. I’d think so, but I don’t know so. Will the order of keys in a table be exactly the same on every system, for example? That’s important! Even the ACS random-number generator is deterministic.

I hope this is the case. I know some games, like Starbound, implicitly assume for multiplayer purposes that scripts will execute the same way on every system. So it’s probably fine. I do wish Lua made some sort of guarantee here, though, especially since it’s such an obvious and popular candidate for game scripting.
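If key order does turn out to vary, one defensive option is to never expose raw pairs() order to game-visible code. A sketch, assuming the keys are mutually comparable (all strings, say):

-- Iterate a table in sorted key order, independent of hash seed or platform.
local function sorted_pairs(t)
    local keys = {}
    for k in pairs(t) do
        keys[#keys + 1] = k
    end
    table.sort(keys)
    local i = 0
    return function()
        i = i + 1
        if keys[i] ~= nil then
            return keys[i], t[keys[i]]
        end
    end
end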

Savegames

ZDoom allows you to quicksave at any time.

Any time.

Not while a script is running, mind you. Script execution blocks the gameplay thread, so only one thing can actually be happening at a time. But what happens if you save while a script is in the middle of a tagwait?

The coroutine needs to be persisted, somehow. More importantly, when the game is loaded, the coroutine needs to be restored to the same state: paused in the same place, with locals set to the same values. Even if those locals were wrapped pointers to C++ objects, which now have different addresses.

Vanilla Lua has no way to do this. Vanilla Lua has a pretty poor serialization story overall — nothing is built in — which is honestly kind of shocking. People use Lua for games, right? Like, a lot? How is this not an extremely common problem?

A potential solution exists in the form of Eris, a modified Lua that does all kinds of invasive things to allow absolutely anything to be serialized. Including coroutines!

So Eris makes this at least possible. I haven’t made even the slightest attempt at using it yet, but a few gotchas already stand out to me.

For one, Eris serializes everything. Even regular ol’ functions are serialized as Lua bytecode. A naïve approach would thus end up storing a copy of the entire game script in the save file.

Eris has a thing called the “permanent object table”, which allows giving names to specific Lua values. Those values are then serialized by name instead, and the names are looked up in the same table to deserialize. So I could walk the Lua namespace myself after the initial script load and stick all reachable functions in this table to avoid having them persisted. (That won’t catch if someone loads new code during play, but that sounds like a really bad idea anyway, and I’d like to prevent it if possible.) I have to do this to some extent anyway, since Eris can’t persist the wrapped C++ functions I’m exposing to Lua. Even if a script does some incredibly fancy dynamic stuff to replace global functions with closures at runtime, that’s okay; they’ll be different functions, so Eris will fall back to serializing them.
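That walk might look something like this. The eris.persist signature is from Eris’s documentation as I understand it; everything else is invented for illustration:

-- Build the permanent-object table after the initial script load: every
-- function reachable from the globals is persisted by name, not as bytecode.
-- (Unpersisting needs the inverted table, mapping names back to values.)
local perms = {}

local function register(prefix, tbl, seen)
    if seen[tbl] then return end
    seen[tbl] = true
    for name, value in pairs(tbl) do
        if type(name) == "string" then
            local path = prefix .. name
            if type(value) == "function" then
                perms[value] = path
            elseif type(value) == "table" then
                register(path .. ".", value, seen)
            end
        end
    end
end
register("", _G, {})

-- Later, when saving:
-- local blob = eris.persist(perms, paused_coroutine)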

Then when the save is reloaded, Eris will replace any captured references to a global function with the copy that already exists in the map script. ZDoom doesn’t let you load saves across different mods, so the functions should be the same. I think. Hmm, maybe I should check on exactly what the load rules are. If you can load a save against a more recent copy of a map, you’ll want to get its updated scripts, but stored closures and coroutines might be old versions, and that is probably bad. I don’t know if there’s much I can do about that, though, unless Eris can somehow save the underlying code from closures/coros as named references too.

Eris also has a mechanism for storing wrapped native objects, so all I have to worry about is translating pointers, and that’s a problem Doom has already solved (somehow). Alas, that mechanism is also accessible to pure Lua code, and the docs warn that it’s possible to get into an infinite loop when loading. I’d rather not give modders the power to fuck up a save file, so I’ll have to disable that somehow.

Finally, since Eris loads bytecode, it’s possible to do nefarious things with a specially-crafted save file. But since the save file is full of a web of pointers anyway, I suspect it’s not too hard to segfault the game that way regardless. I’ll need to look into this. Or maybe I won’t, since I don’t seriously expect this to be merged in.

Runaway scripts

Speaking of which, ACS currently has detection for “runaway scripts”, i.e. those that look like they might be stuck in an infinite loop (or are just doing a ludicrous amount of work). Since scripts are blocking, the game does not actually progress while a script is running, and a very long script would appear to freeze the game.

I think ACS does this by counting instructions. I see Lua has its own mechanism for doing that, so limiting script execution “time” shouldn’t be too hard.
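Lua’s mechanism is the count hook, settable from C with lua_sethook and LUA_MASKCOUNT, or from Lua itself. A minimal sketch (the limit is arbitrary):

-- Abort any script that executes too many VM instructions without
-- returning control to the engine.
local INSTRUCTION_LIMIT = 500000

debug.sethook(function()
    error("runaway script: instruction limit exceeded", 2)
end, "", INSTRUCTION_LIMIT)  -- empty event mask; fire on instruction count only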

Defining new actors

I want to be able to use Lua with (or instead of) DECORATE, too, but I’m a little hung up on syntax.

I do have something slightly working — I was able to create a variant imp class with a bunch more health from Lua, then spawn it and fight it. Also, I did it at runtime, which is probably bad — I don’t know that there’s any way to destroy an actor class, so having them be map-scoped makes no sense.

That could actually pose a bit of a problem. The Lua interpreter should be scoped to a single map, but actor classes are game-global. Do they live in separate interpreters? That seems inconvenient. I could load the game-global stuff, take an internal-only snapshot of the interpreter with Eris (bytecode and all), and then restore it at the beginning of each level? Hm, then what happens if you capture a reference to an actor method in a save file…? Christ.

I could consider making the interpreter global and doing black magic to replace all map objects with nil when changing maps, but I don’t think that can possibly work either. ZDoom has hubs — levels that can be left and later revisited, preserving their state just like with a save — and that seems at odds with having a single global interpreter whose state persists throughout the game.

Er, anyway. So, the problem with syntax is that DECORATE’s own syntax is extremely compact and designed for its very specific goal of state tables. Even ZScript appears to preserve the state table syntax, though it lets you write your own action functions or just provide a block of arbitrary code. Here’s a short chunk of the imp implementation again, for reference.

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  ...
  }

Some tricky parts that stand out to me:

  • Labels are important, since these are state tables, and jumping to a particular state is very common. It’s tempting to use Lua coroutines here somehow, but short of using a lot of goto in Lua code (yikes!), jumping around arbitrarily doesn’t work. Also, it needs to be possible to tell an actor to jump to a particular state from outside — that’s how A_Look works, and there’s even an ACS function to do it manually.

  • Aside from being shorthand, frames are fine. Though I do note that hacks like AABBCCDD 3 are relatively common. The actual animation that’s wanted here is ABCD 6, but because animation and behavior are intertwined, the labels need to be repeated to run the action function more often. I wonder if it’s desirable to be able to separate display and behavior?

  • The durations seem straightforward, but they can actually be a restricted kind of expression as well. So just defining them as data in a table doesn’t quite work.

  • This example doesn’t have any, but states can also have a number of flags, indicated by keywords after the duration. (Slightly ambiguous, since there’s nothing strictly distinguishing them from action functions.) Bright, for example, is a common flag on projectiles, weapons, and important pickups; it causes the sprite to be drawn fullbright during that frame.

  • Obviously, actor behavior is a big part of the game sim, so ideally it should require dipping into Lua-land as little as possible.

Ideas I’ve had include the following.

Emulate state tables with arguments? A very straightforward way to do the above would be to just, well, cram it into one big table.

define_actor{
    ...
    states = {
        'Spawn:',
        'TROO', 'AB', 10, A_Look,
        'loop',
        'See:',
        'TROO', 'AABBCCDD', 3, A_Chase,
        'loop',
        ...
    },
}

It would work, technically, I guess, except for non-literal durations, but I’d basically just be exposing the DECORATE parser from Lua and it would be pretty ridiculous.

Keep the syntax, but allow calling Lua from it? DECORATE is okay, for the most part. For simple cases, it’s great, even. Would it be good enough to be able to write new action functions in Lua? Maybe. Your behavior would be awkwardly split between Lua and DECORATE, though, which doesn’t seem ideal. But it would be the most straightforward approach, and it would completely avoid questions of how to emulate labels and state counts.

As an added benefit, this would keep DECORATE almost-purely declarative — which means editor tools could still reliably parse it and show you previews of custom objects.

Split animation from behavior? This could go several ways, but the most obvious to me is something like:

define_actor{
    ...
    states = {
        spawn = function(self)
            self:set_animation('AB', 10)
            while true do
                A_Look(self)
                delay(10)
            end
        end,
        see = function(self)
            self:set_animation('ABCD', 6)
            while true do
                A_Chase(self)
                delay(3)
            end
        end,
    },
}

This raises plenty of other API questions, like how to wait until an animation has finished or how to still do work on a specific frame, but I think those are fairly solvable. The big problems are that it’s very much not declarative, and it ends up being rather wordier. It’s not all boilerplate, though; it’s fairly straightforward. I see some value in having state delays and level script delays work the same way, too. And in some cases, you have only an animation with no code at all, so the heavier use of Lua should balance out. I don’t know.

A more practical problem is that, currently, it’s possible to jump to an arbitrary number of states past a given label, and that would obviously make no sense with this approach. It’s pretty rare and pretty unreadable, so maybe that’s okay. Also, labels aren’t blocks, so it’s entirely possible to have labels that don’t end with a keyword like loop and instead carry straight on into the next label — but those are usually used for logic more naturally expressed as for or while, so again, maybe losing that ability is okay.

Or… perhaps it makes sense to do both of these last two approaches? Built-in classes should stay as DECORATE anyway, so that existing code can still inherit from them and perform jumps with offsets, but new code could go entirely Lua for very complex actors.

Alas, this is probably one of those questions that won’t have an obvious answer unless I just build several approaches and port some non-trivial stuff to them to see how they feel.

And further

An enduring desire among ZDoom nerds has been the ability to write custom “thinkers”. Thinkers are really anything that gets to act each tic, but the word also specifically refers to the logic responsible for moving floors, opening doors, changing light levels, and so on. Exposing those more directly to Lua, and letting you write your own, would be pretty interesting.

Anyway

I don’t know if I’ll do all of this. I somewhat doubt it, in fact. I pick it up for half a day every few weeks to see what more I can make it do, just because it’s interesting. It has virtually no chance of being upstreamed anyway (the only active maintainer hates Lua, and thinks poorly of dynamic languages in general; plus, it’s redundant with ZScript) and I don’t really want to maintain my own yet another Doom fork, so I don’t expect it to ever be a serious project.

The source code for what I’ve done so far is available, but it’s brittle and undocumented, so I’m not going to tell you where to find it. If it gets far enough along to be useful as more than a toy, I’ll make a slightly bigger deal about it.

Embedding Lua in ZDoom

Post Syndicated from Eevee original https://eev.ee/blog/2016/11/26/embedding-lua-in-zdoom/

I’ve spent a little time trying to embed a Lua interpreter in ZDoom. I didn’t get too far yet; it’s just an experimental thing I poke at every once and a while. The existing pile of constraints makes it an interesting problem, though.

Background

ZDoom is a “source port” (read: fork) of the Doom engine, with all the changes from the commercial forks merged in (mostly Heretic, Hexen, Strife), and a lot of internal twiddles exposed. It has a variety of mechanisms for customizing game behavior; two are major standouts.

One is ACS, a vaguely C-ish language inherited from Hexen. It’s mostly used to automate level behavior — at the simplest, by having a single switch perform multiple actions. It supports the usual loops and conditionals, it can store data persistently, and ZDoom exposes a number of functions to it for inspecting and altering the state of the world, so it can do some neat tricks. Here’s an arbitrary script from my DUMP2 map.

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
script "open_church_door" (int tag)
{
    // Open the door more quickly on easier skill levels, so running from the
    // arch-vile is a more viable option
    int skill = GameSkill();
    int speed;
    if (skill < SKILL_NORMAL)
        speed = 64;  // blazing door speed
    else if (skill == SKILL_NORMAL)
        speed = 16;  // normal door speed
    else
        speed = 8;  // very dramatic door speed

    Door_Raise(tag, speed, 68);  // double usual delay
}

However, ZDoom doesn’t actually understand the language itself; ACS is compiled to bytecode. There’s even at least one alternative language that compiles to the same bytecode, which is interesting.

The other big feature is DECORATE, a mostly-declarative mostly-interpreted language for defining new kinds of objects. It’s a fairly direct reflection of how Doom actors are implemented, which is in terms of states. In Doom and the other commercial games, actor behavior was built into the engine, but this language has allowed almost all actors to be extracted as text files instead. For example, the imp is implemented partly as follows:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  Melee:
  Missile:
    TROO EF 8 A_FaceTarget
    TROO G 6 A_TroopAttack
    Goto See
  ...
  }

TROO is the name of the imp’s sprite “family”. A, B, and so on are individual frames. The numbers are durations in tics (35 per second). All of the A_* things (which are optional) are action functions, behavioral functions (built into the engine) that run when the actor switches to that frame. An actor starts out at its Spawn state, so an imp behaves as follows:

  • Spawn. Render as TROO frame A. (By default, action functions don’t run on the very first frame they’re spawned.)
  • Wait 10 tics.
  • Change to TROO frame B. Run A_Look, which checks to see if a player is within line of sight, and if so jumps to the See state.
  • Wait 10 tics.
  • Repeat. (This time, frame A will also run A_Look, since the imp was no longer just spawned.)

All monster and item behavior is one big state table. Even the player’s own weapons work this way, which becomes very confusing — at some points a weapon can be running two states simultaneously. Oh, and there’s A_CustomMissile for monster attacks but A_FireCustomMissile for weapon attacks, and the arguments are different, and if you mix them up you’ll get extremely confusing parse errors.

It’s a little bit of a mess. It’s fairly flexible for what it is, and has come a long way — for example, even original Doom couldn’t pass arguments to action functions (since they were just function pointers), so it had separate functions like A_TroopAttack for every monster; now that same function can be written generically. People have done some very clever things with zero-delay frames (to run multiple action functions in a row) and storing state with dummy inventory items, too. Still, it’s not quite a programming language, and it’s easy to run into walls and bizarre quirks.

When DECORATE lets you down, you have one interesting recourse: to call an ACS script!

Unfortunately, ACS also has some old limitations. The only type it truly understands is int, so you can’t manipulate an actor directly or even store one in a variable. Instead, you have to work with TIDs (“thing IDs”). Every actor has a TID (zero is special-cased to mean “no TID”), and most ACS actor-related functions are expressed in terms of TIDs. For level automation, this is fine, and probably even what you want — you can dump a group of monsters in a map, give them all a TID, and then control them as a group fairly easily.

But if you want to use ACS to enhance DECORATE, you have a bit of a problem. DECORATE defines individual actor behavior. Also, many DECORATE actors are designed independently of a map and intended to be reusable anywhere. DECORATE should thus not touch TIDs at all, because they’re really the map‘s concern, and mucking with TIDs might break map behavior… but ACS can’t refer to actors any other way. A number of action functions can, but you can’t call action functions from ACS, only DECORATE. The workarounds for this are not pretty, especially for beginners, and they’re very easy to silently get wrong.

Also, ultimately, some parts of the engine are just not accessible to either ACS or DECORATE, and neither language is particularly amenable to having them exposed. Adding more native types to ACS is rather difficult without making significant changes to both the language and bytecode, and DECORATE is barely a language at all.

Some long-awaited work is finally being done on a “ZScript”, which purports to solve all of these problems by expanding DECORATE into an entire interpreted-C++-ish scripting language with access to tons of internals. I don’t know what I think of it, and it only seems to half-solve the problem, since it doesn’t replace ACS.

Trying out Lua

Lua is supposed to be easy to embed, right? That’s the one thing it’s famous for. Before ZScript actually started to materialize, I thought I’d take a little crack at embedding a Lua interpreter and exposing some API stuff to it.

It’s not very far along yet, but it can do one thing that’s always been completely impossible in both ACS and DECORATE: print out the player’s entire inventory. You can check how many of a given item the player has in either language, but neither has a way to iterate over a collection. In Lua, it’s pretty easy.

function lua_test_script(activator, ...)
    for item, amount in pairs(activator.inventory) do
        -- This is Lua's builtin print(), so it goes to stdout
        print(item.class.name, amount)
    end
end

I made a tiny test map with a switch that tries to run the ACS script named lua_test_script. I hacked the name lookup to first look for the name in Lua’s global scope; if the function exists, it’s called immediately, and ACS isn’t consulted at all. The code above is just a regular (global) function in a regular Lua file, embedded as a lump in the map. So that was a good start, and was pretty neat to see work.
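
The dispatch order is simple enough to sketch. Here it is written in Lua for illustration, even though the real check happens on the C++ side; run_named_script is an invented name:

-- a sketch of the lookup described above; the real version lives in C++
local function run_named_script(name, activator, ...)
    local fn = _G[name]
    if type(fn) == "function" then
        -- found in Lua's global scope: call it and skip ACS entirely
        return true, fn(activator, ...)
    end
    -- otherwise fall through to the normal ACS script lookup
    return false
end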

Writing the bindings

I used the bare Lua API at first. While its API is definitely very simple, actually using it to define and expose a large API in practice is kind of repetitive and error-prone, and I was never confident I was doing it quite right. It’s plain C and it works entirely through stack manipulation and it relies on a lot of casting to/from void*, so virtually anything might go wrong at any time.

I was on the cusp of writing a bunch of gross macros to automate the boring parts, and then I found sol2, which is pretty great. It makes heavy use of basically every single C++11 feature, so it’s a nightmare when it breaks (and I’ve had to track down a few bugs), but it’s expressive as hell when it works:

lua.new_usertype<AActor>("zdoom.AActor",
    "__tostring", [](AActor& actor) { return "<actor>"; },
    // Pointer to an unbound method.  Sol automatically makes this an attribute
    // rather than a method because it takes no arguments, then wraps its
    // return value to pass it back to Lua, no manual wrapper code required.
    "class", &AActor::GetClass,
    "inventory", sol::property([](AActor& actor) -> ZLuaInventory { return ZLuaInventory(actor); }),
    // Pointers to unbound attributes.  Sol turns these into writable
    // attributes on the Lua side.
    "health", &AActor::health,
    "floorclip", &AActor::Floorclip,
    "weave_index_xy", &AActor::WeaveIndexXY,
    "weave_index_z", &AActor::WeaveIndexZ);

This is the type of the activator argument from the script above. It works via template shenanigans, so most of the work is done at compile time. AActor has a lot of properties of various types; wrapping them with the bare Lua API would’ve been awful, but wrapping them with Sol is fairly straightforward.
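
As a usage sketch, a script can then treat those bindings as plain fields. lua_buff_script is an invented name; everything else follows the registration above:

function lua_buff_script(activator, ...)
    print(activator.class.name)               -- "class" wraps GetClass()
    activator.health = activator.health + 25  -- plain writable attribute
    for item, amount in pairs(activator.inventory) do
        print(item.class.name, amount)
    end
end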

Lifetime

activator.inventory is a wrapper around a ZLuaInventory object, which I made up. It’s just a tiny proxy struct that tries to represent the inventory of a particular actor, because the engine itself doesn’t quite have such a concept — an actor’s “inventory” is a single item (itself an actor), and each item has a pointer to the next item in the inventory. Creating an intermediate type lets me hide that detail from Lua and pretend the inventory is a real container.

The inventory is thus not a real table; pairs() works on it because it provides the __pairs metamethod. It calls an iter method returning a closure, per Lua’s iteration API, which Sol makes just work:

struct ZLuaInventory {
    ...
    // Returns a closure that yields each inventory item in turn, then
    // nullptr once the chain is exhausted.
    std::function<AInventory* ()> iter()
    {
        // Copy the head of the actor's inventory chain into the closure.
        TObjPtr<AInventory> item = this->actor->Inventory;
        return [item]() mutable {
            AInventory* ret = item;
            if (ret)
                item = ret->NextInv();
            return ret;
        };
    }
};

C++’s closures are slightly goofy and it took me a few tries to land on this, but it works.

Well, sort of.

I don’t know how I got this idea in my head, but I was pretty sure that ZDoom’s TObjPtr did reference counting and would automatically handle the lifetime problems in the above code. Eventually Lua reaps the closure, then C++ reaps the closure, then the wrapped AInventorys refcount drops, and all is well.

Turns out TObjPtr doesn’t do reference counting. Rather, all the game objects participate in tracing garbage collection. The basic idea is to start from some root object and recursively traverse all the objects reachable from that root; whatever isn’t reached is garbage and can be deleted.

Unfortunately, the Lua interpreter is not reachable from ZDoom’s own object tree. If an object ends up only being held by Lua, ZDoom will think it’s garbage and delete it prematurely, leaving a dangling reference. Those are bad.

I think I can fix this without too much trouble. Sol allows customizing how it injects particular types, so I can use that for the type tree that participates in this GC scheme and keep an unordered_set of all objects that are alive in Lua. The Lua interpreter itself is already wrapped in an object that participates in the GC, so when the GC descends to the wrapper, it's easy to tell it that that set of objects is alive. I'll probably need to figure out read/write barriers, too, but I haven't looked too closely at how ZDoom uses those yet. I don't know whether it's possible for an object to be "dead" (as in no longer usable, not just 0 health) before being reaped, but if so, I'll need to figure out something there too.

It’s a little ironic that I have to do this weird workaround when ZDoom’s tracing garbage collector is based on… Lua’s.

ZDoom does have types I want to expose that aren’t garbage collected, but those are all map structures like sectors, which are never created or destroyed at runtime. I will have to be careful with the Lua interpreter itself to make sure those can’t live beyond the current map, but I haven’t really dealt with map changes at all yet. The ACS approach is that everything is map-local, and there’s some limited storage for preserving values across maps; I could do something similar, perhaps only allowing primitive scalars.
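
Enforcing "primitive scalars only" would be easy from the Lua side. Here's a minimal sketch, assuming the engine keeps the backing table alive across map changes; the stash name and the whole mechanism are my invention:

-- a sketch of a cross-map stash that rejects anything but scalars
local storage = {}  -- the engine would preserve this across map changes
stash = setmetatable({}, {
    __index = storage,
    __newindex = function(_, key, value)
        local t = type(value)
        assert(t == "number" or t == "string" or t == "boolean" or t == "nil",
            "only primitive scalars survive map changes")
        storage[key] = value
    end,
})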

Asynchronicity

Another critical property of ACS scripts is that they can pause themselves. They can either wait for a set number of tics with delay(), or wait for map geometry to stop being busy with something like tagwait(). So you can raise up some stairs, wait for the stairs to finish appearing, and then open the door they lead to. Or you can simulate game rules by running a script in an infinite loop that waits for a few tics between iterations. It’s pretty handy. It’s incredibly handy. It’s non-negotiable.

Luckily, Lua can emulate this using coroutines. I implemented the delay case yesterday:

function lua_test_script(activator, ...)
    zprint("hey it's me what's up", ...)
    coroutine.yield("delay", 70)
    zprint("i'm back again")
end

When I press the switch, I see the first message, then there’s a two-second pause (Doom is 35fps), then I see the second message.

A lot more details need to be hammered out before this is really equivalent to what ACS can do, but the basic functionality is there. And since these are full-stack coroutines, I can trivially wrap that yield gunk in a delay(70) function, so you never have to know the difference.
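
The wrapper really is that trivial. Here's a minimal sketch, with the engine's half of the contract also written in Lua purely for illustration (in reality that bookkeeping would happen in C++):

-- the user-facing wrapper: scripts just call delay(70)
function delay(tics)
    coroutine.yield("delay", tics)
end

-- illustrative sketch of the engine side resuming the coroutine
local function start_script(fn, activator, ...)
    local co = coroutine.create(fn)
    local ok, request, arg = coroutine.resume(co, activator, ...)
    if ok and request == "delay" then
        -- schedule coroutine.resume(co) to run again after `arg` tics
    end
end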

Determinism

ZDoom has demos and peer-to-peer multiplayer. Both features rely critically on the game state’s unfolding exactly the same way, given the same seed and sequence of inputs.

ACS goes to great lengths to preserve this. It executes deterministically. It has very, very few ways to make decisions based on anything but the current state of the game. Netplay and demos just work; modders and map authors never have to think about it.

I don’t know if I can guarantee the same about Lua. I’d think so, but I don’t know so. Will the order of keys in a table be exactly the same on every system, for example? That’s important! Even the ACS random-number generator is deterministic.

I hope this is the case. I know some games, like Starbound, implicitly assume for multiplayer purposes that scripts will execute the same way on every system. So it’s probably fine. I do wish Lua made some sort of guarantee here, though, especially since it’s such an obvious and popular candidate for game scripting.
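
In the meantime, scripts can at least sidestep the most obvious hazard: the order in which pairs() visits keys is explicitly unspecified, so any game-visible decision based on table iteration should impose its own order. A sketch:

-- pairs() order is unspecified, so don't let it leak into the game
-- state; collect the keys and sort them first (assumes string keys)
local function sorted_pairs(t)
    local keys = {}
    for k in pairs(t) do
        keys[#keys + 1] = k
    end
    table.sort(keys)
    local i = 0
    return function()
        i = i + 1
        local k = keys[i]
        if k ~= nil then
            return k, t[k]
        end
    end
end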

Savegames

ZDoom allows you to quicksave at any time.

Any time.

Not while a script is running, mind you. Script execution blocks the gameplay thread, so only one thing can actually be happening at a time. But what happens if you save while a script is in the middle of a tagwait?

The coroutine needs to be persisted, somehow. More importantly, when the game is loaded, the coroutine needs to be restored to the same state: paused in the same place, with locals set to the same values. Even if those locals were wrapped pointers to C++ objects, which now have different addresses.

Vanilla Lua has no way to do this. Vanilla Lua has a pretty poor serialization story overall — nothing is built in — which is honestly kind of shocking. People use Lua for games, right? Like, a lot? How is this not an extremely common problem?

A potential solution exists in the form of Eris, a modified Lua that does all kinds of invasive things to allow absolutely anything to be serialized. Including coroutines!

So Eris makes this at least possible. I haven’t made even the slightest attempt at using it yet, but a few gotchas already stand out to me.

For one, Eris serializes everything. Even regular ol’ functions are serialized as Lua bytecode. A naïve approach would thus end up storing a copy of the entire game script in the save file.

Eris has a thing called the “permanent object table”, which allows giving names to specific Lua values. Those values are then serialized by name instead, and the names are looked up in the same table to deserialize. So I could walk the Lua namespace myself after the initial script load and stick all reachable functions in this table to avoid having them persisted. (That won’t catch if someone loads new code during play, but that sounds like a really bad idea anyway, and I’d like to prevent it if possible.) I have to do this to some extent anyway, since Eris can’t persist the wrapped C++ functions I’m exposing to Lua. Even if a script does some incredibly fancy dynamic stuff to replace global functions with closures at runtime, that’s okay; they’ll be different functions, so Eris will fall back to serializing them.
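
That namespace walk might look roughly like this; the traversal details are my own, and Eris just wants a table mapping each object to a stable name (with the inverse mapping used when loading):

-- sketch: walk the globals once after script load, naming every
-- reachable function so Eris stores a reference instead of bytecode
local perms = {}
local function collect(tbl, prefix, seen)
    if seen[tbl] then return end
    seen[tbl] = true
    for key, value in pairs(tbl) do
        if type(key) == "string" then
            if type(value) == "function" then
                perms[value] = prefix .. key
            elseif type(value) == "table" then
                collect(value, prefix .. key .. ".", seen)
            end
        end
    end
end
collect(_G, "", {})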

Then when the save is reloaded, Eris will replace any captured references to a global function with the copy that already exists in the map script. ZDoom doesn’t let you load saves across different mods, so the functions should be the same. I think. Hmm, maybe I should check on exactly what the load rules are. If you can load a save against a more recent copy of a map, you’ll want to get its updated scripts, but stored closures and coroutines might be old versions, and that is probably bad. I don’t know if there’s much I can do about that, though, unless Eris can somehow save the underlying code from closures/coros as named references too.

Eris also has a mechanism for storing wrapped native objects, so all I have to worry about is translating pointers, and that’s a problem Doom has already solved (somehow). Alas, that mechanism is also accessible to pure Lua code, and the docs warn that it’s possible to get into an infinite loop when loading. I’d rather not give modders the power to fuck up a save file, so I’ll have to disable that somehow.

Finally, since Eris loads bytecode, it's possible to do nefarious things with a specially-crafted save file. But a save file is already full of a web of pointers, so I suspect it's not too hard to segfault the game with a handcrafted save even without Lua in the picture. I'll need to look into this. Or maybe I won't, since I don't seriously expect this to be merged in.

Runaway scripts

Speaking of which, ACS currently has detection for “runaway scripts”, i.e. those that look like they might be stuck in an infinite loop (or are just doing a ludicrous amount of work). Since scripts are blocking, the game does not actually progress while a script is running, and a very long script would appear to freeze the game.

I think ACS does this by counting instructions. I see Lua has its own mechanism for doing that, so limiting script execution “time” shouldn’t be too hard.
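
From the Lua side that's a count hook (the C API equivalent is lua_sethook with LUA_MASKCOUNT). A minimal sketch, with an arbitrary budget:

-- abort any script that executes too many VM instructions
debug.sethook(function()
    error("script ran too long", 2)
end, "", 10000000)  -- empty event mask, fire every 10M instructions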

Defining new actors

I want to be able to use Lua with (or instead of) DECORATE, too, but I’m a little hung up on syntax.

I do have something slightly working — I was able to create a variant imp class with a bunch more health from Lua, then spawn it and fight it. Also, I did it at runtime, which is probably bad — I don’t know that there’s any way to destroy an actor class, so having them be map-scoped makes no sense.

That could actually pose a bit of a problem. The Lua interpreter should be scoped to a single map, but actor classes are game-global. Do they live in separate interpreters? That seems inconvenient. I could load the game-global stuff, take an internal-only snapshot of the interpreter with Eris (bytecode and all), and then restore it at the beginning of each level? Hm, then what happens if you capture a reference to an actor method in a save file…? Christ.

I could consider making the interpreter global and doing black magic to replace all map objects with nil when changing maps, but I don’t think that can possibly work either. ZDoom has hubs — levels that can be left and later revisited, preserving their state just like with a save — and that seems at odds with having a single global interpreter whose state persists throughout the game.

Er, anyway. So, the problem with syntax is that DECORATE's own syntax is extremely compact and designed for its very specific goal of state tables. Even ZScript appears to preserve the state table syntax, though it lets you write your own action functions or just provide a block of arbitrary code. Here's a short chunk of the imp implementation again, for reference.

  States
  {
  Spawn:
    TROO AB 10 A_Look
    Loop
  See:
    TROO AABBCCDD 3 A_Chase
    Loop
  ...
  }

Some tricky parts that stand out to me:

  • Labels are important, since these are state tables, and jumping to a particular state is very common. It’s tempting to use Lua coroutines here somehow, but short of using a lot of goto in Lua code (yikes!), jumping around arbitrarily doesn’t work. Also, it needs to be possible to tell an actor to jump to a particular state from outside — that’s how A_Look works, and there’s even an ACS function to do it manually.

  • Aside from being shorthand, frames are fine. Though I do note that hacks like AABBCCDD 3 are relatively common. The actual animation that's wanted here is ABCD 6, but because animation and behavior are intertwined, the frames need to be repeated to run the action function more often. I wonder if it's desirable to be able to separate display and behavior?

  • The durations seem straightforward, but they can actually be a restricted kind of expression as well. So just defining them as data in a table doesn’t quite work.

  • This example doesn’t have any, but states can also have a number of flags, indicated by keywords after the duration. (Slightly ambiguous, since there’s nothing strictly distinguishing them from action functions.) Bright, for example, is a common flag on projectiles, weapons, and important pickups; it causes the sprite to be drawn fullbright during that frame.

  • Obviously, actor behavior is a big part of the game sim, so ideally it should require dipping into Lua-land as little as possible.

Ideas I’ve had include the following.

Emulate state tables with arguments? A very straightforward way to do the above would be to just, well, cram it into one big table.

define_actor{
    ...
    states = {
        'Spawn:',
        'TROO', 'AB', 10, A_Look,
        'loop',
        'See:',
        'TROO', 'AABBCCDD', 3, A_Chase,
        'loop',
        ...
    },
}

It would work, technically, I guess, except for non-literal durations, but I’d basically just be exposing the DECORATE parser from Lua and it would be pretty ridiculous.

Keep the syntax, but allow calling Lua from it? DECORATE is okay, for the most part. For simple cases, it’s great, even. Would it be good enough to be able to write new action functions in Lua? Maybe. Your behavior would be awkwardly split between Lua and DECORATE, though, which doesn’t seem ideal. But it would be the most straightforward approach, and it would completely avoid questions of how to emulate labels and state counts.

As an added benefit, this would keep DECORATE almost-purely declarative — which means editor tools could still reliably parse it and show you previews of custom objects.
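
A hypothetical sketch of what that might look like; everything here, including register_action_function and the play_sound binding, is invented for illustration:

-- entirely hypothetical: an action function written in Lua, exposed to
-- DECORATE state tables under a conventional A_* name
local function A_ImpTaunt(actor)
    if actor.health < 20 then
        actor:play_sound("imp/taunt")  -- assumed binding
    end
end
register_action_function("A_ImpTaunt", A_ImpTaunt)  -- assumed API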

Split animation from behavior? This could go several ways, but the most obvious to me is something like:

define_actor{
    ...
    states = {
        spawn = function(self)
            self:set_animation('AB', 10)
            while true do
                A_Look(self)
                delay(10)
            end
        end,
        see = function(self)
            self:set_animation('ABCD', 6)
            while true do
                A_Chase(self)
                delay(3)
            end
        end,
    },
}

This raises plenty of other API questions, like how to wait until an animation has finished or how to still do work on a specific frame, but I think those are fairly solvable. The big problems are that it’s very much not declarative, and it ends up being rather wordier. It’s not all boilerplate, though; it’s fairly straightforward. I see some value in having state delays and level script delays work the same way, too. And in some cases, you have only an animation with no code at all, so the heavier use of Lua should balance out. I don’t know.

A more practical problem is that, currently, it’s possible to jump to an arbitrary number of states past a given label, and that would obviously make no sense with this approach. It’s pretty rare and pretty unreadable, so maybe that’s okay. Also, labels aren’t blocks, so it’s entirely possible to have labels that don’t end with a keyword like loop and instead carry straight on into the next label — but those are usually used for logic more naturally expressed as for or while, so again, maybe losing that ability is okay.

Or… perhaps it makes sense to do both of these last two approaches? Built-in classes should stay as DECORATE anyway, so that existing code can still inherit from them and perform jumps with offsets, but new code could go entirely Lua for very complex actors.

Alas, this is probably one of those questions that won’t have an obvious answer unless I just build several approaches and port some non-trivial stuff to them to see how they feel.

And further

An enduring desire among ZDoom nerds has been the ability to write custom “thinkers”. Thinkers are really anything that gets to act each tic, but the word also specifically refers to the logic responsible for moving floors, opening doors, changing light levels, and so on. Exposing those more directly to Lua, and letting you write your own, would be pretty interesting.

Anyway

I don’t know if I’ll do all of this. I somewhat doubt it, in fact. I pick it up for half a day every few weeks to see what more I can make it do, just because it’s interesting. It has virtually no chance of being upstreamed anyway (the only active maintainer hates Lua, and thinks poorly of dynamic languages in general; plus, it’s redundant with ZScript) and I don’t really want to maintain my own yet another Doom fork, so I don’t expect it to ever be a serious project.

The source code for what I’ve done so far is available, but it’s brittle and undocumented, so I’m not going to tell you where to find it. If it gets far enough along to be useful as more than a toy, I’ll make a slightly bigger deal about it.

Another Shadow Brokers Leak

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/another_shadow_.html

There’s another leak of NSA hacking tools and data from the Shadow Brokers. This one includes a list of hacked sites.

According to analyses from researchers here and here, Monday’s dump contains 352 distinct IP addresses and 306 domain names that purportedly have been hacked by the NSA. The timestamps included in the leak indicate that the servers were targeted between August 22, 2000 and August 18, 2010. The addresses include 32 .edu domains and nine .gov domains. In all, the targets were located in 49 countries, with the top 10 being China, Japan, Korea, Spain, Germany, India, Taiwan, Mexico, Italy, and Russia. Vitali Kremez, a senior intelligence analyst at security firm Flashpoint, also provides useful analysis here.

The dump also includes various other pieces of data. Chief among them are configuration settings for an as-yet unknown toolkit used to hack servers running Unix operating systems. If valid, the list could be used by various organizations to uncover a decade’s worth of attacks that until recently were closely guarded secrets. According to this spreadsheet, the servers were mostly running Solaris, an operating system from Sun Microsystems that was widely used in the early 2000s. Linux and FreeBSD are also shown.

The data is old, but you can see if you’ve been hacked.

Honestly, I am surprised by this release. I thought that the original Shadow Brokers dump was everything. Now that we know they held things back, there could easily be more releases.

EDITED TO ADD (11/6): More on the NSA targets. Note that the Hague-based Organization for the Prohibition of Chemical Weapons is on the list, hacked in 2000.

Accessible games

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/29/accessible-games/

I’ve now made a few small games. One of the trickiest and most interesting parts of designing them has been making them accessible.

I mean that in a very general and literal sense. I want as many people as possible to experience as much of my games as possible. Finding and clearing out unnecessary hurdles can be hard, but every one I leave risks losing a bunch of players who can’t or won’t clear it.

I’ve noticed three major categories of hurdle, all of them full of tradeoffs. Difficulty is what makes a game challenging, but if a player can’t get past a certain point, they can never see the rest of the game. Depth is great, but not everyone has 80 hours to pour into a game, and it’s tough to spend weeks of dev time on stuff most people won’t see. Distribution is a question of who can even get your game in the first place.

Here are some thoughts.

Mario Maker

Mario Maker is most notable for how accessible it is to budding game designers, which is important but also a completely different sense of accessibility.

The really nice thing about Mario Maker is that its levels are also accessible to players. Virtually everyone who’s heard of video games has heard of Mario. You don’t need to know many rules to be able to play. Move to the right, jump over/on things, and get to the flag.

(The “distribution” model is a bit of a shame, though — you need to own a particular console and a $60 game. If I want people to play a single individual level I made, that’s a lot of upfront investment to ask for. Ultimately Nintendo is in this to sell their own game more than to help people show off their own.)

But the emergent depth of Mario Maker's myriad objects — the very property that makes the platform more than a toy — also makes it less accessible. Everyone knows you move around and jump, but not everyone knows you can pick up an item with B, or that you can put on a hat you're carrying by pressing up, or that you can spinjump on certain hazards. And these are fairly basic controls — Mario Maker contains plenty of special interactions between more obscure objects, and no manual explaining them all.

I thought it was especially interesting that Nintendo’s own comic series on building Mario Maker levels specifically points out that running jumps don’t come naturally to everyone. It’s hard to imagine too many people playing Mario Maker and not knowing how to jump while running.

And yet.

And yet, imagine being one such person, and encountering a level that requires a running jump early on. You can’t get past it. You might not even understand how to get past it; perhaps you don’t even know Mario can run. Now what? That’s it, you’re stuck. You’ll never see the rest of that level. It’s a hurdle, in a somewhat more literal sense.

Why make the level that way in the first place, then? Does any seasoned Mario player jump over a moderate-width gap and come away feeling proud for having conquered it? Seems unlikely.

I’ve tried playing through 100 Mario Challenge on Expert a number of times (without once managing to complete it), and I’ve noticed three fuzzy categories. Some levels are an arbitrary mess of hazards right from the start, so I don’t expect them to get any easier. Some levels are clearly designed as difficult obstacle courses, so again, I assume they’ll be just as hard all the way through. In both cases, if I give up and skip to the next level, I don’t feel like I’m missing out on anything — I’m not the intended audience.

But there are some Expert-ranked levels that seem pretty reasonable… until this one point where all hell breaks loose. I always wonder how deliberate those parts are, and I vaguely regret skipping them — would the rest of the level have calmed back down and been enjoyable?

That’s the kind of hurdle I think about when I see conspicuous clusters of death markers in my own levels. How many people died there and gave up? I make levels intending for people to play them, to see them through, but how many players have I turned off with some needlessly tricky part?

One of my levels is a Boo house with a few cute tricks in it. Unfortunately, I also put a ring of Boos right at the beginning that’s tricky to jump through, so it’s very easy for a player to die several times right there and never see anything else.

I wanted my Boo house to be interesting rather than difficult, but I let difficulty creep in accidentally, and so I’ve reduced the number of people who can appreciate the interestingness. Every level I’ve made since then, I’ve struggled to keep the difficulty down, and still sometimes failed. It’s easy to make a level that’s very hard; it’s surprisingly hard to make a level that’s fairly easy. All it takes is a single unintended hurdle — a tricky jump, an awkwardly-placed enemy — to start losing players.

This isn’t to say that games should never be difficult, but difficulty needs to be deliberately calibrated, and that’s a hard thing to do. It’s very easy to think only in terms of “can I beat this”, and even that’s not accurate, since you know every nook and cranny of your own level. Can you beat it blind, on the first few tries? Could someone else?

Those questions are especially important in Mario Maker, where the easiest way to encounter an assortment of levels is to play 100 Mario Challenge. You have 100 lives and need to beat 16 randomly-chosen levels. If you run out of lives, you’re done, and you have to start over. If I encounter your level here, I can’t afford to burn more than six or seven lives on it, or I’ll game over and have wasted my time. So if your level looks ridiculously hard (and not even in a fun way), I’ll just skip it and hope I get a better level next time.

I wonder if designers forget to calibrate for this. When you spend a lot of time working on something, it’s easy to imagine it exists in a vacuum, to assume that other people will be as devoted to playing it as you were to making it.

Mario Maker is an extreme case: millions of levels are available, and any player can skip to another one with the push of a button. That might be why I feel like I’ve seen a huge schism in level difficulty: most Expert levels are impossible for me, whereas most Normal levels are fairly doable with one or two rough patches. I haven’t seen much that’s in the middle, that feels like a solid challenge. I suspect that people who are very good at Mario are looking for an extreme challenge, and everyone else just wants to play some Mario, so moderate-difficulty levels just aren’t as common. The former group will be bored by them, and the latter group will skip them.

Or maybe that’s a stretch. It’s hard to generalize about the game’s pool of levels when they number in the millions, and I can’t have played more than a few hundred.

What Mario Maker has really taught me is what a hurdle looks like. The game keeps track of everywhere a player has ever died. I may not be able to watch people play my levels, but looking back at them later and seeing clumps of death markers is very powerful. Those are the places people failed. Did they stop playing after that? Did I intend for those places to be so difficult?

Doom

Doom is an interesting contrast to Mario Maker. A great many Doom maps have been produced over the past two decades, but nowhere near as many levels as Mario Maker has produced in a couple years. On the other hand, many people who still play Doom have been playing Doom this entire time, so a greater chunk of the community is really good at the game and enjoys a serious challenge.

I’ve only released a couple Doom maps of my own: Throughfare (the one I contributed to DUMP 2 earlier this year) and a few one-hour speedmaps I made earlier this week. I like building in Doom, with its interesting balance of restrictions — it’s a fairly accessible way to build an interesting 3D world, and nothing else is quite like it.

I’ve had the privilege of watching a few people play through my maps live, and I have learned some things.

The first is that the community’s love of difficulty is comically misleading. It’s not wrong, but, well, that community isn’t actually my target audience. So far I’ve “published” maps on this blog and Twitter, where my audience hasn’t necessarily even played Doom in twenty years. If at all! Some of my followers are younger than Doom.

Most notably, this creates something of a distribution problem: to play my maps, you need to install a thing (ZDoom) and kinda figure out how to use it and also get a copy of Doom 2 which probably involves spending five bucks. Less of a hurdle than getting Mario Maker, yes, but still some upfront effort.

Also, ZDoom’s default settings are… not optimal. Out of the box, it’s similar to classic Doom: no WASD, no mouselook. I don’t know who this is meant to appeal to. If you’ve never played Doom, the controls are goofy. If you’ve played other shooters, the controls are goofy. If you played Doom when it came out but not since, you probably don’t remember the controls, so they’re still goofy. Oof.

Not having mouselook is more of a problem than you’d think. If you as the designer play with mouselook, it’s really easy to put important things off the top or bottom of the screen and never realize it’ll be a problem. I watched someone play through Throughfare a few days ago and get completely stuck at what seemed to be a dead end — because he needed to drop down a hole in a small platform, and the hole was completely hidden by the status bar.

That’s actually an interesting example for another reason. Here’s the room where he got stuck.

[Screenshot: a small room with a raised platform at the end, a metal section in the floor, and a switch on the side wall]

When you press the switch, the metal plates on the ground rise up and become stairs, so you can get onto the platform. He did that, saw nowhere obvious to go, and immediately turned around and backtracked quite a ways looking for some other route.

This surprised me! The room makes no sense as a dead end. It’s not an easter egg or interesting feature; it has no obvious reward; it has a button that appears to help you progress. If I were stuck here, I’d investigate the hell out of this room — yet this player gave up almost immediately.

Not to say that the player is wrong and the level is right. This room was supposed to be trivially simple, and I regret that it became a hurdle for someone. It’s just a difference in playstyle I didn’t account for. Besides the mouselook problem, this player tended to move very quickly in general, charging straight ahead in new areas without so much as looking around; I play more slowly, looking around for nooks and crannies. He ended up missing the plasma gun for much the same reason — it was on a ledge slightly below the default view angle, making it hard to see without mouselook.

Speaking of nooks and crannies: watching someone find or miss secrets in a world I built is utterly fascinating. I’ve watched several people play Throughfare now, and the secrets are the part I love watching the most. I’ve seen people charge directly into secrets on accident; I’ve seen people run straight to a very clever secret just because they had the same idea I did; I’ve seen people find a secret switch and then not press it. It’s amazing how different just a handful of players have been.

I think the spread of secrets in Throughfare is pretty good, though I slightly regret using the same trick three times; either you get it right away and try it everywhere, or you don’t get it at all and miss out on a lot of goodies. Of course, the whole point of secrets is that not everyone will find them on the first try (or at all), so it’s probably okay to err on the trickier side.


As for the speedmaps, I’ve only watched one person play them live. The biggest hurdle was a room I made that required jumping.

Jumping wasn’t in the original Doom games. People thus don’t really expect to need to jump in Doom maps. Worse, ZDoom doesn’t even have a key bound to jump out of the box, which I only discovered later.

See, when I made the room (very quickly), I was imagining a ZDoom veteran seeing it and immediately thinking, “oh, this is one of those maps where I need to jump”. I’ve heard people say that about other maps before, so it felt like common knowledge. But it’s only common knowledge if you’re part of the community and have run into a few maps that require jumping.

The situation is made all the more complicated by the way ZDoom handles it. Maps can use a ZDoom-specific settings file to explicitly allow or forbid jumping, but the default is to allow it. The stock maps and most third-party vanilla maps won’t have this ZDoom-specific file, so jumping will be allowed, even though they’re not designed for it. Most mappers only use this file at all if they’re making something specifically for ZDoom, in which case they might as well allow jumping anyway. It’s opt-out, but the maps that don’t want it are the ones least likely to use the opt-out, so in practice everyone has to assume jumping isn’t allowed until they see some strong indication otherwise. It’s a mess. Oh, and ZDoom also supports crouching, which is even more obscure.

I probably should’ve thought of all that at the time. In my defense, you know, speedmap.

One other minor thing was that, of course, ZDoom uses the traditional Doom HUD out of the box, and plenty of people play that way on purpose. I’m used to ZDoom’s “alternative” HUD, which not only expands your field of view slightly, but also shows a permanent count of how many secrets are in the level and how many you’ve found. I love that, because it tells me how much secret-hunting I’ll need to do from the beginning… but if you don’t use that HUD (and don’t look at the count on the automap), you won’t even know whether there are secrets or not.


For a third-party example: a recent (well, late 2014) cool release was Going Down, a set of small and devilish maps presented as the floors of a building you’re traversing from the roof downwards. I don’t actually play a lot of Doom, but I liked this concept enough to actually play it, and I enjoyed the clever traps and interwoven architecture.

Then I reached MAP12, Dead End. An appropriate name, because I got stuck here. Permanently stuck. The climax of the map is too many monsters in not enough space, and it’s cleverly rigged to remove the only remaining cover right when you need it. I couldn’t beat it.

That was a year ago. I haven’t seen any of the other 20 maps beyond this point. I’m sure they’re very cool, but I can’t get to them. This one is too high a hurdle.

Granted, hopping around levels is trivially easy in Doom games, but I don’t want to cheat my way through — and anyway, if I can’t beat MAP12, what hope do I have of beating MAP27?

I feel ambivalent about this. The author describes the gameplay as “chaotic evil”, so it is meant to be very hard, and I appreciate the design of the traps… but I’m unable to appreciate any more of them.

This isn’t the author’s fault, anyway; it’s baked into the design of Doom. If you can’t beat one level, you don’t get to see any future levels. In vanilla Doom it was particularly bad: if you die, you restart the level with no weapons or armor, probably making it even harder than it was before. You can save any time, and some modern source ports like ZDoom will autosave when you start a level, but the original game never saved automatically.

Isaac’s Descent

Isaac’s Descent is the little PICO-8 puzzle platformer I made for Ludum Dare 36 a couple months ago. It worked out surprisingly well; pretty much everyone who played it (and commented on it to me) got it, finished it, and enjoyed it. The PICO-8 exports to an HTML player, too, so anyone with a keyboard can play it with no further effort required.

I was really happy with the puzzle design, especially considering I hadn’t really made a puzzle game before and was rushing to make some rooms in a very short span of time. Only two were perhaps unfair. One was the penultimate room, which involved a tricky timing puzzle, so I’m not too bothered about that. The other was this room:

[Screenshot: a cavern with two stone slab doors, one much taller than the other, and a wooden wheel on the wall]

Using the wheel raises all stone doors in the room. Stone doors open at a constant rate, wait for a fixed time, and then close again. The tricky part with this puzzle is that by the time the very tall door has opened, the short door has already closed again. The solution is simply to use the wheel again right after the short door has closed, while the tall door is still opening. The short door will reopen, while the tall door won’t be affected since it’s already busy.
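
In Lua-flavored pseudocode, the rule the puzzle hinges on is just this (the state names are made up):

-- the wheel's signal only affects doors that are closed and idle; a
-- door that's already opening, waiting, or closing ignores it
local function use_wheel(doors)
    for _, door in ipairs(doors) do
        if door.state == "idle" then
            door.state = "opening"
        end
    end
end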

This isn’t particularly difficult to figure out, but it did catch a few people, and overall it doesn’t sit particularly well with me. Using the wheel while a door is opening feels like a weird edge case, not something that a game would usually rely on, yet I based an entire puzzle around it. I don’t know. I might be overthinking this. The problem might be that “ignore the message” is a very computery thing to do and doesn’t match with how such a wheel would work in practice; perhaps I’d like the puzzle more if the wheel always interrupted whatever a door was doing and forced it to raise again.

Overall, though, the puzzles worked well.

The biggest snags I saw were control issues with the PICO-8 itself. The PICO-8 is a “fantasy console” — effectively an emulator for a console that never existed. One of the consequences of this is that the controls aren’t defined in terms of keyboard keys, but in terms of the PICO-8’s own “controller”. Unfortunately, that controller is only defined indirectly, and the web player doesn’t indicate in any way how it works.

The controller's main inputs — the only ones a game can actually read — are a directional pad and two buttons, 🅾️ and ❎, which map to z and x on a keyboard. The PICO-8 font has glyphs for 🅾️ and ❎, so I used those to indicate which button does what. Unfortunately, if you aren't familiar with the PICO-8, those won't make a lot of sense to you. It's nice that ❎ looks like the keyboard key it's bound to, but 🅾️ looks like the wrong keyboard key. This caused a little confusion.

"Well," I hear you say, "why not just refer to the keys directly?" Ah, but there's a very good reason the PICO-8 is defined in terms of buttons: those aren't the only keys you can use! n and m also work, as do c and v. The PocketCHIP also allows… 0 and =, I think, which is good because z and x are directly under the arrow keys on the PocketCHIP keyboard. And of course you can play on a USB controller, or rebind the keys.
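
For anyone unfamiliar, that's also what PICO-8 code itself sees: input is read by button index, never by key, which is exactly why the manual speaks in glyphs. A sketch (the handlers are made up):

-- btn()/btnp() take button indices: 0-3 are the d-pad, 4 is 🅾️, 5 is ❎
function _update()
  if btnp(4) then do_jump() end    -- 🅾️ (z, c, or n on a keyboard)
  if btnp(5) then do_attack() end  -- ❎ (x, v, or m on a keyboard)
end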

I could’ve mentioned that z and x are the defaults, but that’s wrong for the PocketCHIP, and now I’m looking at a screenful of text explaining buttons that most people won’t read anyway.

A similar problem is the pause menu, accessible with p or enter. I'd put an option on the pause menu for resetting the room you're in, just in case, but didn't bother to explain how to get to the pause menu. Or that a pause menu exists. Also, the ability to put custom things on the pause menu is new, so a lot of people might not even know about it. I'm sure you can see this coming: a few rooms (including the two-door one) had places you could get stuck, and without any obvious way to restart the room, a few people thought they had to start the whole game over. Whoops.

In my defense, the web player is actively working against me here: it has a “pause” link below the console, but all the link does is freeze the player, not bring up the pause menu.

This is a recurring problem, and perhaps a fundamental question of making games accessible: how much do you need to explain to people who aren’t familiar with the platform or paradigm? Should every single game explain itself? Players who don’t need the explanation can easily get irritated by it, and that’s a bad way to start a game. The PICO-8 in particular has the extra wrinkle that its cartridge space is very limited, and any kind of explanation/tutorial costs space you could be using for gameplay. On the other hand, I’ve played more than one popular PICO-8 game that was completely opaque to me because it didn’t explain its controls at all.

I’m reminded of Counterfeit Monkey, a very good interactive fiction game that goes out of its way to implement a hint system and a gentle tutorial. The tutorial knits perfectly with the story, and the hints are trivially turned off, so neither is a bother. The game also has a hard mode, which eliminates some of the more obvious solutions and gives a nod to seasoned IF players as well. The author is very interested in making interactive fiction more accessible in general, and it definitely shows. I think this game alone convinced me it’s worth the effort — I’m putting many of the same touches in my own IF foray.

Under Construction

Under Construction is the PICO-8 game that Mel and I made early this year. It’s a simple, slightly surreal, slightly obtuse platformer.

Traditional wisdom has it that you don’t want games to be obtuse. That acts as a hurdle, and loses you players. Here, though, it’s part of the experience, so the question becomes how to strike a good balance without losing the impact.

A valid complaint we heard was that the use of color is slightly inconsistent in places. For the most part, foreground objects (those you can stand on) are light and background decorations are gray, but a couple tiles break that pattern. A related problem that came up almost immediately in beta testing was that spikes were difficult to pick out. I addressed that — fairly effectively, I think — by adding a single dark red pixel to the tip of the spikes.

But the most common hurdle by far was act 3, which caught us completely by surprise. Spoilers!

From the very beginning, the world contains a lot of pillars containing eyeballs that look at you. They don’t otherwise do anything, beyond act as platforms you can stand on.

In act 2, a number of little radios appear throughout the world. Mr. 5 complains that it’s very noisy, so you need to break all the radios by jumping on them.

In act 3, the world seems largely the same… but the eyes in the pillars now turn to ❌’s when you touch them. If this happens before you make it to the end, Mr. 5 complains that he’s in pain, and the act restarts.

The correct solution is to avoid touching any of the eye pillars. But because this comes immediately after act 2, where we taught the player to jump on things to defeat them — reinforcing a very common platforming mechanic — some players thought you were supposed to jump on all of them.

I don’t know how we could’ve seen that coming. The acts were implemented one at a time and not in the order they appear in the game, so we were both pretty used to every individual mechanic before we started playing through the entire game at once. I suppose when a game is developed and tested in pieces (as most games are), the order and connection between those pieces is a weak point and needs some extra consideration.

We didn’t change the game to address this, but the manual contains a strong hint.

Under Construction also contains a couple of easter eggs and different endings. All are fairly minor changes, but they added a lot of character to the game and gave its fans something else to delve into once they’d beaten it.

Crucially, these things worked as well as they did because they weren’t accessible. Easily-accessed easter eggs aren’t really easter eggs any more, after all. I don’t think the game has any explicit indication that the ending can vary, which meant that players would only find out about it from us or other fans.

I don’t yet know the right answer for balancing these kinds of extras, and perhaps there isn’t one. If you spend a lot of time on easter eggs, multiple endings, or even just multiple paths through the game, you’re putting a lot of effort into stuff that many players will never see. On the other hand, they add an incredible amount of depth and charm to a game and reward those players who do stick around to explore.

This is a lot like the balancing act with software interfaces. You want your thing to be accessible in the sense that a newcomer can sit down and get useful work done, but you also want to reward long-time users with shortcuts and more advanced features. You don’t want to hide advanced features too much, but you also don’t want to have an interface with a thousand buttons.

How larger and better-known games deal with this

I don’t have the patience for Zelda I. I never even tried it until I got it for free on my 3DS, as part of a pack of Virtual Console games given to everyone who bought a 3DS early. I gave it a shot, but I got bored really quickly. The overworld was probably the most frustrating part: the connections between places are weird, everything looks pretty much the same, the map is not very helpful, and very little acts as a landmark. I could’ve drawn my own map, but, well, I usually can’t be bothered to do that for games.

I contrast this with Skyward Sword, which I mostly enjoyed. Ironically, one of my complaints is that it doesn’t quite have an overworld. It almost does, but they stopped most of the way, leaving us with three large chunks of world and a completely-open sky area reminiscent of Wind Waker’s ocean.

Clearly, something about huge open spaces with no barriers whatsoever appeals to the Zelda team. I have to wonder if they’re trying to avoid situations like my experience with Zelda I. If a player gets lost in an expansive overworld, either they’ll figure out where to go eventually, or they’ll give up and never see the rest of the game. Losing players that way, especially in a story-driven game, is a huge shame.

And this is kind of a problem with the medium in general. For all the lip service paid to nonlinearity and sandboxes, the vast majority of games require some core progression that’s purely linear. You may be able to wander around a huge overworld, but you still must complete these dungeons and quests in this specific order. If something prevents you from doing one of them, you won’t be able to experience the others. You have to do all of the first x parts of the game before you can see part x + 1.

This is really weird! No other medium is like this. If you watch a movie or read a book or listen to a song and some part of it is inaccessible for whatever reason — the plot is poorly explained, a joke goes over your head, the lyrics are mumbled — you can still keep going and experience the rest. The stuff that comes later might even help you make sense of the part you didn't get.

In games, these little bumps in the road can become walls.

It’s not even necessarily difficulty, or getting lost, or whatever. A lot of mobile puzzle games use the same kind of artificial progression where you can only do puzzles in sequential batches; solving enough of the available puzzles will unlock the next batch. But in the interest of padding out the length, many of these games will have dozens of trivially easy and nearly identical puzzles in the beginning, which you have to solve to get to the later interesting ones. Sometimes I’ve gotten so bored by this that I’ve given up on a game before reaching the interesting puzzles.

In a way, that’s the same problem as getting lost in an overworld. Getting lost isn’t a hard wall, after all — you can always do an exhaustive search and talk to every NPC twice. But that takes time, and it’s not fun, much like the batches of required baby puzzles. People generally don’t like playing games that waste their time.

I love the Picross “e” series on the 3DS, because over time they’ve largely figured out that this is pointless: in the latest game in the series, everything is available from the beginning. Want to do easy puzzles? Do easy puzzles. Want to skip right to the hard stuff? Sure, do that. Don’t like being told when you made a wrong move? Turn it off.

(It’s kinda funny that the same people then made Pokémon Picross, which has some of the most absurd progression I’ve ever seen. Progressing beyond the first half-dozen puzzles requires spending weeks doing a boring minigame every day to grind enough pseudocurrency to unlock more puzzles. Or you can just pay for pseudocurrency, and you’ll have unlocked pretty much the whole game instantly. It might as well just be a demo; the non-paid progression is useless.)

Chip’s Challenge also handled this pretty well. You couldn’t skip around between levels arbitrarily, which was somewhat justified by the (very light) plot. Instead, if you died or restarted enough times, the game would offer to skip you to the next level, and that would be that. You weren’t denied the rest of the game just because you couldn’t figure out an ice maze or complete some horrible nightmare like Blobnet.

I wish this sort of mechanic were more common. Not so games could be more difficult, but so games wouldn’t have to worry as much about erring on the side of ease. I don’t know how it could work for a story-driven game where much of the story is told via experiencing the game itself, though — skipping parts of Portal would work poorly. On the other hand, Portal took the very clever step of offering “advanced” versions of several levels, which were altered very slightly to break all the obvious easy solutions.

Slapping on difficulty settings is nice for non-puzzle games (and even some puzzle games), but unless your game lets you change the difficulty partway through, someone who hits a wall still has to replay the entire game to change the difficulty. (Props to Doom 4, which looks to have taken difficulty levels very seriously — some have entirely different rules, and you can change whenever you want.)

I have a few wisps of ideas for how to deal with this in Isaac HD, but I can’t really talk about them before the design of the game has solidified a little more. Ultimately, my goal is the same as with everything else I do: to make something that people have a chance to enjoy, even if they don’t otherwise like the genre.