Tag Archives: Mozilla

Raspberry Pi at MozFest 2016

Post Syndicated from Olivia Robinson original https://www.raspberrypi.org/blog/raspberry-pi-mozfest-2016/

MozFest, or Mozilla Festival, is an annual celebration of the Mozilla community and the wider open internet movement. People from all over the world gather to explore ways of making the internet a resource that’s open and inclusive to all. This year MozFest was held at Ravensbourne College in London from Friday 28 – Sunday 29 October.


Colleagues from the Raspberry Pi Foundation joined members of the community to run workshops across two classrooms in the Youth Zone; this meant more space than last year, bringing more opportunities to engage. Our community volunteers were really enthusiastic and varied in ages. Together we ran workshops ranging from Your Code in Space with Astro Pi, to how to create a burping Jelly Baby, to Physical Computing in Scratch and Hacking Minecraft.

A workshop leader leans over to point out something on a computer display to a young boy and a woman who are working together.

Families and young people at a Raspberry Pi workshop in the Youth Zone at MozFest 2016

One of the workshops I attended was how to create a burping Jelly Baby, run by Bethanie Fentiman (@bfentiman). She led a great session despite the technical hitches along the way: with the help of Bethanie and her team, I had made a burping Jelly Baby of my own by the end of the workshop. Thank you for all your patience and hard work! You can read Bethanie’s laconic take on MozFest in her blog.

Bethanie Fentiman helps workshop attendees make Jelly Babies burp
Jelly Babies. Their time is short

All the workshops were well attended by a mix of families, children and teenagers.


Vincent Lee ran a workshop on making a Pi-powered automatic Twitter photo booth. His before and after MozFest blogs have some lovely photos, as well as candid insights into the frantic below-the-surface paddling that happens in order to deliver an event like this one!


MozFest 2016 was a great place to find out what you can do with a Raspberry Pi and discover what other members of the Raspberry Pi community have created. People were really impressed at the workshops run by the young volunteers, such as 11-year-old Elise with her workshop on Spooktacular Sounds with Sonic Pi. A massive thank you to them: it’s not easy to teach grown-ups alongside younger people! Elise’s MozFest 2016 blog describes her busy, sociable and exciting weekend.


Aoibheann, who ran Beginners’ Guide to Scratching Maths with Things from the Kitchen, travelled to MozFest from “the middle of nowhere” in the Republic of Ireland (so middle-of-nowhere, she has dial-up internet at home!). Aoibheann’s MozFest blog describes adapting her workshop to accommodate last-minute obstacles and finding that, despite the busy-ness, the Youth Zone was a home from home.


Two very popular workshops at MozFest were LASERS! Create your own jewellery/keyring using a laser cutter and LASERS! Bringing drawings to life! Both were run by Amy Mather, whose enthusiasm for lasers is just one of many things for which she’s become well known in the Pi community. Participants learned how to use Raspberry Pis and Inkscape, an open source design program, to create designs which were then sent to the laser cutter to be made. Amy’s MozFest 2016 blog is full of fantastic photos of laser-cut works-in-progress and finished products.


A huge thank-you to Joseph Thomas for his help with the laser workshop and for running Castles, code and capacitive buttons: Building castles in Minecraft with touch of a button not once, but twice. Joseph’s MozFest 2016 blog explains why, despite ending up with trench foot (really), he’ll still be back in 2017.

A laser cutter head cuts a child's Inkscape drawing of a bus into a piece of wood

A laser cutter brings a workshop participant’s Inkscape design into being at MozFest 2016

Cerys Lock ran a workshop on Displaying Images and Animations on the Sense HAT – thank you, Cerys! Her pre- and post-MozFest blogs have an excellent photo log and an intriguing credits section.


A massive thank you to the amazing team of 45+ volunteers, from the Pi community and beyond, who helped out over the weekend! Without you, Youth Zone simply would not have happened, let alone been the fantastic, creative space for exploration, discovery and excitement that it was. And particular thanks to Dorine Flies and Andrew Mulholland for their ridiculously hard work as Space Wranglers of Youth Zone this year. Andrew’s blog on MozFest 2016 describes the months of planning and the many long evenings of work that go into the Youth Zone, and he’s drawn together wonderful highlights from the weekend.

Having just joined the Raspberry Pi Foundation, I went to MozFest to get a taste of Raspberry Pi activities before I begin helping to organise other events in the future. I was incredibly impressed with the skill and patience of all the volunteers and their ability to teach me things that seemed very complicated at first. I’m really looking forward to getting to know the community better as I work with Raspberry Pi to deliver events that I hope will have just as much energy and passion as MozFest.

The post Raspberry Pi at MozFest 2016 appeared first on Raspberry Pi.

Firefox 50.0

Post Syndicated from ris original http://lwn.net/Articles/706511/rss

Mozilla has released Firefox 50.0. This version features improved performance for SDK extensions and extensions using the SDK module loader, added download protection for a large number of executable file types, a new option in Find in page that allows users to limit searches to whole words only, and more. See the release notes for details.

Project for porting C to Rust gains Mozilla’s backing (InfoWorld)

Post Syndicated from ris original http://lwn.net/Articles/705266/rss

InfoWorld takes a look at a C-to-Rust translation project called Corrode. “What Corrode does not do (yet) is take constructs specific to C and rewrite them in memory-safe Rust equivalents. In other words, it performs the initial grunt work involved in porting a project from C to Rust, but it leaves the heavier lifting — for example, using Rust’s idioms and language features — to the developer.”

Yes, we can validate the Wikileaks emails

Post Syndicated from Robert Graham original http://blog.erratasec.com/2016/10/yes-we-can-validate-wikileaks-emails.html

Recently, WikiLeaks has released emails from Democrats. Many have repeatedly claimed that some of these emails are fake or have been modified, that there’s no way to validate each and every one of them as being true. Actually, there is, using a mechanism called DKIM.

DKIM is a system designed to stop spam. It works by verifying the sender of the email. Moreover, as a side effect, it verifies that the email has not been altered.
Hillary’s team uses “hillaryclinton.com”, which has DKIM enabled. Thus, we can verify whether some of these emails are true.

Recently, in response to a leaked email suggesting Donna Brazile gave Hillary’s team early access to debate questions, she defended herself by suggesting the email had been “doctored” or “falsified”. That’s not true. We can use DKIM to verify it.
You can see the email in question at the WikiLeaks site: https://wikileaks.org/podesta-emails/emailid/5205. The title suggests they have early access to debate questions, and includes one specifically on the death penalty, with the text:

since 1973, 156 people have been on death row and later set free. Since 1976, 1,414 people have been executed in the U.S

Indeed, during the debate the next day, they asked the question:

Secretary Clinton, since 1976, we have executed 1,414 people in this country.  Since 1973, 156 who were convicted have been exonerated from the death row.

It’s not a smoking gun, but at the same time, it both claims they got questions in advance while having a question in advance. Trump gets hung on similar chains of evidence, so it’s not something we can easily ignore.
Anyway, this post isn’t about the controversy, but the fact that we can validate the email. When an email server sends a message, it’ll include an invisible “header”. They aren’t especially hidden; most email programs allow you to view them. It’s just that they’re boring, so they’re hidden by default. The DKIM header in this email looks like:
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=hillaryclinton.com; s=google;
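The signature value itself is just a semicolon-separated list of tag=value pairs, in the “tag-list” syntax defined by RFC 6376. A minimal sketch of parsing it in Python (a hypothetical helper, not part of any real verifier — actual verification additionally requires fetching the key and checking the RSA signature):

```python
# Sketch: split a DKIM-Signature header value into its tag=value
# pairs (RFC 6376 tag-list syntax). Whitespace between tags is
# insignificant; a value may itself contain "=" (e.g. base64 in b=),
# so we split on the first "=" only.

def parse_dkim_tags(header_value: str) -> dict:
    tags = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        tags[name.strip()] = value.strip()
    return tags

sig = "v=1; a=rsa-sha256; c=relaxed/relaxed; d=hillaryclinton.com; s=google;"
tags = parse_dkim_tags(sig)
print(tags["d"])  # -> hillaryclinton.com (the signing domain)
print(tags["s"])  # -> google (the selector)
```

The selector and domain together tell a verifier where the public key lives in DNS: the TXT record at s._domainkey.d.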
How do you verify this is true? There are a zillion ways, with various “DKIM verifiers”. I use the popular Thunderbird email reader (from Mozilla, the makers of Firefox). It has an addon designed specifically to verify DKIM. Normally, email readers don’t care, because it’s the email server’s job to verify DKIM, not the client’s. So we need a client addon to enable verification.
Downloading the raw email from WikiLeaks and opening it in Thunderbird with the addon, I get the following verification that the email is valid. Specifically, it validates that hillaryclinton.com sent precisely this content, with this subject, on that date.
Let’s see what happens when somebody tries to doctor the email. In the following, I added “MAKE AMERICA GREAT AGAIN” to the top of the email.
As you can see, we’ve proven that DKIM will indeed detect if anybody has “doctored” or “falsified” this email.
I was just listening to ABC News about this story. It repeated Democrat talking points that the WikiLeaks emails weren’t validated. That’s a lie. This email in particular has been validated. I just did it, and shown you how you can validate it, too.

Btw, if you can forge an email that validates correctly as I’ve shown, I’ll give you one bitcoin. It’s the easiest way of settling arguments about whether this really validates the email — if somebody tells you this blogpost is invalid, tell them they can earn about $600 (the current value of one BTC) by proving it. Otherwise, no.

Update: I’m a bit late writing this blog post. Apparently, others have validated these, too.

Update: In the future, when hillaryclinton.com changes their DKIM key, these emails will no longer verify. Thus, I’m recording the domain key here:

google._domainkey.hillaryclinton.com: type TXT, class IN
v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCJdAYdE2z61YpUMFqFTFJqlFomm7C4Kk97nzJmR4YZuJ8SUy9CF35UVPQzh3EMLhP+yOqEl29Ax2hA/h7vayr/f/a19x2jrFCwxVry+nACH1FVmIwV3b5FCNEkNeAIqjbY8K9PeTmpqNhWDbvXeKgFbIDwhWq0HP2PbySkOe4tTQIDAQAB

Succeeding MegaZeux

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/06/succeeding-megazeux/

In the beginning, there was ZZT. ZZT was a set of little shareware games for DOS that used VGA text mode for all the graphics, leading to such whimsical Rogue-like choices as ä for ammo pickups, Ω for lions, and ♀ for keys. It also came with an editor, including a small programming language for creating totally custom objects, which gave it the status of “game creation system” and a legacy that survives even today.

A little later on, there was MegaZeux. MegaZeux was something of a spiritual successor to ZZT, created by (as I understand it) someone well-known for her creative abuse of ZZT’s limitations. It added quite a few bells and whistles, most significantly a built-in font editor, which let aspiring developers draw simple sprites rather than rely on whatever they could scrounge from the DOS font.

And then…

And then, nothing. MegaZeux was updated for quite a while, and (unlike ZZT) has even been ported to SDL so it can actually run on modern operating systems. But there was never a third entry in this series, another engine worthy of calling these its predecessors.

I think that’s a shame.

The legacy

Plenty of people have never heard of ZZT, and far more have never heard of MegaZeux, so here’s a brief primer.

Both were released as “first-episode” shareware: they came with one game free, and you could pony up some cash to get the sequels. Those first games — Town of ZZT and Caverns of Zeux — have these moderately iconic opening scenes.

Town of ZZT
Caverns of Zeux

In the intervening decades, all of the sequels have been released online for free. If you want to try them yourself, ZZT 3.2 includes Town of ZZT and its sequels (but must be run in DOSBox), and you can get MegaZeux 2.84c, Caverns of Zeux, and the rest of the Zeux series separately.

Town of ZZT has you, the anonymous player, wandering around a loosely-themed “town” in search of five purple keys. It’s very much a game of its time: the setting is very vague but manages to stay distinct and memorable with very light touches; the puzzles range from trivial to downright cruel; the interface itself fights against you, as you can’t carry more than one purple key at a time; and the game can be softlocked in numerous ways, only some of which have advance warning in the form of “SAVE!!!” carved directly into the environment.

The armory, and a gruff guardian
Darkness, which all players love
A few subtle hints

Caverns of Zeux is a little more cohesive, with a (thin) plot that unfolds as you progress through the game. Your objectives are slightly vaguer; you start out only knowing you’re trapped in a cave, and further information must be gleaned from NPCs. The gameplay is shaken up a couple times throughout — you discover spellbooks that give you new abilities, but later lose your primary weapon. The meat of the game is more about exploring and less about wacky Sokoban puzzles, though with many of the areas looking very similar and at least eight different-colored doors scattered throughout the game, the backtracking can get frustrating.

A charming little town
A chasm with several gem holders
The ice caves, or maybe caverns

Those are obviously a bit retro-looking now, but they’re not bad for VGA text made by individual hobbyists in 1991 and 1994. ZZT only even uses CGA’s eight bright colors. MegaZeux takes a bit more advantage of VGA capabilities to let you edit the palette as well as the font, but games are still restricted to only using 16 colors at one time.

The font ZZT was stuck with
MegaZeux's default character set

That’s great, but who cares?

A fair question!

ZZT and MegaZeux both occupy a unique game development niche. It’s the same niche as (Z)Doom, I think, and a niche that very few other tools fill.

I’ve mumbled about this on Twitter a couple times, and several people have suggested that the PICO-8 or Mario Maker might be in the same vein. I disagree wholeheartedly! ZZT, MegaZeux, and ZDoom all have two critical — and rare — things in common.

  1. You can crack open the editor, draw a box, and have a game. On the PICO-8, you are a lonely god in an empty void; you must invent physics from scratch before you can do anything else. ZZT, MegaZeux, and Doom all have enough built-in gameplay to make a variety of interesting levels right out of the gate. You can treat them as nothing more than level editors, and you’ll be hitting the ground running — no code required. And unlike most “no programming” GCSes, I mean that literally!

  2. If and when you get tired of only using the built-in objects, you can extend the engine. ZZT and MegaZeux have programmable actors built right in. Even vanilla Doom was popular enough to gain a third-party tool, DEHACKED, which could edit the compiled doom.exe to customize actor behavior. Mario Maker might be a nice and accessible environment for making games, but at the end of the day, the only thing you can make with it is Mario.

Both of these properties together make for a very smooth learning curve. You can open the editor and immediately make something, rather than needing to absorb a massive pile of upfront stuff before you can even get a sprite on the screen. Once you need to make small tweaks, you can dip your toes into robots — a custom pickup that gives you two keys at once is four lines of fairly self-explanatory code. Want an NPC with a dialogue tree? That’s a little more complex, but not much. And then suddenly you discover you’re doing programming. At the same time, you get rendering, movement, combat, collision, health, death, pickups, map transitions, menus, dialogs, saving/loading… all for free.

MegaZeux has one more nice property, the art learning curve. The built-in font is perfectly usable, but a world built from monochrome 8×14 tiles is a very comfortable place to dabble in sprite editing. You can add eyebrows to the built-in player character or slightly reshape keys to fit your own tastes, and the result will still fit the “art style” of the built-in assets. Want to try making your own sprites from scratch? Go ahead! It’s much easier to make something that looks nice when you don’t have to worry about color or line weight or proportions or any of that stuff.

It’s true that we’re in an “indie” “boom” right now, and more game-making tools are available than ever before. A determined game developer can already choose from among dozens (hundreds?) of editors and engines and frameworks and toolkits and whatnot. But the operative word there is “determined“. Not everyone has their heart set on this. The vast majority of people aren’t interested in devoting themselves to making games, so the most they’d want to do (at first) is dabble.

But programming is a strange and complex art, where dabbling can be surprisingly difficult. If you want to try out art or writing or music or cooking or dance or whatever, you can usually get started with some very simple tools and a one-word Google search. If you want to try out game development, it usually requires programming, which in turn requires a mountain of upfront context and tool choices and explanations and mysterious incantations and forty-minute YouTube videos of some guy droning on in monotone.

To me, the magic of MegaZeux is that anyone with five minutes to spare can sit down, plop some objects around, and have made a thing.

Deep dive

MegaZeux has a lot of hidden features. It also has a lot of glass walls. Is that a phrase? It should be a phrase. I mean that it’s easy to find yourself wanting to do something that seems common and obvious, yet find out quite abruptly that it’s structurally impossible.

I’m not leading towards a conclusion here, only thinking out loud. I want to explain what makes MegaZeux interesting, but also explain what makes MegaZeux limiting, but also speculate on what might improve on it. So, you know, something for everyone.

Big picture

MegaZeux is a top-down adventure-ish game engine. You can make platformers, if you fake your own gravity; you can make RPGs, if you want to build all the UI that implies.

MegaZeux games can only be played in, well, MegaZeux. Games that need instructions and multiple downloads to be played are fighting an uphill battle. It’s a simple engine that seems reasonable to deploy to the web, and I’ve heard of a couple attempts at either reimplementing the engine in JavaScript or throwing the whole shebang at emscripten, but none are yet viable.

People have somewhat higher expectations from both games and tools nowadays. But approachability is often at odds with flexibility. The more things you explicitly support, the more complicated and intimidating the interface — or the more hidden features you have to scour the manual to even find out about.

I’ve looked through the advertising screenshots of Game Maker and RPG Maker, and I’m amazed how many things are all over the place at any given time. It’s like trying to configure the old Mozilla Suite. Every new feature means a new checkbox somewhere, and eventually half of what new authors need to remember is the set of things they can safely ignore.

SLADE’s Doom map editor manages to be much simpler, but I’m not particularly happy with that, either — it’s not clever enough to save you from your mistakes (or necessarily detect them), and a lot of the jargon makes no sense unless you’ve already learned what it means somewhere else. Plus, making the most of ZDoom’s extra features tends to involve navigating ten different text files that all have different syntax and different rules.

MegaZeux has your world, some menus with objects in them, and spacebar to place something. The UI is still very DOS-era, but once you get past that, it’s pretty easy to build something.

How do you preserve that in something “modern”? I’m not sure. The only remotely-similar thing I can think of is Mario Maker, which cleverly hides a lot of customization options right in the world editor UI: placing wings on existing objects, dropping objects into blocks, feeding mushrooms to enemies to make them bigger. The downside is that Mario Maker has quite a lot of apocryphal knowledge that isn’t written down anywhere. (That’s not entirely a downside… but I could write a whole other post just exploring that one sentence.)


Oh, no.

Graphics don’t make the game, but they’re a significant limiting factor for MegaZeux. Fixing everything to a grid means that even a projectile can only move one tile at a time. Only one character can be drawn per grid space, so objects can’t usefully be drawn on top of each other. Animations are difficult, since they eat into your 255-character budget, which limits real-time visual feedback. Most individual objects are a single tile — creating anything larger requires either a lot of manual work to keep all the parts together, or the use of multi-tile sprites which don’t quite exist on the board.

And yet! The same factors are what make MegaZeux very accessible. The tiles are small and simple enough that different art styles don’t really clash. Using a grid means simple games don’t have to think about collision detection at all. A monochromatic font can be palette-shifted, giving you colorful variants of the same objects for free.

How could you scale up the graphics but preserve the charm and approachability? Hmm.

I think the palette restrictions might be important here, but merely bumping from 2 to 8 colors isn’t quite right. The palette-shifting in MegaZeux always makes me think of keys first, and multi-colored keys make me think of Chip’s Challenge, where the key sprites were simple but lightly shaded.

All four Chips Challenge 2 keys

The game has to contain all four sprites separately. If you wanted to have a single sprite and get all of those keys by drawing it in different colors, you’d have to specify three colors per key: the base color, a lighter color, and a darker color. In other words, a ramp — a short gradient, chosen from a palette, that can represent the same color under different lighting. Here are some PICO-8 ramps, for example. What about a sprite system that drew sprites in terms of ramps rather than individual colors?

A pixel-art door in eight different color schemes

I whipped up this crappy example to illustrate. All of the doors are fundamentally the same image, and all of them use only eight colors: black, transparent, and two ramps of three colors each. The top-left door could be expressed as just “light gray” and “blue” — those colors would be expanded into ramps automatically, and black would remain black.

I don’t know how well this would work, but I’d love to see someone try it. It may not even be necessary to require all sprites be expressed this way — maybe you could import your own truecolor art if you wanted. ZDoom works kind of this way, though it’s more of a historical accident: it does support arbitrary PNGs, but vanilla Doom sprites use a custom format that’s in terms of a single global palette, and only that custom format can be subjected to palette manipulation.
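To make the idea concrete, here’s a tiny sketch of ramp-based recoloring in Python. The data model and names are entirely made up for illustration: a sprite’s pixels name a ramp slot and a shade, and rendering substitutes whatever ramp you’ve assigned to each slot.

```python
# Hypothetical ramp-based recoloring, not any real engine's format.
# A pixel is None (transparent) or a (slot, shade) pair, where shade
# is 0 = dark, 1 = base, 2 = light within the chosen ramp.

RAMPS = {
    "blue": ["#1d2b53", "#29adff", "#c2f1ff"],   # dark, base, light
    "red":  ["#7e2553", "#ff004d", "#ffa3b1"],
    "gray": ["#2b2b2b", "#8a8a8a", "#e0e0e0"],
}

def render(sprite, slot_ramps):
    """Expand (slot, shade) pixels into concrete colors."""
    out = []
    for row in sprite:
        out_row = []
        for pixel in row:
            if pixel is None:
                out_row.append(None)            # stays transparent
            else:
                slot, shade = pixel
                out_row.append(RAMPS[slot_ramps[slot]][shade])
        out.append(out_row)
    return out

# A 2x2 "door" drawn once with a single ramp slot; two palettes.
door = [[("a", 2), ("a", 1)],
        [("a", 1), ("a", 0)]]
print(render(door, {"a": "blue"})[0][0])   # -> #c2f1ff
print(render(door, {"a": "red"})[1][1])    # -> #7e2553
```

The point is that the artist authors one sprite and one word per slot (“blue”, “red”), and the lighter and darker shades come along for free.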

Now, MegaZeux has the problem that small sprites make it difficult to draw bigger things like UI (or a non-microscopic player). The above sprites are 32×32 (scaled up 2× for ease of viewing here), which creates the opposite problem: you can’t possibly draw text or other smaller details with them.

I wonder what could be done here. I know that the original Pokémon games have a concept of “metatiles”: every map is defined in terms of 4×4 blocks of smaller tiles. You can see it pretty clearly on this map of Pallet Town. Each larger square is a metatile, and many of them repeat, even in areas that otherwise seem different.

Pallet Town from Pokémon Red, carved into blocks

I left the NPCs in because they highlight one of the things I found most surprising about this scheme. All the objects you interact with — NPCs, signs, doors, items, cuttable trees, even the player yourself — are 16×16 sprites. The map appears to be made out of 16×16 sprites, as well — but it’s really built from 8×8 tiles arranged into bigger 32×32 tiles.

This isn’t a particularly nice thing to expose directly to authors nowadays, but it demonstrates that there are other ways to compose tiles besides the obvious. Perhaps simple terrain like grass and dirt could be single large tiles, but you could also make a large tile by packing together several smaller tiles?
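The metatile expansion itself is simple enough to sketch. This is a toy version in Python with invented tile names, shrunk to 2×2 metatiles for brevity (the Pokémon scheme uses larger blocks): the map stores indices into a metatile table, and expansion produces the raw tile grid.

```python
# Toy metatile expansion (hypothetical data, Pokémon-style idea):
# each metatile names a 2x2 block of small tiles; a map is a grid of
# metatile indices that expands into a grid of small tiles.

METATILES = [
    [["grass", "grass"], ["grass", "grass"]],   # 0: plain grass
    [["roof_l", "roof_r"], ["wall", "door"]],   # 1: a tiny house
]

def expand(meta_map):
    tiles = []
    for meta_row in meta_map:
        top, bottom = [], []
        for idx in meta_row:
            (tl, tr), (bl, br) = METATILES[idx]
            top += [tl, tr]
            bottom += [bl, br]
        tiles += [top, bottom]
    return tiles

town = [[0, 1],
        [0, 0]]
grid = expand(town)
print(len(grid), len(grid[0]))   # -> 4 4
print(grid[1][3])                # -> door
```

Repeated metatiles cost almost nothing, which is why so many of Pallet Town’s blocks can repeat without bloating the map data.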

Text? Oh, text can just be a font.

Player status

MegaZeux has no HUD. To know how much health you have, you need to press Enter to bring up the pause menu, where your health is listed in a stack of other numbers like “gems” and “coins”. I say “menu”, but the pause menu is really a list of keyboard shortcuts, not something you can scroll through and choose items from.

MegaZeux's in-game menu, showing a list of keyboard shortcuts on the left and some stats on the right

To be fair, ZZT does reserve the right side of the screen for your stats, and it puts health at the top. I find myself scanning the MegaZeux pause menu for health every time, which seems a somewhat poor choice for the number that makes the game end when you run out of it.

Unlike most adventure games, your health is an integer starting at 100, not a small number of hearts or whatever. The only feedback when you take damage is a sound effect and an “Ouch!” at the bottom of the screen; you don’t flinch, recoil, or blink. Health pickups might give you any amount of health, you can pick up health beyond 100, and nothing on the screen tells you how much you got when you pick one up. Keeping track of your health in your head is, ah, difficult.

MegaZeux also has a system of multiple lives, but those are also just a number, and the default behavior on “death” is for your health to reset to 100 and absolutely nothing else happens. Walking into lava (which hurts for 100 at a time) will thus kill you and strip you of all your lives quite rapidly.

It is possible to manually create a HUD in MegaZeux using the “overlay” layer, a layer that gets drawn on top of everything else in the world. The downside is that you then can’t use the overlay for anything in-world, like roofs or buildings that can be walked behind. The overlay can be in multiple modes, one that’s attached to the viewport (like a HUD) and one that’s attached to the world (like a ceiling layer), so an obvious first step would be offering these as separate features.

An alternative is to use sprites, blocks of tiles created and drawn as a single unit by Robotic code. Sprites can be attached to the viewport and can even be drawn above the overlay, though they aren’t exposed in the editor and must be created entirely manually. Promising, if clumsy and a bit non-obvious — I only just now found out about this possibility by glancing at an obscure section of the manual.

Another looming problem is that text is the same size as everything else — but you generally want a HUD to be prominent enough to glance at very quickly.

This makes me wonder how more advanced drawing could work in general. Instead of writing code by hand to populate and redraw your UI, could you just drag and drop some obvious components (like “value of this number”) onto a layer? Reuse the same concept for custom dialogs and menus, perhaps?


MegaZeux has no inventory. Or, okay, it has sort of an inventory, but it’s all over the place.

The stuff in the pause menu is kind of like an inventory. It counts ammo, gems, coins, two kinds of bombs, and a variety of keys for you. The game also has multiple built-in objects that can give you specific numbers of gems and coins, which is neat, except that gems and coins don’t actually do anything. I think they increase your score, but until now I’d forgotten that MegaZeux has a score.

A developer can also define six named “counters” (i.e., integers) that will show up on the pause menu when nonzero. Caverns of Zeux uses this to show you how many rainbow gems you’ve discovered… but it’s just a number labeled RainbowGems, and there’s no way to see which ones you have.

Other than that, you’re on your own. All of the original Zeux games made use of an inventory, so this is a really weird oversight. Caverns of Zeux also had spellbooks, but you could only see which ones you’d found by trying to use them and seeing if it failed. Chronos Stasis has maybe a dozen items you can collect and no way to see which ones you have — though, to be fair, you use most of them in the same place. Forest of Ruin has a fairly standard inventory, but no way to view it. All three games have at least one usable item that they just bind to a key, which you’d better remember, because it’s game-specific and thus not listed in the general help file.

To be fair, this is preposterously flexible in a way that a general inventory might not be. But it’s also tedious for game authors and potentially confusing for players.

I don’t think an inventory would be particularly difficult to support, and MegaZeux is already halfway there. Most likely, the support is missing because it would need to be based on some concept of a custom object, and MegaZeux doesn’t have that either. I’ll get to that in a bit.

Creating new objects

MegaZeux allows you to create “robots”, objects that are controlled entirely through code you write in a simple programming language. You can copy and paste robots around as easily as any other object on the map. Cool.

What’s less cool is that robots can’t share code — when you place one, you make a separate copy of all of its code. If you create a small horde of custom monsters, then later want to make a change, you’ll have to copy/paste all the existing ones. Hope you don’t have them on other boards!

Some workarounds exist: you could make use of robots’ ability to copy themselves at runtime, and it’s possible to save or load code to/from an external file at runtime. More cumbersome than defining a template object and dropping it wherever you want, and definitely much less accessible.

This is really, really bad, because the only way to extend any of the builtin objects is to replace them with robots!

I’m a little spoiled by ZDoom, where you can create as many kinds of actor as you want. Actors can even inherit from one another, though the mechanism is a little limited and… idiosyncratic, so I wouldn’t call it beginner-friendly. It’s pretty nice to be able to define a type of monster or decoration and drop it all over a map, and I’m surprised such a thing doesn’t exist in MegaZeux, where boards and the viewport both tend to be fairly large.

This is the core of how ZDoom’s inventory works, too. I believe that inventories contain only kinds, not individual actors — that is, you can have 5 red keys, but the game only knows “5 of RedCard” rather than having five distinct RedCard objects. I’m sure part of the reason MegaZeux has no general-purpose inventory is that every custom object is completely distinct, with nothing fundamentally linking even identical copies of the same robot together.
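The kind-based approach is easy to sketch. This is a minimal illustration in Python, not actual ZDoom code: the inventory maps a kind name to a count, so five red keys are one entry with count 5, not five distinct objects.

```python
# Sketch of a kind-based inventory (the ZDoom-style idea described
# above): items are counted by kind, never stored as individuals.

from collections import Counter

class Inventory:
    def __init__(self):
        self.counts = Counter()

    def give(self, kind, n=1):
        self.counts[kind] += n

    def take(self, kind, n=1):
        """Spend n of a kind; fail without side effects if short."""
        if self.counts[kind] < n:
            return False
        self.counts[kind] -= n
        return True

inv = Inventory()
inv.give("RedCard", 5)
print(inv.counts["RedCard"])     # -> 5
print(inv.take("BlueCard"))      # -> False
```

Identical copies of an object are fungible by construction here, which is exactly the property MegaZeux’s fully-distinct robots lack.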


By default, the player can shoot bullets by holding Space and pressing a direction. (Moving and shooting at the same time is… difficult.) Like everything else, bullets are fixed to the character grid, so they move an entire tile at a time.

Bullets can also destroy other projectiles, sometimes. A bullet hitting another bullet will annihilate both. A bullet hitting a fireball might either turn the fireball into a regular fire tile or simply be destroyed, depending on which animation frame the fireball is in when the bullet hits it. I didn’t know this until someone told me only a couple weeks ago; I’d always just thought it was random and arbitrary and frustrating. Seekers can’t be destroyed at all.

Most enemies charge directly at you; most are killed in one hit; most attack you by colliding with you; most are also destroyed by the collision.

The (built-in) combat is fairly primitive. It gives you something to do, but it’s not particularly satisfying, which is unfortunate for an adventure game engine.

Several factors conspire here. Graphical limitations make it difficult to give much visual feedback when something (including the player) takes damage or is destroyed. The motion of small, fast-moving objects on a fixed grid can be hard to keep track of. No inventory means weapons aren’t objects, either, so custom weapons need to be implemented separately in the global robot. No custom objects means new enemies and projectiles are difficult to create. No visual feedback means hitscan weapons are implausible.

I imagine some new and interesting directions would make themselves obvious in an engine with a higher resolution and custom objects.


Robotic is MegaZeux’s programming language for defining the behavior of robots, and it’s one of the most interesting parts of the engine. A robot that acts like an item giving you two keys might look like this:

end
: "touch"
* "You found two keys!"
givekey c04
givekey c05
die as an item
MegaZeux's Robotic editor

Robotic has no blocks, loops, locals, or functions — though recent versions can fake functions by using special jumps. All you get is a fixed list of a few hundred commands. It’s effectively a form of bytecode assembly, with no manual assembling required.

And yet! For simple tasks, it works surprisingly well. Creating a state machine, as in the code above, is straightforward. end stops execution, since all robots start executing from their first line on start. : "touch" is a label (:"touch" is invalid syntax) — all external stimuli are received as jumps, and touch is a special label that a robot jumps to when the player pushes against it. * displays a message in the colorful status line at the bottom of the screen. givekey gives a key of a specific color — colors are a first-class argument type, complete with their own UI in the editor and an automatic preview of the particular colors. die as an item destroys the robot and simultaneously moves the player on top of it, as though the player had picked it up.

A couple other interesting quirks:

  • Most prepositions, articles, and other English glue words are semi-optional and shown in grey. The line die as an item above has as an greyed out, indicating that you could just type die item and MegaZeux would fill in the rest. You could also type die as item, die an item, or even die through item, because all of as, an, and through act like whitespace. Most commands sprinkle a few of these in to make themselves read a little more like English and clarify the order of arguments.

  • The same label may appear more than once. However, labels may be zapped, and a jump will always go to the first non-zapped occurrence of a label. This lets an author encode a robot’s state within the state of its own labels, obviating the need for state-tracking variables in many cases. (Zapping labels predates per-robot variables — “local counters” — which are unhelpfully named local through local32.)

    Of course, this can rapidly spiral out of control when state changes are more complicated or several labels start out zapped or different labels are zapped out of step with each other. Robotic offers no way to query how many of a label have been zapped and MegaZeux has no debugger for label states, so it’s not hard to lose track of what’s going on. Still, it’s an interesting extension atop a simple label-based state machine.

  • The built-in types often have some very handy shortcuts. For example, GO [dir] # tells a robot to move in some direction, some number of spaces. The directions you’d expect all work: NORTH, SOUTH, EAST, WEST, and synonyms like N and UP. But there are some extras like RANDNB to choose a random direction that doesn’t block the robot, or SEEK to move towards the player, or FLOW to continue moving in its current direction. Some of the extras only make sense in particular contexts, which complicates them a little, but the ability to tell an NPC to wander aimlessly with only RANDNB is incredible.

  • Robotic is more powerful than you might expect; it can change anything you can change in the editor, emulate the behavior of most other builtins, and make use of several features not exposed in the editor at all.
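The label-zapping state machine described above is simple enough to model outside of MegaZeux. Here is a rough Python sketch of the idea; the `Robot` class, its methods, and the sample program are invented for illustration and are not real Robotic semantics:

```python
# A tiny model of MegaZeux-style zappable labels: a program is a list of
# lines, a jump goes to the first non-zapped occurrence of a label, and
# "zapping" disables the currently-first matching occurrence.

class Robot:
    def __init__(self, program):
        self.program = program          # list of strings
        self.zapped = []                # indices of zapped label lines

    def find_label(self, name):
        """Return the index of the first non-zapped occurrence of a label."""
        for i, line in enumerate(self.program):
            if line == f': "{name}"' and i not in self.zapped:
                return i
        return None

    def zap(self, name):
        """Disable the first currently-active occurrence of a label."""
        i = self.find_label(name)
        if i is not None:
            self.zapped.append(i)

program = [
    ': "touch"',            # first state: initial touch
    '* "Hello!"',
    ': "touch"',            # second state: subsequent touches
    '* "We already met."',
]

robot = Robot(program)
print(robot.find_label("touch"))   # 0: jumps land on the first "touch"
robot.zap("touch")                 # encode a state change by zapping it
print(robot.find_label("touch"))   # 2: jumps land on the second copy now
```

This also shows why it spirals out of control: the robot’s state lives implicitly in `zapped`, and nothing in the language lets you query it.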

Nowadays, the obvious choice for an embedded language is Lua. It’d be much more flexible, to be sure, but it’d lose a little of the charm. One of the advantages of creating a totally custom language for a game is that you can add syntax for very common engine-specific features, like colors; in a general-purpose language, those are a little clumsier.

function myrobot:ontouch(toucher)
    if not toucher.is_player then
        return false
    end
    world:showstatus("You found two keys!")
    return true
end
Changing the rules

MegaZeux has a couple kinds of built-in objects that are difficult to replicate — and thus difficult to customize.

One is projectiles, mentioned earlier. Several variants exist, and a handful of specific behaviors can be toggled with board or world settings, but otherwise that’s all you get. It should be feasible to replicate them all with robots, but I suspect it’d involve a lot of subtleties.

Another is terrain. MegaZeux has a concept of a floor layer (though this is not explicitly exposed in the editor) and some floor tiles have different behavior. Ice is slippery; forest blocks almost everything but can be trampled by the player; lava hurts the player a lot; fire hurts the player and can spread, but burns out after a while. The trick with replicating these is that robots cannot be walked on. An alternative is to use sensors, which can be walked on and which can be controlled by a robot, but anything other than the player will push a sensor rather than stepping onto it. The only other approach I can think of is to keep track of all tiles that have a custom terrain, draw or animate them manually with custom floor tiles, and constantly check whether something’s standing there.
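That last approach, tracking custom terrain positions yourself and polling for overlap every tick, might look something like this sketch in Python; the tile names, damage values, and entity representation are all made up for illustration:

```python
# Sketch of "manual" custom terrain: remember which tiles hold custom
# floors, then each tick check whether anything is standing on one.

TERRAIN_EFFECTS = {
    "custom_lava": {"damage": 50},
    "custom_ice":  {"slippery": True},
}

terrain = {}  # (x, y) -> terrain name

def place_terrain(x, y, kind):
    terrain[(x, y)] = kind

def tick(entities):
    """entities: list of dicts with "x", "y", and "health" keys."""
    for e in entities:
        kind = terrain.get((e["x"], e["y"]))
        if kind is None:
            continue                    # standing on plain floor
        effect = TERRAIN_EFFECTS[kind]
        if "damage" in effect:
            e["health"] -= effect["damage"]

place_terrain(3, 4, "custom_lava")
player = {"x": 3, "y": 4, "health": 100}
tick([player])
print(player["health"])  # 50
```

The constant polling is exactly the clunky part: the engine gives you no "something stepped here" hook, so you pay the check on every tick whether or not anything moved.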

Last are powerups, which are really effects that rings or potions can give you. Some of them are special cases of effects that Robotic can do more generally, such as giving 10 health or changing all of one object into another. Some are completely custom engine stuff, like “Slow Time”, which makes everything on the board (even robots!) run at half speed. The latter are the ones you can’t easily emulate. What if you want to run everything at a quarter speed, for whatever reason? Well, you can’t, short of replacing everything with robots and doing a multiplication every time they wait.

ZDoom has a similar problem: it offers fixed sets of behaviors and powerups (which mostly derive from the commercial games it supports) and that’s it. You can manually script other stuff and go quite far, but some surprisingly simple ideas are very difficult to implement, just because the engine doesn’t offer the right kind of hook.

The tricky part of a generic engine is that a game creator will eventually want to change the rules, and they can only do that if the engine has rules for changing those rules. If the engine devs never thought of it, you’re out of luck.

Someone else please carry on this legacy

MegaZeux still sees development activity, but it’s very sporadic — the last release was in 2012. New features tend to be about making the impossible possible, rather than making the difficult easier. I think it’s safe to call MegaZeux finished, in the sense that a novel is finished.

I would really like to see something pick up its torch. It’s a very tricky problem, especially with the sprawling complexity of games, but surely it’s worth giving non-developers a way to try out the field.

I suppose if ZZT and MegaZeux and ZDoom have taught us anything, it’s that the best way to get started is to just write a game and give it very flexible editing tools. Maybe we should do that more. Maybe I’ll try to do it with Isaac’s Descent HD, and we’ll see how it turns out.

MOSS supports four more open source projects

Post Syndicated from ris original http://lwn.net/Articles/702567/rss

The Mozilla Open Source Support program has made awards to four projects this quarter. “On the Foundational
Technology track, we awarded $100,000 to Redash, a tool for building
visualizations of data for better decision-making within organizations, and
$50,000 to Review Board,
software for doing web-based source code review. Both of these pieces of
software are in heavy use at Mozilla. We also awarded $100,000 to Kea, the successor to the venerable ISC
DHCP codebase, which deals with allocation of IP addresses on a
network. Mozilla uses ISC DHCP, which makes funding its replacement a
natural move even though we haven’t deployed it yet. On the Mission
Partners track, we awarded $56,000 to Speech Rule Engine,
a code library which converts mathematical markup into vocalised form
(speech) for the sight-impaired, allowing them to fully appreciate
mathematical and scientific content on the web.” (Thanks to Paul Wise)

Firefox OS, B2G OS, and Gecko

Post Syndicated from ris original http://lwn.net/Articles/702018/rss

Ari Jaaksi and David Bryant posted
a note
to the B2G (Boot to Gecko) OS community looking at the end of
Firefox OS development and at what happens to the code base going forward. “In the spring and summer of 2016 the Connected Devices team dug deeper into opportunities for Firefox OS. They concluded that Firefox OS TV was a project to be run by our commercial partner and not a project to be led by Mozilla. Further, Firefox OS was determined to not be sufficiently useful for ongoing Connected Devices work to justify the effort to maintain it. This meant that development of the Firefox OS stack was no longer a part of Connected Devices, or Mozilla at all. Firefox OS 2.6 would be the last release from Mozilla.

Today we are announcing the next phase in that evolution. While work at
Mozilla on Firefox OS has ceased, we very much need to continue to evolve
the underlying code that comprises Gecko, our web platform engine, as part
of the ongoing development of Firefox. In order to evolve quickly and
enable substantial new architectural changes in Gecko, Mozilla’s Platform
Engineering organization needs to remove all B2G-related code from
mozilla-central. This certainly has consequences for B2G OS. For the
community to continue working on B2G OS they will have to maintain a code
base that includes a full version of Gecko, so will need to fork Gecko and
proceed with development on their own, separate branch.” (Thanks to
Paul Wise)

What It Costs to Run Let’s Encrypt

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2016/09/20/what-it-costs-to-run-lets-encrypt.html

Today we’d like to explain what it costs to run Let’s Encrypt. We’re doing this because we strive to be a transparent organization, we want people to have some context for their contributions to the project, and because it’s interesting.

Let’s Encrypt will require about $2.9M USD to operate in 2017. We believe this is an incredible value for a secure and reliable service that is capable of issuing certificates globally, to every server on the Web free of charge.

We’re currently working to raise the money we need to operate through the next year. Please consider donating or becoming a sponsor if you’re able to do so! In the event that we end up being able to raise more money than we need to just keep Let’s Encrypt running we can look into adding other services to improve access to a more secure and privacy-respecting Web.

Here’s how our 2017 budget breaks down:

Expense                 Cost
Staffing                $2.06M USD
Hardware/Software       $0.20M USD
Hosting/Auditing        $0.30M USD
Legal/Administrative    $0.35M USD
Total                   $2.91M USD
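As a quick sanity check, the line items do add up to the stated total:

```python
# Verify that the 2017 budget line items sum to the stated $2.91M total.
items = {
    "Staffing": 2.06,
    "Hardware/Software": 0.20,
    "Hosting/Auditing": 0.30,
    "Legal/Administrative": 0.35,
}
total = round(sum(items.values()), 2)
print(f"${total}M USD")  # $2.91M USD
```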

Staffing is our dominant cost. We currently have eight full time employees, plus two full time staff that are employed by other entities (Mozilla and EFF). This includes five operations/sysadmin staff, three software developers, one communications and fundraising person, and an executive director.

Our systems administration staff are at the heart of our day to day operations. They are responsible for building and improving our server, networking, and deployed software infrastructure, as well as monitoring the systems every hour of every day. It’s the critical 24/7 nature of the work that makes this our biggest team. Any issues need to be dealt with immediately, ideally with multiple people on hand.

Our software developers work primarily on boulder, our open source CA software. We needed to write our own software in order to create a secure, reliable, and fully-automated CA that is capable of issuing and managing enough certificates to serve the entire Web. Our software development staff also allow us to support new features much more quickly than we could if we relied on third party software for implementation.

The majority of our administrative support (e.g. HR, payroll, accounting) is provided by the Linux Foundation, so we don’t hire for those roles and related expenses come in under the “Legal/Administrative” category.

Hardware expenses include compute, storage, networking, and HSM hardware, as well as the associated support contracts. There is quite a bit of duplication for redundancy. Software expenses are low since the majority of the software we use is freely available open source software.

Hosting costs include space in two different highly secure geographically separated rooms inside secure data centers, as well as internet connections and power. The hardware and physical infrastructure we have in place is capable of issuing hundreds of millions of certificates – enough for every server on the Web. We need to maintain strong physical control over all hardware and infrastructure related to certificate issuance and management for security and auditing reasons.

Auditing costs include the required annual WebTrust audits as well as third party expert security review and testing. The third party security audits include code review, infrastructure review, penetration testing, and ACME protocol analysis. We are not required to do third party auditing beyond the WebTrust audits, but it would be irresponsible of us not to.

Legal costs go towards attorney time, primarily in the areas of corporate governance, contract development and review, and trademarks. Administrative costs include HR, payroll and benefits management, accounting and tax services, as well as travel and other miscellaneous operating costs.

Our 2016 budget is very similar to our 2017 budget, the major difference being that we will only spend approximately $2.0M USD due to a number of our staff starting after the beginning of the year. We will pay full staffing costs next year because all of the staff that joined us in 2016 will be on our payroll for the entirety of 2017.

Currently, the majority of our funding comes from corporate sponsorships. If your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org. We’re working to make grants and individual contributions more significant sources of income over the next year.

We’re grateful for the industry and community support that we receive, and we look forward to continuing to create a more secure and privacy-respecting Web!

Thursday’s security updates

Post Syndicated from n8willis original http://lwn.net/Articles/698982/rss

Debian-LTS has updated cacti
(authentication bypass).

Mageia has updated eog (M5:
out-of-bounds write), python3/python
(M5: HTTPoxy attack), redis (M5: information leak), and webkit2 (M5: multiple vulnerabilities).

openSUSE has updated cracklib (Leap 42.1: code execution), gd (13.2: out-of-bounds read), and libgcrypt (13.2: flawed random number generation).

Red Hat has updated ipa
(RHEL 6,7: denial of service).

Slackware has updated mozilla thunderbird (14.1, 14.2:
unspecified vulnerabilities).

Tuesday’s security updates

Post Syndicated from n8willis original http://lwn.net/Articles/698052/rss

Arch Linux has updated libgcrypt (information disclosure).

Fedora has updated kernel
(F24: use-after-free vulnerability), pagure (F24: cross-site scripting), and postgresql (F24: multiple vulnerabilities).

Red Hat has updated qemu-kvm-rhev (RHEL7 OSP5; RHEL7 OSP7; RHEL6 OSP5; RHEL7 OSP6:
multiple vulnerabilities).

SUSE has updated MozillaFirefox (SLE12: multiple vulnerabilities).

Security advisories for Thursday

Post Syndicated from jake original http://lwn.net/Articles/697017/rss

Arch Linux has updated jq (code execution from 2015) and websvn (cross-site scripting).

Debian-LTS has updated postgresql-9.1 (two vulnerabilities).

Gentoo has updated optipng (three vulnerabilities).

openSUSE has updated typo3 (13.1:
three vulnerabilities from 2013 and 2014) and firefox, mozilla-nss (13.1: many vulnerabilities).

Red Hat has updated java-1.7.0-ibm (RHEL5: two vulnerabilities),
java-1.7.1-ibm (RHEL6&7: two
vulnerabilities), java-1.8.0-ibm
(RHEL6&7: two vulnerabilities), and python-django (RHOSP8; RHOSP7; RHEL7:
cross-site scripting).

Scientific Linux has updated qemu-kvm (SL6: denial of service).

Ubuntu has updated libgd2 (16.04,
14.04: three vulnerabilities) and xmlrpc-epi (16.04: code execution).

Let’s Encrypt will be trusted by Firefox 50

Post Syndicated from n8willis original http://lwn.net/Articles/696587/rss

The Let’s Encrypt project, which provides a free SSL/TLS certificate authority (CA), has announced that Mozilla has accepted the project’s root key into the Mozilla root program and that it will be trusted by default as of Firefox 50. This is a step forward from Let’s Encrypt’s earlier status. “In order to start issuing widely trusted certificates as soon as possible, we partnered with another CA, IdenTrust, which has a number of existing trusted roots. As part of that partnership, an IdenTrust root ‘vouches for’ the certificates that we issue, thus making our certificates trusted. We’re incredibly grateful to IdenTrust for helping us to start carrying out our mission as soon as possible. However, our plan has always been to operate as an independently trusted CA. Having our root trusted directly by the Mozilla root program represents significant progress towards that independence.” The project has also applied for inclusion in the CA trust roots maintained by Apple, Microsoft, Google, Oracle, and Blackberry. News on those programs is still pending.

Friday’s security updates

Post Syndicated from n8willis original http://lwn.net/Articles/696545/rss

Arch Linux has updated firefox (multiple vulnerabilities), jdk7-openjdk (multiple vulnerabilities), jre7-openjdk (multiple vulnerabilities), and jre7-openjdk-headless (multiple vulnerabilities).

Debian has updated openjdk-7 (multiple vulnerabilities).

Debian-LTS has updated curl
(multiple vulnerabilities) and mysql-5.5 (multiple vulnerabilities).

Fedora has updated collectd (F23; F24:
code execution),
dietlibc (F23; F24: insecure default PATH), perl (F24: privilege escalation), perl-DBD-MySQL (F24: code execution), and python-autobahn (F24: insecure origin validation).

openSUSE has updated MozillaFirefox, mozilla-nss (13.2, Leap
42.1: multiple vulnerabilities).

Oracle has updated kernel (O7; O6:
multiple vulnerabilities; O7; O6; O6; O5:
privilege escalation)
and squid (O6: code execution).

Scientific Linux has updated squid (SL6: code execution).

SUSE has updated kernel
(SLE12-LP: multiple vulnerabilities).

Ubuntu has updated firefox
(12.04, 14.04, 16.04: multiple vulnerabilities), libreoffice (12.04: code execution), oxide-qt (14.04, 16.04: multiple vulnerabilities), and qemu, qemu-kvm (12.04, 14.04, 16.04: multiple vulnerabilities).

Let’s Encrypt Root to be Trusted by Mozilla

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org//2016/08/05/le-root-to-be-trusted-by-mozilla.html

The Let’s Encrypt root key (ISRG Root X1) will be trusted by default in Firefox 50, which is scheduled to ship in Q4 2016. Acceptance into the Mozilla root program is a major milestone as we aim to rely on our own root for trust and have greater independence as a certificate authority (CA).

Public CAs need their certificates to be trusted by browsers and devices. CAs that want to issue independently under their own root accomplish this by either buying an existing trusted root, or by creating a new root and working to get it trusted. Let’s Encrypt chose to go the second route.

Getting a new root trusted and propagated broadly can take 3-6 years. In order to start issuing widely trusted certificates as soon as possible, we partnered with another CA, IdenTrust, which has a number of existing trusted roots. As part of that partnership, an IdenTrust root “vouches for” the certificates that we issue, thus making our certificates trusted. We’re incredibly grateful to IdenTrust for helping us to start carrying out our mission as soon as possible.

Chain of trust between Firefox and Let's Encrypt certificates.
Chain of Trust Between Firefox and Let’s Encrypt Certificates

However, our plan has always been to operate as an independently trusted CA. Having our root trusted directly by the Mozilla root program represents significant progress towards that independence.

We have also applied to the Microsoft, Apple, Google, Oracle and Blackberry root programs. We look forward to acceptance into these programs as well.

Let’s Encrypt depends on industry and community support. Please consider getting involved, and if your company or organization would like to sponsor Let’s Encrypt please email us at sponsor@letsencrypt.org.

A Recipe for Cookies

This is not about confectionery but about information technology. The Bulgarian term бисквитки ("biscuits") is a direct borrowing from the English cookies, even though the literal translation would be курабийки ("cookies") rather than "biscuits".

So what is the purpose of cookies? Cookies are portions of structured information containing various parameters. They are created by the servers that provide access to web pages and are transmitted in the service part of the HTTP protocol (the so-called HTTP headers), the transfer protocol that browsers use to exchange information with servers. Cookies were grafted onto HTTP in the mid-1990s, introduced by Netscape and first standardized in RFC 2109. At the time the Internet was far less developed than it is today, which is why HTTP has some peculiarities. The protocol is based on requests (from the client) and responses (from the server), and each request/response pair is made over a separate connection (socket) between the client and the server. This scheme is very convenient, since it does not require a permanent, stable Internet connection: the connection is in fact used only for a brief moment. Unfortunately, because of this peculiarity HTTP is often called a stateless protocol: the server has no way of knowing that a series of consecutive requests were sent by one and the same client. This is unlike IRC, SMTP, FTP, and other protocols created in the same years, which open a single connection and send and receive data in both directions. With such protocols, communication begins with a handshake or authentication between the parties, after which both sides know that, as long as the connection stays open, they are talking to a specific peer.

To overcome this shortcoming of the protocol, the cookie mechanism was created soon after version 0.9 (the first version to enter real use), in the era of version 1.0. The technology was named cookies after the Brothers Grimm fairy tale of Hansel and Gretel, who mark the path they take through the dark forest by scattering crumbs. Most Bulgarian translations of the tale use bread crumbs, a term that has found a different home in IT and on the web, but it is in fact a German folk tale with many different versions over the years. The comparison is obvious: cookies make it possible to trace the user's path through the dark forest of the stateless HTTP protocol.

How do cookies work? Cookies are created by servers and sent to users. With every subsequent request to the same server, the user must send back a copy of the cookies received from it. This gives the server a mechanism to trace the user's path, that is, to know when one and the same user has made a series of multiple otherwise unrelated requests.
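This round trip can be sketched with Python's standard http.cookies module; the cookie name and value here are invented for illustration:

```python
from http.cookies import SimpleCookie

# Server side: attach a cookie to the response via a Set-Cookie header.
response = SimpleCookie()
response["authenticated"] = "yes"
print(response.output())            # Set-Cookie: authenticated=yes

# Client side: the browser stores the cookie and echoes it back on every
# subsequent request to the same server, as a Cookie request header.
jar = SimpleCookie()
jar.load("authenticated=yes")       # what the browser remembered
cookie_header = "; ".join(f"{k}={v.value}" for k, v in jar.items())
print("Cookie:", cookie_header)     # Cookie: authenticated=yes
```

The server never stores anything here; it simply recognizes its own cookie when it comes back, which is exactly what makes a series of otherwise unrelated requests traceable.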

Imagine that you send a first request to a server and it responds that it wants you to authenticate, by providing a username and a password. You send them, and the server verifies that it is really you. But because of the peculiarities of the HTTP protocol, once you have sent the name and password, the connection between you and the server is closed. Later you send a new request. The server has no way of knowing that it is you again, because the new request arrives over a new connection.

Now imagine the same scheme with cookie technology added. You send a request to the server; it responds that it wants a name and password. You send them, and the server returns a cookie in which it records that the bearer of this cookie has already provided a name and password. The connection closes. Later you send a new request, and since it goes to the same server, the client must return a copy of the cookies received from it. The new request therefore also carries the cookie which says that you earlier provided a name and password that were valid.

So, cookies are created by servers: the data is sent by the web server together with the response to a request for a resource (for example a web page, text, an image, or another file). The cookie travels as part of the protocol's header information. When a client (the client software, i.e. a browser) receives a response from a server that contains a cookie, it must process it: if the client already has such a cookie, it updates its information about it; if not, it creates it; if the cookie's lifetime has expired, it destroys it; and so on.

It is often said that cookies are small text files stored on the user's computer. That is not always true. Cookies were separate text files in early versions of some browsers (for example Internet Explorer and Netscape Navigator), but most modern browsers store them differently. Current versions of Mozilla Firefox and Google Chrome, for example, keep all cookies in a single file that is an SQLite database. The database approach is a good fit, since cookies are structured information that is convenient to keep in a database, and access to it is considerably more efficient. Microsoft Edge, however, still stores cookies as text files in the AppData directory.

What parameters do the cookies sent by servers contain? Each cookie can carry: a name, a value, a domain, an address (path), a validity period, and some security parameters (whether it is valid only over HTTPS, and whether it is accessible only to the transfer protocol, i.e. not accessible to JavaScript and other application layers on the client). The name and the value are mandatory for every cookie: they define the name of the variable in which the information is stored and its corresponding value, the stored information.

The domain is the main part of the address of the website that sends the cookie, the part that identifies the machine (server) on the network. According to the technical specifications, the cookie's domain must match the domain of the server that sends it, but there are some exceptions. The first exception is that when several levels of domains are used, cookies can be created for different parts of those levels. For example, the response to a request to the domain example.com can create a cookie valid only for the domain example.com, or a cookie valid for that domain and all of its subdomains. If the cookie is valid for all subdomains, this is written with a dot before the domain name: .example.com. The second exception applies when the cookie is created not by the server but by an application layer on the client (for example JavaScript). It is then possible for a js file to be loaded from one domain inside an HTML page from another domain: the server sending a cookie can send it on behalf of the domain hosting the js file, but the script itself, while loaded in a page from another domain, can create (and read) cookies for the HTML page's domain.

The address (or path) is the remaining part of the URL. By default, cookies are valid for the path / (i.e. the root of the website), which means the cookie is valid for all possible addresses on the server. It is possible, however, to explicitly specify a particular path for which the cookie is valid. By design, paths are a representation of a location in a hierarchical file structure on the server (although this is not mandatory), so cookie paths describe the place of their creation in a hierarchical file system. For example, if a cookie's path is /dir/, it is valid in the directory named dir and all of its subdirectories.

To give a more realistic example: if we use cookies to store authentication information for a website's admin panel located in the directory /admin/, we can declare the cookie valid only for the path /admin/. That way, cookies created by the server for the needs of the admin panel will not be sent with requests for other resources on the same server.

The validity period determines how long the user should keep the cookie and return it with every subsequent request to the same server and address (path). When the server wants to delete a cookie held by the user, it sends the cookie again with an expiry time in the past, causing it to expire and be deleted automatically on the user's side.

Cookies also have parameters that look after the security of the transmitted data. These are two boolean parameters: one determines whether the cookie is accessible (for both reading and writing) only to the HTTP protocol, or also to the application layer on the client (for example JavaScript); the other determines whether the cookie is transmitted over all protocols or only over HTTPS (secure HTTP).
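These parameters all map onto attributes of the Set-Cookie header. A sketch using Python's standard http.cookies module, with an illustrative cookie name and attribute values:

```python
from http.cookies import SimpleCookie

# One cookie with the attributes described above: domain, path, validity
# period, plus the two security flags (Secure and HttpOnly).
c = SimpleCookie()
c["session_id"] = "d41d8cd98f"
c["session_id"]["domain"] = ".example.com"   # valid for all subdomains
c["session_id"]["path"] = "/admin/"          # only sent under /admin/
c["session_id"]["max-age"] = 1800            # 30 minutes
c["session_id"]["secure"] = True             # transmitted over HTTPS only
c["session_id"]["httponly"] = True           # invisible to JavaScript
print(c.output())
```

Sending the same cookie again with an expiry in the past is how a server deletes it, exactly as described above.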

Както се досещате, в рамките на една и съща комуникация може да има повече от една бисквитка. Както сървърът може да създаде повече от една бисквитка едновременно, така и клиентът може да върне повече от една бисквитка обратно към сървъра. Именно затова освен домейни и адреси, бисквитките имат и имена.

В допълнение бисквитките имат и редица ограничения. Повечето браузъри не позволяват да има повече от 20 едновременно валидни бисквитки за един и същи домейн. Във Mozilla Firefox това ограничение е 50 броя, а в Opera 30 броя. Също така е ограничен и размерът на всяка отделна бисквитка – не повече от 4KB (4096 Bytes). В спецификациите за бисквитки RFC2109 от 1997 г. е посочено че клиентът може да съхранява до 300 бисквитки по 20 за един и същи домейн и всяка с размер до 4KB. В по-късната спецификация Rfc6265 от 2011 г. лимитите са увеличение до 3000 броя общо и 50 броя за един домейн. Все пак, не трябва да се забравя, че всяка бисквитка се изпраща от клиента при всяка следваща заявка към сървъра, ако чукнем тавана на лимитите и имаме 50 бисквитки по 4KB, това означава, че с всяка заявка ще пътуват близо 200KB само под формата на бисквитки, което може да се окаже сериозен товар за трафика, дори и при техническите възможности на съвременния достъп до Интернет.

Of course, the earlier example, where we store the user's successful authentication in a cookie, comes with many security-related caveats. First, it is a bad idea to store the user's username and password in a cookie, since the cookie is stored on the user's computer. This means that at any time, if a malicious person gains access to that computer, they can read the username and password from the cookies stored there. On the other hand, if we store only the username without the password, we have no protection against forged cookies: any malicious actor can craft a fake cookie containing an arbitrary (someone else's) username and impersonate that user to the server.

That is why the most commonly used mechanism works like this: on every authentication with username and password, once the server has verified them, it creates a temporarily valid identifier and sends it as a cookie. Depending on the technology this identifier goes by different names: one-time password (OTP), token, session, and so on. Under this scheme the server stores information about the user for a limited time (the session lifetime). Each such record (often called a session) gets an identifier, which is sent to the user as a cookie. Since the client returns this identifier with every subsequent request, the server can restore the stored user information and make it available on every request. At the same time, the information lives on the server rather than on the client, so a malicious user cannot modify or forge it. Moreover, the identifier is valid only for a limited period (say, 30 minutes); even if the cookie remains on the user's computer, the identifier inside it will be useless after half an hour. Last but not least, pressing the logout button deletes the user data stored on the server even before the 30-minute period expires. That is why it is important to always use the logout buttons when leaving online systems.
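A minimal sketch of the session scheme just described, with an in-memory store and a 30-minute lifetime. The function names (log_in, user_for, log_out) are illustrative, not any real framework's API:

```python
# Server-side session store: the client only ever holds an opaque token,
# never the username/password; the actual user data stays on the server.
import secrets
import time

SESSION_LIFETIME = 30 * 60          # 30 minutes, as in the example above
sessions = {}                       # token -> (username, expiry timestamp)

def log_in(username: str) -> str:
    """After verifying the password, issue an opaque session token."""
    token = secrets.token_hex(16)   # unguessable, unlike a bare username
    sessions[token] = (username, time.time() + SESSION_LIFETIME)
    return token                    # sent to the client as a cookie value

def user_for(token: str):
    """Restore the stored user for a token, or None if expired/unknown."""
    record = sessions.get(token)
    if record is None or record[1] < time.time():
        sessions.pop(token, None)   # lazily drop expired sessions
        return None
    return record[0]

def log_out(token: str):
    """The logout button: delete the server-side record immediately."""
    sessions.pop(token, None)

token = log_in("alice")
print(user_for(token))              # the server recognises the user
log_out(token)
print(user_for(token))              # after logout the token is useless
```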

What else is stored in cookies? Practically everything! Very often cookies hold a user's individual settings. When the user changes a setting, the server sends a cookie containing it, with a long expiry time. On every subsequent visit to the same site, the saved setting is sent along with the request; the server thus knows the desired setting and applies it when producing the response. An example of such a setting is the number of records shown per page: once you change it, the chosen value can be saved in a cookie, and on every subsequent request the server will know about it and return the correct response.

Cookies store other information too: for example, the items in an online shop's cart, or statistics such as when you last visited a site, how many times you have visited it, which pages you viewed, and how long you stayed on each of them.

Does the European Union ban cookies? The Cookie Monster from Sesame Street would be quite upset to learn that the EU wants to restrict the use of cookies. In reality the truth is rather different, but the information is widely misinterpreted. Let's step out of the technological sphere and into the legal one. First, which are the relevant legal documents? The most widely cited is European Directive 2009/136/EC of 25 November 2009. The truth is that this directive does not concern cookies directly: it amends earlier directives, among them Directive 2002/22/EC of 7 March 2002 on universal service (of which Internet access is part) and users' rights, and Directive 2002/58/EC on privacy and electronic communications, where the cookie-related provision lives. The 2009 amendment reads as follows (Article 5(3), p. 30):

5. Article 5(3) shall be replaced by the following:

"3. Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service."

The preamble of the same directive also says:

(66) Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. The methods of providing information and offering the right to refuse should be as user-friendly as possible. Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user. Where it is technically possible and effective, in accordance with the relevant provisions of Directive 95/46/EC, the user's consent to processing may be expressed by using the appropriate settings of a browser or other application. The enforcement of these requirements should be made more effective by way of enhanced powers granted to the relevant national authorities.

From the two quotations we should note one more thing: they also cite Directive 95/46/EC of 24 October 1995, which deals with the protection of individuals with regard to the processing of personal data. Of course, it is hard to claim that a directive from the mid-1990s directly addresses the functioning of the Internet, which at that time was not widely adopted, while cookie technology, having appeared only a few years earlier, was still extremely new and rarely used.

Before continuing with the analysis, we should note that none of the three cited directives, as far as protecting users from cookies is concerned, has yet been transposed into Bulgarian national legislation (at least not in the Electronic Governance Act, the Electronic Communications Act, or the Personal Data Protection Act). Thankfully, the EU directive mechanism is designed so that directives are binding on all Member States, regardless of whether national legislation exists for the given area.

And yet: why does the EU want to restrict the use of cookies? Among the many applications of the technology listed so far (storing authentication, storing settings, tracking user behaviour, collecting user statistics, marketing analysis, storing shopping cart data, and so on; some of these overlap or complement one another), cookies can also be used for serious intrusions into users' privacy, through various forms of tracking and analysis of users' consumption and behaviour on the Internet, for the purposes of targeted advertising or for other, even unlawful, ends.

Let's consider a more realistic example based on the technology already described. We have a simple informational website about cars, with no special functions, services, or anything else. The site, however, uses Google Analytics (a free visitor-statistics tool provided by Google), and its owner, wanting to monetise the site's content at least minimally, has also enabled Google AdWords (Google's banner advertising service). We also have a user searching Google for information on repairing a flat tyre. The user finds the site in Google, clicks the link, and lands on it. The same user also has a Gmail account (Google's free email service). As you can see, there is one common denominator following us along the entire path: Google. It is far from the only big player in this market; it is simply the example most accessible to the general public. In effect, Google simultaneously has access to the emails the user has sent and received, to what they searched for, to which site they entered and what they read there (exactly which pages), to how long they spent on the site, and to every other site the user has visited in the past, whether from the same computer or another one; it is enough for all those sites to use Google Analytics for their statistics. If the user has a work computer and a home computer and has logged into Gmail from both, Google Analytics can establish that the sites visited on the two machines, which otherwise have no connection to each other, were visited by one and the same user.
The user should then not be surprised if they go to a third place, unrelated to the previous two (the home and work computers), log into their email, and later visit some arbitrary site where they see an advertisement for new car tyres.

All the tracking described so far is possible precisely through the cookie mechanism. It is no accident that cookies are called a state-management mechanism and were created for "tracking the users of the protocol". Tracking in the positive, purely technological sense of the term, of course, but still tracking, which in the hands of ill-intentioned actors can take on an entirely different scale and serve entirely different purposes.

All of this can be (and is) combined with other data collected about users: IP geolocation, information about the Internet connection, the device used, display size, operating system, installed software, and much more that is provided automatically as we browse the web.

Again, the cookie technology itself has protective mechanisms built in. For example, cookies from one website cannot be read by another site. But the whole scheme breaks down because cookies are accessible to the client's application layer (JavaScript), while at the same time millions of sites worldwide use the otherwise free visitor-statistics service of one and the same provider. That provider is the connecting link in the whole scheme. Each site and each user individually holds little useful information, but the connecting link holds all of it, and given the will (and believe me, for all these services to be free, the will has long since turned into a need), this accumulated information can be analysed, processed, and used for any purpose whatsoever.

And since these activities can reach deep into people's private lives, the European Union has taken measures to oblige providers of Internet services (website owners) to inform users about what data is stored about them on their terminal devices (i.e., on the users' own computers) and for what purposes that data is used.

Several aspects deserve clarification here. First, neither the use of cookies nor tracking as a process is banned; it is simply required that users be informed about what is done and why, and that they explicitly consent to it. The other important detail is that cookies are not the only mechanism for storing information on terminal devices, and the European legislation is not limited to cookies. Local Storage is a modern alternative to cookies: although it works differently and offers quite different capabilities, it also stores information on terminal devices, can likewise be used to track users, and can affect their rights regarding the processing of personal data. In this sense the European directives cover every form of storing information on users' devices, not only cookies. The directives also treat the storage of tracking data separately from the cookies needed for the technical functioning of Internet systems, and distinguish between third-party cookies and cookies set by the site owners themselves; in the example with cookies set by Google Analytics, Google is the third party.

When the data (cookies or otherwise) is stored by third parties (someone other than the service provider and the end user), this does not relieve the service provider of its obligations to inform users and ask for their consent. In short, if I run a site that uses Google Analytics, the obligation to inform the site's users and request their consent remains mine (i.e., the site owner's, the party providing the service), not the third party's (i.e., it is not Google's obligation).

Another interesting fact is that the directive accepts that the user's consent may also be given through a setting in the browser or another application. Here we can circle back to the technology. Some time ago there was P3P (the Platform for Privacy Preferences), a technology that started out promisingly but ended up implemented only in Internet Explorer; development of the specification was eventually discontinued by the W3C. One of the cited reasons is that the planned technology was relatively complex to implement. Today most browsers support Do Not Track (DNT): a single HTTP header named DNT which, when present in the client's request with the value 1, indicates that the user does not consent to being tracked. Of course, the tracking that DNT addresses and the storing and accessing of information on users' terminal devices that the European directives address are not the same thing. You can write and read information on the client without tracking them (which would still fall under the directives), and you can track the client without writing or reading any local data at all (for example via browser fingerprinting, with the data stored entirely on the server).
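A small sketch of how a server might honour the DNT header just described; the headers dict here stands in for whatever a real web framework exposes, and is not any particular API:

```python
# DNT is a plain HTTP request header: the value "1" means the user
# refuses tracking; absence or any other value leaves tracking permitted
# (subject, of course, to the consent rules discussed above).
def tracking_allowed(headers: dict) -> bool:
    return headers.get("DNT") != "1"

print(tracking_allowed({"DNT": "1"}))   # → False
print(tracking_allowed({}))             # → True
```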

Finally, let's summarise:

  1. The European Union does not ban cookies;
  2. The EU provides for personal data protection by requiring that users' permission be sought whenever data is written to or accessed on their terminal devices;
  3. The informed-consent requirement is not limited to cookies: it covers all existing (and any future) technologies that allow writing and accessing information on users' terminal devices;
  4. Users must be informed, and their consent requested, regardless of whether the data is written or accessed directly by the service provider or through a third party's services. Put simply, if we use Google Analytics, it is we who must warn the user and ask for consent, not Google. In this case the service provider (the site itself) acts as the data controller (not in the sense of the Bulgarian Personal Data Protection Act, but in the sense of the European directives), while Google is a third party authorised by the controller to manage the data on its behalf and at its expense; ergo, the responsibility is the controller's;
  5. Informing the user and requesting consent is not necessary when the data written and read serves technical needs tied to providing a service the user has explicitly requested. I would say that even in such cases I would personally choose to inform the user about what is written and read and why, without asking for consent;
  6. Under the European directives, users may also express consent through specific technologies created for that purpose, but using those technologies does not remove the need to inform the user about what data is stored and for what purpose it is processed;
  7. The European rules in this area appear not to have been transposed into Bulgarian national law, which does not mean they may be ignored.

Hi Fi Raspberry Pi – digitising and streaming vinyl

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/hi-fi-raspberry-pi/

Over at Mozilla HQ (where Firefox, a browser that many of you are using to read this, is made), some retro hardware hacking has been going on.

vinyl record

The Mozillans have worked their way through several office music services, but nothing, so far, has stuck. Then this home-made project, which started as a bit of a joke, landed on a countertop – and it’s stayed.

Matt Claypotch found a vinyl record player online, and had it delivered to the office, intending to tinker with it at home. It never made it that far. He and his colleagues spent their lunch hour at a local thrift store buying up random vintage vinyl…and the record player stayed in the office so everybody could use it.

Potch’s officemates embarked on a vinyl spending spree.



What could be better? The warm crackle of vintage vinyl, “random, crappy albums” you definitely can’t find on Spotify (and stuff like the Van Halen album above that you can find on Spotify but possibly would prefer not to)…the problem was, once the machine had been set up in a break room, only the people in that room could listen to the cheese.

Enter the Raspberry Pi, with a custom-made streaming setup. One Mozillan didn’t want to have to sit in the common area to get his daily dose of bangin’ choons, so he set up a Pi to stream music from the analogue vinyl over USB (it’s 2016, record players apparently have USB ports now) via an Icecast stream to headphones anywhere in the office. Analogue > digital > analogue, if you like.

The setup is surprisingly successful; they’ve organised other audio systems which weren’t very popular, but this one, which happened organically, is being used by the whole office.

You can listen to a podcast from Envoy Office Hacks about the setup, and the office’s reaction to it.

Mozilla, keep on bopping to disco Star Wars. (I’m off to see if I can find a copy of that record. It’s probably a lot better in my imagination than it is in real life, but BOY, is it good in my imagination*.)

*I found it on YouTube. It’s a lot better in my imagination.

The post Hi Fi Raspberry Pi – digitising and streaming vinyl appeared first on Raspberry Pi.

Revisiting How We Put Together Linux Systems

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html

In a previous blog story I discussed
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems.
I now want to take the opportunity to explain a bit where we want to
take this with
systemd in the
longer run, and what we want to build out of it. This is going to be a
longer story, so better grab a cold bottle of
Club Mate before you start reading.

Traditional Linux distributions are built around packaging systems
like RPM or dpkg, and an organization model where upstream developers
and downstream packagers are relatively clearly separated: an upstream
developer writes code, and puts it somewhere online, in a tarball. A
packager then grabs it and turns it into RPMs/DEBs. The user then
grabs these RPMs/DEBs and installs them locally on the system. For a
variety of uses this is a fantastic scheme: users have a large
selection of readily packaged software available, in mostly uniform
packaging, from a single source they can trust. In this scheme the
distribution vets all software it packages, and as long as the user
trusts the distribution all should be good. The distribution takes the
responsibility of ensuring the software is not malicious, of timely
fixing security problems and helping the user if something is wrong.

Upstream Projects

However, this scheme also has a number of problems, and doesn’t fit
many use-cases of our software particularly well. Let’s have a look at
the problems of this scheme for many upstreams:

  • Upstream software vendors are fully dependent on downstream
    distributions to package their stuff. It’s the downstream
    distribution that decides on schedules, packaging details, and how
    to handle support. Often upstream vendors want much faster release
    cycles than the downstream distributions follow.

  • Realistic testing is extremely unreliable and next to
    impossible. Since the end-user can run a variety of different
    package versions together, and expects the software he runs to just
    work on any combination, the test matrix explodes. If upstream tests
    its version on distribution X release Y, then there’s no guarantee
    that that’s the precise combination of packages that the end user
    will eventually run. In fact, it is very unlikely that the end user
    will, since most distributions probably updated a number of
    libraries the package relies on by the time the package ends up being
    made available to the user. The fact that each package can be
    individually updated by the user, and each user can combine library
    versions, plug-ins and executables relatively freely, results in a high
    risk of something going wrong.

  • Since there are so many different distributions in so many different
    versions around, if upstream tries to build and test software for
    them it needs to do so for a large number of distributions, which is
    a massive effort.

  • The distributions are actually quite different in many ways. In
    fact, they are different in a lot of the most basic
    functionality. For example, the path where to put x86-64 libraries
    is different on Fedora and Debian-derived systems.

  • Developing software for a number of distributions and versions is
    hard: if you want to do it, you need to actually install them, each
    one of them, manually, and then build your software for each.

  • Since most downstream distributions have strict licensing and
    trademark requirements (and rightly so), any kind of closed source
    software (or otherwise non-free) does not fit into this scheme at all.

This all together makes it really hard for many upstreams to work
nicely with the current way how Linux works. Often they try to improve
the situation for them, for example by bundling libraries, to make
their test and build matrices smaller.

System Vendors

The toolbox approach of classic Linux distributions is fantastic for
people who want to put together their individual system, nicely
adjusted to exactly what they need. However, this is not really how
many of today’s Linux systems are built, installed or updated. If you
build any kind of embedded device, a server system, or even user
systems, you frequently do your work based on complete system images,
that are linearly versioned. You build these images somewhere, and
then you replicate them atomically to a larger number of systems. On
these systems, you don’t install or remove packages, you get a defined
set of files, and besides installing or updating the system there are
no way to change the set of tools you get.

The current Linux distributions are not particularly good at providing
for this major use-case of Linux. Their strict focus on individual
packages as well as package managers as end-user install and update
tool is incompatible with what many system vendors want.


The classic Linux distribution scheme is frequently not what end users
want, either. Many users are used to app markets like Android, Windows
or iOS/Mac have. Markets are a platform that doesn’t package, build or
maintain software like distributions do, but simply allows users to
quickly find and download the software they need, with the app vendor
responsible for keeping the app updated, secured, and all that on the
vendor’s release cycle. Users tend to be impatient. They want their
software quickly, and the fine distinction between trusting a single
distribution or a myriad of app developers individually is usually not
important for them. The companies behind the marketplaces usually try
to improve this trust problem by providing sand-boxing technologies: as
a replacement for the distribution that audits, vets, builds and
packages the software and thus allows users to trust it to a certain
level, these vendors try to find technical solutions to ensure that
the software they offer for download can’t be malicious.

Existing Approaches To Fix These Problems

Now, all the issues pointed out above are not new, and there are
sometimes quite successful attempts to do something about it. Ubuntu
Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of
this problem set, usually with a strict focus on one facet of Linux
systems. For example, Ubuntu Apps focus strictly on end user (desktop)
applications, and don’t care about how we built/update/install the OS
itself, or containers. Docker OTOH focuses on containers only, and
doesn’t care about end-user apps. Software Collections tries to focus
on the development environments. ChromeOS focuses on the OS itself,
but only for end-user devices. CoreOS also focuses on the OS, but
only for server systems.

The approaches they find are usually good at specific things, and use
a variety of different technologies, on different layers. However,
none of these projects tried to fix these problems in a generic way,
for all uses, right in the core components of the OS itself.

Linux has come to tremendous successes because its kernel is so
generic: you can build supercomputers and tiny embedded devices out of
it. It’s time we came up with a basic, reusable scheme for how to solve
the problem set described above, that is equally generic.

What We Want

The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom
Gundersen, David Herrmann, and yours truly) recently met in Berlin
about all these things, and tried to come up with a scheme that is
somewhat simple, but tries to solve the issues generically, for all
use-cases, as part of the systemd project. All that in a way that is
somewhat compatible with the current scheme of distributions, to allow
a slow, gradual adoption. Also, and that’s something one cannot stress
enough: the toolbox scheme of classic Linux distributions is
actually a good one, and for many cases the right one. However, we
need to make sure we make distributions relevant again for all
use-cases, not just those of highly individualized systems.

Anyway, so let’s summarize what we are trying to do:

  • We want an efficient way that allows vendors to package their
    software (regardless if just an app, or the whole OS) directly for
    the end user, and know the precise combination of libraries and
    packages it will operate with.

  • We want to allow end users and administrators to install these
    packages on their systems, regardless which distribution they have
    installed on it.

  • We want a unified solution that ultimately can cover updates for
    full systems, OS containers, end user apps, programming ABIs, and
    more. These updates shall be double-buffered, (at least). This is an
    absolute necessity if we want to prepare the ground for operating
    systems that manage themselves, that can update safely without
    administrator involvement.

  • We want our images to be trustable (i.e. signed). In fact we want a
    fully trustable OS, with images that can be verified by a full
    trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the
    kernel, and initrd. Cryptographically secure verification of the
    code we execute is relevant on the desktop (like ChromeOS does), but
    also for apps, for embedded devices and even on servers (in a post-Snowden
    world, in particular).

What We Propose

So much about the set of problems, and what we are trying to do. So,
now, let’s discuss the technical bits we came up with:

The scheme we propose is built around a variety of concepts of btrfs
and Linux file system name-spacing. btrfs at this point already has a
large number of features that fit neatly in our concept, and the
maintainers are busy working on a couple of others we want to
eventually make use of.

As first part of our proposal we make heavy use of btrfs sub-volumes and
introduce a clear naming scheme for them. We name snapshots like this:

  • usr:<vendorid>:<architecture>:<version> — This refers to a full
    vendor operating system tree. It’s basically a /usr tree (and no
    other directories), in a specific version, with everything you need to boot
    it up inside it. The <vendorid> field is replaced by some vendor
    identifier, maybe a scheme like
    org.fedoraproject.FedoraWorkstation. The <architecture> field
    specifies a CPU architecture the OS is designed for, for example
    x86-64. The <version> field specifies a specific OS version, for
    example 23.4. An example sub-volume name could hence look like this: usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4.

  • root:<name>:<vendorid>:<architecture> — This refers to an
    instance of an operating system. It’s basically a root directory,
    containing primarily /etc and /var (but possibly more). Sub-volumes
    of this type do not contain a populated /usr tree though. The
    <name> field refers to some instance name (maybe the host name of
    the instance). The other fields are defined as above. An example
    sub-volume name is root:revolution:org.fedoraproject.FedoraWorkstation:x86_64.

  • runtime:<vendorid>:<architecture>:<version> — This refers to a
    vendor runtime. A runtime here is supposed to be a set of
    libraries and other resources that are needed to run apps (for the
    concept of apps see below), all in a /usr tree. In this regard this
    is very similar to the usr sub-volumes explained above, however,
    while a usr sub-volume is a full OS and contains everything
    necessary to boot, a runtime is really only a set of
    libraries. You cannot boot it, but you can run apps with it. An
    example sub-volume name is: runtime:org.gnome.GNOME3_20:x86_64:3.20.1

  • framework:<vendorid>:<architecture>:<version> — This is very
    similar to a vendor runtime, as described above, it contains just a
    /usr tree, but goes one step further: it additionally contains all
    development headers, compilers and build tools, that allow
    developing against a specific runtime. For each runtime there should
    be a framework. When you develop against a specific framework in a
    specific architecture, then the resulting app will be compatible
    with the runtime of the same vendor ID and architecture. Example: framework:org.gnome.GNOME3_20:x86_64:3.20.1.

  • app:<vendorid>:<runtime>:<architecture>:<version> — This
    encapsulates an application bundle. It contains a tree that at
    runtime is mounted to /opt/<vendorid>, and contains all the
    application’s resources. The <vendorid> could be a string like
    org.libreoffice.LibreOffice, the <runtime> refers to the vendor id
    of one specific runtime the application is built for, for
    example org.gnome.GNOME3_20:3.20.1. The <architecture> and
    <version> refer to the architecture the application is built for,
    and of course its version. Example:
    app:org.mozilla.Firefox:GNOME3_20:x86_64:40

  • home:<user>:<uid>:<gid> — This sub-volume shall refer to the home
    directory of the specific user. The <user> field contains the user
    name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs
    of the user. The idea here is that in the long run the list of
    sub-volumes is sufficient as a user database (but see
    below). Example: home:lennart:1000:1000.
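The naming scheme above is regular enough to parse mechanically. As a minimal sketch (the `parse_subvolume` helper and its field table are purely illustrative, not part of any real tool):

```python
# Field layout per sub-volume class, following the list above.
_SCHEMAS = {
    "usr":       ("vendorid", "architecture", "version"),
    "root":      ("name", "vendorid", "architecture"),
    "runtime":   ("vendorid", "architecture", "version"),
    "framework": ("vendorid", "architecture", "version"),
    "app":       ("vendorid", "runtime", "architecture", "version"),
    "home":      ("user", "uid", "gid"),
}

def parse_subvolume(name):
    """Split a sub-volume name into its class plus named fields."""
    kind, *fields = name.split(":")
    schema = _SCHEMAS[kind]
    if len(fields) != len(schema):
        raise ValueError("%r: expected %d fields for %r" % (name, len(schema), kind))
    return dict(zip(("class",) + schema, (kind,) + tuple(fields)))
```

For example, parse_subvolume("home:lennart:1000:1000") yields the class plus the user name and numeric IDs as separate fields.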

btrfs partitions that adhere to this naming scheme should be clearly
identifiable. It is our intention to introduce a new GPT partition type
ID for this.

How To Use It

Now that we have introduced this naming scheme, let’s see what we can
build from it:

  • When booting up a system we mount the root directory from one of the
    root sub-volumes, and then mount /usr from a matching usr
    sub-volume. Matching here means it carries the same <vendor-id>
    and <architecture>. Of course, by default we should pick the
    matching usr sub-volume with the newest version.

  • When we boot up an OS container, we do exactly the same as when
    we boot up a regular system: we simply combine a usr sub-volume
    with a root sub-volume.

  • When we enumerate the system’s users we simply go through the
    list of home snapshots.

  • When a user authenticates and logs in we mount his home
    directory from his snapshot.

  • When an app is run, we set up a new file system name-space, mount the
    app sub-volume to /opt/<vendorid>/, and the appropriate runtime
    sub-volume the app picked to /usr, as well as the user’s
    /home/$USER to its place.

  • When a developer wants to develop against a specific runtime he
    installs the right framework, and then temporarily transitions into
    a name space where /usr is mounted from the framework sub-volume, and
    /home/$USER from his own home directory. In this name space he then
    runs his build commands. He can build in multiple name spaces at the
    same time, if he intends to build software for multiple runtimes or
    architectures at the same time.
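The matching rule from the first bullet can be sketched as a small selection function. This is only a sketch: the version comparison below is deliberately naive (numeric dotted components compare numerically, anything else sorts after them), and real code would need a proper version-ordering algorithm:

```python
def _version_key(version):
    # Naive ordering: numeric dotted components compare numerically;
    # non-numeric components (e.g. "25beta") sort after plain numbers.
    return [(0, int(p)) if p.isdigit() else (1, p) for p in version.split(".")]

def pick_usr(root_name, subvolumes):
    """For a root:<name>:<vendorid>:<architecture> sub-volume, pick the
    usr sub-volume carrying the same vendor id and architecture,
    preferring the newest version."""
    _, _, vendorid, arch = root_name.split(":")
    prefix = "usr:%s:%s:" % (vendorid, arch)
    candidates = [s for s in subvolumes if s.startswith(prefix)]
    if not candidates:
        raise LookupError("no usr sub-volume matches " + root_name)
    return max(candidates, key=lambda s: _version_key(s.rsplit(":", 1)[1]))
```

Given the example listing further below, booting root:revolution:org.fedoraproject.WorkStation:x86_64 would select the newest org.fedoraproject.WorkStation usr tree for x86_64.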

Instantiating a new system or OS container (which is exactly the same
in this scheme) just consists of creating a new appropriately named
root sub-volume. Completely naturally you can share one vendor OS
copy in one specific version with a multitude of container instances.

Everything is double-buffered (or actually, n-fold-buffered), because
usr, runtime, framework, app sub-volumes can exist in multiple
versions. Of course, by default the execution logic should always pick
the newest release of each sub-volume, but it is up to the user to keep
multiple versions around, and possibly execute older versions, if he
desires to do so. In fact, like on ChromeOS this could even be handled
automatically: if a system fails to boot with a newer snapshot, the
boot loader can automatically revert back to an older version of the OS.
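The ChromeOS-style fallback could look roughly like this, assuming plain dotted-numeric versions and some external record of snapshots that failed to boot (both assumptions of this sketch, not of the proposal itself):

```python
def choose_boot_version(versions, failed_versions):
    """Pick the newest version not known to have failed to boot,
    falling back through progressively older snapshots."""
    ordered = sorted(versions,
                     key=lambda v: [int(p) for p in v.split(".")],
                     reverse=True)
    for v in ordered:
        if v not in failed_versions:
            return v
    raise RuntimeError("no bootable snapshot left")
```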

An Example

Note that as a result this allows installing not only multiple end-user
applications into the same btrfs volume, but also multiple operating
systems, multiple system instances, multiple runtimes, multiple
frameworks. Or to spell this out in an example:

Let’s say Fedora, Mageia and ArchLinux all implement this scheme,
and provide ready-made end-user images. Also, the GNOME, KDE, SDL
projects all define a runtime+framework to develop against. Finally,
both LibreOffice and Firefox provide their stuff according to this
scheme. You can now trivially install all of these into the same btrfs
volume:

  • usr:org.fedoraproject.WorkStation:x86_64:24.7
  • usr:org.fedoraproject.WorkStation:x86_64:24.8
  • usr:org.fedoraproject.WorkStation:x86_64:24.9
  • usr:org.fedoraproject.WorkStation:x86_64:25beta
  • usr:org.mageia.Client:i386:39.3
  • usr:org.mageia.Client:i386:39.4
  • usr:org.mageia.Client:i386:39.6
  • usr:org.archlinux.Desktop:x86_64:302.7.8
  • usr:org.archlinux.Desktop:x86_64:302.7.9
  • usr:org.archlinux.Desktop:x86_64:302.7.10
  • root:revolution:org.fedoraproject.WorkStation:x86_64
  • root:testmachine:org.fedoraproject.WorkStation:x86_64
  • root:foo:org.mageia.Client:i386
  • root:bar:org.archlinux.Desktop:x86_64
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.1
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.4
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.5
  • runtime:org.gnome.GNOME3_22:x86_64:3.22.0
  • runtime:org.kde.KDE5_6:x86_64:5.6.0
  • framework:org.gnome.GNOME3_22:x86_64:3.22.0
  • framework:org.kde.KDE5_6:x86_64:5.6.0
  • app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
  • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:39
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • home:lennart:1000:1000
  • home:hrundivbakshi:1001:1001

In the example above, we have three vendor operating systems
installed. All of them in three versions, and one even in a beta
version. We have four system instances around. Two of them are Fedora
instances: maybe one of them we usually boot from, the other we run for
very specific purposes in an OS container. We also have the runtimes for
two GNOME releases in multiple versions, plus one for KDE. Then, we
have the development trees for one version of KDE and GNOME around, as
well as two apps that make use of two releases of the GNOME
runtime. Finally, we have the home directories of two users.

Now, with the name-spacing concepts we introduced above, we can
actually relatively freely mix and match apps and OSes, or develop
against specific frameworks in specific versions on any operating
system. It doesn’t matter if you booted your ArchLinux instance, or
your Fedora one, you can execute both LibreOffice and Firefox just
fine, because at execution time they get matched up with the right
runtime, and all of them are available from all the operating systems
you installed. You get the precise runtime that the upstream vendor of
Firefox/LibreOffice did their testing with. It doesn’t matter anymore
which distribution you run, and which distribution the vendor prefers.

Also, given that the user database is actually encoded in the
sub-volume list, it doesn’t matter which system you boot, the
distribution should be able to find your local users automatically,
without any configuration in /etc/passwd.
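Deriving the user list from the sub-volume names is then just a matter of filtering and splitting; a minimal sketch:

```python
def users_from_subvolumes(subvolumes):
    """Build passwd-like entries from home:<user>:<uid>:<gid> sub-volumes."""
    users = []
    for name in subvolumes:
        if not name.startswith("home:"):
            continue
        _, user, uid, gid = name.split(":")
        users.append({"name": user, "uid": int(uid), "gid": int(gid)})
    return users
```

Run over the example listing above, this recovers the two users lennart and hrundivbakshi with their numeric IDs.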

Building Blocks

With this naming scheme plus the way we can combine sub-volumes at
execution time we have already come quite far, but how do we actually get these
sub-volumes onto the final machines, and how do we update them? Well,
btrfs has a feature they call “send-and-receive”. It basically allows
you to “diff” two file system versions, and generate a binary
delta. You can generate these deltas on a developer’s machine and then
push them into the user’s system, and he’ll get the exact same
sub-volume too. This is how we envision installation and updating of
operating systems, applications, runtimes, frameworks. At installation
time, we simply deserialize an initial send-and-receive delta into
our btrfs volume, and later, when a new version is released we just
add in the few bits that are new, by dropping in another
send-and-receive delta under a new sub-volume name. And we do it
exactly the same for the OS itself, for a runtime, a framework or an
app. There’s no technical distinction anymore. The underlying
operation for installing apps, runtimes, frameworks, vendor OSes, as well
as the operation for updating them is done the exact same way for all.
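In terms of plain btrfs-progs commands, this flow maps onto btrfs send (with -p to produce an incremental delta against a parent snapshot) on the vendor side, and btrfs receive on the user’s machine. A sketch that only builds the argv lists (the snapshot paths in the usage example are hypothetical):

```python
def send_cmd(snapshot, parent=None):
    """Vendor side: serialize a read-only snapshot, optionally as an
    incremental delta against a previously shipped release."""
    cmd = ["btrfs", "send"]
    if parent:
        cmd += ["-p", parent]  # only ship what changed relative to parent
    return cmd + [snapshot]

def receive_cmd(target_dir):
    """User side: materialize the stream as a new sub-volume below target_dir."""
    return ["btrfs", "receive", target_dir]
```

Piping, say, send_cmd("/vol/usr:org.fedoraproject.WorkStation:x86_64:24.8", parent="/vol/usr:org.fedoraproject.WorkStation:x86_64:24.7") into receive_cmd("/vol") would deliver the 24.8 tree as just the bits that are new.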

Of course, keeping multiple full /usr trees around sounds like an
awful lot of waste, after all they will contain a lot of very similar
data, since a lot of resources are shared between distributions,
frameworks and runtimes. However, thankfully btrfs actually is able to
de-duplicate this for us. If we add in a new app snapshot, this simply
adds in the new files that changed. Moreover different runtimes and
operating systems might actually end up sharing the same tree.

Even though the example above focuses primarily on the end-user,
desktop side of things, the concept is also extremely powerful in
server scenarios. For example, it is easy to build your own usr
trees and deliver them to your hosts using this scheme. The usr
sub-volumes are supposed to be something that administrators can put
together. After deserializing them into a couple of hosts, you can
trivially instantiate them as OS containers there, simply by adding a
new root sub-volume for each instance, referencing the usr tree you
just put together. Instantiating OS containers hence becomes as easy
as creating a new btrfs sub-volume. And you can still update the images
nicely, get fully double-buffered updates and everything.

And of course, this scheme also applies well to embedded
use-cases. Regardless if you build a TV, an IVI system or a phone: you
can put together your OS versions as usr trees, and then use
btrfs-send-and-receive facilities to deliver them to the systems, and
update them there.

Many people when they hear the word “btrfs” instantly reply with “is
it ready yet?”. Thankfully, most of the functionality we really need
here is strictly read-only. With the exception of the home
sub-volumes (see below) all snapshots are strictly read-only, and are
delivered as immutable vendor trees onto the devices. They never are
changed. Even if btrfs might still be immature, for this kind of
read-only logic it should be more than good enough.

Note that this scheme also enables doing fat systems: for example,
an installer image could include a Fedora version compiled for x86-64,
one for i386, one for ARM, all in the same btrfs volume. Due to btrfs’
de-duplication they will share as much as possible, and when the image
is booted up the right sub-volume is automatically picked. Something
similar of course applies to the apps too!

This also allows us to implement something that we like to call
Operating-System-As-A-Virus. Installing a new system is little more than:

  • Creating a new GPT partition table
  • Adding an EFI System Partition (FAT) to it
  • Adding a new btrfs volume to it
  • Deserializing a single usr sub-volume into the btrfs volume
  • Installing a boot loader into the EFI System Partition
  • Rebooting
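Expressed as ordinary commands, those steps could look roughly like this. This is a hypothetical sketch only: the device name, partition sizes, mount point, and the choice of sgdisk/bootctl are illustrative assumptions, not a prescribed procedure:

```python
def replication_steps(device, usr_stream):
    """Return argv lists corresponding to the installation steps above.
    Purely illustrative; partition naming assumes e.g. /dev/sdb1, /dev/sdb2."""
    return [
        ["sgdisk", "--zap-all", device],                        # new GPT partition table
        ["sgdisk", "-n", "1:0:+512M", "-t", "1:ef00", device],  # EFI System Partition
        ["sgdisk", "-n", "2:0:0", device],                      # btrfs partition
        ["mkfs.vfat", device + "1"],
        ["mkfs.btrfs", device + "2"],
        ["sh", "-c", "btrfs receive /mnt < " + usr_stream],     # deserialize usr sub-volume
        ["bootctl", "install"],                                 # boot loader into the ESP
        ["reboot"],
    ]
```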

Now, since the only real vendor data you need is the usr sub-volume,
you can trivially duplicate this onto any block device you want. Let’s
say you are a happy Fedora user, and you want to provide a friend with
his own installation of this awesome system, all on a USB stick. All
you have to do for this is do the steps above, using your installed
usr tree as source to copy. And there you go! And you don’t have to
be afraid that any of your personal data is copied too, as the usr
sub-volume is the exact version your vendor provided you with. Or in
other words: there’s no distinction anymore between installer images
and installed systems. It’s all the same. Installation becomes
replication, not more. Live-CDs and installed systems can be fully
instances of the same image.

Note that in this design apps are actually developed against a single,
very specific runtime, that contains all libraries they can link against
(including a specific glibc version!). Any library that is not
included in the runtime the developer picked must be included in the
app itself. This is similar to how apps on Android declare one very
specific Android version they are developed against. This greatly
simplifies application installation, as there’s no dependency hell:
each app pulls in one runtime, and the app is actually free to pick
which one, as you can have multiple installed, though only one is used
by each app.
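The matching step at app start-up can be sketched as follows. One assumption beyond the text: judging by the example listing above, the app’s <runtime> field is matched against the last dot-separated component of the runtime’s vendor id (the proposal does not pin this down, so treat it as illustrative):

```python
def find_runtime(app_name, subvolumes):
    """Resolve the runtime sub-volume an app will be mounted with."""
    _, _, runtime_ref, arch, _ = app_name.split(":")
    matches = [
        s for s in subvolumes
        if s.startswith("runtime:")
        and s.split(":")[1].rsplit(".", 1)[-1] == runtime_ref
        and s.split(":")[2] == arch
    ]
    if not matches:
        raise LookupError("no runtime %s for %s" % (runtime_ref, arch))
    # Prefer the newest installed version (naive dotted-numeric compare).
    return max(matches, key=lambda s: [int(p) for p in s.split(":")[3].split(".") if p.isdigit()])
```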

Also note that operating systems built this way will never see
“half-updated” systems, as it is common when a system is updated using
RPM/dpkg. When updating the system the code will either run the old or
the new version, but it will never see part of the old files and part
of the new files. This is the same for apps, runtimes, and frameworks.

Where We Are Now

We are currently working on a lot of the groundwork necessary for
this. This scheme relies on the ability to monopolize the
vendor OS resources in /usr, which is the key of what I described in
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems
a few weeks back. Then, of course, for the full desktop app concept we
need a strong sandbox, that does more than just hiding files from the
file system view. After all with an app concept like the above the
primary interfacing between the executed desktop apps and the rest of the
system is via IPC (which is why we work on kdbus and teach it all
kinds of sand-boxing features), and the kernel itself. Harald Hoyer has
started working on generating the btrfs send-and-receive images based
on Fedora.

Getting to the full scheme will take a while. Currently we have many
of the building blocks ready, but some major items are missing. For
example, we push quite a few problems into btrfs, that other solutions
try to solve in user space. One of them is actually
signing/verification of images. The btrfs maintainers are working on
adding this to the code base, but currently nothing exists. This
functionality is essential though to come to a fully verified system
where a trust chain exists all the way from the firmware to the
apps. Also, to make the home sub-volume scheme fully workable we
actually need encrypted sub-volumes, so that the sub-volume’s
pass-phrase can be used for authenticating users in PAM. This doesn’t
exist either.

Working towards this scheme is a gradual process. Many of the steps we
require for this are useful outside of the grand scheme though, which
means we can slowly work towards the goal, and our users can already
take benefit of what we are working on as we go.

Also, and most importantly, this is not really a departure from
traditional operating systems:

Each app, each OS and each runtime sees a traditional Unix hierarchy with
/usr, /home, /opt, /var, /etc. It executes in an environment that is
pretty much identical to how it would be run on traditional systems.

There’s no need to fully move to a system that uses only btrfs and
follows strictly this sub-volume scheme. For example, we intend to
provide implicit support for systems that are installed on ext4 or
xfs, or that are put together with traditional packaging tools such as
RPM or dpkg: if the user tries to install a
runtime/app/framework/os image on a system that doesn’t use btrfs so
far, it can just create a loop-back btrfs image in /var, and push the
data into that. Even we developers will run our stuff like this for a
while, after all this new scheme is not particularly useful for highly
individualized systems, and we developers usually tend to run
systems like that.

Also note that this is in no way a departure from packaging systems like
RPM or DEB. Even if the new scheme we propose is used for installing
and updating a specific system, it is RPM/DEB that is used to put
together the vendor OS tree initially. Hence, even in this scheme
RPM/DEB are highly relevant, though not strictly as an end-user tool
anymore, but as a build tool.

So Let’s Summarize Again What We Propose

  • We want a unified scheme for how we can install and update OS images,
    user apps, runtimes and frameworks.

  • We want a unified scheme for how you can relatively freely mix OS
    images, apps, runtimes and frameworks on the same system.

  • We want a fully trusted system, where cryptographic verification of
    all executed code can be done, all the way to the firmware, as
    standard feature of the system.

  • We want to allow app vendors to write their programs against very
    specific frameworks, in the knowledge that they will end up being
    executed with the exact same set of libraries they chose.

  • We want to allow parallel installation of multiple OSes and versions
    of them, multiple runtimes in multiple versions, as well as multiple
    frameworks in multiple versions. And of course, multiple apps in
    multiple versions.

  • We want everything double buffered (or actually n-fold buffered), to
    ensure we can reliably update/rollback versions, in particular to
    safely do automatic updates.

  • We want a system where updating a runtime, OS, framework, or OS
    container is as simple as adding in a new snapshot and restarting
    the runtime/OS/framework/OS container.

  • We want a system where we can easily instantiate a number of OS
    instances from a single vendor tree, with zero difference in doing
    this in order to be able to boot it on bare metal/VM or as a
    container.

  • We want to enable Linux to have an open scheme that people can use
    to build app markets and similar schemes, not restricted to a
    specific vendor.

Final Words

I’ll be talking about this at LinuxCon Europe in October. I originally
intended to discuss this at the Linux Plumbers Conference (which I
assumed was the right forum for this kind of major plumbing level
improvement), and at linux.conf.au, but there was no interest in my
session submissions there…

Of course this is all work in progress. These are our current ideas we
are working towards. As we progress we will likely change a number of
things. For example, the precise naming of the sub-volumes might look
very different in the end.

Of course, we are developers of the systemd project. Implementing this
scheme is not just a job for the systemd developers. This is a
reinvention of how distributions work, and hence needs great support from
the distributions. We really hope we can trigger some interest by
publishing this proposal now, to get the distributions on board. This
after all is explicitly not supposed to be a solution for one specific
project and one specific vendor product, we care about making this
open, and solving it for the generic case, without cutting corners.

If you have any questions about this, you know how you can reach us
(IRC, mail, G+, …).

The future is going to be awesome!

systemd for Administrators, Part II

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/systemd-for-admins-2.html

Here’s the second installment of my ongoing series about systemd for administrators.

Which Service Owns Which Processes?

On most Linux systems the number of processes that are running by
default is substantial. Knowing which process does what and which
service it belongs to becomes increasingly difficult. Some services even maintain
a couple of worker processes which clutter the “ps” output with
many additional processes that are often not easy to recognize. This is
further complicated if daemons spawn arbitrary 3rd-party processes, as
Apache does with CGI processes, or cron does with user jobs.

A slight remedy for this is often the process inheritance tree, as
shown by “ps xaf“. However this is usually not reliable, as processes
whose parents die get reparented to PID 1, and hence all information
about inheritance gets lost. If a process “double forks” it hence loses
its relationships to the processes that started it. (This actually is
supposed to be a feature and is relied on for the traditional Unix
daemonizing logic.) Furthermore processes can freely change their names
with prctl(PR_SET_NAME) or by patching argv[0], thus making
it harder to recognize them. In fact they can play hide-and-seek with
the administrator pretty nicely this way.

In systemd we place every process that is spawned in a control group
named after its service. Control groups (or cgroups)
at their most basic are simply groups of processes that can be
arranged in a hierarchy and labelled individually. When processes
spawn other processes these children are automatically made members of
the parent’s cgroup. Leaving a cgroup is not possible for unprivileged
processes. Thus, cgroups can be used as an effective way to label
processes after the service they belong to and be sure that the
service cannot escape from the label, regardless how often it forks or
renames itself. Furthermore this can be used to safely kill a service
and all processes it created, again with no chance of escaping.
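This labelling is visible directly in /proc/<pid>/cgroup. A small sketch that extracts the systemd cgroup path from that file’s contents — matching the named name=systemd hierarchy used at the time this was written, plus the unified-hierarchy form (0::/…) found on current systems:

```python
def systemd_cgroup(proc_cgroup_text):
    """Given the text of /proc/<pid>/cgroup, return the path in the
    systemd hierarchy (named hierarchy or unified '0::' line)."""
    for line in proc_cgroup_text.splitlines():
        _hier, controllers, path = line.split(":", 2)
        if controllers in ("name=systemd", ""):
            return path
    return None
```

Feeding it open("/proc/self/cgroup").read() on a systemd system yields the service (or user session) path the current process belongs to.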

In today’s installment I want to introduce you to two commands you
may use to relate systemd services and processes. The first one is
the well known ps command, which has been updated to show
cgroup information along with the other process details. And this is
how it looks:

$ ps xawf -eo pid,user,cgroup,args
  PID USER     CGROUP                              COMMAND
    2 root     -                                   [kthreadd]
    3 root     -                                    \_ [ksoftirqd/0]
 4281 root     -                                    \_ [flush-8:0]
    1 root     name=systemd:/systemd-1             /sbin/init
  455 root     name=systemd:/systemd-1/sysinit.service /sbin/udevd -d
28188 root     name=systemd:/systemd-1/sysinit.service  \_ /sbin/udevd -d
28191 root     name=systemd:/systemd-1/sysinit.service  \_ /sbin/udevd -d
 1096 dbus     name=systemd:/systemd-1/dbus.service /bin/dbus-daemon --system --address=systemd: --nofork --systemd-activation
 1131 root     name=systemd:/systemd-1/auditd.service auditd
 1133 root     name=systemd:/systemd-1/auditd.service  \_ /sbin/audispd
 1135 root     name=systemd:/systemd-1/auditd.service      \_ /usr/sbin/sedispatch
 1171 root     name=systemd:/systemd-1/NetworkManager.service /usr/sbin/NetworkManager --no-daemon
 4028 root     name=systemd:/systemd-1/NetworkManager.service  \_ /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-wlan0.pid -lf /var/lib/dhclient/dhclient-7d32a784-ede9-4cf6-9ee3-60edc0bce5ff-wlan0.lease -
 1175 avahi    name=systemd:/systemd-1/avahi-daemon.service avahi-daemon: running [epsilon.local]
 1194 avahi    name=systemd:/systemd-1/avahi-daemon.service  \_ avahi-daemon: chroot helper
 1193 root     name=systemd:/systemd-1/rsyslog.service /sbin/rsyslogd -c 4
 1195 root     name=systemd:/systemd-1/cups.service cupsd -C /etc/cups/cupsd.conf
 1207 root     name=systemd:/systemd-1/mdmonitor.service mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid
 1210 root     name=systemd:/systemd-1/irqbalance.service irqbalance
 1216 root     name=systemd:/systemd-1/dbus.service /usr/sbin/modem-manager
 1219 root     name=systemd:/systemd-1/dbus.service /usr/libexec/polkit-1/polkitd
 1242 root     name=systemd:/systemd-1/dbus.service /usr/sbin/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -B -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
 1249 68       name=systemd:/systemd-1/haldaemon.service hald
 1250 root     name=systemd:/systemd-1/haldaemon.service  \_ hald-runner
 1273 root     name=systemd:/systemd-1/haldaemon.service      \_ hald-addon-input: Listening on /dev/input/event3 /dev/input/event9 /dev/input/event1 /dev/input/event7 /dev/input/event2 /dev/input/event0 /dev/input/event8
 1275 root     name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-rfkill-killswitch
 1284 root     name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-leds
 1285 root     name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-generic-backlight
 1287 68       name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-acpi
 1317 root     name=systemd:/systemd-1/abrtd.service /usr/sbin/abrtd -d -s
 1332 root     name=systemd:/systemd-1/getty@.service/tty2 /sbin/mingetty tty2
 1339 root     name=systemd:/systemd-1/getty@.service/tty3 /sbin/mingetty tty3
 1342 root     name=systemd:/systemd-1/getty@.service/tty5 /sbin/mingetty tty5
 1343 root     name=systemd:/systemd-1/getty@.service/tty4 /sbin/mingetty tty4
 1344 root     name=systemd:/systemd-1/crond.service crond
 1346 root     name=systemd:/systemd-1/getty@.service/tty6 /sbin/mingetty tty6
 1362 root     name=systemd:/systemd-1/sshd.service /usr/sbin/sshd
 1376 root     name=systemd:/systemd-1/prefdm.service /usr/sbin/gdm-binary -nodaemon
 1391 root     name=systemd:/systemd-1/prefdm.service  \_ /usr/libexec/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 --force-active-vt
 1394 root     name=systemd:/systemd-1/prefdm.service      \_ /usr/bin/Xorg :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-f2KUOh/database -nolisten tcp vt1
 1495 root     name=systemd:/user/lennart/1             \_ pam: gdm-password
 1521 lennart  name=systemd:/user/lennart/1                 \_ gnome-session
 1621 lennart  name=systemd:/user/lennart/1                     \_ metacity
 1635 lennart  name=systemd:/user/lennart/1                     \_ gnome-panel
 1638 lennart  name=systemd:/user/lennart/1                     \_ nautilus
 1640 lennart  name=systemd:/user/lennart/1                     \_ /usr/libexec/polkit-gnome-authentication-agent-1
 1641 lennart  name=systemd:/user/lennart/1                     \_ /usr/bin/seapplet
 1644 lennart  name=systemd:/user/lennart/1                     \_ gnome-volume-control-applet
 1646 lennart  name=systemd:/user/lennart/1                     \_ /usr/sbin/restorecond -u
 1652 lennart  name=systemd:/user/lennart/1                     \_ /usr/bin/devilspie
 1662 lennart  name=systemd:/user/lennart/1                     \_ nm-applet --sm-disable
 1664 lennart  name=systemd:/user/lennart/1                     \_ gnome-power-manager
 1665 lennart  name=systemd:/user/lennart/1                     \_ /usr/libexec/gdu-notification-daemon
 1670 lennart  name=systemd:/user/lennart/1                     \_ /usr/libexec/evolution/2.32/evolution-alarm-notify
 1672 lennart  name=systemd:/user/lennart/1                     \_ /usr/bin/python /usr/share/system-config-printer/applet.py
 1674 lennart  name=systemd:/user/lennart/1                     \_ /usr/lib64/deja-dup/deja-dup-monitor
 1675 lennart  name=systemd:/user/lennart/1                     \_ abrt-applet
 1677 lennart  name=systemd:/user/lennart/1                     \_ bluetooth-applet
 1678 lennart  name=systemd:/user/lennart/1                     \_ gpk-update-icon
 1408 root     name=systemd:/systemd-1/console-kit-daemon.service /usr/sbin/console-kit-daemon --no-daemon
 1419 gdm      name=systemd:/systemd-1/prefdm.service /usr/bin/dbus-launch --exit-with-session
 1453 root     name=systemd:/systemd-1/dbus.service /usr/libexec/upowerd
 1473 rtkit    name=systemd:/systemd-1/rtkit-daemon.service /usr/libexec/rtkit-daemon
 1496 root     name=systemd:/systemd-1/accounts-daemon.service /usr/libexec/accounts-daemon
 1499 root     name=systemd:/systemd-1/systemd-logger.service /lib/systemd/systemd-logger
 1511 lennart  name=systemd:/systemd-1/prefdm.service /usr/bin/gnome-keyring-daemon --daemonize --login
 1534 lennart  name=systemd:/user/lennart/1        dbus-launch --sh-syntax --exit-with-session
 1535 lennart  name=systemd:/user/lennart/1        /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
 1603 lennart  name=systemd:/user/lennart/1        /usr/libexec/gconfd-2
 1612 lennart  name=systemd:/user/lennart/1        /usr/libexec/gnome-settings-daemon
 1615 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfsd
 1626 lennart  name=systemd:/user/lennart/1        /usr/libexec//gvfs-fuse-daemon /home/lennart/.gvfs
 1634 lennart  name=systemd:/user/lennart/1        /usr/bin/pulseaudio --start --log-target=syslog
 1649 lennart  name=systemd:/user/lennart/1         \_ /usr/libexec/pulse/gconf-helper
 1645 lennart  name=systemd:/user/lennart/1        /usr/libexec/bonobo-activation-server --ac-activate --ior-output-fd=24
 1668 lennart  name=systemd:/user/lennart/1        /usr/libexec/im-settings-daemon
 1701 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfs-gdu-volume-monitor
 1707 lennart  name=systemd:/user/lennart/1        /usr/bin/gnote --panel-applet --oaf-activate-iid=OAFIID:GnoteApplet_Factory --oaf-ior-fd=22
 1725 lennart  name=systemd:/user/lennart/1        /usr/libexec/clock-applet
 1727 lennart  name=systemd:/user/lennart/1        /usr/libexec/wnck-applet
 1729 lennart  name=systemd:/user/lennart/1        /usr/libexec/notification-area-applet
 1733 root     name=systemd:/systemd-1/dbus.service /usr/libexec/udisks-daemon
 1747 root     name=systemd:/systemd-1/dbus.service  \_ udisks-daemon: polling /dev/sr0
 1759 lennart  name=systemd:/user/lennart/1        gnome-screensaver
 1780 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfsd-trash --spawner :1.9 /org/gtk/gvfs/exec_spaw/0
 1864 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfs-afc-volume-monitor
 1874 lennart  name=systemd:/user/lennart/1        /usr/libexec/gconf-im-settings-daemon
 1903 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfsd-burn --spawner :1.9 /org/gtk/gvfs/exec_spaw/1
 1909 lennart  name=systemd:/user/lennart/1        gnome-terminal
 1913 lennart  name=systemd:/user/lennart/1         \_ gnome-pty-helper
 1914 lennart  name=systemd:/user/lennart/1         \_ bash
29231 lennart  name=systemd:/user/lennart/1         |   \_ ssh tango
 2221 lennart  name=systemd:/user/lennart/1         \_ bash
 4193 lennart  name=systemd:/user/lennart/1         |   \_ ssh tango
 2461 lennart  name=systemd:/user/lennart/1         \_ bash
29219 lennart  name=systemd:/user/lennart/1         |   \_ emacs systemd-for-admins-1.txt
15113 lennart  name=systemd:/user/lennart/1         \_ bash
27251 lennart  name=systemd:/user/lennart/1             \_ empathy
29504 lennart  name=systemd:/user/lennart/1             \_ ps xawf -eo pid,user,cgroup,args
 1968 lennart  name=systemd:/user/lennart/1        ssh-agent
 1994 lennart  name=systemd:/user/lennart/1        gpg-agent --daemon --write-env-file
18679 lennart  name=systemd:/user/lennart/1        /bin/sh /usr/lib64/firefox-3.6/run-mozilla.sh /usr/lib64/firefox-3.6/firefox
18741 lennart  name=systemd:/user/lennart/1         \_ /usr/lib64/firefox-3.6/firefox
28900 lennart  name=systemd:/user/lennart/1             \_ /usr/lib64/nspluginwrapper/npviewer.bin --plugin /usr/lib64/mozilla/plugins/libflashplayer.so --connection /org/wrapper/NSPlugins/libflashplayer.so/18741-6
 4016 root     name=systemd:/systemd-1/sysinit.service /usr/sbin/bluetoothd --udev
 4094 smmsp    name=systemd:/systemd-1/sendmail.service sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
 4096 root     name=systemd:/systemd-1/sendmail.service sendmail: accepting connections
 4112 ntp      name=systemd:/systemd-1/ntpd.service /usr/sbin/ntpd -n -u ntp:ntp -g
27262 lennart  name=systemd:/user/lennart/1        /usr/libexec/mission-control-5
27265 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-haze
27268 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-logger
27270 lennart  name=systemd:/user/lennart/1        /usr/libexec/dconf-service
27280 lennart  name=systemd:/user/lennart/1        /usr/libexec/notification-daemon
27284 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-gabble
27285 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-salut
27297 lennart  name=systemd:/user/lennart/1        /usr/libexec/geoclue-yahoo

(Note that this output is shortened, I have removed most of the
kernel threads here, since they are not relevant in the context of
this blog story)

In the third column you see the cgroup systemd assigned to each
process. You’ll find that the udev processes are in the
name=systemd:/systemd-1/sysinit.service cgroup, which is
where systemd places all processes started by the
sysinit.service service, which covers early boot.

My personal recommendation is to set the shell alias psc
to the ps command line shown above:

alias psc='ps xawf -eo pid,user,cgroup,args'

With this, the service information of processes is just four keypresses away!

A different way to present the same information is the
systemd-cgls tool we ship with systemd. It shows the cgroup
hierarchy in a pretty tree. Its output looks like this:

$ systemd-cgls
+    2 [kthreadd]
+ 4281 [flush-8:0]
+ user
| \ lennart
|   \ 1
|     +  1495 pam: gdm-password
|     +  1521 gnome-session
|     +  1534 dbus-launch --sh-syntax --exit-with-session
|     +  1535 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
|     +  1603 /usr/libexec/gconfd-2
|     +  1612 /usr/libexec/gnome-settings-daemon
|     +  1615 /usr/libexec/gvfsd
|     +  1621 metacity
|     +  1626 /usr/libexec//gvfs-fuse-daemon /home/lennart/.gvfs
|     +  1634 /usr/bin/pulseaudio --start --log-target=syslog
|     +  1635 gnome-panel
|     +  1638 nautilus
|     +  1640 /usr/libexec/polkit-gnome-authentication-agent-1
|     +  1641 /usr/bin/seapplet
|     +  1644 gnome-volume-control-applet
|     +  1645 /usr/libexec/bonobo-activation-server --ac-activate --ior-output-fd=24
|     +  1646 /usr/sbin/restorecond -u
|     +  1649 /usr/libexec/pulse/gconf-helper
|     +  1652 /usr/bin/devilspie
|     +  1662 nm-applet --sm-disable
|     +  1664 gnome-power-manager
|     +  1665 /usr/libexec/gdu-notification-daemon
|     +  1668 /usr/libexec/im-settings-daemon
|     +  1670 /usr/libexec/evolution/2.32/evolution-alarm-notify
|     +  1672 /usr/bin/python /usr/share/system-config-printer/applet.py
|     +  1674 /usr/lib64/deja-dup/deja-dup-monitor
|     +  1675 abrt-applet
|     +  1677 bluetooth-applet
|     +  1678 gpk-update-icon
|     +  1701 /usr/libexec/gvfs-gdu-volume-monitor
|     +  1707 /usr/bin/gnote --panel-applet --oaf-activate-iid=OAFIID:GnoteApplet_Factory --oaf-ior-fd=22
|     +  1725 /usr/libexec/clock-applet
|     +  1727 /usr/libexec/wnck-applet
|     +  1729 /usr/libexec/notification-area-applet
|     +  1759 gnome-screensaver
|     +  1780 /usr/libexec/gvfsd-trash --spawner :1.9 /org/gtk/gvfs/exec_spaw/0
|     +  1864 /usr/libexec/gvfs-afc-volume-monitor
|     +  1874 /usr/libexec/gconf-im-settings-daemon
|     +  1882 /usr/libexec/gvfs-gphoto2-volume-monitor
|     +  1903 /usr/libexec/gvfsd-burn --spawner :1.9 /org/gtk/gvfs/exec_spaw/1
|     +  1909 gnome-terminal
|     +  1913 gnome-pty-helper
|     +  1914 bash
|     +  1968 ssh-agent
|     +  1994 gpg-agent --daemon --write-env-file
|     +  2221 bash
|     +  2461 bash
|     +  4193 ssh tango
|     + 15113 bash
|     + 18679 /bin/sh /usr/lib64/firefox-3.6/run-mozilla.sh /usr/lib64/firefox-3.6/firefox
|     + 18741 /usr/lib64/firefox-3.6/firefox
|     + 27251 empathy
|     + 27262 /usr/libexec/mission-control-5
|     + 27265 /usr/libexec/telepathy-haze
|     + 27268 /usr/libexec/telepathy-logger
|     + 27270 /usr/libexec/dconf-service
|     + 27280 /usr/libexec/notification-daemon
|     + 27284 /usr/libexec/telepathy-gabble
|     + 27285 /usr/libexec/telepathy-salut
|     + 27297 /usr/libexec/geoclue-yahoo
|     + 28900 /usr/lib64/nspluginwrapper/npviewer.bin --plugin /usr/lib64/mozilla/plugins/libflashplayer.so --connection /org/wrapper/NSPlugins/libflashplayer.so/18741-6
|     + 29219 emacs systemd-for-admins-1.txt
|     + 29231 ssh tango
|     \ 29519 systemd-cgls
\ systemd-1
  + 1 /sbin/init
  + ntpd.service
  | \ 4112 /usr/sbin/ntpd -n -u ntp:ntp -g
  + systemd-logger.service
  | \ 1499 /lib/systemd/systemd-logger
  + accounts-daemon.service
  | \ 1496 /usr/libexec/accounts-daemon
  + rtkit-daemon.service
  | \ 1473 /usr/libexec/rtkit-daemon
  + console-kit-daemon.service
  | \ 1408 /usr/sbin/console-kit-daemon --no-daemon
  + prefdm.service
  | + 1376 /usr/sbin/gdm-binary -nodaemon
  | + 1391 /usr/libexec/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 --force-active-vt
  | + 1394 /usr/bin/Xorg :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-f2KUOh/database -nolisten tcp vt1
  | + 1419 /usr/bin/dbus-launch --exit-with-session
  | \ 1511 /usr/bin/gnome-keyring-daemon --daemonize --login
  + getty@.service
  | + tty6
  | | \ 1346 /sbin/mingetty tty6
  | + tty4
  | | \ 1343 /sbin/mingetty tty4
  | + tty5
  | | \ 1342 /sbin/mingetty tty5
  | + tty3
  | | \ 1339 /sbin/mingetty tty3
  | \ tty2
  |   \ 1332 /sbin/mingetty tty2
  + abrtd.service
  | \ 1317 /usr/sbin/abrtd -d -s
  + crond.service
  | \ 1344 crond
  + sshd.service
  | \ 1362 /usr/sbin/sshd
  + sendmail.service
  | + 4094 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
  | \ 4096 sendmail: accepting connections
  + haldaemon.service
  | + 1249 hald
  | + 1250 hald-runner
  | + 1273 hald-addon-input: Listening on /dev/input/event3 /dev/input/event9 /dev/input/event1 /dev/input/event7 /dev/input/event2 /dev/input/event0 /dev/input/event8
  | + 1275 /usr/libexec/hald-addon-rfkill-killswitch
  | + 1284 /usr/libexec/hald-addon-leds
  | + 1285 /usr/libexec/hald-addon-generic-backlight
  | \ 1287 /usr/libexec/hald-addon-acpi
  + irqbalance.service
  | \ 1210 irqbalance
  + avahi-daemon.service
  | + 1175 avahi-daemon: running [epsilon.local]
  + NetworkManager.service
  | + 1171 /usr/sbin/NetworkManager --no-daemon
  | \ 4028 /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-wlan0.pid -lf /var/lib/dhclient/dhclient-7d32a784-ede9-4cf6-9ee3-60edc0bce5ff-wlan0.lease -cf /var/run/nm-dhclient-wlan0.conf wlan0
  + rsyslog.service
  | \ 1193 /sbin/rsyslogd -c 4
  + mdmonitor.service
  | \ 1207 mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid
  + cups.service
  | \ 1195 cupsd -C /etc/cups/cupsd.conf
  + auditd.service
  | + 1131 auditd
  | + 1133 /sbin/audispd
  | \ 1135 /usr/sbin/sedispatch
  + dbus.service
  | +  1096 /bin/dbus-daemon --system --address=systemd: --nofork --systemd-activation
  | +  1216 /usr/sbin/modem-manager
  | +  1219 /usr/libexec/polkit-1/polkitd
  | +  1242 /usr/sbin/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -B -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
  | +  1453 /usr/libexec/upowerd
  | +  1733 /usr/libexec/udisks-daemon
  | +  1747 udisks-daemon: polling /dev/sr0
  | \ 29509 /usr/libexec/packagekitd
  + dev-mqueue.mount
  + dev-hugepages.mount
  \ sysinit.service
    +   455 /sbin/udevd -d
    +  4016 /usr/sbin/bluetoothd --udev
    + 28188 /sbin/udevd -d
    \ 28191 /sbin/udevd -d

(This too is shortened, in the same way.)

As you can see, this command shows the processes by their cgroup
and hence service, as systemd labels the cgroups after the
services. For example, you can easily see that the auditing service
auditd.service spawns three individual processes:
auditd, audispd and sedispatch.

If you look closely you will notice that a number of processes have
been assigned to the cgroup /user/1. At this point let's
simply note that systemd maintains not only services in cgroups,
but user session processes as well. In a later installment we'll discuss
in more detail what this is about.

So much for now, come back soon for the next installment!

On Version Control Systems

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/on-version-control-systems.html

Here’s what I have to say about today’s state of version control systems in Free Software:

We shouldn’t forget that a VC system is just a development tool. Preferring
one over the other is nothing that has any direct influence on code quality, it
doesn’t make your algorithms perform any better, or your applications look
prettier. It’s just a tool. As such it should just do its job and get out of the
way. A programmer should have religious arguments about code quality, about
algorithms or about UIs, but what he certainly should not have is religious
arguments over the feature set of specific VCSes[1].

Does this mean it doesn’t matter at all which VCS to choose? No, of course
it does matter a lot. The step from traditional VCSes to DVCS is a major one, an
important one. Starting a fresh new Free Software project today and choosing
CVS or SVN is anachronistic at best.

Which leaves of course the question, which DVCS to pick. If you take the
“get out of the way” requirement seriously then there can only be one answer to
the question: GIT. Why? It certainly (still) has a steep learning curve, and a
steeper one than most other VC systems. But what is even harder to learn than
GIT is learning all of GIT, Mercurial, Monotone, Bizarre^H^H^H^H^H^H^HBazaar,
Darcs, Arch, SVK at the same time. If every project picked a different VCS,
and you wanted to contribute to more than just a single project, then
you’d have to learn them all. And learning them all means learning none of them
very well. And needing to learn them all scares away people who don’t
want to learn yet another VCS just to check out your code. Fragmentation in the use of VCSes for Free Software projects hinders development.

Which brings me to the main point I want to raise with this blog story:

It is much more important to make contributing to Free Software projects
easy by choosing a VCS everyone knows well — than it is to make it easy by
choosing a VCS that everyone could learn easily.

So, which VCS has a chance of qualifying as “everyone knows
well” and is a DVCS? I would say there is only one answer to the question: GIT.
Sure, there are some high-profile projects using HG (Mozilla, Java, Solaris),
but my impression is that the vast majority of projects that are central to
free desktops do use GIT.

Certainly, some DVCSes might be nicer than others, there might be areas
where GIT is lacking in comparison to others, but those differences are tiny.
What matters more is not scaring contributors away by making it hard for them
to contribute by requiring them to learn yet another VCS.

Yes, with CVS, SVN and GIT I think I have learned enough VC systems for now.
My hunger for learning further ones is exactly zero. Let me just code, and
don’t make it hard for me by asking me to learn your favourite one, please.

Or, in other, frank words: if you start a new Open Source project today and you
don’t choose GIT as your VCS, then you are basically asking potential
contributors to go away.

ALSA recently switched from Mercurial to GIT. That was a good move.

So, please stop discussing which DVCS is the best one. It doesn’t matter. Picking one
that everyone knows is far more important.

That’s all I have to say.


[1] Of course, unless he himself develops a VC system.