Tag Archives: embedded

Detecting landmines – with spinach

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/detecting-landmines-with-spinach/

Forget sniffer dogs…we need to talk about spinach.

The team at MIT (Massachusetts Institute of Technology) have been working to transform spinach plants into a means of detection in the fight against buried munitions such as landmines.

Plant-to-human communication

MIT engineers have transformed spinach plants into sensors that can detect explosives and wirelessly relay that information to a handheld device similar to a smartphone. (Learn more: http://news.mit.edu/2016/nanobionic-spinach-plants-detect-explosives-1031)

Nanoparticles, plus tiny tubes called carbon nanotubes, are embedded into the spinach leaves where they pick up nitro-aromatics, chemicals found in the hidden munitions.

It takes the spinach approximately ten minutes to draw up water from the ground, along with any nitro-aromatics, which then bind to the polymer material wrapped around the nanotubes.

But where does the Pi come into this?

The MIT team shine a laser onto the leaves, detecting the altered fluorescence of the light emitted by the newly bonded tubes. This light is then read by a Raspberry Pi fitted with an infrared camera, resulting in a precise map of where hidden landmines are located. This signal can currently be picked up within a one-mile radius, with plans to increase the reach in future.
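
The software side of that readout isn't described in detail, but the general shape is easy to picture: capture frames from the Pi's infrared camera while the laser excites the leaf, then watch the returned signal for the change that indicates bound nitro-aromatics. Here is a minimal, purely illustrative Python sketch along those lines, assuming a Pi NoIR-style camera and the picamera library; the threshold and the crude mean-brightness test are my own placeholders, not the MIT team's method.

# Illustrative sketch only, not the MIT team's code. Assumes a Pi NoIR-type
# camera and the picamera library; the threshold value is a made-up placeholder.
import time
from picamera import PiCamera
from picamera.array import PiRGBArray

FLUORESCENCE_THRESHOLD = 90.0   # arbitrary, would need real calibration

camera = PiCamera()
camera.resolution = (640, 480)
frame = PiRGBArray(camera, size=camera.resolution)

time.sleep(2)  # give the sensor a moment to settle
for _ in camera.capture_continuous(frame, format="rgb", use_video_port=True):
    # Mean brightness of the red/near-IR channel as a crude stand-in for the
    # fluorescence signal coming back from the laser-excited leaf.
    signal = frame.array[:, :, 0].mean()
    if signal < FLUORESCENCE_THRESHOLD:
        print("Fluorescence change detected, possible nitro-aromatics")
    frame.truncate(0)  # reset the buffer before the next capture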

detecting landmines with spinach

You can also physically hack a smartphone to replace the Raspberry Pi… but why would you want to do that?

The team at MIT have already used the tech to detect hydrogen peroxide, TNT, and sarin, while co-author Prof. Michael Strano advises that the same setup can be used to detect “virtually anything”.

“The plants could be used for defence applications, but also to monitor public spaces for terrorism-related activities, since we show both water and airborne detection.”

More information on the paper can be found at the MIT website.

The post Detecting landmines – with spinach appeared first on Raspberry Pi.

AWS Week in Review – October 24, 2016

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-week-in-review-october-24-2016/

Another busy week in AWS-land! Today’s post included submissions from 21 internal and external contributors, along with material from my RSS feeds, my inbox, and other things that come my way. To join in the fun, create (or find) some awesome AWS-related content and submit a pull request!

Monday, October 24

Tuesday, October 25

Wednesday, October 26

Thursday, October 27

Friday, October 28

Saturday, October 29

Sunday, October 30

New & Notable Open Source

  • aws-git-backed-static-website is a Git-backed static website generator powered entirely by AWS.
  • rds-pgbadger fetches log files from an Amazon RDS for PostgreSQL instance and generates a beautiful pgBadger report.
  • aws-lambda-redshift-copy is an AWS Lambda function that automates the copy command in Redshift.
  • VarnishAutoScalingCluster contains code and instructions for setting up a shared, horizontally scalable Varnish cluster that scales up and down using Auto Scaling groups.
  • aws-base-setup contains starter templates for developing AWS CloudFormation-based AWS stacks.
  • terraform_f5 contains Terraform scripts to instantiate a BIG-IP in AWS.
  • claudia-bot-builder creates chat bots for Facebook, Slack, Skype, Telegram, GroupMe, Kik, and Twilio and deploys them to AWS Lambda in minutes.
  • aws-iam-ssh-auth is a set of scripts used to authenticate users connecting to EC2 via SSH with IAM.
  • go-serverless sets up a go.cd server for serverless application deployment in AWS.
  • awsq is a helper script to run batch jobs on AWS using SQS.
  • respawn generates CloudFormation templates from YAML specifications.

New SlideShare Presentations

New Customer Success Stories

  • AbemaTV – AbemaTV is an Internet media-services company that operates one of Japan’s leading streaming platforms, FRESH! by AbemaTV. The company built its microservices platform on Amazon EC2 Container Service and uses an Amazon Aurora data store for its write-intensive microservices—such as timelines and chat—and a MySQL database on Amazon RDS for the remaining microservices APIs. By using AWS, AbemaTV has been able to quickly deploy its new platform at scale with minimal engineering effort.
  • Celgene – Celgene uses AWS to enable secure collaboration between internal and external researchers, allow individual scientists to launch hundreds of compute nodes, and reduce the time it takes to do computational jobs from weeks or months to less than a day. Celgene is a global biopharmaceutical company that creates drugs that fight cancer and other diseases and disorders. Celgene runs its high-performance computing research clusters, as well as its research collaboration environment, on AWS.
  • Under Armour – Under Armour can scale its Connected Fitness apps to meet the demands of more than 180 million global users, innovate and deliver new products and features more quickly, and expand internationally by taking advantage of the reliability and high availability of AWS. The company is a global leader in performance footwear, apparel, and equipment. Under Armour runs its growing Connected Fitness app platform on the AWS Cloud.

New YouTube Videos

Upcoming Events

Help Wanted

Stay tuned for next week! In the meantime, follow me on Twitter and subscribe to the RSS feed.

Conservancy’s First GPL Enforcement Feedback Session

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/10/27/gpl-feedback.html

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

As I mentioned in an earlier blog post, I had the privilege of attending Embedded Linux Conference Europe (ELC EU) and the OpenWrt Summit in Berlin, Germany earlier this month. I gave a talk (for which the video is available below) at the OpenWrt Summit. I also had the opportunity to host the first of many conference sessions seeking feedback and input from the Linux developer community about Conservancy’s GPL Compliance Project for Linux Developers.

ELC EU has no “BoF Board” where you can post informal sessions. So, we scheduled the session by word of mouth over a lunch hour. We nevertheless got a good turnout (given that our session’s main competition was eating food 🙂) of about 15 people.

Most notably and excitingly, Harald Welte, well-known Netfilter developer
and leader of gpl-violations.org,
was able to attend. Harald talked about his work with
gpl-violations.org enforcing his own copyrights in Linux, and
explained why this was important work for users of the violating devices.
He also pointed out that some of the companies that were sued during his
most active period of gpl-violations.org are now regular upstream
contributors.

Two people who work in the for-profit license compliance industry attended
as well. Some of the discussion focused on usual debates that charities
involved in compliance commonly have with the for-profit compliance
industry. Specifically, one of them asked how much compliance is
enough, by percentage?
I responded to his question on two axes.
First, I addressed the axis of how many enforcement matters does the GPL
Compliance Program for Linux Developers do, by percentage of products
violating the GPL
? There are, at any given time, hundreds of
documented GPL violating products, and our coalition works on only a tiny
percentage of those per year. It’s a sad fact that only that tiny
percentage of the products that violate Linux are actually pursued to
compliance.

On the other axis, I discussed the percentage on a per-product basis.
From that point of view, the question is really: Is there a ‘close
enough to compliance’ that we can as a community accept and forget
about the remainder?
From my point of view, we frequently compromise
anyway, since the GPL doesn’t require someone to prepare code properly for
upstream contribution. Thus, we all often accept compliance once someone
completes the bare minimum of obligations literally written in the GPL, but
give us a source release that cannot easily be converted to an upstream
contribution. So, from that point of view, we’re often accepting a
less-than-optimal outcome. The GPL by itself does not inspire upstreaming;
the other collaboration techniques that are enabled in our community
because of the GPL work to finish that job, and adherence to
the Principles assures
that process can work. Having many people who work with companies in
different ways assures that as a larger community, we try all the different
strategies to encourage participation, and inspire today’s violators to
become tomorrow’s upstream contributors — as Harald mentioned has already often happened.

That same axis does include one rare but important compliance problem: when
a violator is particularly savvy, and refuses to release very specific
parts of their Linux code
(as VMware did),
even though the license requires it. In those cases, we certainly cannot
and should not accept anything less than required compliance — lest
companies begin holding back all the most interesting parts of the code
that GPL requires them to produce. If that happened, the GPL would cease
to function correctly for Linux.

After that part of the discussion, we turned to considerations of
corporate contributors, and how they responded to enforcement. Wolfram
Sang, one of the developers in Conservancy’s coalition, spoke up on this
point. He expressed that the focus on for-profit company contributions,
and the achievements of those companies, seemed unduly prioritized by some
in the community. As an independent contractor and individual developer,
Wolfram believes that contributions from people like him are essential to a
diverse developer base, that their opinions should be taken into account,
and their achievements respected.

I found Wolfram’s points particularly salient. My view is that Free
Software development, including for Linux, succeeds because both powerful
and wealthy entities and individuals contribute and collaborate
together on equal footing. While companies have typically only enforced the
GPL on their own copyrights for business reasons (e.g., there is at least
one example of a major Linux-contributing company using GPL enforcement
merely as a counter-punch in a patent lawsuit), individual developers who
join Conservancy’s coalition follow community principles and enforce to
defend the rights of their users.

At the end of the session, I asked two developers who hadn’t spoken during
the session, and who aren’t members of Conservancy’s coalition, their
opinion on how enforcement was historically carried out by
gpl-violations.org, and how it is currently carried out by Conservancy’s
GPL Compliance Program for Linux Developers. Both responded with a simple
response (paraphrased): it seems like a good thing to do; keep doing
it!

I finished up the session by inviting everyone to join the principles-discuss list, where public discussion about GPL enforcement under the Principles has already begun. I also invited everyone to attend my talk, which took place an hour later at the OpenWrt Summit, co-located with ELC EU.

In that talk, I spoke about a specific example of community success in GPL
enforcement. As explained on the
OpenWrt history page,
OpenWrt was initially made possible thanks to GPL enforcement done by
BusyBox and Linux contributors in a coalition together. (Those who want to
hear more about the connection between GPL enforcement and OpenWrt can view
my talk.)

Since there weren’t opportunities to promote impromptu sessions on-site,
this event was a low-key (but still quite nice) start to Conservancy’s
planned year-long effort seeking feedback about GPL compliance and
enforcement. Our next session is an official BoF session at Linux Plumbers Conference, scheduled for next Thursday 3 November at 18:00. It will be led by my colleagues Karen Sandler and Brett Smith.

How Different Stakeholders Frame Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/how_different_s.html

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society — ­and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions­ — such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed­ — require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: e.g. the FBI vs. Apple and their iPhone security.

The Compute Module – now in an NEC display near you

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/compute-module-nec-display-near-you/

Back in April 2014, we launched the Compute Module to provide hardware developers with a way to incorporate Raspberry Pi technology into their own products. Since then we’ve seen it used to build home media players, industrial control systems, and everything in between.

Earlier this week, NEC announced that they would be adding Compute Module support to their next-generation large-format displays, starting with 40″, 48″ and 55″ models in January 2017 and eventually scaling all the way up to a monstrous 98″ (!!) by the end of the year. These are commercial-grade displays designed for use in brightly-lit public spaces such as schools, offices, shops and railway stations.

Believe it or not, these are the small ones.

NEC have already lined up a range of software partners in retail, airport information systems, education and corporate to provide presentation and signage software which runs on the Compute Module platform. You’ll be seeing these roll out in a lot of locations that you visit frequently.

Each display has an internal bay which accepts an adapter board loaded with either the existing Compute Module, or the upcoming Compute Module 3, which incorporates the BCM2837 application processor and 1GB of LPDDR2 memory found on the Raspberry Pi 3 Model B. We’re expecting to do a wider release of Compute Module 3 to everybody around the end of the year.

The Compute Module in situ

We’ve been working on this project with NEC for over a year now, and are very excited that it’s finally seeing the light of day. It’s an incredible vote of confidence in the Raspberry Pi Compute Module platform from a blue-chip hardware vendor, and will hopefully be the first of many.

Now, here’s some guy to tell you more about what’s going on behind the screens you walk past every day on your commute.

‘The Power to Surprise’ live stream at Display Trends Forum 2016 – NEC Teams Up With Raspberry Pi

NEC Display Solutions today announced that it will be sharing an open platform modular approach with Raspberry Pi, enabling a seamless integration of Raspberry Pi’s devices with NEC’s displays. NEC’s leading position in offering the widest product range of display solutions matches perfectly with the Raspberry Pi, the organisation responsible for developing the award-winning range of low-cost, high-performance computers.

The post The Compute Module – now in an NEC display near you appeared first on Raspberry Pi.

Security Economics of the Internet of Things

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/security_econom_1.html

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The “distributed” part means that other insecure computers on the Internet — sometimes in the millions — are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it’s a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender’s capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can’t get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software — and, in part, compete on its security. This isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

Even worse, most of these devices don’t have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can’t update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that’s a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

This essay previously appeared on Vice Motherboard.

Slashdot thread.

Here are some of the things that are vulnerable.

EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.

Succeeding MegaZeux

Post Syndicated from Eevee original https://eev.ee/blog/2016/10/06/succeeding-megazeux/

In the beginning, there was ZZT. ZZT was a set of little shareware games for DOS that used VGA text mode for all the graphics, leading to such whimsical Rogue-like choices as ä for ammo pickups, Ω for lions, and for keys. It also came with an editor, including a small programming language for creating totally custom objects, which gave it the status of “game creation system” and a legacy that survives even today.

A little later on, there was MegaZeux. MegaZeux was something of a spiritual successor to ZZT, created by (as I understand it) someone well-known for her creative abuse of ZZT’s limitations. It added quite a few bells and whistles, most significantly a built-in font editor, which let aspiring developers draw simple sprites rather than rely on whatever they could scrounge from the DOS font.

And then…

And then, nothing. MegaZeux was updated for quite a while, and (unlike ZZT) has even been ported to SDL so it can actually run on modern operating systems. But there was never a third entry in this series, another engine worthy of calling these its predecessors.

I think that’s a shame.

The legacy

Plenty of people have never heard of ZZT, and far more have never heard of MegaZeux, so here’s a brief primer.

Both were released as “first-episode” shareware: they came with one game free, and you could pony up some cash to get the sequels. Those first games — Town of ZZT and Caverns of Zeux — have these moderately iconic opening scenes.

Town of ZZT
Caverns of Zeux

In the intervening decades, all of the sequels have been released online for free. If you want to try them yourself, ZZT 3.2 includes Town of ZZT and its sequels (but must be run in DOSBox), and you can get MegaZeux 2.84c, Caverns of Zeux, and the rest of the Zeux series separately.

Town of ZZT has you, the anonymous player, wandering around a loosely-themed “town” in search of five purple keys. It’s very much a game of its time: the setting is very vague but manages to stay distinct and memorable with very light touches; the puzzles range from trivial to downright cruel; the interface itself fights against you, as you can’t carry more than one purple key at a time; and the game can be softlocked in numerous ways, only some of which have advance warning in the form of “SAVE!!!” carved directly into the environment.

The armory, and a gruff guardian
Darkness, which all players love
A few subtle hints

Caverns of Zeux is a little more cohesive, with a (thin) plot that unfolds as you progress through the game. Your objectives are slightly vaguer; you start out only knowing you’re trapped in a cave, and further information must be gleaned from NPCs. The gameplay is shaken up a couple times throughout — you discover spellbooks that give you new abilities, but later lose your primary weapon. The meat of the game is more about exploring and less about wacky Sokoban puzzles, though with many of the areas looking very similar and at least eight different-colored doors scattered throughout the game, the backtracking can get frustrating.

A charming little town
A chasm with several gem holders
The ice caves, or maybe caverns

Those are obviously a bit retro-looking now, but they’re not bad for VGA text made by individual hobbyists in 1991 and 1994. ZZT only even uses CGA’s eight bright colors. MegaZeux takes a bit more advantage of VGA capabilities to let you edit the palette as well as the font, but games are still restricted to only using 16 colors at one time.

The font ZZT was stuck with
MegaZeux's default character set

That’s great, but who cares?

A fair question!

ZZT and MegaZeux both occupy a unique game development niche. It’s the same niche as (Z)Doom, I think, and a niche that very few other tools fill.

I’ve mumbled about this on Twitter a couple times, and several people have suggested that the PICO-8 or Mario Maker might be in the same vein. I disagree wholeheartedly! ZZT, MegaZeux, and ZDoom all have two critical — and rare — things in common.

  1. You can crack open the editor, draw a box, and have a game. On the PICO-8, you are a lonely god in an empty void; you must invent physics from scratch before you can do anything else. ZZT, MegaZeux, and Doom all have enough built-in gameplay to make a variety of interesting levels right out of the gate. You can treat them as nothing more than level editors, and you’ll be hitting the ground running — no code required. And unlike most “no programming” GCSes, I mean that literally!

  2. If and when you get tired of only using the built-in objects, you can extend the engine. ZZT and MegaZeux have programmable actors built right in. Even vanilla Doom was popular enough to gain a third-party tool, DEHACKED, which could edit the compiled doom.exe to customize actor behavior. Mario Maker might be a nice and accessible environment for making games, but at the end of the day, the only thing you can make with it is Mario.

Both of these properties together make for a very smooth learning curve. You can open the editor and immediately make something, rather than needing to absorb a massive pile of upfront stuff before you can even get a sprite on the screen. Once you need to make small tweaks, you can dip your toes into robots — a custom pickup that gives you two keys at once is four lines of fairly self-explanatory code. Want an NPC with a dialogue tree? That’s a little more complex, but not much. And then suddenly you discover you’re doing programming. At the same time, you get rendering, movement, combat, collision, health, death, pickups, map transitions, menus, dialogs, saving/loading… all for free.

MegaZeux has one more nice property, the art learning curve. The built-in font is perfectly usable, but a world built from monochrome 8×14 tiles is a very comfortable place to dabble in sprite editing. You can add eyebrows to the built-in player character or slightly reshape keys to fit your own tastes, and the result will still fit the “art style” of the built-in assets. Want to try making your own sprites from scratch? Go ahead! It’s much easier to make something that looks nice when you don’t have to worry about color or line weight or proportions or any of that stuff.

It’s true that we’re in an “indie” “boom” right now, and more game-making tools are available than ever before. A determined game developer can already choose from among dozens (hundreds?) of editors and engines and frameworks and toolkits and whatnot. But the operative word there is “determined“. Not everyone has their heart set on this. The vast majority of people aren’t interested in devoting themselves to making games, so the most they’d want to do (at first) is dabble.

But programming is a strange and complex art, where dabbling can be surprisingly difficult. If you want to try out art or writing or music or cooking or dance or whatever, you can usually get started with some very simple tools and a one-word Google search. If you want to try out game development, it usually requires programming, which in turn requires a mountain of upfront context and tool choices and explanations and mysterious incantations and forty-minute YouTube videos of some guy droning on in monotone.

To me, the magic of MegaZeux is that anyone with five minutes to spare can sit down, plop some objects around, and have made a thing.

Deep dive

MegaZeux has a lot of hidden features. It also has a lot of glass walls. Is that a phrase? It should be a phrase. I mean that it’s easy to find yourself wanting to do something that seems common and obvious, yet find out quite abruptly that it’s structurally impossible.

I’m not leading towards a conclusion here, only thinking out loud. I want to explain what makes MegaZeux interesting, but also explain what makes MegaZeux limiting, but also speculate on what might improve on it. So, you know, something for everyone.

Big picture

MegaZeux is a top-down adventure-ish game engine. You can make platformers, if you fake your own gravity; you can make RPGs, if you want to build all the UI that implies.

MegaZeux games can only be played in, well, MegaZeux. Games that need instructions and multiple downloads to be played are fighting an uphill battle. It’s a simple engine that seems reasonable to deploy to the web, and I’ve heard of a couple attempts at either reimplementing the engine in JavaScript or throwing the whole shebang at emscripten, but none are yet viable.

People have somewhat higher expectations from both games and tools nowadays. But approachability is often at odds with flexibility. The more things you explicitly support, the more complicated and intimidating the interface — or the more hidden features you have to scour the manual to even find out about.

I’ve looked through the advertising screenshots of Game Maker and RPG Maker, and I’m amazed how many things are all over the place at any given time. It’s like trying to configure the old Mozilla Suite. Every new feature means a new checkbox somewhere, and eventually half of what new authors need to remember is the set of things they can safely ignore.

SLADE’s Doom map editor manages to be much simpler, but I’m not particularly happy with that, either — it’s not clever enough to save you from your mistakes (or necessarily detect them), and a lot of the jargon makes no sense unless you’ve already learned what it means somewhere else. Plus, making the most of ZDoom’s extra features tends to involve navigating ten different text files that all have different syntax and different rules.

MegaZeux has your world, some menus with objects in them, and spacebar to place something. The UI is still very DOS-era, but once you get past that, it’s pretty easy to build something.

How do you preserve that in something “modern”? I’m not sure. The only remotely-similar thing I can think of is Mario Maker, which cleverly hides a lot of customization options right in the world editor UI: placing wings on existing objects, dropping objects into blocks, feeding mushrooms to enemies to make them bigger. The downside is that Mario Maker has quite a lot of apocryphal knowledge that isn’t written down anywhere. (That’s not entirely a downside… but I could write a whole other post just exploring that one sentence.)

Graphics

Oh, no.

Graphics don’t make the game, but they’re a significant limiting factor for MegaZeux. Fixing everything to a grid means that even a projectile can only move one tile at a time. Only one character can be drawn per grid space, so objects can’t usefully be drawn on top of each other. Animations are difficult, since they eat into your 255-character budget, which limits real-time visual feedback. Most individual objects are a single tile — creating anything larger requires either a lot of manual work to keep all the parts together, or the use of multi-tile sprites which don’t quite exist on the board.

And yet! The same factors are what make MegaZeux very accessible. The tiles are small and simple enough that different art styles don’t really clash. Using a grid means simple games don’t have to think about collision detection at all. A monochromatic font can be palette-shifted, giving you colorful variants of the same objects for free.

How could you scale up the graphics but preserve the charm and approachability? Hmm.

I think the palette restrictions might be important here, but merely bumping from 2 to 8 colors isn’t quite right. The palette-shifting in MegaZeux always makes me think of keys first, and multi-colored keys make me think of Chip’s Challenge, where the key sprites were simple but lightly shaded.

All four Chips Challenge 2 keys

The game has to contain all four sprites separately. If you wanted to have a single sprite and get all of those keys by drawing it in different colors, you’d have to specify three colors per key: the base color, a lighter color, and a darker color. In other words, a ramp — a short gradient, chosen from a palette, that can represent the same color under different lighting. Here are some PICO-8 ramps, for example. What about a sprite system that drew sprites in terms of ramps rather than individual colors?

A pixel-art door in eight different color schemes

I whipped up this crappy example to illustrate. All of the doors are fundamentally the same image, and all of them use only eight colors: black, transparent, and two ramps of three colors each. The top-left door could be expressed as just “light gray” and “blue” — those colors would be expanded into ramps automatically, and black would remain black.

I don’t know how well this would work, but I’d love to see someone try it. It may not even be necessary to require all sprites be expressed this way — maybe you could import your own truecolor art if you wanted. ZDoom works kind of this way, though it’s more of a historical accident: it does support arbitrary PNGs, but vanilla Doom sprites use a custom format that’s in terms of a single global palette, and only that custom format can be subjected to palette manipulation.
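
To make the ramp idea a little more concrete, here is a rough Python sketch, purely illustrative: the ramps, the tiny door, and the render() helper are all invented for this example, not taken from any real engine. A sprite stores (ramp slot, shade) pairs instead of colors, so recoloring it is just a matter of handing it different ramps.

# Illustrative sketch of "sprites in terms of ramps". A ramp is a short
# gradient of related colors; a sprite pixel names a ramp slot and a shade
# index instead of a concrete color. All values here are invented.

GRAY_RAMP = [(60, 60, 60), (140, 140, 140), (210, 210, 210)]   # dark, base, light
BLUE_RAMP = [(20, 40, 110), (50, 90, 190), (130, 170, 240)]

BLACK = (0, 0, 0)
TRANSPARENT = None

def render(sprite, ramps):
    """Expand a ramp-indexed sprite into concrete RGB pixels.

    sprite: 2D list of cells; each cell is None (transparent), "black",
            or a (ramp_slot, shade_index) pair.
    ramps:  the ramps substituted into slots 0, 1, ...
    """
    out = []
    for row in sprite:
        out_row = []
        for cell in row:
            if cell is None:
                out_row.append(TRANSPARENT)
            elif cell == "black":
                out_row.append(BLACK)
            else:
                slot, shade = cell
                out_row.append(ramps[slot][shade])
        out.append(out_row)
    return out

# A tiny 3x3 "door" drawn once...
door = [
    ["black", (0, 2), "black"],
    [(0, 1),  (1, 1), (0, 1)],
    ["black", (0, 0), "black"],
]

# ...and recolored for free by swapping ramps in.
steel_door = render(door, [GRAY_RAMP, BLUE_RAMP])
gold_door  = render(door, [[(90, 60, 10), (180, 130, 30), (240, 200, 90)], GRAY_RAMP])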


Now, MegaZeux has the problem that small sprites make it difficult to draw bigger things like UI (or a non-microscopic player). The above sprites are 32×32 (scaled up 2× for ease of viewing here), which creates the opposite problem: you can’t possibly draw text or other smaller details with them.

I wonder what could be done here. I know that the original Pokémon games have a concept of “metatiles”: every map is defined in terms of 4×4 blocks of smaller tiles. You can see it pretty clearly on this map of Pallet Town. Each larger square is a metatile, and many of them repeat, even in areas that otherwise seem different.

Pallet Town from Pokémon Red, carved into blocks

I left the NPCs in because they highlight one of the things I found most surprising about this scheme. All the objects you interact with — NPCs, signs, doors, items, cuttable trees, even the player yourself — are 16×16 sprites. The map appears to be made out of 16×16 sprites, as well — but it’s really built from 8×8 tiles arranged into bigger 32×32 tiles.

This isn’t a particularly nice thing to expose directly to authors nowadays, but it demonstrates that there are other ways to compose tiles besides the obvious. Perhaps simple terrain like grass and dirt could be single large tiles, but you could also make a large tile by packing together several smaller tiles?
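
As a sketch of that packing idea (again Python, with invented tile IDs and names): terrain is authored as a grid of metatiles, and each metatile expands into a 4×4 block of base tiles when the map is built.

# Illustrative metatile sketch: a map is authored as a grid of metatile names,
# and each metatile is a 4x4 block of small tile indices. All IDs are invented.

METATILES = {
    "grass": [[1, 1, 1, 1] for _ in range(4)],   # one repeated base tile
    "house": [[10, 11, 11, 12],                  # a larger structure packed
              [13, 14, 14, 15],                  # from several small tiles
              [13, 16, 17, 15],
              [18, 19, 20, 21]],
}

def expand(meta_map):
    """Turn a grid of metatile names into a grid of base tile indices."""
    tiles = []
    for meta_row in meta_map:
        rows = [[] for _ in range(4)]
        for name in meta_row:
            block = METATILES[name]
            for i in range(4):
                rows[i].extend(block[i])
        tiles.extend(rows)
    return tiles

pallet_town_ish = [
    ["grass", "house", "grass"],
    ["grass", "grass", "grass"],
]
base_tiles = expand(pallet_town_ish)   # 8 rows x 12 columns of tile indices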

Text? Oh, text can just be a font.

Player status

MegaZeux has no HUD. To know how much health you have, you need to press Enter to bring up the pause menu, where your health is listed in a stack of other numbers like “gems” and “coins”. I say “menu”, but the pause menu is really a list of keyboard shortcuts, not something you can scroll through and choose items from.

MegaZeux's in-game menu, showing a list of keyboard shortcuts on the left and some stats on the right

To be fair, ZZT does reserve the right side of the screen for your stats, and it puts health at the top. I find myself scanning the MegaZeux pause menu for health every time, which seems a somewhat poor choice for the number that makes the game end when you run out of it.

Unlike most adventure games, your health is an integer starting at 100, not a small number of hearts or whatever. The only feedback when you take damage is a sound effect and an “Ouch!” at the bottom of the screen; you don’t flinch, recoil, or blink. Health pickups might give you any amount of health, you can pick up health beyond 100, and nothing on the screen tells you how much you got when you pick one up. Keeping track of your health in your head is, ah, difficult.

MegaZeux also has a system of multiple lives, but those are also just a number, and the default behavior on “death” is for your health to reset to 100 and absolutely nothing else happens. Walking into lava (which hurts for 100 at a time) will thus kill you and strip you of all your lives quite rapidly.

It is possible to manually create a HUD in MegaZeux using the “overlay” layer, a layer that gets drawn on top of everything else in the world. The downside is that you then can’t use the overlay for anything in-world, like roofs or buildings that can be walked behind. The overlay can be in multiple modes, one that’s attached to the viewport (like a HUD) and one that’s attached to the world (like a ceiling layer), so an obvious first step would be offering these as separate features.

An alternative is to use sprites, blocks of tiles created and drawn as a single unit by Robotic code. Sprites can be attached to the viewport and can even be drawn above the overlay, though they aren’t exposed in the editor and must be created entirely manually. Promising, if clumsy and a bit non-obvious — I only just now found out about this possibility by glancing at an obscure section of the manual.

Another looming problem is that text is the same size as everything else — but you generally want a HUD to be prominent enough to glance at very quickly.

This makes me wonder how more advanced drawing could work in general. Instead of writing code by hand to populate and redraw your UI, could you just drag and drop some obvious components (like “value of this number”) onto a layer? Reuse the same concept for custom dialogs and menus, perhaps?
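
Something along these lines is what I'm imagining, as a speculative sketch rather than anything MegaZeux actually offers: HUD widgets declared as bindings to named counters, with the engine owning the redraw. Every name here (CounterText, world.counters, screen.draw_text) is invented for illustration.

# Speculative sketch of a declarative HUD layer: each widget says what it
# shows and where, and the engine owns the redraw. Nothing like this exists
# in MegaZeux today; every name below is invented.

class CounterText:
    def __init__(self, label, counter_name, x, y):
        self.label = label
        self.counter_name = counter_name
        self.x, self.y = x, y

    def render(self, world):
        value = world.counters[self.counter_name]
        return (self.x, self.y, f"{self.label}: {value}")

class Hud:
    def __init__(self, widgets):
        self.widgets = widgets

    def draw(self, world, screen):
        for widget in self.widgets:
            x, y, text = widget.render(world)
            screen.draw_text(x, y, text)   # assumed engine primitive

hud = Hud([
    CounterText("HP",   "health", x=1, y=1),
    CounterText("Ammo", "ammo",   x=1, y=2),
    CounterText("Gems", "gems",   x=1, y=3),
])
# The engine would call hud.draw(world, screen) once per frame, after the
# board and the overlay, so the widgets always sit on top.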

Inventory

MegaZeux has no inventory. Or, okay, it has sort of an inventory, but it’s all over the place.

The stuff in the pause menu is kind of like an inventory. It counts ammo, gems, coins, two kinds of bombs, and a variety of keys for you. The game also has multiple built-in objects that can give you specific numbers of gems and coins, which is neat, except that gems and coins don’t actually do anything. I think they increase your score, but until now I’d forgotten that MegaZeux has a score.

A developer can also define six named “counters” (i.e., integers) that will show up on the pause menu when nonzero. Caverns of Zeux uses this to show you how many rainbow gems you’ve discovered… but it’s just a number labeled RainbowGems, and there’s no way to see which ones you have.

Other than that, you’re on your own. All of the original Zeux games made use of an inventory, so this is a really weird oversight. Caverns of Zeux also had spellbooks, but you could only see which ones you’d found by trying to use them and seeing if it failed. Chronos Stasis has maybe a dozen items you can collect and no way to see which ones you have — though, to be fair, you use most of them in the same place. Forest of Ruin has a fairly standard inventory, but no way to view it. All three games have at least one usable item that they just bind to a key, which you’d better remember, because it’s game-specific and thus not listed in the general help file.

To be fair, this is preposterously flexible in a way that a general inventory might not be. But it’s also tedious for game authors and potentially confusing for players.

I don’t think an inventory would be particularly difficult to support, and MegaZeux is already halfway there. Most likely, the support is missing because it would need to be based on some concept of a custom object, and MegaZeux doesn’t have that either. I’ll get to that in a bit.

Creating new objects

MegaZeux allows you to create “robots”, objects that are controlled entirely through code you write in a simple programming language. You can copy and paste robots around as easily as any other object on the map. Cool.

What’s less cool is that robots can’t share code — when you place one, you make a separate copy of all of its code. If you create a small horde of custom monsters, then later want to make a change, you’ll have to copy/paste all the existing ones. Hope you don’t have them on other boards!

Some workarounds exist: you could make use of robots’ ability to copy themselves at runtime, and it’s possible to save or load code to/from an external file at runtime. More cumbersome than defining a template object and dropping it wherever you want, and definitely much less accessible.

This is really, really bad, because the only way to extend any of the builtin objects is to replace them with robots!

I’m a little spoiled by ZDoom, where you can create as many kinds of actor as you want. Actors can even inherit from one another, though the mechanism is a little limited and… idiosyncratic, so I wouldn’t call it beginner-friendly. It’s pretty nice to be able to define a type of monster or decoration and drop it all over a map, and I’m surprised such a thing doesn’t exist in MegaZeux, where boards and the viewport both tend to be fairly large.

This is the core of how ZDoom’s inventory works, too. I believe that inventories contain only kinds, not individual actors — that is, you can have 5 red keys, but the game only knows “5 of RedCard” rather than having five distinct RedCard objects. I’m sure part of the reason MegaZeux has no general-purpose inventory is that every custom object is completely distinct, with nothing fundamentally linking even identical copies of the same robot together.
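
A minimal sketch of that kind-counting approach, in Python rather than ZDoom's actual C++ internals, and with invented class names: the inventory maps an item class to a count, so five red keys are a single entry.

# Sketch of a kind-based inventory: the inventory maps an item class to a
# count, so five red keys are "RedCard: 5" rather than five separate objects.

class Item:
    max_amount = 25

class RedCard(Item):
    pass

class Inventory:
    def __init__(self):
        self.counts = {}              # item class -> how many the player holds

    def give(self, kind, amount=1):
        current = self.counts.get(kind, 0)
        self.counts[kind] = min(current + amount, kind.max_amount)

    def take(self, kind, amount=1):
        if self.counts.get(kind, 0) < amount:
            return False              # e.g. a locked door stays locked
        self.counts[kind] -= amount
        return True

inv = Inventory()
for _ in range(5):
    inv.give(RedCard)
print(inv.counts[RedCard])            # 5: one entry, not five distinct objects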

Combat

By default, the player can shoot bullets by holding Space and pressing a direction. (Moving and shooting at the same time is… difficult.) Like everything else, bullets are fixed to the character grid, so they move an entire tile at a time.

Bullets can also destroy other projectiles, sometimes. A bullet hitting another bullet will annihilate both. A bullet hitting a fireball might either turn the fireball into a regular fire tile or simply be destroyed, depending on which animation frame the fireball is in when the bullet hits it. I didn’t know this until someone told me only a couple weeks ago; I’d always just thought it was random and arbitrary and frustrating. Seekers can’t be destroyed at all.

Most enemies charge directly at you; most are killed in one hit; most attack you by colliding with you; most are also destroyed by the collision.

The (built-in) combat is fairly primitive. It gives you something to do, but it’s not particularly satisfying, which is unfortunate for an adventure game engine.

Several factors conspire here. Graphical limitations make it difficult to give much visual feedback when something (including the player) takes damage or is destroyed. The motion of small, fast-moving objects on a fixed grid can be hard to keep track of. No inventory means weapons aren’t objects, either, so custom weapons need to be implemented separately in the global robot. No custom objects means new enemies and projectiles are difficult to create. No visual feedback means hitscan weapons are implausible.

I imagine some new and interesting directions would make themselves obvious in an engine with a higher resolution and custom objects.

Robotic

Robotic is MegaZeux’s programming language for defining the behavior of robots, and it’s one of the most interesting parts of the engine. A robot that acts like an item giving you two keys might look like this:

end
: "touch"
* "You found two keys!"
givekey c04
givekey c05
die as an item
MegaZeux's Robotic editor

Robotic has no blocks, loops, locals, or functions — though recent versions can fake functions by using special jumps. All you get is a fixed list of a few hundred commands. It’s effectively a form of bytecode assembly, with no manual assembling required.

And yet! For simple tasks, it works surprisingly well. Creating a state machine, as in the code above, is straightforward. end stops execution, since all robots start executing from their first line on start. : "touch" is a label (:"touch" is invalid syntax) — all external stimuli are received as jumps, and touch is a special label that a robot jumps to when the player pushes against it. * displays a message in the colorful status line at the bottom of the screen. givekey gives a key of a specific color — colors are a first-class argument type, complete with their own UI in the editor and an automatic preview of the particular colors. die as an item destroys the robot and simultaneously moves the player on top of it, as though the player had picked it up.

A couple other interesting quirks:

  • Most prepositions, articles, and other English glue words are semi-optional and shown in grey. The line die as an item above has as an greyed out, indicating that you could just type die item and MegaZeux would fill in the rest. You could also type die as item, die an item, or even die through item, because all of as, an, and through act like whitespace. Most commands sprinkle a few of these in to make themselves read a little more like English and clarify the order of arguments.

  • The same label may appear more than once. However, labels may be zapped, and a jump will always go to the first non-zapped occurrence of a label. This lets an author encode a robot’s state within the state of its own labels, obviating the need for state-tracking variables in many cases. (Zapping labels predates per-robot variables — “local counters” — which are unhelpfully named local through local32.)

    Of course, this can rapidly spiral out of control when state changes are more complicated or several labels start out zapped or different labels are zapped out of step with each other. Robotic offers no way to query how many of a label have been zapped and MegaZeux has no debugger for label states, so it’s not hard to lose track of what’s going on. Still, it’s an interesting extension atop a simple label-based state machine. (A rough sketch of this zap-aware dispatch appears just after this list.)

  • The built-in types often have some very handy shortcuts. For example, GO [dir] # tells a robot to move in some direction, some number of spaces. The directions you’d expect all work: NORTH, SOUTH, EAST, WEST, and synonyms like N and UP. But there are some extras like RANDNB to choose a random direction that doesn’t block the robot, or SEEK to move towards the player, or FLOW to continue moving in its current direction. Some of the extras only make sense in particular contexts, which complicates them a little, but the ability to tell an NPC to wander aimlessly with only RANDNB is incredible.

  • Robotic is more powerful than you might expect; it can change anything you can change in the editor, emulate the behavior of most other builtins, and make use of several features not exposed in the editor at all.
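
To make the zapping mechanic from the list above a bit more concrete, here is a rough Python emulation of the dispatch rule: jumps land on the first non-zapped occurrence of a label. This is only a sketch of the behavior as described; the exact ZAP/RESTORE semantics in MegaZeux may differ in their details.

# Emulation of the zapping rule described above: a program is a list of lines,
# a label may appear several times, and a jump goes to the first occurrence of
# that label that has not been zapped.

class Robot:
    def __init__(self, lines):
        self.lines = lines            # e.g. ("label", "touch") or ("cmd", "...")
        self.zapped = set()           # indices of zapped label lines

    def find(self, name):
        for i, (kind, value) in enumerate(self.lines):
            if kind == "label" and value == name and i not in self.zapped:
                return i
        return None                   # no receivable label: the message is ignored

    def zap(self, name):
        i = self.find(name)
        if i is not None:
            self.zapped.add(i)

    def restore(self, name):
        # Un-zap one zapped occurrence again (real RESTORE details may differ).
        for i in reversed(range(len(self.lines))):
            kind, value = self.lines[i]
            if kind == "label" and value == name and i in self.zapped:
                self.zapped.discard(i)
                return

robot = Robot([("label", "touch"), ("cmd", "say hello"),
               ("label", "touch"), ("cmd", "say go away")])
print(robot.find("touch"))   # 0: the friendly greeting
robot.zap("touch")
print(robot.find("touch"))   # 2: the robot's "state" advanced just by zapping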

Nowadays, the obvious choice for an embedded language is Lua. It’d be much more flexible, to be sure, but it’d lose a little of the charm. One of the advantages of creating a totally custom language for a game is that you can add syntax for very common engine-specific features, like colors; in a general-purpose language, those are a little clumsier.

function myrobot:ontouch(toucher)
    if not toucher.is_player then
        return false
    end
    world:showstatus("You found two keys!")
    toucher.inventory:add(Key{color=world.colors.RED})
    toucher.inventory:add(Key{color=world.colors.PURPLE})
    self:die()
    return true
end

Changing the rules

MegaZeux has a couple kinds of built-in objects that are difficult to replicate — and thus difficult to customize.

One is projectiles, mentioned earlier. Several variants exist, and a handful of specific behaviors can be toggled with board or world settings, but otherwise that’s all you get. It should be feasible to replicate them all with robots, but I suspect it’d involve a lot of subtleties.

Another is terrain. MegaZeux has a concept of a floor layer (though this is not explicitly exposed in the editor) and some floor tiles have different behavior. Ice is slippery; forest blocks almost everything but can be trampled by the player; lava hurts the player a lot; fire hurts the player and can spread, but burns out after a while. The trick with replicating these is that robots cannot be walked on. An alternative is to use sensors, which can be walked on and which can be controlled by a robot, but anything other than the player will push a sensor rather than stepping onto it. The only other approach I can think of is to keep track of all tiles that have a custom terrain, draw or animate them manually with custom floor tiles, and constantly check whether something’s standing there.
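
Here is a hedged sketch of that last workaround, with an invented world interface (world.thing_at and the effect callbacks are not real MegaZeux or Robotic APIs): keep a table of positions with custom floor types and poll it every tick.

# Sketch of the "track custom terrain yourself" workaround: a table of board
# positions with custom floor types, polled every tick from something like the
# global robot. The world interface and effect callbacks are invented.

CUSTOM_TERRAIN = {
    (12, 4): "swamp",      # slows down whoever stands here
    (12, 5): "swamp",
    (3, 9):  "spikes",     # hurts the player a little each tick
}

def apply_swamp(who):
    who.set_speed(0.5)                  # assumed actor method

def apply_spikes(who):
    if who.is_player:
        who.hurt(5)                     # assumed actor method

EFFECTS = {"swamp": apply_swamp, "spikes": apply_spikes}

def terrain_tick(world):
    """Run once per game tick."""
    for pos, kind in CUSTOM_TERRAIN.items():
        occupant = world.thing_at(pos)  # assumed engine query
        if occupant is not None:
            EFFECTS[kind](occupant)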

Last are powerups, which are really effects that rings or potions can give you. Some of them are special cases of effects that Robotic can do more generally, such as giving 10 health or changing all of one object into another. Some are completely custom engine stuff, like “Slow Time”, which makes everything on the board (even robots!) run at half speed. The latter are the ones you can’t easily emulate. What if you want to run everything at a quarter speed, for whatever reason? Well, you can’t, short of replacing everything with robots and doing a multiplication every time they wait.

ZDoom has a similar problem: it offers fixed sets of behaviors and powerups (which mostly derive from the commercial games it supports) and that’s it. You can manually script other stuff and go quite far, but some surprisingly simple ideas are very difficult to implement, just because the engine doesn’t offer the right kind of hook.

The tricky part of a generic engine is that a game creator will eventually want to change the rules, and they can only do that if the engine has rules for changing those rules. If the engine devs never thought of it, you’re out of luck.

Someone else please carry on this legacy

MegaZeux still sees development activity, but it’s very sporadic — the last release was in 2012. New features tend to be about making the impossible possible, rather than making the difficult easier. I think it’s safe to call MegaZeux finished, in the sense that a novel is finished.

I would really like to see something pick up its torch. It’s a very tricky problem, especially with the sprawling complexity of games, but surely it’s worth giving non-developers a way to try out the field.

I suppose if ZZT and MegaZeux and ZDoom have taught us anything, it’s that the best way to get started is to just write a game and give it very flexible editing tools. Maybe we should do that more. Maybe I’ll try to do it with Isaac’s Descent HD, and we’ll see how it turns out.

Security Design: Stop Trying to Fix the User

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/10/security_design.html

Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This “either/or” thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers’ good intentions, these warnings just inure people to them. I’ve read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don’t see “the certificate has expired; are you sure you want to go to this webpage?” They see, “I’m an annoying message preventing you from reading a webpage. Click here to get rid of me.”

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the “I forgot my password” link as a way to bypass the system completely — effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can’t train users not to click on links when you’ve spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without — as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it — “stress of mind, or knowledge of a long series of rules.”

I’ve been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People — and developers — are finally starting to listen. Many security updates happen automatically so users don’t have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user’s system so they don’t have to worry about embedded malware. And programs can run in sandboxes that don’t compromise the entire computer. We’ve come a long way, but we have a lot further to go.

“Blame the victim” thinking is older than the Internet, of course. But that doesn’t make it right. We owe it to our users to make the Information Age a safe place for everyone — not just those with “security awareness.”

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.

Help Send Conservancy to Embedded Linux Conference Europe

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2016/09/21/lf-elc-eu.html

[ This blog was crossposted on Software Freedom Conservancy’s website. ]

Last month, Conservancy made a public commitment to attend Linux-related
events to get feedback from developers about our work generally, and
Conservancy’s GPL Compliance Program for Linux Developers specifically. As
always, even before that, we were regularly submitting talks to nearly any
event with Linux in its name. As a small charity, we always request travel
funding from the organizers, who are often quite gracious. As I mentioned in
my blog posts about LCA 2016
and GUADEC 2016, the organizers
covered my travel funding there, and recently both Karen and I received
travel funding to speak at LCA 2017
and DebConf 2016, as well as many
other events this year.

Recently, I submitted talks for the CFPs of Linux
Foundation’s Embedded
Linux Conference Europe (ELC EU)
and the Prpl
Foundation’s OpenWRT Summit. The
latter was accepted, and the folks at the Prpl Foundation graciously
offered to fund my flight costs to speak at the OpenWRT Summit! I’ve
never spoken at an OpenWRT event before and I’m looking forward to the
opportunity to get to know the OpenWRT and LEDE communities better by
speaking at that event, and am excited to discuss Conservancy’s work with
them.

OpenWRT Summit, while co-located, is a wholly separate event from LF’s ELC
EU. Unfortunately, I was not so lucky in my talk submissions there: my
talk proposal has been waitlisted since July. I was hopeful after a talk
cancellation in mid-August. (I know because the speaker who canceled
suggested that I request his slot for my waitlisted talk.)
Unfortunately, the LF staff informed me that they understandably filled
his open slot with a sponsored session that came in.

The good news is that my OpenWRT Summit flight is booked, and my friend
(and Conservancy Board Member Emeritus)
Loïc Dachary
(who lives in Berlin) has agreed to let me crash with
him for that week. So, I’ll be in town for the entirety of ELC EU with
almost no direct travel costs to Conservancy! The bad news is that it
seems my ELC EU talk remains waitlisted. Therefore, I don’t have a
confirmed registration for the rest of ELC EU (beyond OpenWRT Summit).

While it seems like a perfect and cost-effective opportunity to be able to
attend both events, that seems harder than I thought! Once I confirmed my
OpenWRT Summit travel arrangements, I asked for the hobbyist discount to
register for ELC EU, but LF staff informed me yesterday that the hobbyist
discount (as well as the other discounts) is sold out. The moral of the story is
that logistics are just plain tough and time-consuming when you work for a
charity with an extremely limited travel budget. ☻

Yet, it seems a shame to waste the opportunity of being in town with so
many Linux developers and not being able to see or talk to them, so
Conservancy is asking for some help from you to fund the $680 of my registration
costs for ELC EU. That’s just about six new Conservancy supporter
signups, so I hope we can get six new Supporters before Linux
Foundation’s ELC EU conference begins on October 10th. Either way, I look
forward to seeing those developers who attend the co-located OpenWRT
Summit! And, if the logistics work out — perhaps I’ll see you at ELC
EU as well!

Now with added cucumbers

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/now-added-cucumbers/

Working here at Pi Towers, I’m always a little frustrated by not being able to share the huge number of commercial businesses’ embedded projects that use Raspberry Pis. (About a third of the Pis we sell go to businesses.) We don’t get to feature many of them on the blog; many organisations don’t want their work replicated by competitors, or aren’t prepared for customers and competitors to see how inexpensively they’re able to automate tasks. Every now and then, though, a company is happy to share what they’re using Pis for.


Makoto Koike, centre, with his parents on the family cucumber farm

Here’s a great example: a cucumber farm in Japan, which is using a Raspberry Pi to sort thorny cucumbers, saving the farmer eight to nine hours’ manual work a day.

Makoto Koike, the son of farmers, works as an embedded systems designer for the Japanese car industry. He started helping out at his parents’ cucumber farm (which he will be taking over when they retire), and spotted a process that was ripe for automation.


Cucumbers from the Makotos’ farm

At the Makotos’ farm, cucumbers are graded into nine categories: the straightest, thickest, freshest, most vivid cucumbers (which must have plenty of characteristic spurs) are the best, and can be sold at the highest price. Makoto-san’s mother was in charge of sorting the cucumbers every day, which took eight hours at the peak of the harvest. Makoto-san had an epiphany after reading about Google’s AlphaGo beating the world number one professional Go player. He realised that machine learning and deep learning meant the sorting process could be automated, so he built a process using Google’s open-source machine learning library, TensorFlow, and some machinery to process the cucumbers into graded batches.


Sorting in action


Camera interface

Google have put together a diagram showing how the system works.


There are difficulties in building this sort of system, not least the 7000 cucumbers, pre-graded by his mother, that Makoto-san had to photograph and label over a period of three months to give the model material to train with. He says:

“When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system with real use cases, the accuracy drops down to about 70%. I suspect the neural network model has the issue of “overfitting” (the phenomenon in neural networks where the model is trained to fit only the small training dataset) because of the insufficient number of training images.”

Still, it’s an impressive feat, and a real-world >95% accuracy rate is not unfeasible with a big enough data set. We’d be interested to see how the setup progresses, especially as more automation is added; right now, cucumbers are added to the processing hopper by hand, and a human has to interact with the touchscreen grading panel. Here’s the system at work:

TensorFlow powered cucumber sorter by Makoto Koike

Uploaded by Kazunori Sato on 2016-08-05.

We’re hoping to see some updates from the Makoto family as the system evolves. And in the meantime, if you have an embedded project you’d like to share with us, let us know in the comments!

 

The post Now with added cucumbers appeared first on Raspberry Pi.

FINAL REMINDER! systemd.conf 2016 CfP Ends on Monday!

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/final-reminder-systemdconf-2016-cfp-ends-on-monday.html

Please note that the systemd.conf 2016
Call for Participation ends on Monday, Aug. 1st! Please send
in your talk proposal by then! We’ve already got a good number of
excellent submissions, but we are very interested in yours, too!

We are looking for talks on all facets of systemd: deployment,
maintenance, administration, development. Regardless of whether you
use it in the cloud, on embedded, on IoT, on the desktop, on mobile,
in a container or on the server: we are interested in your
submissions!

In addition to proposals for talks for the main conference, we are
looking for proposals for workshop sessions held during our
Workshop Day (the first day of the conference). The workshop format
consists of a day of 2-3h training sessions that may cover any
systemd-related topic you’d like. We are interested both in
submissions from the developer community and in submissions from
organizations making use of systemd! Introductory workshop sessions
are particularly welcome, as the Workshop Day is intended to open up
our conference to newcomers and people who aren’t systemd gurus yet,
but would like to become more fluent.

For further details on the submissions we are looking for and the CfP
process, please consult the CfP page and submit your proposal using
the provided form!

ALSO: Please sign up for the conference soon! Only a
limited number of tickets are available, hence make sure to secure
yours quickly before they run out! (Last year we sold out.) Please
sign up here for the
conference!

AND OF COURSE: We are also looking for more sponsors for
systemd.conf! If you are working on systemd-related projects, or make
use of it in your company, please consider becoming a sponsor of
systemd.conf 2016!
Without our sponsors we couldn’t organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

REMINDER! systemd.conf 2016 CfP Ends in Two Weeks!

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/reminder-systemdconf-2016-cfp-ends-in-two-weeks.html

Please note that the systemd.conf 2016
Call for Participation ends in less than two weeks, on Aug. 1st!
Please send in your talk proposal by then! We’ve already got a good
number of excellent submissions, but we are interested in yours even
more!

We are looking for talks on all facets of systemd: deployment,
maintenance, administration, development. Regardless of whether you
use it in the cloud, on embedded, on IoT, on the desktop, on mobile,
in a container or on the server: we are interested in your
submissions!

In addition to proposals for talks for the main conference, we are
looking for proposals for workshop sessions held during our
Workshop Day (the first day of the conference). The workshop format
consists of a day of 2-3h training sessions that may cover any
systemd-related topic you’d like. We are interested both in
submissions from the developer community and in submissions from
organizations making use of systemd! Introductory workshop sessions
are particularly welcome, as the Workshop Day is intended to open up
our conference to newcomers and people who aren’t systemd gurus yet,
but would like to become more fluent.

For further details on the submissions we are looking for and the CfP
process, please consult the CfP page and submit your proposal using
the provided form!

And keep in mind:

REMINDER: Please sign up for the conference soon! Only a
limited number of tickets are available, hence make sure to secure
yours quickly before they run out! (Last year we sold out.) Please
sign up here for the
conference!

AND OF COURSE: We are also looking for more sponsors for
systemd.conf! If you are working on systemd-related projects, or make
use of it in your company, please consider becoming a sponsor of
systemd.conf 2016!
Without our sponsors we couldn’t organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

CfP is now open

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/cfp-is-now-open.html

The systemd.conf 2016 Call for Participation is Now Open!

We’d like to invite presentation and workshop proposals for systemd.conf 2016!

The conference will consist of three parts:

  • One day of workshops, consisting of in-depth (2-3hr) training and learning-by-doing sessions (Sept. 28th)
  • Two days of regular talks (Sept. 29th-30th)
  • One day of hackfest (Oct. 1st)

We are now accepting submissions for the first three days: proposals
for workshops, training sessions and regular talks. In particular, we
are looking for sessions including, but not limited to, the following
topics:

  • Use Cases: systemd in today’s and tomorrow’s devices and applications
  • systemd and containers, in the cloud and on servers
  • systemd in distributions
  • Embedded systemd and systemd in IoT
  • systemd on the desktop
  • Networking with systemd
  • … and everything else related to systemd

Please submit your proposals by August 1st, 2016. Notification of acceptance will be sent out 1-2 weeks later.

If submitting a workshop proposal please contact the organizers for more details.

To submit a talk, please visit our CfP submission page.

For further information on systemd.conf 2016, please visit our conference web site.

Introducing sd-event

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/introducing-sd-event.html

The Event Loop API of libsystemd

When we began working on
systemd we built
it around a hand-written ad-hoc event loop, wrapping Linux
epoll. The more our project grew the more we realized the
limitations of using raw
epoll:

  • As we used
    timerfd
    for our timer events, each event source cost one file descriptor and
    we had many of them! File descriptors are a scarce resource on UNIX,
    as
    RLIMIT_NOFILE
    is typically set to 1024 or similar, limiting the number of
    available file descriptors per process to 1021, which isn’t
    particularly a lot.

  • Ordering of event dispatching became a nightmare. In many cases, we
    wanted to make sure that a certain kind of event would always be
    dispatched before another kind of event, if both happen at the same
    time. For example, when the last process of a service dies, we might
    be notified about that via a SIGCHLD signal, via an
    sd_notify() “STATUS=”
    message, and via a control group notification. We wanted to get
    these events in the right order, to know when it’s safe to process
    and subsequently release the runtime data systemd keeps about the
    service or process: it shouldn’t be done if there are still events
    about it pending.

  • For each program we added to the systemd project we noticed we were
    adding similar code, over and over again, to work with epoll’s
    complex interfaces. For example, finding the right file descriptor
    and callback function to dispatch an epoll event to, without running
    into invalidated pointer issues is outright difficult and requires
    non-trivial code.

  • Integrating child process watching into our event loops was much
    more complex than one could hope, and even more so if child process
    events should be ordered against each other and unrelated kinds of
    events.

Eventually, we started working on
sd-bus. At
the same time we decided to seize the opportunity, put together a
proper event loop API in C, and then not only port sd-bus on top of
it, but also the rest of systemd. The result of this is
sd-event. After
almost two years of development we declared sd-event stable in systemd
version 221, and published it as an official API of libsystemd.

Why?

sd-event.h,
of course, is not the first event loop API around, and it doesn’t
implement any really novel concepts. When we started working on it we
tried to do our homework, and checked the various existing event loop
APIs, maybe looking for candidates to adopt instead of doing our own,
and to learn about the strengths and weaknesses of the various
existing implementations. Ultimately, we found no implementation that
could deliver what we needed, or where it would be easy to add the
missing bits: as usual in the systemd project, we wanted something
that allows us access to all the Linux-specific bits, instead of
limiting itself to the least common denominator of UNIX. We weren’t
looking for an abstraction API, but simply one that makes epoll usable
in system code.

With this blog story I’d like to take the opportunity to introduce you
to sd-event, and explain why it might be a good candidate to adopt as
event loop implementation in your project, too.

So, here are some features it provides:

  • I/O event sources, based on epoll’s file descriptor watching,
    including edge triggered events (EPOLLET). See
    sd_event_add_io(3).

  • Timer event sources, based on timerfd_create(), supporting the
    CLOCK_MONOTONIC, CLOCK_REALTIME, CLOCK_BOOTTIME clocks, as well
    as the CLOCK_REALTIME_ALARM and CLOCK_BOOTTIME_ALARM clocks that
    can resume the system from suspend. When creating timer events a
    required accuracy parameter may be specified which allows coalescing
    of timer events to minimize power consumption. For each clock only a
    single timer file descriptor is kept, and all timer events are
    multiplexed with a priority queue. See
    sd_event_add_time(3). (A short usage sketch follows this list.)

  • UNIX process signal events, based on
    signalfd(2),
    including full support for real-time signals, and queued
    parameters. See sd_event_add_signal(3).

  • Child process state change events, based on
    waitid(2). See
    sd_event_add_child(3).

  • Static event sources, of three types: defer, post and exit, for
    invoking calls in each event loop iteration, after other event sources
    or at event loop termination. See
    sd_event_add_defer(3).

  • Event sources may be assigned a 64-bit priority value that controls
    the order in which event sources are dispatched if multiple are
    pending simultaneously. See
    sd_event_source_set_priority(3).

  • The event loop may automatically send watchdog notification messages
    to the service manager. See sd_event_set_watchdog(3).

  • The event loop may be integrated into foreign event loops, such as
    the GLib one. The event loop API is hence composable, the same way
    the underlying epoll logic is. See
    sd_event_get_fd(3)
    for an example.

  • The API is fully OOM safe.

  • A complete set of documentation in UNIX man page format is
    available, with
    sd-event(3)
    as the entry page.

  • It’s pretty widely available, and requires no extra
    dependencies. Since systemd is built on it, most major distributions
    ship the library in their default install set.

  • After two years of development, and after being used in all of
    systemd’s components, it has received a fair share of testing already,
    even though we only recently decided to declare it stable and turned
    it into a public API.
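
To make the timer and priority features listed above a bit more
concrete, here is a minimal sketch (illustrative only, not taken from
systemd’s sources) that arms a coalescable timer firing roughly every
30 seconds, with one second of permitted slack, and lowers the dispatch
priority of the resulting event source:

/* Minimal sketch: a coalescable ~30s timer at reduced dispatch priority.
 * Error handling is omitted for brevity. */
#include <systemd/sd-event.h>
#include <stdio.h>
#include <time.h>

static int timer_handler(sd_event_source *s, uint64_t usec, void *userdata) {
        printf("timer fired\n");

        /* Timer sources are one-shot: re-arm ~30s after the scheduled time.
         * sd-event may coalesce this with other timers whose accuracy
         * windows overlap, to reduce wakeups. */
        sd_event_source_set_time(s, usec + 30 * 1000000ULL);
        sd_event_source_set_enabled(s, SD_EVENT_ONESHOT);
        return 0;
}

int main(void) {
        sd_event *event = NULL;
        sd_event_source *timer = NULL;
        struct timespec ts;
        uint64_t now_usec;

        sd_event_default(&event);

        clock_gettime(CLOCK_MONOTONIC, &ts);
        now_usec = (uint64_t) ts.tv_sec * 1000000ULL + ts.tv_nsec / 1000;

        /* Fire ~30s from now, allowing up to 1s of slack for coalescing. */
        sd_event_add_time(event, &timer, CLOCK_MONOTONIC,
                          now_usec + 30 * 1000000ULL, 1000000ULL,
                          timer_handler, NULL);

        /* Lower values are dispatched first when multiple sources are pending. */
        sd_event_source_set_priority(timer, -10);

        sd_event_loop(event);

        sd_event_source_unref(timer);
        sd_event_unref(event);
        return 0;
}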

Note that sd-event has some potential drawbacks too:

  • If portability is essential to you, sd-event is not your best
    option. sd-event is a wrapper around Linux-specific APIs, and that’s
    visible in the API. For example: our event callbacks receive
    structures defined by Linux-specific APIs such as signalfd.

  • It’s a low-level C API, and it doesn’t isolate you from the OS
    underpinnings. While I like to think that it is relatively nice and
    easy to use from C, it doesn’t compromise on exposing the low-level
    functionality. It just fills the gaps in what’s missing between
    epoll, timerfd, signalfd and related concepts, and it does not hide
    that away.

Either way, I believe that sd-event is a great choice when looking for
an event loop API, in particular if you work on system-level software
and embedded software, where functionality like timer coalescing or
watchdog support matters.

Getting Started

Here’s a short example of how to use sd-event in a simple daemon. In this
example, we’ll not just use sd-event.h, but also sd-daemon.h to
implement a system service.

#include <alloca.h>
#include <endian.h>
#include <errno.h>
#include <netinet/in.h>
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

#include <systemd/sd-daemon.h>
#include <systemd/sd-event.h>

static int io_handler(sd_event_source *es, int fd, uint32_t revents, void *userdata) {
        void *buffer;
        ssize_t n;
        int sz;

        /* UDP enforces a somewhat reasonable maximum datagram size of 64K, we can just allocate the buffer on the stack */
        if (ioctl(fd, FIONREAD, &sz) < 0)
                return -errno;
        buffer = alloca(sz);

        n = recv(fd, buffer, sz, 0);
        if (n < 0) {
                if (errno == EAGAIN)
                        return 0;

                return -errno;
        }

        if (n == 5 && memcmp(buffer, "EXIT\n", 5) == 0) {
                /* Request a clean exit */
                sd_event_exit(sd_event_source_get_event(es), 0);
                return 0;
        }

        fwrite(buffer, 1, n, stdout);
        fflush(stdout);
        return 0;
}

int main(int argc, char *argv[]) {
        union {
                struct sockaddr_in in;
                struct sockaddr sa;
        } sa;
        sd_event_source *event_source = NULL;
        sd_event *event = NULL;
        int fd = -1, r;
        sigset_t ss;

        r = sd_event_default(&event);
        if (r < 0)
                goto finish;

        if (sigemptyset(&ss) < 0 ||
            sigaddset(&ss, SIGTERM) < 0 ||
            sigaddset(&ss, SIGINT) < 0) {
                r = -errno;
                goto finish;
        }

        /* Block SIGTERM first, so that the event loop can handle it */
        if (sigprocmask(SIG_BLOCK, &ss, NULL) < 0) {
                r = -errno;
                goto finish;
        }

        /* Let's make use of the default handler and "floating" reference features of sd_event_add_signal() */
        r = sd_event_add_signal(event, NULL, SIGTERM, NULL, NULL);
        if (r < 0)
                goto finish;
        r = sd_event_add_signal(event, NULL, SIGINT, NULL, NULL);
        if (r < 0)
                goto finish;

        /* Enable automatic service watchdog support */
        r = sd_event_set_watchdog(event, true);
        if (r < 0)
                goto finish;

        fd = socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0);
        if (fd < 0) {
                r = -errno;
                goto finish;
        }

        sa.in = (struct sockaddr_in) {
                .sin_family = AF_INET,
                .sin_port = htobe16(7777),
        };
        if (bind(fd, &sa.sa, sizeof(sa)) < 0) {
                r = -errno;
                goto finish;
        }

        r = sd_event_add_io(event, &event_source, fd, EPOLLIN, io_handler, NULL);
        if (r < 0)
                goto finish;

        (void) sd_notifyf(false,
                          "READY=1\n"
                          "STATUS=Daemon startup completed, processing events.");

        r = sd_event_loop(event);

finish:
        event_source = sd_event_source_unref(event_source);
        event = sd_event_unref(event);

        if (fd >= 0)
                (void) close(fd);

        if (r < 0)
                fprintf(stderr, "Failure: %s\n", strerror(-r));

        return r < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
}

The example above shows how to write a minimal UDP/IP server that
listens on port 7777. Whenever a datagram is received it outputs its
contents to STDOUT, unless it is precisely the string EXIT\n in
which case the service exits. The service will react to SIGTERM and
SIGINT and do a clean exit then. It also notifies the service manager
about its completed startup, if it runs under a service
manager. Finally, it sends watchdog keep-alive messages to the service
manager if it asked for that, and if it runs under a service manager.

When run as a systemd service this service’s STDOUT will be connected to
the logging framework of course, which means the service can act as a
minimal UDP-based remote logging service.
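
For completeness, here is a minimal unit file sketch for running the
example under systemd; the ExecStart= path is an assumption, and
WatchdogSec= is only needed if you want the watchdog logic shown above
to be exercised:

# /etc/systemd/system/event-example.service -- illustrative sketch only;
# adjust ExecStart= to wherever you installed the binary.
[Unit]
Description=Minimal sd-event example daemon

[Service]
# Type=notify makes systemd wait for the READY=1 message sent via sd_notifyf().
Type=notify
ExecStart=/usr/local/bin/event-example
# Request keep-alive pings; sd_event_set_watchdog() answers them automatically.
WatchdogSec=30s

[Install]
WantedBy=multi-user.target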

To compile and link this example, save it as event-example.c, then run:

$ gcc event-example.c -o event-example `pkg-config --cflags --libs libsystemd`

For a first test, simply run the resulting binary from the command
line, and test it against the following netcat command line:

$ nc -u localhost 7777

For the sake of brevity, error checking is minimal; in a real-world
application it should, of course, be more comprehensive. However, it
hopefully gets the idea across how to write a daemon that reacts to
external events with sd-event.

For further details on the functions used in the example above, please
consult the manual pages:
sd-event(3),
sd_event_exit(3),
sd_event_source_get_event(3),
sd_event_default(3),
sd_event_add_signal(3),
sd_event_set_watchdog(3),
sd_event_add_io(3),
sd_notifyf(3),
sd_event_loop(3),
sd_event_source_unref(3),
sd_event_unref(3).

Conclusion

So, is this the event loop to end all other event loops? Certainly
not. I actually believe in “event loop plurality”. There are many
reasons for that, but most importantly: sd-event is supposed to be an
event loop suitable for writing a wide range of applications, but it’s
definitely not going to solve all event loop problems. For example,
while the priority logic is important for many usecases it comes with
drawbacks for others: if not used carefully high-priority event
sources can easily starve low-priority event sources. Also, in order
to implement the priority logic, sd-event needs to linearly iterate
through the event structures returned by
epoll_wait(2)
to sort the events by their priority, resulting in worst case
O(n*log(n)) complexity on each event loop wakeup (for n = number of
file descriptors). Then, to implement priorities fully, sd-event only
dispatches a single event before going back to the kernel and asking
for new events. sd-event will hence not provide the theoretically
possible best scalability to huge numbers of file descriptors. Of
course, this could be optimized, by improving epoll, and making it
support how today’s event loops actually work (after all, this is
the problem set all event loops that implement priorities — including
GLib’s — have to deal with), but even then: the design of sd-event is focussed on
running one event loop per thread, and it dispatches events strictly
ordered. In many other important usecases a very different design is
preferable: one where events are distributed to a set of worker threads
and are dispatched out-of-order.
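
To illustrate the kind of logic described above, here is a heavily
simplified sketch (not sd-event’s actual implementation): collect the
events epoll_wait() returns, sort them by a per-source priority, and
dispatch only a single one before asking the kernel for more:

/* Simplified sketch of priority-ordered dispatching on top of epoll.
 * This is illustrative only and not how sd-event is actually implemented. */
#include <stdint.h>
#include <stdlib.h>
#include <sys/epoll.h>

struct source {
        int64_t priority;          /* lower value = dispatched first */
        uint32_t pending_revents;
        int (*dispatch)(struct source *s, uint32_t revents);
};

static int compare_priority(const void *a, const void *b) {
        const struct source *x = *(struct source *const *) a;
        const struct source *y = *(struct source *const *) b;
        return (x->priority > y->priority) - (x->priority < y->priority);
}

/* Run one loop iteration: O(n log n) in the number of returned events,
 * and only the highest-priority event is dispatched per wakeup. */
int run_one_iteration(int epoll_fd) {
        struct epoll_event events[64];
        struct source *pending[64];
        int n, i;

        n = epoll_wait(epoll_fd, events, 64, -1);
        if (n <= 0)
                return n < 0 ? -1 : 0;

        /* epoll_wait() returns events in arbitrary order; sort them by priority.
         * Each source is assumed to have been registered with epoll_ctl() using
         * data.ptr pointing at its struct source. */
        for (i = 0; i < n; i++) {
                pending[i] = events[i].data.ptr;
                pending[i]->pending_revents = events[i].events;
        }
        qsort(pending, n, sizeof(struct source *), compare_priority);

        return pending[0]->dispatch(pending[0], pending[0]->pending_revents);
}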

Hence, don’t mistake sd-event for what it isn’t. It’s not supposed to
unify everybody on a single event loop. It’s just supposed to be a
very good implementation of an event loop suitable for a large part of
the typical usecases.

Note that our APIs, including
sd-bus, integrate nicely into
sd-event event loops, but do not require it, and may be integrated
into other event loops too, as long as they support watching for time
and I/O events.

And that’s all for now. If you are considering using sd-event for your
project and need help or have questions, please direct them to the
systemd mailing list.

Second Round of systemd.conf 2015 Sponsors

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/second-round-of-systemdconf-2015-sponsors.html

Second Round of systemd.conf 2015 Sponsors

We are happy to announce the second round of systemd.conf
2015
sponsors! In addition to those from
the first announcement, we have:

Our second Gold sponsor is Red Hat!

What began as a better way to build software—openness, transparency, collaboration—soon shifted the balance of power in an entire industry. The revolution of choice continues. Today Red Hat® is the world’s leading provider of open source solutions, using a community-powered approach to provide reliable and high-performing cloud, virtualization, storage, Linux®, and middleware technologies.

A Bronze sponsor is Samsung:

From the beginning we have established a very fast pace and are currently one of the biggest and fastest growing modern-technology R&D centers in East-Central Europe.
We have started with designing subsystems for digital satellite television, however, we have quickly expanded the scope of our interest. Currently, it includes advanced systems of digital television, platform convergence, mobile systems, smart solutions, and enterprise solutions.
The quality and certification center also plays a vital role in our activity, controlling the conformity of Samsung Electronics products with the highest standards of quality and reliability.

A Bronze sponsor is travelping:

Travelping is passionate about networks, communications and devices. We empower our customers to deploy and operate networks using our state of the art products, solutions and services.
Our products and solutions are based on our industry proven physical and virtual appliance platforms. These purpose built platforms ensure best in class performance, scalability and reliability combined with consistent end to end management capabilities.
To build these products, Travelping has developed its own embedded, cross-platform Linux distribution called CAROS.io, which incorporates the systemd service manager and tools.

A Bronze sponsor is Collabora:

Collabora has over 10 years of experience working with top tier OEMs & silicon manufacturers worldwide to develop products based on Open Source software. Through the use of Open Source technologies and methodologies, Collabora helps clients in multiple market segments gain faster time to market and save millions of dollars in licensing and maintenance costs. Collabora has already brought to market several products relying on systemd extensively.

A Bronze sponsor is Endocode:

Endocode AG. An employee-owned, software engineering company from Berlin. Open Source is our heart and soul.

A Bronze sponsor is the Linux Foundation:

The Linux Foundation advances the growth of Linux and offers its collaborative principles and practices to any endeavor.

We are Cooperating with LinuxTag e.V. on the organization:

LinuxTag is Europe’s leading organizer of Linux and Open Source events. Born of the community and in business for 20 years, we organize LinuxTag, an annual conference and exhibition attracting thousands of visitors. We also participate and cooperate in organizing workshops, tutorials, seminars, and other events together with and for the Open Source community. Selected events include non-profit workshops, the German Kernel Summit at FrOSCon, participation in the Open Tech Summit, and others. We take care of the organizational framework of systemd.conf 2015. LinuxTag e.V. is a non-profit organization and welcomes donations of ideas and workforce.

A Media Partner is Golem:

Golem.de is an up-to-date online publication intended for professional computer users. It provides technology insights into the IT and telecommunications industry. Golem.de offers profound and up-to-date information on significant and trending topics. Online and IT professionals, marketing managers, purchasers, and readers inspired by technology receive substantial information on product, market and branding potentials through tests, interviews and market analyses.

We’d like to thank our sponsors for their support! Without sponsors our conference would not be possible!

The conference has been SOLD OUT for a few weeks now. We no longer accept registrations or paper submissions.

For further details about systemd.conf consult the conference website.

See the first round of sponsor announcements!

See you in Berlin!

Preliminary systemd.conf 2015 Schedule

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/preliminary-systemdconf-2015-schedule.html

A Preliminary systemd.conf 2015 Schedule is Now Online!

We are happy to announce that an initial, preliminary version of the
systemd.conf 2015
schedule
is now
online! (Please ignore that some rows in the schedule link the same
session twice on that page. That’s a bug in the website CMS that we are
working to fix.)

We got an overwhelming number of high-quality submissions during the
CfP! Because there were so many good talks we really wanted to
accept, we decided to do two full days of talks now, leaving one more
day for the hackfest and BoFs. We also shortened many of the slots, to
make room for more. All in all we now have a schedule packed with
fantastic presentations!

The areas covered range from containers to system provisioning,
stateless systems, distributed init systems, the kdbus IPC, control
groups, systemd on the desktop, systemd in embedded devices,
configuration management and systemd, and systemd in downstream
distributions.

We’d like to thank everybody who submitted a presentation proposal!

Also, don’t forget to register for the conference! Only a limited number of
registrations are available due to space constraints!
Register here!.

We are still looking for sponsors. If you’d like to join the ranks of
systemd.conf 2015 sponsors, please have a look at our Becoming a
Sponsor
page!

For further details about systemd.conf consult the conference
website
.

First Round of systemd.conf 2015 Sponsors

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/first-round-of-systemdconf-2015-sponsors.html

First Round of systemd.conf 2015 Sponsors

We are happy to announce the first round of systemd.conf
2015
sponsors!

Our first Gold sponsor is CoreOS!

CoreOS develops software for modern infrastructure that delivers a consistent operating environment for distributed applications. CoreOS’s commercial offering, Tectonic, is an enterprise-ready platform that combines Kubernetes and the CoreOS stack to run Linux containers. In addition CoreOS is the creator and maintainer of open source projects such as CoreOS Linux, etcd, fleet, flannel and rkt. The strategies and architectures that influence CoreOS allow companies like Google, Facebook and Twitter to run their services at scale with high resilience. Learn more about CoreOS here https://coreos.com/, Tectonic here, https://tectonic.com/ or follow CoreOS on Twitter @coreoslinux.

A Silver sponsor is Codethink:

Codethink is a software services consultancy, focusing on engineering reliable systems for long-term deployment with open source technologies.

A Bronze sponsor is Pantheon:

Pantheon is a platform for professional website development, testing, and deployment. Supporting Drupal and WordPress, Pantheon runs over 100,000 websites for the world’s top brands, universities, and media organizations on top of over a million containers.

A Bronze sponsor is Pengutronix:

Pengutronix provides consulting, training and development services for Embedded Linux to customers from the industry. The Kernel Team ports Linux to customer hardware and has more than 3100 patches in the official mainline kernel. In addition to lowlevel ports, the Pengutronix Application Team is responsible for board support packages based on PTXdist or Yocto and deals with system integration (this is where systemd plays an important role). The Graphics Team works on accelerated multimedia tasks, based on the Linux kernel, GStreamer, Qt and web technologies.

We’d like to thank our sponsors for their support! Without sponsors our conference would not be possible!

We’ll shortly announce our second round of sponsors, please stay tuned!

If you’d like to join the ranks of systemd.conf 2015 sponsors, please have a look at our Becoming a Sponsor page!

Reminder! The systemd.conf 2015 Call for Presentations ends on Monday, August 31st! Please make sure to submit your proposals on the CfP page by then!

Also, don’t forget to register for the conference! Only a limited number of
registrations are available due to space constraints!
Register here!.

For further details about systemd.conf consult the conference website.

systemd.conf 2015 Call for Presentations

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/systemdconf-2015-call-for-presentations.html

REMINDER! systemd.conf 2015 Call for Presentations ends August 31st!

We’d like to remind you that the systemd.conf 2015 Call for Presentations ends
on August 31st! Please submit your presentation proposals before that date
on our website.

We are specifically interested in submissions from projects and vendors building
today’s and tomorrow’s products, services and devices with systemd. We’d like to
learn about the problems you encounter and the benefits you see! Hence, if
you work for a company using systemd, please submit a presentation!

We are also specifically interested in submissions from downstream distribution
maintainers of systemd! If you develop or maintain systemd packages in a
distribution, please submit a presentation reporting about the state, future
and the problems of systemd packaging so that we can improve downstream
collaboration!

And of course, all talks regarding systemd usage in containers, in the cloud,
on servers, on the desktop, in mobile and in embedded are highly welcome! Talks
about systemd networking and kdbus IPC are very welcome too!

Please submit your presentations by August 31st!

And don’t forget to register for the conference! Only a limited number of
registrations are available due to space constraints!
Register here!.

Also, limited travel and entry fee sponsorship is available for community contributors. Please contact us for details!

For further details about the CfP consult the CfP page.

For further details about systemd.conf consult the conference website.

Revisiting How We Put Together Linux Systems

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html

In a previous blog story I discussed
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems.
I now want to take the opportunity to explain a bit where we want to
take this with
systemd in the
longer run, and what we want to build out of it. This is going to be a
longer story, so better grab a cold bottle of
Club Mate before you start
reading.

Traditional Linux distributions are built around packaging systems
like RPM or dpkg, and an organization model where upstream developers
and downstream packagers are relatively clearly separated: an upstream
developer writes code, and puts it somewhere online, in a tarball. A
packager then grabs it and turns it into RPMs/DEBs. The user then
grabs these RPMs/DEBs and installs them locally on the system. For a
variety of uses this is a fantastic scheme: users have a large
selection of readily packaged software available, in mostly uniform
packaging, from a single source they can trust. In this scheme the
distribution vets all software it packages, and as long as the user
trusts the distribution all should be good. The distribution takes the
responsibility of ensuring the software is not malicious, of timely
fixing security problems and helping the user if something is wrong.

Upstream Projects

However, this scheme also has a number of problems, and doesn’t fit
many use-cases of our software particularly well. Let’s have a look at
the problems of this scheme for many upstreams:

  • Upstream software vendors are fully dependent on downstream
    distributions to package their stuff. It’s the downstream
    distribution that decides on schedules, packaging details, and how
    to handle support. Often upstream vendors want much faster release
    cycles than the downstream distributions follow.

  • Realistic testing is extremely unreliable and next to
    impossible. Since the end-user can run a variety of different
    package versions together, and expects the software he runs to just
    work on any combination, the test matrix explodes. If upstream tests
    its version on distribution X release Y, then there’s no guarantee
    that that’s the precise combination of packages that the end user
    will eventually run. In fact, it is very unlikely that the end user
    will, since most distributions probably updated a number of
    libraries the package relies on by the time the package ends up being
    made available to the user. The fact that each package can be
    individually updated by the user, and each user can combine library
    versions, plug-ins and executables relatively freely, results in a high
    risk of something going wrong.

  • Since there are so many different distributions in so many different
    versions around, if upstream tries to build and test software for
    them it needs to do so for a large number of distributions, which is
    a massive effort.

  • The distributions are actually quite different in many ways. In
    fact, they are different in a lot of the most basic
    functionality. For example, the path where x86-64 libraries are placed
    differs between Fedora and Debian-derived systems.

  • Developing software for a number of distributions and versions is
    hard: if you want to do it, you need to actually install them, each
    one of them, manually, and then build your software for each.

  • Since most downstream distributions have strict licensing and
    trademark requirements (and rightly so), any kind of closed source
    software (or otherwise non-free) does not fit into this scheme at
    all.

All of this together makes it really hard for many upstreams to work
nicely with the way Linux currently works. Often they try to improve
the situation for themselves, for example by bundling libraries, to make
their test and build matrices smaller.

System Vendors

The toolbox approach of classic Linux distributions is fantastic for
people who want to put together their individual system, nicely
adjusted to exactly what they need. However, this is not really how
many of today’s Linux systems are built, installed or updated. If you
build any kind of embedded device, a server system, or even user
systems, you frequently do your work based on complete system images
that are linearly versioned. You build these images somewhere, and
then you replicate them atomically to a larger number of systems. On
these systems, you don’t install or remove packages, you get a defined
set of files, and besides installing or updating the system there is
no way to change the set of tools you get.

The current Linux distributions are not particularly good at providing
for this major use-case of Linux. Their strict focus on individual
packages as well as package managers as end-user install and update
tool is incompatible with what many system vendors want.

Users

The classic Linux distribution scheme is frequently not what end users
want, either. Many users are used to app markets like the ones Android,
Windows or iOS/Mac have. Markets are a platform that doesn’t package, build or
maintain software like distributions do, but simply allows users to
quickly find and download the software they need, with the app vendor
responsible for keeping the app updated, secured, and all that on the
vendor’s release cycle. Users tend to be impatient. They want their
software quickly, and the fine distinction between trusting a single
distribution or a myriad of app developers individually is usually not
important for them. The companies behind the marketplaces usually try
to address this trust problem by providing sand-boxing technologies: as
a replacement for the distribution that audits, vets, builds and
packages the software and thus allows users to trust it to a certain
level, these vendors try to find technical solutions to ensure that
the software they offer for download can’t be malicious.

Existing Approaches To Fix These Problems

Now, all the issues pointed out above are not new, and there are
sometimes quite successful attempts to do something about it. Ubuntu
Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of
this problem set, usually with a strict focus on one facet of Linux
systems. For example, Ubuntu Apps focus strictly on end user (desktop)
applications, and don’t care about how we build/update/install the OS
itself, or containers. Docker OTOH focuses on containers only, and
doesn’t care about end-user apps. Software Collections tries to focus
on the development environments. ChromeOS focuses on the OS itself,
but only for end-user devices. CoreOS also focuses on the OS, but
only for server systems.

The approaches they find are usually good at specific things, and use
a variety of different technologies, on different layers. However,
none of these projects tried to fix these problems in a generic way,
for all uses, right in the core components of the OS itself.

Linux has achieved tremendous success because its kernel is so
generic: you can build supercomputers and tiny embedded devices out of
it. It’s time we came up with a basic, reusable scheme for solving
the problem set described above that is equally generic.

What We Want

The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom
Gundersen, David Herrmann, and yours truly) recently met in Berlin
about all these things, and tried to come up with a scheme that is
somewhat simple, but tries to solve the issues generically, for all
use-cases, as part of the systemd project. All that in a way that is
somewhat compatible with the current scheme of distributions, to allow
a slow, gradual adoption. Also, and that’s something one cannot stress
enough: the toolbox scheme of classic Linux distributions is
actually a good one, and for many cases the right one. However, we
need to make sure we make distributions relevant again for all
use-cases, not just those of highly individualized systems.

Anyway, so let’s summarize what we are trying to do:

  • We want an efficient way that allows vendors to package their
    software (regardless if just an app, or the whole OS) directly for
    the end user, and know the precise combination of libraries and
    packages it will operate with.

  • We want to allow end users and administrators to install these
    packages on their systems, regardless which distribution they have
    installed on it.

  • We want a unified solution that ultimately can cover updates for
    full systems, OS containers, end user apps, programming ABIs, and
    more. These updates shall be double-buffered (at least). This is an
    absolute necessity if we want to prepare the ground for operating
    systems that manage themselves, that can update safely without
    administrator involvement.

  • We want our images to be trustable (i.e. signed). In fact we want a
    fully trustable OS, with images that can be verified by a full
    trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the
    kernel, and initrd. Cryptographically secure verification of the
    code we execute is relevant on the desktop (like ChromeOS does), but
    also for apps, for embedded devices and even on servers (in a post-Snowden
    world, in particular).

What We Propose

So much about the set of problems, and what we are trying to do. So,
now, let’s discuss the technical bits we came up with:

The scheme we propose is built around a variety of concepts from btrfs
and Linux file system name-spacing. btrfs at this point already has a
large number of features that fit neatly in our concept, and the
maintainers are busy working on a couple of others we want to
eventually make use of.

As the first part of our proposal we make heavy use of btrfs sub-volumes and
introduce a clear naming scheme for them. We name snapshots like this:

  • usr:<vendorid>:<architecture>:<version> — This refers to a full
    vendor operating system tree. It’s basically a /usr tree (and no
    other directories), in a specific version, with everything you need to boot
    it up inside it. The <vendorid> field is replaced by some vendor
    identifier, maybe a scheme like
    org.fedoraproject.FedoraWorkstation. The <architecture> field
    specifies a CPU architecture the OS is designed for, for example
    x86-64. The <version> field specifies a specific OS version, for
    example 23.4. An example sub-volume name could hence look like this:
    usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4

  • root:<name>:<vendorid>:<architecture> — This refers to an
    instance of an operating system. It’s basically a root directory,
    containing primarily /etc and /var (but possibly more). Sub-volumes
    of this type do not contain a populated /usr tree though. The
    <name> field refers to some instance name (maybe the host name of
    the instance). The other fields are defined as above. An example
    sub-volume name is
    root:revolution:org.fedoraproject.FedoraWorkstation:x86_64.

  • runtime:<vendorid>:<architecture>:<version> — This refers to a
    vendor runtime. A runtime here is supposed to be a set of
    libraries and other resources that are needed to run apps (for the
    concept of apps see below), all in a /usr tree. In this regard this
    is very similar to the usr sub-volumes explained above, however,
    while a usr sub-volume is a full OS and contains everything
    necessary to boot, a runtime is really only a set of
    libraries. You cannot boot it, but you can run apps with it. An
    example sub-volume name is: runtime:org.gnome.GNOME3_20:x86_64:3.20.1

  • framework:<vendorid>:<architecture>:<version> — This is very
    similar to a vendor runtime, as described above, it contains just a
    /usr tree, but goes one step further: it additionally contains all
    development headers, compilers and build tools, that allow
    developing against a specific runtime. For each runtime there should
    be a framework. When you develop against a specific framework in a
    specific architecture, then the resulting app will be compatible
    with the runtime of the same vendor ID and architecture. Example:
    framework:org.gnome.GNOME3_20:x86_64:3.20.1

  • app:<vendorid>:<runtime>:<architecture>:<version> — This
    encapsulates an application bundle. It contains a tree that at
    runtime is mounted to /opt/<vendorid>, and contains all the
    application’s resources. The <vendorid> could be a string like
    org.libreoffice.LibreOffice, the <runtime> refers to the
    vendor id of one specific runtime the application is built for, for
    example org.gnome.GNOME3_20:3.20.1. The <architecture> and
    <version> refer to the architecture the application is built for,
    and of course its version. Example:
    app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133

  • home:<user>:<uid>:<gid> — This sub-volume shall refer to the home
    directory of the specific user. The <user> field contains the user
    name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs
    of the user. The idea here is that in the long run the list of
    sub-volumes is sufficient as a user database (but see
    below). Example: home:lennart:1000:1000.

btrfs partitions that adhere to this naming scheme should be clearly
identifiable. It is our intention to introduce a new GPT partition type
ID for this.
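
Purely as an illustration of the field structure of this naming scheme
(the parser below is not part of the proposal), a usr sub-volume name
could be split into its components like this:

/* Illustrative only: split a sub-volume name such as
 * "usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4"
 * into its type, vendor id, architecture and version fields. */
#include <stdio.h>
#include <string.h>

static int parse_usr_subvolume(const char *name, char *vendor, char *arch,
                               char *version, size_t len) {
        char buf[256];
        char *type, *v, *a, *ver, *saveptr;

        if (strlen(name) >= sizeof(buf))
                return -1;
        strcpy(buf, name);

        type = strtok_r(buf, ":", &saveptr);
        v    = strtok_r(NULL, ":", &saveptr);
        a    = strtok_r(NULL, ":", &saveptr);
        ver  = strtok_r(NULL, ":", &saveptr);

        if (!type || !v || !a || !ver || strcmp(type, "usr") != 0)
                return -1;

        snprintf(vendor, len, "%s", v);
        snprintf(arch, len, "%s", a);
        snprintf(version, len, "%s", ver);
        return 0;
}

int main(void) {
        char vendor[128], arch[128], version[128];

        if (parse_usr_subvolume("usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4",
                                vendor, arch, version, sizeof(vendor)) == 0)
                printf("vendor=%s arch=%s version=%s\n", vendor, arch, version);
        return 0;
}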

How To Use It

After we introduced this naming scheme let’s see what we can build of
this:

  • When booting up a system we mount the root directory from one of the
    root sub-volumes, and then mount /usr from a matching usr
    sub-volume. Matching here means it carries the same <vendor-id>
    and <architecture>. Of course, by default we should pick the
    matching usr sub-volume with the newest version.

  • When we boot up an OS container, we do exactly the same as when
    we boot up a regular system: we simply combine a usr sub-volume
    with a root sub-volume.

  • When we enumerate the system’s users we simply go through the
    list of home snapshots.

  • When a user authenticates and logs in we mount his home
    directory from his snapshot.

  • When an app is run, we set up a new file system name-space, mount the
    app sub-volume to /opt/<vendorid>/, and the appropriate runtime
    sub-volume the app picked to /usr, as well as the user’s
    /home/$USER to its place.

  • When a developer wants to develop against a specific runtime he
    installs the right framework, and then temporarily transitions into
    a name space where /usr is mounted from the framework sub-volume, and
    /home/$USER from his own home directory. In this name space he then
    runs his build commands. He can build in multiple name spaces at the
    same time, if he intends to build software for multiple runtimes or
    architectures at the same time.

Instantiating a new system or OS container (which is exactly the same
in this scheme) just consists of creating a new appropriately named
root sub-volume. Quite naturally, you can share one vendor OS
copy in one specific version with a multitude of container instances.

Everything is double-buffered (or actually, n-fold-buffered), because
usr, runtime, framework, app sub-volumes can exist in multiple
versions. Of course, by default the execution logic should always pick
the newest release of each sub-volume, but it is up to the user to keep
multiple versions around, and possibly execute older versions, if he
desires to do so. In fact, like on ChromeOS this could even be handled
automatically: if a system fails to boot with a newer snapshot, the
boot loader can automatically revert back to an older version of the
OS.

An Example

Note that as a result this allows installing not only multiple end-user
applications into the same btrfs volume, but also multiple operating
systems, multiple system instances, multiple runtimes, multiple
frameworks. Or to spell this out in an example:

Let’s say Fedora, Mageia and ArchLinux all implement this scheme,
and provide ready-made end-user images. Also, the GNOME, KDE, SDL
projects all define a runtime+framework to develop against. Finally,
both LibreOffice and Firefox provide their stuff according to this
scheme. You can now trivially install all of these into the same btrfs
volume:

  • usr:org.fedoraproject.WorkStation:x86_64:24.7
  • usr:org.fedoraproject.WorkStation:x86_64:24.8
  • usr:org.fedoraproject.WorkStation:x86_64:24.9
  • usr:org.fedoraproject.WorkStation:x86_64:25beta
  • usr:org.mageia.Client:i386:39.3
  • usr:org.mageia.Client:i386:39.4
  • usr:org.mageia.Client:i386:39.6
  • usr:org.archlinux.Desktop:x86_64:302.7.8
  • usr:org.archlinux.Desktop:x86_64:302.7.9
  • usr:org.archlinux.Desktop:x86_64:302.7.10
  • root:revolution:org.fedoraproject.WorkStation:x86_64
  • root:testmachine:org.fedoraproject.WorkStation:x86_64
  • root:foo:org.mageia.Client:i386
  • root:bar:org.archlinux.Desktop:x86_64
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.1
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.4
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.5
  • runtime:org.gnome.GNOME3_22:x86_64:3.22.0
  • runtime:org.kde.KDE5_6:x86_64:5.6.0
  • framework:org.gnome.GNOME3_22:x86_64:3.22.0
  • framework:org.kde.KDE5_6:x86_64:5.6.0
  • app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
  • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:39
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • home:lennart:1000:1000
  • home:hrundivbakshi:1001:1001

In the example above, we have three vendor operating systems
installed. All of them in three versions, and one even in a beta
version. We have four system instances around. Two of them are Fedora:
maybe one of them we usually boot from, and the other we run for very
specific purposes in an OS container. We also have the runtimes for
two GNOME releases in multiple versions, plus one for KDE. Then, we
have the development trees for one version of KDE and GNOME around, as
well as two apps, that make use of two releases of the GNOME
runtime. Finally, we have the home directories of two users.

Now, with the name-spacing concepts we introduced above, we can
actually relatively freely mix and match apps and OSes, or develop
against specific frameworks in specific versions on any operating
system. It doesn’t matter if you booted your ArchLinux instance or
your Fedora one: you can execute both LibreOffice and Firefox just
fine, because at execution time they get matched up with the right
runtime, and all of them are available from all the operating systems
you installed. You get the precise runtime that the upstream vendor of
Firefox/LibreOffice did their testing with. It doesn’t matter anymore
which distribution you run, and which distribution the vendor prefers.

Also, given that the user database is actually encoded in the
sub-volume list, it doesn’t matter which system you boot, the
distribution should be able to find your local users automatically,
without any configuration in /etc/passwd.

Building Blocks

With this naming scheme plus the way we can combine them at
execution time we have already come quite far, but how do we actually get these
sub-volumes onto the final machines, and how do we update them? Well,
btrfs has a feature they call “send-and-receive”. It basically allows
you to “diff” two file system versions, and generate a binary
delta. You can generate these deltas on a developer’s machine and then
push them into the user’s system, and he’ll get the exact same
sub-volume too. This is how we envision installation and updating of
operating systems, applications, runtimes, frameworks. At installation
time, we simply deserialize an initial send-and-receive delta into
our btrfs volume, and later, when a new version is released we just
add in the few bits that are new, by dropping in another
send-and-receive delta under a new sub-volume name. And we do it
exactly the same for the OS itself, for a runtime, a framework or an
app. There’s no technical distinction anymore. The underlying
operation for installing apps, runtimes, frameworks, vendor OSes, as well
as the operation for updating them is done the exact same way for all.
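
To make the mechanics a bit more tangible, here is a minimal sketch of
what generating and applying such a delta could look like with the
stock btrfs tooling. The sub-volume names are taken from the example
above, while the stream file name and the target mount point
/var/lib/vendor are purely illustrative assumptions:

    # On the vendor build machine: serialize the new read-only
    # sub-volume, using the previous release as parent so that only
    # the difference between the two versions ends up in the stream
    btrfs send -p usr:org.fedoraproject.WorkStation:x86_64:24.8 \
        usr:org.fedoraproject.WorkStation:x86_64:24.9 > delta-24.9.stream

    # On the user machine: deserialize the delta into the local
    # btrfs volume, next to the sub-volumes already installed there
    btrfs receive /var/lib/vendor < delta-24.9.stream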

Of course, keeping multiple full /usr trees around sounds like an
awful lot of waste, after all they will contain a lot of very similar
data, since a lot of resources are shared between distributions,
frameworks and runtimes. However, thankfully btrfs actually is able to
de-duplicate this for us. If we add in a new app snapshot, this simply
adds in the new files that changed. Moreover different runtimes and
operating systems might actually end up sharing the same tree.

Even though the example above focuses primarily on the end-user,
desktop side of things, the concept is also extremely powerful in
server scenarios. For example, it is easy to build your own usr
trees and deliver them to your hosts using this scheme. The usr
sub-volumes are supposed to be something that administrators can put
together. After deserializing them into a couple of hosts, you can
trivially instantiate them as OS containers there, simply by adding a
new root sub-volume for each instance, referencing the usr tree you
just put together. Instantiating OS containers hence becomes as easy
as creating a new btrfs sub-volume. And you can still update the images
nicely, get fully double-buffered updates and everything.

And of course, this scheme also applies nicely to embedded
use-cases. Regardless of whether you build a TV, an IVI system or a phone: you
can put together your OS versions as usr trees, and then use
btrfs-send-and-receive facilities to deliver them to the systems, and
update them there.

Many people when they hear the word “btrfs” instantly reply with “is
it ready yet?”. Thankfully, most of the functionality we really need
here is strictly read-only. With the exception of the home
sub-volumes (see below) all snapshots are strictly read-only, and are
delivered as immutable vendor trees onto the devices. They never are
changed. Even if btrfs might still be immature, for this kind of
read-only logic it should be more than good enough.

Note that this scheme also enables doing fat systems: for example,
an installer image could include a Fedora version compiled for x86-64,
one for i386, one for ARM, all in the same btrfs volume. Due to btrfs’
de-duplication they will share as much as possible, and when the image
is booted up the right sub-volume is automatically picked. Something
similar of course applies to the apps too!

This also allows us to implement something that we like to call
Operating-System-As-A-Virus. Installing a new system is little more
than:

  • Creating a new GPT partition table
  • Adding an EFI System Partition (FAT) to it
  • Adding a new btrfs volume to it
  • Deserializing a single usr sub-volume into the btrfs volume
  • Installing a boot loader into the EFI System Partition
  • Rebooting
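
Spelled out as a rough command sketch (the target device /dev/sdX, the
stream file name and the choice of boot loader installer are
illustrative assumptions, not a fixed recipe), the steps above might
look something like this:

    # Steps 1-3: GPT partition table with an ESP and a btrfs volume
    sgdisk --zap-all /dev/sdX
    sgdisk --new=1:0:+512M --typecode=1:ef00 /dev/sdX
    sgdisk --new=2:0:0     --typecode=2:8300 /dev/sdX
    mkfs.vfat -F 32 /dev/sdX1
    mkfs.btrfs /dev/sdX2

    # Step 4: deserialize a single usr sub-volume into the btrfs volume
    mount /dev/sdX2 /mnt
    btrfs receive /mnt < usr-fedora-24.9.stream

    # Steps 5-6: install a boot loader of your choice into the ESP
    # (for example with bootctl), then reboot into the new system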

Now, since the only real vendor data you need is the usr sub-volume,
you can trivially duplicate this onto any block device you want. Let’s
say you are a happy Fedora user, and you want to provide a friend with
his own installation of this awesome system, all on a USB stick. All
you have to do for this is perform the steps above, using your installed
usr tree as the source to copy. And there you go! And you don’t have to
be afraid that any of your personal data is copied too, as the usr
sub-volume is the exact version your vendor provided you with. Or in
other words: there’s no distinction anymore between installer images
and installed systems. It’s all the same. Installation becomes
replication, not more. Live-CDs and installed systems can be fully
identical.

Note that in this design apps are actually developed against a single,
very specific runtime, that contains all libraries it can link against
(including a specific glibc version!). Any library that is not
included in the runtime the developer picked must be included in the
app itself. This is similar to how apps on Android declare one very
specific Android version they are developed against. This greatly
simplifies application installation, as there’s no dependency hell:
each app pulls in one runtime, and the app is actually free to pick
which one, as you can have multiple installed, though only one is used
by each app.

Also note that operating systems built this way will never see
“half-updated” systems, as it is common when a system is updated using
RPM/dpkg. When updating the system the code will either run the old or
the new version, but it will never see part of the old files and part
of the new files. This is the same for apps, runtimes, and frameworks,
too.

Where We Are Now

We are currently working on a lot of the groundwork necessary for
this. This scheme relies on the ability to monopolize the
vendor OS resources in /usr, which is the key to what I described in
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems
a few weeks back. Then, of course, for the full desktop app concept we
need a strong sandbox, that does more than just hiding files from the
file system view. After all with an app concept like the above the
primary interfacing between the executed desktop apps and the rest of the
system is via IPC (which is why we work on kdbus and teach it all
kinds of sand-boxing features), and the kernel itself. Harald Hoyer has
started working on generating the btrfs send-and-receive images based
on Fedora.

Getting to the full scheme will take a while. Currently we have many
of the building blocks ready, but some major items are missing. For
example, we push quite a few problems into btrfs that other solutions
try to solve in user space. One of them is actually
signing/verification of images. The btrfs maintainers are working on
adding this to the code base, but currently nothing exists. This
functionality is essential though to come to a fully verified system
where a trust chain exists all the way from the firmware to the
apps. Also, to make the home sub-volume scheme fully workable we
actually need encrypted sub-volumes, so that the sub-volume’s
pass-phrase can be used for authenticating users in PAM. This doesn’t
exist either.

Working towards this scheme is a gradual process. Many of the steps we
require for this are useful outside of the grand scheme though, which
means we can slowly work towards the goal, and our users can already
benefit from what we are working on as we go.

Also, and most importantly, this is not really a departure from
traditional operating systems:

Each app and each OS sees a traditional Unix hierarchy with
/usr, /home, /opt, /var, /etc. It executes in an environment that is
pretty much identical to how it would be run on traditional systems.

There’s no need to fully move to a system that uses only btrfs and
follows strictly this sub-volume scheme. For example, we intend to
provide implicit support for systems that are installed on ext4 or
xfs, or that are put together with traditional packaging tools such as
RPM or dpkg: if the user tries to install a
runtime/app/framework/os image on a system that doesn’t use btrfs so
far, it can just create a loop-back btrfs image in /var, and push the
data into that. Even we developers will run our stuff like this for a
while; after all, this new scheme is not particularly useful for highly
individualized systems, and we developers usually tend to run
systems like that.

Also note that this is in no way a departure from packaging systems like
RPM or DEB. Even if the new scheme we propose is used for installing
and updating a specific system, it is RPM/DEB that is used to put
together the vendor OS tree initially. Hence, even in this scheme
RPM/DEB are highly relevant, though not strictly as an end-user tool
anymore, but as a build tool.

So Let’s Summarize Again What We Propose

  • We want a unified scheme for how we can install and update OS images,
    user apps, runtimes and frameworks.

  • We want a unified scheme for how you can relatively freely mix OS
    images, apps, runtimes and frameworks on the same system.

  • We want a fully trusted system, where cryptographic verification of
    all executed code can be done, all the way to the firmware, as
    standard feature of the system.

  • We want to allow app vendors to write their programs against very
    specific frameworks, in the knowledge that they will end up being
    executed with the exact same set of libraries chosen.

  • We want to allow parallel installation of multiple OSes and versions
    of them, multiple runtimes in multiple versions, as well as multiple
    frameworks in multiple versions. And of course, multiple apps in
    multiple versions.

  • We want everything double buffered (or actually n-fold buffered), to
    ensure we can reliably update/rollback versions, in particular to
    safely do automatic updates.

  • We want a system where updating a runtime, OS, framework, or OS
    container is as simple as adding in a new snapshot and restarting
    the runtime/OS/framework/OS container.

  • We want a system where we can easily instantiate a number of OS
    instances from a single vendor tree, with zero difference whether it
    is booted on bare metal, in a VM or as a container.

  • We want to enable Linux to have an open scheme that people can use
    to build app markets and similar schemes, not restricted to a
    specific vendor.

Final Words

I’ll be talking about this at LinuxCon Europe in October. I originally
intended to discuss this at the Linux Plumbers Conference (which I
assumed was the right forum for this kind of major plumbing level
improvement), and at linux.conf.au, but there was no interest in my
session submissions there…

Of course this is all work in progress. These are our current ideas we
are working towards. As we progress we will likely change a number of
things. For example, the precise naming of the sub-volumes might look
very different in the end.

Of course, we are developers of the systemd project, but implementing
this scheme is not just a job for the systemd developers. This is a
reinvention of how distributions work, and hence needs strong support
from the distributions. We really hope we can trigger some interest by
publishing this proposal now, to get the distributions on board. This,
after all, is explicitly not supposed to be a solution for one specific
project and one specific vendor product; we care about making this
open, and solving it for the generic case, without cutting corners.

If you have any questions about this, you know how you can reach us
(IRC, mail, G+, …).

The future is going to be awesome!

Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/stateless.html

(Just a small heads-up: I don’t blog as much as I used to, I
nowadays update my Google+
page
a lot more frequently. You might want to subscribe to that if
you are interested in more frequent technical updates on what we are
working on.)

In the past weeks we have been working on a couple of features for
systemd
that enable a number of new usecases I’d like to shed some light
on. Taking benefit of the /usr
merge
that a number of distributions have completed we want to
bring runtime behaviour of Linux systems to the next level. With the
/usr merge completed most static vendor-supplied OS data is
found exclusively in /usr, only a few additional bits in
/var and /etc are necessary to make a system
boot. On this we can build to enable a couple of new features:

  1. A mechanism we call Factory Reset shall flush out
    /etc and /var, but keep the vendor-supplied
    /usr, bringing the system back into a well-defined, pristine
    vendor state with no local state or configuration. This functionality
    is useful across the board from servers, to desktops, to embedded
    devices.
  2. A Stateless System goes one step further: a system like
    this never stores /etc or /var on persistent
    storage, but always comes up with pristine vendor state. On systems
    like this every reboot acts as a factory reset. This functionality is
    particularly useful for simple containers or systems that boot off the
    network or read-only media, and receive all configuration they need
    during runtime from vendor packages or protocols like DHCP or are
    capable of discovering their parameters automatically from the
    available hardware or periphery.
  3. Reproducible Systems multiply a vendor image into many
    containers or systems. Only local configuration or state is stored
    per-system, while the vendor operating system is pulled in from the
    same, immutable, shared snapshot. Each system hence has its private
    /etc and /var for receiving local configuration,
    however the OS tree in /usr is pulled in via bind mounts (in
    case of containers) or technologies like NFS (in case of physical
    systems), or btrfs snapshots from a golden master image. This is
    particularly interesting for containers where the goal is to run
    thousands of container images from the same OS tree. However, it also
    has a number of other usecases, for example thin client systems, which
    can boot the same NFS share a number of times. Furthermore this
    mechanism is useful to implement very simple OS installers, that
    simply unserialize a /usr snapshot into a file system,
    install a boot loader, and reboot.
  4. Verifiable Systems are closely related to stateless
    systems: if the underlying storage technology can cryptographically
    ensure that the vendor-supplied OS is trusted and in a consistent
    state, then it must be made sure that /etc or /var
    are either included in the OS image, or simply unnecessary for booting.

Concepts

A number of Linux-based operating systems have tried to implement
some of the schemes described above in one way or
another. Particularly interesting are GNOME’s OSTree, CoreOS and Google’s Android and
ChromeOS. They generally found different solutions for the specific
problems you have when implementing schemes like this, sometimes taking
shortcuts that keep only the specific case in mind, and cannot cover
the general purpose. With systemd now being at the core of so many
distributions and deeply involved in bringing up and maintaining the
system we came to the conclusion that we should attempt to add generic
support for setups like this to systemd itself, to open this up for
the general purpose distributions to build on. We decided to focus on
three kinds of systems:

  1. The stateful system, the traditional system as we know it with
    machine-specific /etc, /usr and /var, all
    properly populated.
  2. Startup without a populated /var, but with configured
    /etc. (We will call these volatile systems.)
  3. Startup without either /etc or /var. (We will
    call these stateless systems.)

A factory reset is just a special case of the latter two modes,
where the system boots up without /var and /etc but
the next boot is a normal stateful boot like the first described
mode. Note that a mode where /etc is flushed, but
/var is not, is not something we intend to cover (why? well, the
user ID question becomes much harder, see below, and we simply saw no
usecase for it worth the trouble).

Problems

Booting up a system without a populated /var is relatively
straight-forward. With a
few lines of tmpfiles configuration
it is possible to populate
/var with its basic structure in a way that is sufficient to
make a system boot cleanly. systemd version 214 and newer ship with
support for this. Of course, support for this scheme in systemd is
only a small part of the solution. While a lot of software
reconstructs the directory hierarchy it needs in /var
automatically, much software does not. In cases like this it is
necessary to ship a couple of additional tmpfiles lines that set up
at boot time the necessary files or directories in /var to
make the software operate, similar to what RPM or DEB packages would
set up at installation time.
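
As a sketch of what such a drop-in could look like (the service name
exampled and its directories are hypothetical), a package might ship
something like this below /usr/lib/tmpfiles.d/:

    # /usr/lib/tmpfiles.d/exampled.conf (hypothetical package drop-in)
    # Type  Path                 Mode  User      Group     Age  Argument
    d       /var/lib/exampled    0750  exampled  exampled  -    -
    d       /var/cache/exampled  0750  exampled  exampled  -    -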

Booting up a system without a populated /etc is a more
difficult task. In /etc we have a lot of configuration bits
that are essential for the system to operate, for example and most
importantly system user and group information in /etc/passwd
and /etc/group. If the system boots up without /etc
there must be a way to replicate the minimal information necessary in
it, so that the system manages to boot up fully.

To make this even more complex, in order to support “offline”
updates of /usr that are replicated into a number of systems
possessing private /etc and /var there needs to be a
way for these directories to be upgraded transparently when
necessary, for example by recreating caches like
/etc/ld.so.cache or adding missing system users to
/etc/passwd on next reboot.

Starting with systemd 215 (yet unreleased, as I type this) we will
ship with a number of features in systemd that make /etc-less
boots functional:

  • A new tool systemd-sysusers has been added (a minimal example
    drop-in is sketched after this list). It introduces
    a new drop-in directory /usr/lib/sysusers.d/. Minimal
    descriptions of necessary system users and groups can be placed
    there. Whenever the tool is invoked it will create these users in
    /etc/passwd and /etc/group should they be
    missing. It is only suitable for creating system users and groups, not
    for normal users. It will write to the files directly via the
    appropriate glibc APIs, which is the right thing to do for system
    users. (For normal users no such APIs exist, as the users might be
    stored centrally on LDAP or suchlike, and they are out of focus for
    our usecase.) The major benefit of this tool is that system user
    definition can happen offline: a package simply has to drop in a new
    file to register a user. This makes system user registration
    declarative instead of imperative — which is the way
    how system users are traditionally created from RPM or DEB
    installation scripts. By being declarative it is easy to replicate the
    users on next boot to a number of system instances.

    To make this new
    tool interesting for packaging scripts we make it easy to
    alternatively invoke it during package installation time, thus being a
    good alternative to invocations of useradd -r and
    groupadd -r.

    Some OS designs use a static, fixed user/group list stored in
    /usr as the primary database for users/groups, with fixed
    UID/GID mappings. While this works for specific systems, it cannot
    cover the general purpose. As the UID/GID range for system
    users/groups is very small (only containing 998 users and groups on most systems), the
    best has to be made from this space and only UIDs/GIDs necessary on
    the specific system should be allocated. This means allocation has to
    be dynamic and adjust to what is necessary.

    Also note that this tool has
    one very nice feature: in addition to fully dynamic, and fully static
    UID/GID assignment for the users to create, it supports reading
    UID/GID numbers off existing files in /usr, so that vendors
    can make use of setuid/setgid binaries owned by specific users.

  • We also added a default
    user definition list
    which creates the most basic users the system
    and systemd need. Of course, very likely downstream distributions
    might need to alter this default list, add new entries and possibly
    map specific users to particular numeric UIDs.
  • A new condition ConditionNeedsUpdate= has been
    added. With this mechanism it is possible to conditionalize execution
    of services depending on whether /usr is newer than
    /etc or /var. The idea is that various services that
    need to be added into the boot process on upgrades make use of this to
    not delay boot-ups on normal boots, but run as necessary should
    /usr have been updated since the last boot. This is
    implemented based on the mtime timestamp of
    /usr: if the OS has been updated the packaging software
    should touch the directory, thus informing all instances that
    an upgrade of /etc and /var might be necessary. (A sketch
    of a unit making use of this condition follows after this list.)
  • We added a number of service files, that make use of the new
    ConditionNeedsUpdate= switch, and run a couple of services
    after each update. Among them are the aforementioned
    systemd-sysusers tool, as well as services that rebuild the
    udev hardware database, the journal catalog database and the library
    cache in /etc/ld.so.cache.
  • If systemd detects an empty /etc at early boot it will
    now use the unit
    preset
    information to enable all services by default that the
    vendor or packager declared. It will then proceed booting.
  • We added a
    new tmpfiles snippet
    that is able to reconstruct the
    most basic structure of /etc if it is missing.
  • tmpfiles also gained the ability to copy entire directory trees into
    place should they be missing. This is particularly useful for copying
    certain essential files or directories into /etc without
    which the system refuses to boot. Currently the most prominent
    candidates for this are /etc/pam.d and
    /etc/dbus-1. In the long run we hope that packages can be
    fixed so that they always work correctly without configuration in
    /etc. Depending on the software this means that they should
    come with compiled-in defaults that just work should their
    configuration file be missing, or that they should fall back to static
    vendor-supplied configuration in /usr that is used whenever
    /etc doesn’t have any configuration. Both the PAM and the
    D-Bus case are probably candidates for the latter. Given that there
    are probably many cases like this, we are working with a number of
    folks to introduce a new directory called /usr/share/etc
    (name is not settled yet) to major distributions, which always
    contains the full, original, vendor-supplied configuration of all
    packages. This is very useful here, so that there’s an obvious place
    to copy the original configuration from, but it is also useful
    completely independently as this provides administrators with an easy
    place to diff their own configuration in /etc
    against to see what local changes are in place.
  • We added a new --tmpfs= switch to systemd-nspawn
    to make testing of systems with unpopulated /etc and
    /var easy. For example, to run a fully stateless container, use a command line like this:

    # systemd-nspawn -D /srv/mycontainer --read-only --tmpfs=/var --tmpfs=/etc -b

    This command line will invoke the container tree stored in
    /srv/mycontainer in a read-only way, but with a (writable)
    tmpfs mounted to /var and /etc. With a very recent
    git snapshot of systemd invoking a Fedora rawhide system should mostly
    work OK, modulo the D-Bus and PAM problems mentioned above. A later
    version of systemd-nspawn is likely to gain a high-level
    switch --mode={stateful|volatile|stateless} that
    combines this into a single simple switch, reusing the vocabulary introduced
    earlier.
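
To illustrate two of the building blocks from the list above, here are
two minimal sketches: a declarative sysusers.d drop-in that registers a
system user, and a unit fragment that only runs when /usr has been
updated since the last boot. The names exampled and
example-rebuild-cache.service are hypothetical, chosen purely for
illustration:

    # /usr/lib/sysusers.d/exampled.conf (hypothetical)
    # Type  Name      ID  GECOS             Home directory
    u       exampled  -   "Example Daemon"  /var/lib/exampled

    # /usr/lib/systemd/system/example-rebuild-cache.service (hypothetical)
    [Unit]
    Description=Rebuild the example cache after an offline /usr update
    ConditionNeedsUpdate=/var

    [Service]
    Type=oneshot
    ExecStart=/usr/bin/example-rebuild-cache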

What’s Next

Pulling this all together we are very close to making boots with
empty /etc and /var on general purpose Linux
operating systems a reality. Of course, while doing the groundwork in
systemd gets us some distance, there’s a lot of work left. Most
importantly: the majority of Linux packages are simply incompatible
with this scheme the way they are currently set up. They do not work
without configuration in /etc or state directories in
/var; they do not drop system user information in
/usr/lib/sysusers.d. However, we believe it’s our job to do
the groundwork, and to start somewhere.

So what does this mean for the next steps? Of course, currently
very little of this is available in any distribution (simply
because 215 isn’t even released yet). However, this will hopefully
change quickly. As soon as that is accomplished we can start working
on making the other components of the OS work nicely in this
scheme. If you are an upstream developer, please consider making your
software work correctly if /etc and/or /var are not
populated. This means:

  • When you need a state directory in /var and it is missing,
    create it first. If you cannot do that, because you dropped privileges
    or suchlike, please consider dropping in a tmpfiles snippet that
    creates the directory with the right permissions early at boot, should
    it be missing.
  • When you need configuration files in /etc to work
    properly, consider changing your application to work nicely when these
    files are missing, and automatically fall back to either built-in
    defaults, or to static vendor-supplied configuration files shipped in
    /usr, so that administrators can override configuration in
    /etc, but the default configuration applies if they don’t.
  • When you need a system user or group, consider dropping in a file
    into /usr/lib/sysusers.d describing the users. (Currently
    documentation on this is minimal, we will provide more docs on this
    shortly.)

If you are a packager, you can also help on making this all work:

  • Ask upstream to implement what we describe above, possibly even preparing a patch for this.
  • If upstream will not make these changes, then consider dropping in
    tmpfiles snippets that copy the bare minimum of configuration files to
    make your software work from somewhere in /usr into
    /etc (see the sketch after this list).
  • Consider moving from imperative useradd commands in
    packaging scripts, to declarative sysusers files. Ideally,
    this is shipped upstream too, but if that’s not possible then simply
    adding this to packages should be good enough.
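
As a sketch of the second point, such a snippet could use the copy
directive of tmpfiles to pull a vendor-supplied default into /etc at
boot if nothing is there yet (the file names are hypothetical):

    # /usr/lib/tmpfiles.d/example-config.conf (hypothetical)
    # Copy a vendor default into /etc only if the file does not exist yet
    C /etc/example.conf - - - - /usr/share/example/example.conf.default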

Of course, before moving to declarative system user definitions you
should consult with your distribution whether their packaging policy
even allows that. Currently, most distributions will not, so we have
to work to get this changed first.

Anyway, so much about what we have been working on and where we want to take this.

Conclusion

Before we finish, let me stress again why we are doing all
this:

  1. For end-user machines like desktops, tablets or mobile phones, we
    want a generic way to implement factory reset, which the user can make
    use of when the system is broken (saves you support costs), or when he
    wants to sell it and get rid of his private data, and renew that “fresh
    car smell”.
  2. For embedded machines we want a generic way to reset
    devices. We also want a way for every single boot to be identical to
    a factory reset, in a stateless system design.
  3. For all kinds of systems we want to centralize vendor data in
    /usr so that it can be strictly read-only, and fully
    cryptographically verified as one unit.
  4. We want to enable new kinds of OS installers that simply
    deserialize a vendor OS /usr snapshot into a new file system,
    install a boot loader and reboot, leaving all first-time configuration
    to the next boot.
  5. We want to enable new kinds of OS updaters that build on this, and
    manage a number of vendor OS /usr snapshots in verified states, and
    which can then update /etc and /var simply by
    rebooting into a newer version.
  6. We want to scale container setups naturally, by sharing a single
    golden master /usr tree with a large number of instances that
    simply maintain their own private /etc and /var for
    their private configuration and state, while still allowing clean
    updates of /usr.
  7. We want to make thin clients that share /usr across the
    network work by allowing stateless bootups. During all discussions on
    how /usr was to be organized this was frequently mentioned. A
    setup like this so far only worked in very specific cases; with this
    scheme we want to make it work in the general case.

Of course, we have no illusions, just doing the groundwork for all
of this in systemd doesn’t make this all a real-life solution
yet. Also, it’s very unlikely that all of Fedora (or any other general
purpose distribution) will support this scheme for all its packages
soon, however, we are quite confident that the idea is convincing,
that we need to start somewhere, and that getting the most core
packages adapted to this shouldn’t be out of reach.

Oh, and of course, the concepts behind this are really not new, we
know that. However, what’s new here is that we try to make them
available in a general purpose OS core, instead of special purpose
systems.

Anyway, let’s get the ball rolling! Let’s make stateless systems a
reality!

And that’s all I have for now. I am sure this leaves a lot of
questions open. If you have any, join us on IRC on #systemd
on freenode or comment on Google+.