
Ultimate 3D printer control with OctoPrint

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/octoprint/

Control and monitor your 3D printer remotely with a Raspberry Pi and OctoPrint.

Timelapse of OctoPrint Ornament

Printed on a bq Witbox STL file can be found here: http://www.thingiverse.com/thing:191635 OctoPrint is located here: http://www.octoprint.org

3D printing

Whether you have a 3D printer at home or use one at your school or local makerspace, it’s fair to assume you’ve had a failed print or two in your time. Filament knotting or running out, your print peeling away from the print bed — these are common issues for all 3D printing enthusiasts, and they can be costly if they’re discovered too late.

OctoPrint

OctoPrint is free, open-source software, created and maintained by Gina Häußge, that performs a multitude of useful 3D printing–related tasks, including remote control of your printer, live video, and data collection.

The OctoPrint logo

Control and monitoring

To control the print process, use OctoPrint on a Raspberry Pi connected to your 3D printer. First, ensure a safe uninterrupted run by using the software to restrict who can access the printer. Then, before starting your print, use the web app to work on your STL file. The app also allows you to reposition the print head at any time, as well as pause or stop printing if needed.

Live video streaming

Since OctoPrint can stream video of your print as it happens, you can watch out for any faults that may require you to abort and restart. Proud of your print? Record the entire process from start to finish and upload the time-lapse video to your favourite social media platform.

OctoPrint software graphic user interface screenshot

Data capture

OctoPrint records real-time data, such as temperature, giving you another way to monitor your print to ensure a smooth, uninterrupted process. Moreover, the records will help with troubleshooting if there is a problem.

OctoPrint software graphic user interface screenshot

Print the Millennium Falcon

OK, you can print anything you like. However, this design definitely caught our eye this week.

3D-Printed Fillenium Malcon (Timelapse)

This is a Timelapse of my biggest print project so far on my own designed/built printer. It’s 500x170x700(mm) and weights 3 Kilograms of Filament.

You can support the work of Gina and OctoPrint by visiting her Patreon account and following OctoPrint on Twitter, Facebook, or G+. And if you’ve set up a Raspberry Pi to run OctoPrint, or you’ve created some cool Pi-inspired 3D prints, make sure to share them with us on our own social media channels.

The post Ultimate 3D printer control with OctoPrint appeared first on Raspberry Pi.

Twitter makers love Halloween

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/twitter-love-halloween/

Halloween is almost upon us! In honour of one of the maker community’s favourite howlidays, here are some posts from enthusiastic makers on Twitter to get you inspired and prepared for the big event.

Lorraine’s VR Puppet

Lorraine Underwood on Twitter

Using a @Raspberry_Pi with @pimoroni tilt hat to make a cool puppet for #Halloween https://t.co/pOeTFZ0r29

Made with a Pimoroni Pan-Tilt HAT, a Raspberry Pi, and some VR software on her phone, Lorraine Underwood‘s puppet is going to be a rather fitting doorman to interact with this year’s trick-or-treaters. Follow her project’s progress as she posts it on her blog.

Firr’s Monster-Mashing House

Firr on Twitter

Making my house super spooky for Halloween! https://t.co/w553l40BT0

Harnessing the one song guaranteed to earworm its way into my mind this October, Firr has upgraded his house to sing for all those daring enough to approach it this coming All Hallows’ Eve.

Firr used resources from Adafruit, along with three projectors, two Raspberry Pis, and some speakers, to create this semi-interactive display.

While the eyes can move on their own, a joystick can be added for direct control. Firr created a switch that goes between autonomous animation and direct control.

Find out more on the htxt.africa website.

Justin’s Snake Eyes Pumpkin

Justin Smith on Twitter

First #pumpkin of the season for Friday the 13th! @PaintYourDragon’s snake eyes bonnet for the #RaspberryPi to handle the eye animation. https://t.co/TSlUUxYP5Q

The Animated Snake Eyes Bonnet is definitely one of the freakiest products to come from the Adafruit lab, and it’s the perfect upgrade for any carved pumpkin this Halloween. Attach the bonnet to a Raspberry Pi 3, or the smaller Zero or Zero W, and thus add animated eyes to your scary orange masterpiece, as Justin Smith demonstrates in his video. The effect will terrify even the bravest of trick-or-treaters! Just make sure you don’t light a candle in there too…we’re not sure how fire-proof the tech is.

And then there’s this…

EmmArarrghhhhhh on Twitter

Squishy eye keyboard? Anyone? Made with @Raspberry_Pi @pimoroni’s Explorer HAT Pro and a pile of stuff from @Poundland 😂👀‼️ https://t.co/qLfpLLiXqZ

Yeah…the line between frightening and funny is never thinner than on Halloween.

Make and share this Halloween!

For more Halloween project ideas, check out our free resources, including Scary ‘Spot the difference’ and the new Pioneers-inspired ‘Pride and Prejudice’ for zombies.

Halloween Pride and Prejudice Zombies Raspberry Pi

It is a truth universally acknowledged that a single man in possession of the zombie virus must be in want of braaaaaaains.

No matter whether you share your Halloween builds on Twitter, Facebook, G+, Instagram, or YouTube, we want to see them — make sure to tag us in your posts. We also have a comment section below this post, so go ahead and fill it with your ideas, links to completed projects, and general chat about the world of RasBOOrry Pi!

…sorry, that’s a hideous play on words. I apologise.


Adafruit’s read-only Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/adafruits-read-only/

For passive projects such as point-of-sale displays, video loopers, and your upcoming Halloween builds, Adafruit have come up with a read-only solution for powering down your Raspberry Pi without endangering your SD card.

Adafruit read-only raspberry pi

Pulling the plug

At home, at a coding club, or at a Jam, you rarely need to pull the plug on your Raspberry Pi without going through the correct shutdown procedure. To ensure a long life for your SD card and its contents, you should always turn off your Pi by selecting the shutdown option from the menu. This way, the Pi saves any temporary files to the card before relinquishing power.

Dramatic reconstruction

By pulling the plug while your OS is still running, you might corrupt these files, which could result in the Pi failing to boot up again. The only fix? Wipe the SD card clean and start over, waving goodbye to all files you didn’t back up.

Passive projects

But what if it’s not as easy as selecting shutdown, because your Raspberry Pi is embedded deep inside the belly of a project? Maybe you’ve hot-glued your Zero W into a pumpkin which is now screwed to the roof of your porch, or your store has a bank of Pi-powered monitors playing ads and the power is set to shut off every evening. Without the ability to shut down your Pi via the menu, you risk the SD card’s contents every time you power down your project.

Read-only

Just in time for the plethora of Halloween projects we’re looking forward to this month, the clever folk at Adafruit have designed a solution to this issue. They’ve shared a script which forces the Raspberry Pi to run in read-only mode, so that powering it down via a plug pull will not corrupt the SD card.

But how?

The script makes the Pi save temporary files to RAM instead of to the SD card. Of course, this means that no files or new software can be written to the card. However, if that’s not necessary for your Pi project, you might be happy to make the trade-off. Note that you can only use Adafruit’s script on Raspbian Lite.
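To illustrate the general idea (this is a simplified sketch of the approach, not Adafruit’s actual script), a read-only setup typically mounts the root file system read-only and redirects the directories that still need writes to RAM-backed tmpfs mounts, roughly like this in /etc/fstab:

```
# /etc/fstab — illustrative sketch only: root is mounted read-only,
# and write-heavy directories are redirected to RAM-backed tmpfs.
/dev/mmcblk0p2  /          ext4   defaults,ro    0  1
tmpfs           /tmp       tmpfs  nosuid,nodev   0  0
tmpfs           /var/log   tmpfs  nosuid,nodev   0  0
tmpfs           /var/tmp   tmpfs  nosuid,nodev   0  0
```

Anything written to those tmpfs directories lives only in RAM and vanishes on power-off, which is exactly why pulling the plug becomes safe.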

Find out more about the read-only Raspberry Pi solution, including the script and optional GPIO-halt utility, on the Adafruit Learn page. And be aware that making your Pi read-only is irreversible, so be sure to back up the contents of your SD card before you implement the script.

Halloween!

It’s October, and we’re now allowed to get excited about Halloween and all of the wonderful projects you plan on making for the big night.

Adafruit read-only raspberry pi

Adafruit’s animated snake eyes

We’ll be covering some of our favourite spooky builds on social media throughout the month — make sure to share yours with us, either in the comments below or on Facebook, Twitter, Instagram, or G+.


Get social: connecting with Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/connecting-raspberry-pi-social/

Fancy connecting with Raspberry Pi beyond the four imaginary walls of this blog post? Want to find ways into the conversation among our community of makers, learners, and educators? Here’s how:

Twitter

Connecting with us on Twitter is your sure-fire way of receiving the latest news and articles from and about the Raspberry Pi Foundation, Code Club, and CoderDojo. Here you’ll experience the fun, often GIF-fuelled banter of the busy Raspberry Pi community, along with tips, project support, and event updates. This is the best place to follow hashtags such as #Picademy, #MakeYourIdeas, and #RJam in real time.

Raspberry Pi on Twitter

News! Raspberry Pi and @CoderDojo join forces in a merger that will help more young people get creative with tech: https://t.co/37y45ht7li

YouTube

We create a variety of video content, from Pi Towers fun, to resource videos, to interviews and program updates. We’re constantly adding content to our channel to bring you more interesting, enjoyable videos to watch and share within the community. Want to see what happens when you drill a hole through a Raspberry Pi Zero to make a fidget spinner? Or what Code Club International volunteers got up to when we brought them together in London for a catch-up? Maybe you’d like to try a new skill and need guidance? Our YouTube channel is the place to go!

Getting started with soldering

Learn the basics of how to solder components together, and the safety precautions you need to take. Find a transcript of this video in our accompanying learning resource: raspberrypi.org/learning/getting-started-with-soldering/

Instagram

Instagram is known as the home of gorgeous projects and even better-looking project photographs. Our Instagram, however, is mainly a collection of random office shenanigans, short video clips, and the occasional behind-the-scenes snap of projects, events, or the mess on my desk. Come join the party!

When one #AstroPi unit is simply not enough… . Would you like to #3DPrint your own Astro Pi unit? Head to rpf.io/astroprint for the free files and assembly guide . . . . . . #RaspberryPi #Space #ESA @astro_timpeake @thom_astro


Facebook

Looking to share information on Raspberry Pi with your social community? Maybe catch a live stream or read back through comments on some of our community projects? Then you’ll want to check out the Raspberry Pi Facebook page. It brings the world together via a vast collection of interesting articles, images, videos, and discussions. Here you’ll find information on upcoming events we’re visiting, links to our other social media accounts, and projects our community shares via visitor posts. If you have a moment to spare, you may even find you can answer a community question.

Raspberry Pi at the Scottish Learning Festival


Raspberry Pi forum

The Raspberry Pi forum is the go-to site for posting questions, getting support, and just having a good old chin wag. Whether you have problems setting up your Pi, need advice on how to build a media centre, or can’t figure out how to utilise Scratch for the classroom, the forum has you covered. Head there for absolutely anything Pi-related, and you’re sure to find help with your query – or better yet, the answer may already be waiting for you!

G+

Our G+ community is an ever-growing mix of makers, educators, industry professionals, and those completely new to Pi and eager to learn more about the Foundation and the product. Here you’ll find project shares, tech questions, and conversation. It’s worth stopping by if you use the platform.

Code Club and CoderDojo

You should also check out the social media accounts of our BFFs Code Club and CoderDojo!


On the CoderDojo website, along with their active forum, you’ll find links to all their accounts at the bottom of the page. For UK-focused Code Club information, head to the Code Club UK Twitter account, and for links to accounts of Code Clubs based in your country, use the search option on the Code Club International website.

Connect with us

However you want to connect with us, make sure to say hi. We love how active and welcoming our online community is and we always enjoy engaging in conversation, seeing your builds and events, and sharing Pi Towers mischief as well as useful Pi-related information and links with you!

If you use any other social platform and miss our presence there, let us know in the comments. And if you run your own Raspberry Pi-related forum, online group, or discussion board, share that as well!


Revisiting How We Put Together Linux Systems

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/revisiting-how-we-put-together-linux-systems.html

In a previous blog story I discussed
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems.
I now want to take the opportunity to explain a bit where we want to
take this with
systemd in the
longer run, and what we want to build out of it. This is going to be a
longer story, so better grab a cold bottle of
Club Mate before you start
reading.

Traditional Linux distributions are built around packaging systems
like RPM or dpkg, and an organization model where upstream developers
and downstream packagers are relatively clearly separated: an upstream
developer writes code, and puts it somewhere online, in a tarball. A
packager then grabs it and turns it into RPMs/DEBs. The user then
grabs these RPMs/DEBs and installs them locally on the system. For a
variety of uses this is a fantastic scheme: users have a large
selection of readily packaged software available, in mostly uniform
packaging, from a single source they can trust. In this scheme the
distribution vets all software it packages, and as long as the user
trusts the distribution all should be good. The distribution takes the
responsibility of ensuring the software is not malicious, of timely
fixing security problems and helping the user if something is wrong.

Upstream Projects

However, this scheme also has a number of problems, and doesn’t fit
many use-cases of our software particularly well. Let’s have a look at
the problems of this scheme for many upstreams:

  • Upstream software vendors are fully dependent on downstream
    distributions to package their stuff. It’s the downstream
    distribution that decides on schedules, packaging details, and how
    to handle support. Often upstream vendors want much faster release
    cycles than the downstream distributions follow.

  • Realistic testing is extremely unreliable and next to
    impossible. Since the end-user can run a variety of different
    package versions together, and expects the software he runs to just
    work on any combination, the test matrix explodes. If upstream tests
    its version on distribution X release Y, then there’s no guarantee
    that that’s the precise combination of packages that the end user
    will eventually run. In fact, it is very unlikely that the end user
    will, since most distributions probably updated a number of
    libraries the package relies on by the time the package ends up being
    made available to the user. The fact that each package can be
    individually updated by the user, and each user can combine library
    versions, plug-ins and executables relatively freely, results in a high
    risk of something going wrong.

  • Since there are so many different distributions in so many different
    versions around, if upstream tries to build and test software for
    them it needs to do so for a large number of distributions, which is
    a massive effort.

  • The distributions are actually quite different in many ways. In
    fact, they are different in a lot of the most basic
    functionality. For example, the path where to put x86-64 libraries
    is different on Fedora and Debian-derived systems.

  • Developing software for a number of distributions and versions is
    hard: if you want to do it, you need to actually install them, each
    one of them, manually, and then build your software for each.

  • Since most downstream distributions have strict licensing and
    trademark requirements (and rightly so), any kind of closed source
    software (or otherwise non-free) does not fit into this scheme at
    all.

This all together makes it really hard for many upstreams to work
nicely with the current way how Linux works. Often they try to improve
the situation for them, for example by bundling libraries, to make
their test and build matrices smaller.

System Vendors

The toolbox approach of classic Linux distributions is fantastic for
people who want to put together their individual system, nicely
adjusted to exactly what they need. However, this is not really how
many of today’s Linux systems are built, installed or updated. If you
build any kind of embedded device, a server system, or even user
systems, you frequently do your work based on complete system images,
that are linearly versioned. You build these images somewhere, and
then you replicate them atomically to a larger number of systems. On
these systems, you don’t install or remove packages, you get a defined
set of files, and besides installing or updating the system there are
no ways how to change the set of tools you get.

The current Linux distributions are not particularly good at providing
for this major use-case of Linux. Their strict focus on individual
packages as well as package managers as end-user install and update
tool is incompatible with what many system vendors want.

Users

The classic Linux distribution scheme is frequently not what end users
want, either. Many users are used to app markets like those on Android,
Windows, or iOS/macOS. Markets are a platform that doesn’t package, build or
maintain software like distributions do, but simply allows users to
quickly find and download the software they need, with the app vendor
responsible for keeping the app updated, secured, and all that on the
vendor’s release cycle. Users tend to be impatient. They want their
software quickly, and the fine distinction between trusting a single
distribution or a myriad of app developers individually is usually not
important for them. The companies behind the marketplaces usually try
to improve this trust problem by providing sand-boxing technologies: as
a replacement for the distribution that audits, vets, builds and
packages the software and thus allows users to trust it to a certain
level, these vendors try to find technical solutions to ensure that
the software they offer for download can’t be malicious.

Existing Approaches To Fix These Problems

Now, all the issues pointed out above are not new, and there are
sometimes quite successful attempts to do something about it. Ubuntu
Apps, Docker, Software Collections, ChromeOS, CoreOS all fix part of
this problem set, usually with a strict focus on one facet of Linux
systems. For example, Ubuntu Apps focus strictly on end user (desktop)
applications, and don’t care about how we built/update/install the OS
itself, or containers. Docker OTOH focuses on containers only, and
doesn’t care about end-user apps. Software Collections tries to focus
on the development environments. ChromeOS focuses on the OS itself,
but only for end-user devices. CoreOS also focuses on the OS, but
only for server systems.

The approaches they find are usually good at specific things, and use
a variety of different technologies, on different layers. However,
none of these projects tried to fix these problems in a generic way,
for all uses, right in the core components of the OS itself.

Linux has come to tremendous success because its kernel is so
generic: you can build supercomputers and tiny embedded devices out of
it. It’s time we came up with a basic, reusable scheme for solving
the problem set described above that is equally generic.

What We Want

The systemd cabal (Kay Sievers, Harald Hoyer, Daniel Mack, Tom
Gundersen, David Herrmann, and yours truly) recently met in Berlin
about all these things, and tried to come up with a scheme that is
somewhat simple, but tries to solve the issues generically, for all
use-cases, as part of the systemd project. All that in a way that is
somewhat compatible with the current scheme of distributions, to allow
a slow, gradual adoption. Also, and that’s something one cannot stress
enough: the toolbox scheme of classic Linux distributions is
actually a good one, and for many cases the right one. However, we
need to make sure we make distributions relevant again for all
use-cases, not just those of highly individualized systems.

Anyway, so let’s summarize what we are trying to do:

  • We want an efficient way that allows vendors to package their
    software (regardless if just an app, or the whole OS) directly for
    the end user, and know the precise combination of libraries and
    packages it will operate with.

  • We want to allow end users and administrators to install these
    packages on their systems, regardless which distribution they have
    installed on it.

  • We want a unified solution that ultimately can cover updates for
    full systems, OS containers, end user apps, programming ABIs, and
    more. These updates shall be double-buffered, (at least). This is an
    absolute necessity if we want to prepare the ground for operating
    systems that manage themselves, that can update safely without
    administrator involvement.

  • We want our images to be trustable (i.e. signed). In fact we want a
    fully trustable OS, with images that can be verified by a full
    trust chain from the firmware (EFI SecureBoot!), through the boot loader, through the
    kernel, and initrd. Cryptographically secure verification of the
    code we execute is relevant on the desktop (like ChromeOS does), but
    also for apps, for embedded devices and even on servers (in a post-Snowden
    world, in particular).

What We Propose

So much about the set of problems, and what we are trying to do. So,
now, let’s discuss the technical bits we came up with:

The scheme we propose is built around a number of concepts from btrfs
and Linux file system name-spacing. btrfs at this point already has a
large number of features that fit neatly in our concept, and the
maintainers are busy working on a couple of others we want to
eventually make use of.

As first part of our proposal we make heavy use of btrfs sub-volumes and
introduce a clear naming scheme for them. We name snapshots like this:

  • usr:<vendorid>:<architecture>:<version> — This refers to a full
    vendor operating system tree. It’s basically a /usr tree (and no
    other directories), in a specific version, with everything you need to boot
    it up inside it. The <vendorid> field is replaced by some vendor
    identifier, maybe a scheme like
    org.fedoraproject.FedoraWorkstation. The <architecture> field
    specifies a CPU architecture the OS is designed for, for example
    x86-64. The <version> field specifies a specific OS version, for
    example 23.4. An example sub-volume name could hence look like this:
    usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4

  • root:<name>:<vendorid>:<architecture> — This refers to an
    instance of an operating system. It’s basically a root directory,
    containing primarily /etc and /var (but possibly more). Sub-volumes
    of this type do not contain a populated /usr tree though. The
    <name> field refers to some instance name (maybe the host name of
    the instance). The other fields are defined as above. An example
    sub-volume name is
    root:revolution:org.fedoraproject.FedoraWorkstation:x86_64.

  • runtime:<vendorid>:<architecture>:<version> — This refers to a
    vendor runtime. A runtime here is supposed to be a set of
    libraries and other resources that are needed to run apps (for the
    concept of apps see below), all in a /usr tree. In this regard this
    is very similar to the usr sub-volumes explained above, however,
    while a usr sub-volume is a full OS and contains everything
    necessary to boot, a runtime is really only a set of
    libraries. You cannot boot it, but you can run apps with it. An
    example sub-volume name is: runtime:org.gnome.GNOME3_20:x86_64:3.20.1

  • framework:<vendorid>:<architecture>:<version> — This is very
    similar to a vendor runtime, as described above, it contains just a
    /usr tree, but goes one step further: it additionally contains all
    development headers, compilers and build tools, that allow
    developing against a specific runtime. For each runtime there should
    be a framework. When you develop against a specific framework in a
    specific architecture, then the resulting app will be compatible
    with the runtime of the same vendor ID and architecture. Example:
    framework:org.gnome.GNOME3_20:x86_64:3.20.1

  • app:<vendorid>:<runtime>:<architecture>:<version> — This
    encapsulates an application bundle. It contains a tree that at
    runtime is mounted to /opt/<vendorid>, and contains all the
    application’s resources. The <vendorid> could be a string like
    org.libreoffice.LibreOffice, the <runtime> refers to one the
    vendor id of one specific runtime the application is built for, for
    example org.gnome.GNOME3_20:3.20.1. The <architecture> and
    <version> refer to the architecture the application is built for,
    and of course its version. Example:
    app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133

  • home:<user>:<uid>:<gid> — This sub-volume shall refer to the home
    directory of the specific user. The <user> field contains the user
    name, the <uid> and <gid> fields the numeric Unix UIDs and GIDs
    of the user. The idea here is that in the long run the list of
    sub-volumes is sufficient as a user database (but see
    below). Example: home:lennart:1000:1000.
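To make the naming scheme concrete, here is a small shell sketch that splits one of the example sub-volume names above into its fields; the name is taken verbatim from the usr example in the text.

```shell
# Split a sub-volume name of the proposed scheme into its
# colon-separated fields, using the usr example from the text.
name="usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4"
IFS=':' read -r type vendorid arch version <<EOF
$name
EOF
echo "type=$type"
echo "vendorid=$vendorid"
echo "arch=$arch"
echo "version=$version"
```

Note that the field count varies by sub-volume type (root and app sub-volumes carry different fields), so a real parser would first dispatch on the leading type field.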

btrfs partitions that adhere to this naming scheme should be clearly
identifiable. It is our intention to introduce a new GPT partition type
ID for this.

How To Use It

After we have introduced this naming scheme, let’s see what we can build
out of it:

  • When booting up a system we mount the root directory from one of the
    root sub-volumes, and then mount /usr from a matching usr
    sub-volume. Matching here means it carries the same <vendorid>
    and <architecture>. By default we should pick the matching usr
    sub-volume with the newest version.

  • When we boot up an OS container, we do exactly the same as when
    we boot up a regular system: we simply combine a usr sub-volume
    with a root sub-volume.

  • When we enumerate the system’s users we simply go through the
    list of home snapshots.

  • When a user authenticates and logs in we mount his home
    directory from his snapshot.

  • When an app is run, we set up a new file system name-space, mount the
    app sub-volume to /opt/<vendorid>/, and the appropriate runtime
    sub-volume the app picked to /usr, as well as the user’s
    /home/$USER to its place.

  • When a developer wants to develop against a specific runtime he
    installs the right framework, and then temporarily transitions into
    a name space where /usr is mounted from the framework sub-volume, and
    /home/$USER from his own home directory. In this name space he then
    runs his build commands. He can build in multiple name spaces at the
    same time, if he intends to build software for multiple runtimes or
    architectures at the same time.
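The first step above — combining a root sub-volume with a matching usr sub-volume — could look roughly like the following. The device path and sub-volume names are hypothetical, and the mount commands are only printed here rather than executed:

```shell
# Dry-run sketch of the boot-time assembly: mount the root sub-volume,
# then mount the matching usr sub-volume (same vendor ID and
# architecture, newest version) on /usr. Device and names are examples.
dev=/dev/sda2
root_subvol="root:revolution:org.fedoraproject.FedoraWorkstation:x86_64"
usr_subvol="usr:org.fedoraproject.FedoraWorkstation:x86_64:23.4"

cmd_root="mount -o subvol=$root_subvol $dev /sysroot"
cmd_usr="mount -o subvol=$usr_subvol $dev /sysroot/usr"
echo "$cmd_root"
echo "$cmd_usr"
```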

Instantiating a new system or OS container (which is exactly the same
in this scheme) just consists of creating a new appropriately named
root sub-volume. Completely naturally you can share one vendor OS
copy in one specific version with a multitude of container instances.
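Instantiation could then be as simple as a single btrfs command. The volume path and instance name below are illustrative, and the command is only printed here rather than executed:

```shell
# Sketch: creating a new system/OS container instance is just creating
# a new, appropriately named root sub-volume. Path and name are
# hypothetical examples.
volume=/mnt/volume
instance="root:testmachine:org.fedoraproject.FedoraWorkstation:x86_64"
cmd="btrfs subvolume create $volume/$instance"
echo "$cmd"
```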

Everything is double-buffered (or actually, n-fold-buffered), because
usr, runtime, framework, app sub-volumes can exist in multiple
versions. Of course, by default the execution logic should always pick
the newest release of each sub-volume, but it is up to the user to keep
multiple versions around, and possibly execute older versions, if he
desires to do so. In fact, like on ChromeOS this could even be handled
automatically: if a system fails to boot with a newer snapshot, the
boot loader can automatically revert back to an older version of the
OS.

An Example

Note that in result this allows installing not only multiple end-user
applications into the same btrfs volume, but also multiple operating
systems, multiple system instances, multiple runtimes, multiple
frameworks. Or to spell this out in an example:

Let’s say Fedora, Mageia and ArchLinux all implement this scheme,
and provide ready-made end-user images. Also, the GNOME, KDE, SDL
projects all define a runtime+framework to develop against. Finally,
both LibreOffice and Firefox provide their stuff according to this
scheme. You can now trivially install all of these into the same btrfs
volume:

  • usr:org.fedoraproject.WorkStation:x86_64:24.7
  • usr:org.fedoraproject.WorkStation:x86_64:24.8
  • usr:org.fedoraproject.WorkStation:x86_64:24.9
  • usr:org.fedoraproject.WorkStation:x86_64:25beta
  • usr:org.mageia.Client:i386:39.3
  • usr:org.mageia.Client:i386:39.4
  • usr:org.mageia.Client:i386:39.6
  • usr:org.archlinux.Desktop:x86_64:302.7.8
  • usr:org.archlinux.Desktop:x86_64:302.7.9
  • usr:org.archlinux.Desktop:x86_64:302.7.10
  • root:revolution:org.fedoraproject.WorkStation:x86_64
  • root:testmachine:org.fedoraproject.WorkStation:x86_64
  • root:foo:org.mageia.Client:i386
  • root:bar:org.archlinux.Desktop:x86_64
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.1
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.4
  • runtime:org.gnome.GNOME3_20:x86_64:3.20.5
  • runtime:org.gnome.GNOME3_22:x86_64:3.22.0
  • runtime:org.kde.KDE5_6:x86_64:5.6.0
  • framework:org.gnome.GNOME3_22:x86_64:3.22.0
  • framework:org.kde.KDE5_6:x86_64:5.6.0
  • app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133
  • app:org.libreoffice.LibreOffice:GNOME3_22:x86_64:166
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:39
  • app:org.mozilla.Firefox:GNOME3_20:x86_64:40
  • home:lennart:1000:1000
  • home:hrundivbakshi:1001:1001

In the example above, we have three vendor operating systems
installed, each in three versions, one even in an additional beta
version. We have four system instances around, two of them Fedora:
one we presumably boot from regularly, the other we run for very
specific purposes in an OS container. We also have the runtimes for
two GNOME releases in multiple versions, plus one for KDE. Then, we
have the development trees for one version of KDE and GNOME around, as
well as two apps that make use of two releases of the GNOME
runtime. Finally, we have the home directories of two users.

Now, with the name-spacing concepts we introduced above, we can
actually relatively freely mix and match apps and OSes, or develop
against specific frameworks in specific versions on any operating
system. It doesn’t matter if you booted your ArchLinux instance, or
your Fedora one, you can execute both LibreOffice and Firefox just
fine, because at execution time they get matched up with the right
runtime, and all of them are available from all the operating systems
you installed. You get the precise runtime that the upstream vendor of
Firefox/LibreOffice did their testing with. It doesn’t matter anymore
which distribution you run, and which distribution the vendor prefers.

Also, given that the user database is actually encoded in the
sub-volume list, it doesn’t matter which system you boot, the
distribution should be able to find your local users automatically,
without any configuration in /etc/passwd.
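Since the sub-volume names carry the user names and UID/GID pairs, a login
manager could synthesize passwd-style records directly from them. A minimal
sketch, assuming the naming scheme above; real code would read the names
from `btrfs subvolume list /`, here they are supplied as plain text purely
for illustration:

```shell
#!/bin/sh
# Sketch: derive passwd-style user records from home sub-volume names.
# Real code would read the names from `btrfs subvolume list /`; here
# they are supplied as text, purely for illustration.
users_from_subvolumes() {
    while IFS=: read -r type name uid gid; do
        [ "$type" = "home" ] || continue   # only home sub-volumes encode users
        printf '%s:x:%s:%s::/home/%s:/bin/sh\n' "$name" "$uid" "$gid" "$name"
    done
}

printf '%s\n' \
    "usr:org.fedoraproject.WorkStation:x86_64:24.7" \
    "home:lennart:1000:1000" \
    "home:hrundivbakshi:1001:1001" | users_from_subvolumes
# → lennart:x:1000:1000::/home/lennart:/bin/sh
# → hrundivbakshi:x:1001:1001::/home/hrundivbakshi:/bin/sh
```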

Building Blocks

With this naming scheme, plus the way we can combine sub-volumes at
execution time, we have already come quite far. But how do we actually get these
sub-volumes onto the final machines, and how do we update them? Well,
btrfs has a feature they call “send-and-receive”. It basically allows
you to “diff” two file system versions, and generate a binary
delta. You can generate these deltas on a developer’s machine and then
push them into the user’s system, and he’ll get the exact same
sub-volume too. This is how we envision installation and updating of
operating systems, applications, runtimes, frameworks. At installation
time, we simply deserialize an initial send-and-receive delta into
our btrfs volume, and later, when a new version is released we just
add in the few bits that are new, by dropping in another
send-and-receive delta under a new sub-volume name. And we do it
exactly the same for the OS itself, for a runtime, a framework or an
app. There’s no technical distinction anymore. The underlying
operation for installing apps, runtimes, frameworks, and vendor OSes, as well
as the operation for updating them, is done the exact same way for all.
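Concretely, installation and update would then be little more than
deserializing streams. A dry-run sketch of the flow just described, where the
`run` wrapper only prints the commands instead of executing them, and the
stream file names, paths and versions are made up for illustration:

```shell
#!/bin/sh
# Dry-run sketch of installing/updating a vendor OS tree with btrfs
# send-and-receive. `run` only prints what would be executed; stream
# file names and versions are made up for illustration.
run() { echo "$@"; }

# On the vendor's build machine: generate a full stream for the initial
# install, and an incremental delta between two adjacent versions
run btrfs send /build/usr:org.fedoraproject.WorkStation:x86_64:24.7
run btrfs send -p /build/usr:org.fedoraproject.WorkStation:x86_64:24.7 \
               /build/usr:org.fedoraproject.WorkStation:x86_64:24.8

# On the user's machine: deserialize into the local btrfs volume; the
# update lands under a new sub-volume name next to the old one
run btrfs receive -f WorkStation-24.7.full /
run btrfs receive -f WorkStation-24.8.delta /
```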

Of course, keeping multiple full /usr trees around sounds like an
awful lot of waste, after all they will contain a lot of very similar
data, since a lot of resources are shared between distributions,
frameworks and runtimes. However, thankfully btrfs actually is able to
de-duplicate this for us. If we add in a new app snapshot, this simply
adds in the new files that changed. Moreover different runtimes and
operating systems might actually end up sharing the same tree.

Even though the example above focuses primarily on the end-user,
desktop side of things, the concept is also extremely powerful in
server scenarios. For example, it is easy to build your own usr
trees and deliver them to your hosts using this scheme. The usr
sub-volumes are supposed to be something that administrators can put
together. After deserializing them into a couple of hosts, you can
trivially instantiate them as OS containers there, simply by adding a
new root sub-volume for each instance, referencing the usr tree you
just put together. Instantiating OS containers hence becomes as easy
as creating a new btrfs sub-volume. And you can still update the images
nicely, get fully double-buffered updates and everything.
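Spelled out, instantiating a container from a deserialized usr tree could look
like this; again a dry-run sketch where `run` only prints the commands, the
instance and tree names are illustrative, and the systemd-nspawn invocation is
an assumption:

```shell
#!/bin/sh
# Dry-run sketch: create an OS container instance from an existing usr
# sub-volume. `run` only prints the commands; names are illustrative.
run() { echo "$@"; }

usr="usr:org.fedoraproject.WorkStation:x86_64:24.8"
root="root:webserver1:org.fedoraproject.WorkStation:x86_64"

# Instantiation is just creating one new root sub-volume per instance...
run btrfs subvolume create "/$root"

# ...and booting it with the (read-only) usr tree mounted in. The
# systemd-nspawn invocation is an assumption for illustration.
run systemd-nspawn --directory="/$root" --bind-ro="/$usr:/usr" --boot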

And of course, this scheme also applies nicely to embedded
use-cases. Regardless of whether you build a TV, an IVI system or a
phone: you can put together your OS versions as usr trees, and then use
the btrfs send-and-receive facilities to deliver them to the systems, and
update them there.

Many people when they hear the word “btrfs” instantly reply with “is
it ready yet?”. Thankfully, most of the functionality we really need
here is strictly read-only. With the exception of the home
sub-volumes (see below) all snapshots are strictly read-only, and are
delivered as immutable vendor trees onto the devices. They never are
changed. Even if btrfs might still be immature, for this kind of
read-only logic it should be more than good enough.

Note that this scheme also enables doing fat systems: for example,
an installer image could include a Fedora version compiled for x86-64,
one for i386, one for ARM, all in the same btrfs volume. Due to btrfs’
de-duplication they will share as much as possible, and when the image
is booted up the right sub-volume is automatically picked. Something
similar of course applies to the apps too!

This also allows us to implement something that we like to call
Operating-System-As-A-Virus. Installing a new system is little more
than:

  • Creating a new GPT partition table
  • Adding an EFI System Partition (FAT) to it
  • Adding a new btrfs volume to it
  • Deserializing a single usr sub-volume into the btrfs volume
  • Installing a boot loader into the EFI System Partition
  • Rebooting
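Written out as commands, the steps above might look roughly like this; a
dry-run sketch where `run` only prints what would be executed, and the device
name, partition sizes, stream file name and boot-loader command are all
placeholders:

```shell
#!/bin/sh
# Dry-run sketch of "Operating-System-As-A-Virus" installation.
# `run` only prints the commands; /dev/sdX and usr-tree.btrfs are
# placeholders, not real artifacts.
run() { echo "$@"; }
dev=/dev/sdX

run sgdisk --zap-all "$dev"                          # fresh GPT partition table
run sgdisk --new=1:0:+512M --typecode=1:ef00 "$dev"  # EFI System Partition
run sgdisk --new=2:0:0 "$dev"                        # rest for btrfs
run mkfs.vfat "${dev}1"
run mkfs.btrfs "${dev}2"
run mount "${dev}2" /mnt
run btrfs receive -f usr-tree.btrfs /mnt             # deserialize the usr sub-volume
run bootctl install                                  # install a boot loader (illustrative)
run reboot
```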

Now, since the only real vendor data you need is the usr sub-volume,
you can trivially duplicate this onto any block device you want. Let’s
say you are a happy Fedora user, and you want to provide a friend with
his own installation of this awesome system, all on a USB stick. All
you have to do for this is do the steps above, using your installed
usr tree as source to copy. And there you go! And you don’t have to
be afraid that any of your personal data is copied too, as the usr
sub-volume is the exact version your vendor provided you with. Or in
other words: there’s no distinction anymore between installer images
and installed systems. It’s all the same. Installation becomes
replication, nothing more. Live-CDs and installed systems can be fully
identical.

Note that in this design apps are actually developed against a single,
very specific runtime that contains all the libraries they can link
against (including a specific glibc version!). Any library that is not
included in the runtime the developer picked must be included in the
app itself. This is similar to how apps on Android declare one very
specific Android version they are developed against. This greatly
simplifies application installation, as there’s no dependency hell:
each app pulls in one runtime, and the app is actually free to pick
which one, as you can have multiple installed, though only one is used
by each app.
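The runtime pinning can be illustrated with a tiny resolver: given an app
sub-volume name, find the newest installed runtime matching the release the
app declares. A sketch assuming the naming scheme above; the matching of the
short runtime label (e.g. GNOME3_20) to the full reverse-domain name by suffix
is my own guess, not a specified algorithm:

```shell
#!/bin/sh
# Sketch: resolve which runtime sub-volume an app should be combined
# with at execution time. The suffix-matching heuristic is an
# assumption for illustration, not a specified algorithm.
resolve_runtime() {
    app="$1"       # e.g. app:org.mozilla.Firefox:GNOME3_20:x86_64:40
    installed="$2" # newline-separated runtime sub-volume names
    wanted=$(echo "$app" | cut -d: -f3)  # runtime label the app declares
    arch=$(echo "$app" | cut -d: -f4)
    echo "$installed" \
        | grep "^runtime:.*\.$wanted:$arch:" \
        | sort -t: -V -k4 \
        | tail -n1   # pick the newest installed version
}

runtimes="runtime:org.gnome.GNOME3_20:x86_64:3.20.1
runtime:org.gnome.GNOME3_20:x86_64:3.20.4
runtime:org.gnome.GNOME3_20:x86_64:3.20.5
runtime:org.gnome.GNOME3_22:x86_64:3.22.0"

resolve_runtime "app:org.mozilla.Firefox:GNOME3_20:x86_64:40" "$runtimes"
# → runtime:org.gnome.GNOME3_20:x86_64:3.20.5
```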

Also note that with operating systems built this way you will never
see “half-updated” systems, as is common when a system is updated using
RPM/dpkg. When updating the system the code will either run the old or
the new version, but it will never see part of the old files and part
of the new files. This is the same for apps, runtimes, and frameworks,
too.

Where We Are Now

We are currently working on a lot of the groundwork necessary for
this. This scheme relies on the ability to monopolize the
vendor OS resources in /usr, which is the key idea of what I described in
Factory Reset, Stateless Systems, Reproducible Systems & Verifiable Systems
a few weeks back. Then, of course, for the full desktop app concept we
need a strong sandbox that does more than just hide files from the
file system view. After all, with an app concept like the above the
primary interfacing between the executed desktop apps and the rest of the
system is via IPC (which is why we work on kdbus and teach it all
kinds of sand-boxing features), and the kernel itself. Harald Hoyer has
started working on generating the btrfs send-and-receive images based
on Fedora.

Getting to the full scheme will take a while. Currently we have many
of the building blocks ready, but some major items are missing. For
example, we push quite a few problems into btrfs that other solutions
try to solve in user space. One of them is actually
signing/verification of images. The btrfs maintainers are working on
adding this to the code base, but currently nothing exists. This
functionality is essential though to come to a fully verified system
where a trust chain exists all the way from the firmware to the
apps. Also, to make the home sub-volume scheme fully workable we
actually need encrypted sub-volumes, so that the sub-volume’s
pass-phrase can be used for authenticating users in PAM. This doesn’t
exist either.

Working towards this scheme is a gradual process. Many of the steps we
require for this are useful outside of the grand scheme though, which
means we can slowly work towards the goal, and our users can already
take benefit of what we are working on as we go.

Also, and most importantly, this is not really a departure from
traditional operating systems:

Each app, each runtime and each OS sees a traditional Unix hierarchy with
/usr, /home, /opt, /var, /etc. It executes in an environment that is
pretty much identical to how it would run on traditional systems.

There’s no need to fully move to a system that uses only btrfs and
follows strictly this sub-volume scheme. For example, we intend to
provide implicit support for systems that are installed on ext4 or
xfs, or that are put together with traditional packaging tools such as
RPM or dpkg: if the user tries to install a
runtime/app/framework/OS image on a system that doesn’t use btrfs so
far, it can just create a loop-back btrfs image in /var, and push the
data into that. Even us developers will run our stuff like this for a
while, after all this new scheme is not particularly useful for highly
individualized systems, and we developers usually tend to run
systems like that.

Also note that this is in no way a departure from packaging systems like
RPM or DEB. Even if the new scheme we propose is used for installing
and updating a specific system, it is RPM/DEB that is used to put
together the vendor OS tree initially. Hence, even in this scheme
RPM/DEB are highly relevant, though not strictly as an end-user tool
anymore, but as a build tool.

So Let’s Summarize Again What We Propose

  • We want a unified scheme for installing and updating OS images,
    user apps, runtimes and frameworks.

  • We want a unified scheme for relatively freely mixing OS
    images, apps, runtimes and frameworks on the same system.

  • We want a fully trusted system, where cryptographic verification of
    all executed code can be done, all the way down to the firmware, as a
    standard feature of the system.

  • We want to allow app vendors to write their programs against very
    specific frameworks, knowing that they will end up being executed
    with the exact set of libraries they chose.

  • We want to allow parallel installation of multiple OSes and versions
    of them, multiple runtimes in multiple versions, as well as multiple
    frameworks in multiple versions. And of course, multiple apps in
    multiple versions.

  • We want everything double buffered (or actually n-fold buffered), to
    ensure we can reliably update/rollback versions, in particular to
    safely do automatic updates.

  • We want a system where updating a runtime, OS, framework, or OS
    container is as simple as adding in a new snapshot and restarting
    the runtime/OS/framework/OS container.

  • We want a system where we can easily instantiate a number of OS
    instances from a single vendor tree, with zero difference whether
    they are booted on bare metal, in a VM, or as a container.

  • We want to enable Linux to have an open scheme that people can use
    to build app markets and similar schemes, not restricted to a
    specific vendor.

Final Words

I’ll be talking about this at LinuxCon Europe in October. I originally
intended to discuss this at the Linux Plumbers Conference (which I
assumed was the right forum for this kind of major plumbing level
improvement), and at linux.conf.au, but there was no interest in my
session submissions there…

Of course this is all work in progress. These are our current ideas we
are working towards. As we progress we will likely change a number of
things. For example, the precise naming of the sub-volumes might look
very different in the end.

Of course, we are developers of the systemd project. Implementing this
scheme is not just a job for the systemd developers. This is a
reinvention of how distributions work, and hence needs strong support from
the distributions. We really hope we can trigger some interest by
publishing this proposal now, to get the distributions on board. This
after all is explicitly not supposed to be a solution for one specific
project and one specific vendor product, we care about making this
open, and solving it for the generic case, without cutting corners.

If you have any questions about this, you know how you can reach us
(IRC, mail, G+, …).

The future is going to be awesome!

systemd for Developers II

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/socket-activation2.html

It has been way too long since I posted the first
episode
of my systemd for Developers series. Here’s finally the
second part. Make sure you read the first episode of the series before
you start with this part since I’ll assume the reader grokked the wonders
of socket activation.

Socket Activation, Part II

This time we’ll focus on adding socket activation support to real-life
software, more specifically the CUPS printing server. Most current Linux
desktops run CUPS by default these days, since printing is so basic that it’s a
must have, and must just work when the user needs it. However, most desktop
CUPS installations probably don’t actually see more than a handful of print
jobs each month. Even if you are a busy office worker you’re unlikely
to generate more than a couple of print jobs an hour on your PC. Also, printing is
not time critical. Whether a job takes 50ms or 100ms until it reaches the
printer matters little. As long as it is less than a few seconds the user
probably won’t notice. Finally, printers are usually peripheral hardware: they
aren’t built into your laptop, and you don’t always carry them around plugged
in. That all together makes CUPS a perfect candidate for lazy activation:
instead of starting it unconditionally at boot we just start it on-demand, when
it is needed. That way we can save resources, at boot and at runtime. However,
this kind of activation needs to take place transparently, so that the user
doesn’t notice that the print server was not actually running yet when he tried
to access it. To achieve that we need to make sure that the print server is
started as soon as at least one of three conditions holds:

  1. A local client tries to talk to the print server, for example because
    a GNOME application opened the printing dialog which tries to enumerate
    available printers.
  2. A printer is being plugged in locally, and it should be configured and
    enabled and then optionally the user be informed about it.
  3. At boot, when there’s still an unprinted print job lurking in the queue.

Of course, the desktop is not the only place where CUPS is used. CUPS can be
run in small and big print servers, too. In that case the amount of print jobs
is substantially higher, and CUPS should be started right away at boot. That
means that (optionally) we still want to start CUPS unconditionally at boot and
not delay its execution until when it is needed.

Putting this all together we need four kinds of activation to make CUPS work
well in all situations at minimal resource usage: socket based activation (to
support condition 1 above), hardware based activation (to support condition 2),
path based activation (for condition 3) and finally boot-time activation (for
the optional server usecase). Let’s focus on these kinds of activation in more
detail, and in particular on socket-based activation.

Socket Activation in Detail

To implement socket-based activation in CUPS we need to make sure that when
sockets are passed from systemd these are used to listen on instead of binding
them directly in the CUPS source code. Fortunately this is relatively easy to
do in the CUPS sources, since it already supports launchd-style socket
activation, as it is used on MacOS X (note that CUPS is nowadays an Apple
project). That means the code already has all the necessary hooks to add
systemd-style socket activation with minimal work.

To begin with our patching session we check out the CUPS sources.
Unfortunately CUPS is still stuck in unhappy Subversion country and not using
git yet. In order to simplify our patching work our first step is to use
git-svn to check it out locally in a way we can access it with the
usual git tools:

git svn clone http://svn.easysw.com/public/cups/trunk/ cups

This will take a while. After the command finished we use the wonderful
git grep to look for all occurrences of the word “launchd”, since
that’s probably where we need to add the systemd support too. This reveals scheduler/main.c
as main source file which implements launchd interaction.

Browsing through this file we notice that two functions are primarily
responsible for interfacing with launchd, the appropriately named
launchd_checkin() and launchd_checkout() functions. The
former acquires the sockets from launchd when the daemon starts up, the latter
terminates communication with launchd and is called when the daemon shuts down.
systemd’s socket activation interfaces are much simpler than those of launchd.
Due to that we only need an equivalent of the launchd_checkin() call,
and do not need a checkout function. Our own function
systemd_checkin() can be implemented very similarly to
launchd_checkin(): we look at the sockets we got passed and try to map
them to the ones configured in the CUPS configuration. If we got more sockets
passed than configured in CUPS we implicitly add configuration for them. If the
CUPS configuration includes definitions for more listening sockets those will
be bound natively in CUPS. That way we’ll very robustly listen on all ports
that are listed in either systemd or CUPS configuration.

Our function systemd_checkin() uses sd_listen_fds() from
sd-daemon.c to acquire the file descriptors. Then, we use
sd_is_socket() to map the sockets to the right listening configuration
of CUPS, in a loop for each passed socket. The loop corresponds very closely to
the one from launchd_checkin(), but is a lot simpler. Our patch so far looks like this.

Before we can test our patch, we add sd-daemon.c
and sd-daemon.h
as drop-in files to the package, so that sd_listen_fds() and
sd_is_socket() are available for use. After a few minimal changes to
the Makefile we are almost ready to test our socket activated version
of CUPS. The last missing step is creating two unit files for CUPS, one for the
socket (cups.socket), the
other for the service (cups.service). To make things
simple we just drop them in /etc/systemd/system and make sure systemd
knows about them, with systemctl daemon-reload.
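The two unit files can stay very small. A sketch of what they might contain;
the listening addresses, binary path and [Install] wiring are assumptions of
mine, and the files actually shipped with the patch may differ:

```ini
# /etc/systemd/system/cups.socket — sketch, addresses are illustrative
[Unit]
Description=CUPS Printing Service Sockets

[Socket]
ListenStream=631
ListenStream=/var/run/cups/cups.sock

[Install]
WantedBy=sockets.target

# /etc/systemd/system/cups.service — sketch, path is illustrative
[Unit]
Description=CUPS Printing Service

[Service]
ExecStart=/usr/sbin/cupsd -f

[Install]
WantedBy=printer.target
Also=cups.socket cups.path
```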

Now we are ready to test our little patch: we start the socket with
systemctl start cups.socket. This will bind the socket, but won’t
start CUPS yet. Next, we simply invoke lpq to test whether CUPS is
transparently started, and yupp, this is exactly what happens. We’ll get the
normal output from lpq as if we had started CUPS at boot already, and
if we then check with systemctl status cups.service we see that CUPS
was automatically spawned by our invocation of lpq. Our test
succeeded, socket activation worked!

Hardware Activation in Detail

The next trigger is hardware activation: we want to make sure that CUPS is
automatically started as soon as a local printer is found, regardless of
whether that happens as hotplug during runtime or as coldplug during
boot. Hardware activation in systemd is done via udev rules. Any udev device
that is tagged with the systemd tag can pull in units as needed via
the SYSTEMD_WANTS= environment variable. In the case of CUPS we don’t
even have to add our own udev rule to the mix, we can simply hook into what
systemd already does out-of-the-box with rules shipped upstream. More
specifically, it ships a udev rules file with the following lines:

SUBSYSTEM=="printer", TAG+="systemd", ENV{SYSTEMD_WANTS}="printer.target"
SUBSYSTEM=="usb", KERNEL=="lp*", TAG+="systemd", ENV{SYSTEMD_WANTS}="printer.target"
SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ENV{ID_USB_INTERFACES}=="*:0701??:*", TAG+="systemd", ENV{SYSTEMD_WANTS}="printer.target"

This pulls in the target unit printer.target as soon as at least
one printer is plugged in (supporting all kinds of printer ports). All we now
have to do is make sure that our CUPS service is pulled in by
printer.target and we are done. By placing a WantedBy=printer.target
line in the [Install] section of the service file, a
Wants dependency is created from printer.target to
cups.service as soon as the latter is enabled with systemctl
enable. The indirection via printer.target provides us with a
simple way to use systemctl enable and systemctl disable to
manage hardware activation of a service.

Path-based Activation in Detail

To ensure that CUPS is also started when there is a print job still queued
in the printing spool, we write a simple cups.path that
activates CUPS as soon as we find a file in /var/spool/cups.
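Such a path unit is just a few lines; a sketch of what it could look like
(the choice of directive and the exact directory to watch are assumptions):

```ini
# /etc/systemd/system/cups.path — sketch
[Unit]
Description=CUPS Printer Service Spool

[Path]
DirectoryNotEmpty=/var/spool/cups

[Install]
WantedBy=multi-user.target
```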

Boot-based Activation in Detail

Well, starting services at boot is obviously the most boring and well-known
way to spawn a service. This entire exercise was about making this unnecessary,
but we still need to support it for explicit print server machines. Since those
are probably the exception and not the common case we do not enable this kind
of activation by default, but leave it to the administrator to add it when
he deems it necessary, with a simple command (ln -s
/lib/systemd/system/cups.service
/etc/systemd/system/multi-user.target.wants/ to be precise).

So, now we have covered all four kinds of activation. To finalize our patch
we have a closer look at the [Install] section of cups.service, i.e.
the part of the unit file that controls how systemctl enable
cups.service
and systemctl disable cups.service will hook the
service into/unhook the service from the system. Since we don’t want to start
cups at boot we do not place WantedBy=multi-user.target in it like we
would do for those services. Instead we just place an Also= line that
makes sure that cups.path and cups.socket are
automatically also enabled if the user asks to enable cups.service
(they are enabled according to the [Install] sections in those unit
files).

As last step we then integrate our work into the build system. In contrast
to SysV init scripts systemd unit files are supposed to be distribution
independent. Hence it is a good idea to include them in the upstream tarball.
Unfortunately CUPS doesn’t use Automake, but Autoconf with a set of handwritten
Makefiles. This requires a bit more work to get our additions integrated, but
is not too difficult either. And this is how our final patch looks,
after we committed our work and ran git format-patch -1 on it to
generate a pretty git patch.

The next step of course is to get this patch integrated into the upstream
and Fedora packages (or whatever other distribution floats your boat). To make
this easy I have prepared a
patch for Tim that makes the necessary packaging changes for Fedora 16
, and
includes the patch intended for upstream linked above. Of course, ideally the
patch is merged upstream, however in the meantime we can already include it in
the Fedora packages.

Note that CUPS was particularly easy to patch since it already supported
launchd-style activation, patching a service that doesn’t support that yet is
only marginally more difficult. (Oh, and we have no plans to offer the complex
launchd API as compatibility kludge on Linux. It simply doesn’t translate very
well, so don’t even ask… ;-))

And that finishes our little blog story. I hope this quick walkthrough of how
to add socket activation (and the other forms of activation) to a package was
interesting to you, and will help you do the same for your own packages. If you
have questions, our IRC channel #systemd on freenode and
our mailing
list
are available, and we are always happy to help!