
The Biggest Myths

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/the-biggest-myths.html

Since we first proposed systemd
for inclusion in the distributions it has been frequently discussed in
many forums, mailing lists and conferences. In these discussions one
can often hear certain myths about systemd that are repeated over and
over again, but certainly don’t gain any truth by constant
repetition. Let’s take the time to debunk a few of them:

  1. Myth: systemd is monolithic.

    If you build systemd with all configuration options enabled you
    will build 69 individual binaries. These binaries all serve different
    tasks, and are neatly separated for a number of reasons. For example,
    we designed systemd with security in mind, hence most daemons run at
    minimal privileges (using kernel capabilities, for example) and are
    responsible for very specific tasks only, to minimize their security
    surface and impact. Also, systemd parallelizes the boot more than any
    prior solution. This parallelization happens by running more processes
    in parallel. Thus it is essential that systemd is nicely split up into
    many binaries and thus processes. In fact, many of these
    binaries[1] are separated out so nicely that they are very
    useful outside of systemd, too.

    A package involving 69 individual binaries can hardly be called
    monolithic. What is different from prior solutions however,
    is that we ship more components in a single tarball, and maintain them
    upstream in a single repository with a unified release cycle.

  2. Myth: systemd is about speed.

    Yes, systemd is fast (a pretty complete userspace boot-up in
    ~900ms, anyone?), but that’s primarily just a side-effect of doing
    things right. In fact, we never really sat down and optimized the
    last tiny bit of performance out of systemd. Instead, we quite
    frequently and knowingly picked the slightly slower code paths in
    order to keep the code more readable. This doesn’t mean being fast
    was irrelevant for us, but reducing systemd to its speed is
    certainly quite a misconception, since speed is not anywhere near
    the top of our list of goals.

  3. Myth: systemd’s fast boot-up is irrelevant for
    servers.

    That is just completely not true. Many administrators actually are
    keen on reduced downtimes during maintenance windows. In High
    Availability setups it’s kinda nice if the failed machine comes back
    up really fast. In cloud setups with a large number of VMs or
    containers the price of slow boots multiplies with the number of
    instances. Spending minutes of CPU and IO on really slow boots of
    hundreds of VMs or containers reduces your system’s density
    drastically, heck, it even costs you more energy. Slow boots can be
    quite financially expensive. Then, fast booting of containers allows
    you to implement a logic such as socket activated containers,
    allowing you to drastically increase the density of your cloud
    system.

    Of course, in many server setups boot-up is indeed irrelevant, but
    systemd is supposed to cover the whole range. And yes, I am aware
    that often it is the server firmware that costs the most time at
    boot-up, and the OS is anyway fast compared to that, but well, systemd
    is still supposed to cover the whole range (see above…), and no,
    not all servers have such bad firmware, and certainly not VMs and
    containers, which are servers of a kind, too.[2]

  4. Myth: systemd is incompatible with shell scripts.

    This is entirely bogus. We just don’t use them for the boot
    process, because we believe they aren’t the best tool for that
    specific purpose, but that doesn’t mean systemd is incompatible with
    them. You can easily run shell scripts as systemd services; heck, you
    can run scripts written in any language as systemd services, since
    systemd doesn’t care the slightest bit what’s inside your
    executable. Moreover, we heavily use shell scripts for our own
    purposes, for installing, building and testing systemd. And you can
    stick your scripts in the early boot process, use them for normal
    services, and you can even run them during late shutdown; there are
    practically no limits.
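
    To illustrate, here is a minimal sketch of a unit file that runs a
    shell script as a service (the script path and unit name are made
    up for this example):

    [Unit]
    Description=Example service implemented as a shell script

    [Service]
    # The script is started directly; systemd tracks it as the main process
    Type=simple
    ExecStart=/usr/local/bin/example.sh

    [Install]
    WantedBy=multi-user.target

    Drop something like that into /etc/systemd/system/example.service and
    the script is managed like any other service.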

  5. Myth: systemd is difficult.

    This also is entirely nonsense. A systemd platform is actually much
    simpler than traditional Linuxes because it unifies
    system objects and their dependencies as systemd units. The
    configuration file language is very simple, and we got rid of
    redundant configuration files. We provide uniform tools for much
    of the configuration of the system. The system is much less of a
    conglomerate than traditional Linuxes are. We also have pretty
    comprehensive documentation (all linked from the homepage) about
    pretty much every detail of systemd, and this not only covers
    admin/user-facing interfaces, but also developer APIs.

    systemd certainly comes with a learning curve. Everything
    does. However, we like to believe that it is actually simpler to
    understand systemd than a Shell-based boot for most people. Surprised
    we say that? Well, as it turns out, Shell is not a pretty language to
    learn; its syntax is arcane and complex. systemd unit files are
    substantially easier to understand: they do not expose a programming
    language, but are simple and declarative by nature. That all said, if
    you are experienced in shell, then yes, adopting systemd will take a
    bit of learning.

    To make learning easy we tried hard to provide the maximum
    compatibility to previous solutions. But not only that, on many
    distributions you’ll find that some of the traditional tools will now
    even tell you — while executing what you are asking for — how you
    could do it with the newer tools instead, in a possibly nicer way.

    Anyway, the take-away is that systemd is probably as
    simple as such a system can be, and that we try hard to make it easy
    to learn. But yes, if you know sysvinit then adopting systemd will
    require a bit of learning, but quite frankly if you mastered sysvinit,
    then systemd should be easy for you.

  6. Myth: systemd is not modular.

    Not true at all. At compile time you have a number of
    configure switches to select what you want to build, and what
    not. And we document how you can select in even more detail what
    you need, going beyond our configure switches.

    This modularity is not totally unlike that of the Linux kernel,
    where you can select many features individually at compile time. If the
    kernel is modular enough for you then systemd should be pretty close,
    too.

  7. Myth: systemd is only for desktops.

    That is certainly not true. With systemd we try to cover pretty
    much the same range as Linux itself does. While we care for desktop
    uses, we also care pretty much the same way for server uses, and
    embedded uses as well. You can bet that Red Hat wouldn’t make it a
    core piece of RHEL7 if it wasn’t the best option for managing services
    on servers.

    People from numerous companies work on systemd. Car manufacturers
    build it into cars, Red Hat uses it for a server operating system, and
    GNOME uses many of its interfaces for improving the desktop. You find
    it in toys, in space telescopes, and in wind turbines.

    Most of the features I worked on most recently are probably relevant
    primarily on servers, such as container support, resource management
    or the security features. We cover desktop systems pretty well
    already, and there are a number of companies doing systemd
    development for embedded; some even offer consulting services for
    it.

  8. Myth: systemd was created as result of the NIH syndrome.

    This is not true. Before we began working on systemd we were
    pushing for Canonical’s Upstart to be widely adopted (and Fedora/RHEL
    used it too for a while). However, we eventually came to the
    conclusion that its design was inherently flawed at its core (at least
    in our eyes: most fundamentally, it leaves dependency management to
    the admin/developer, instead of solving this hard problem in code),
    and if something’s wrong in the core you better replace it, rather
    than fix it. This was hardly the only reason though; other things
    came into play as well, such as the licensing/contribution agreement
    mess around it. NIH wasn’t one of the reasons, though…[3]

  9. Myth: systemd is a freedesktop.org project.

    Well, systemd is certainly hosted at fdo, but freedesktop.org is
    little else but a repository for code and documentation. Pretty much
    any coder can request a repository there and dump his stuff there (as
    long as it’s somewhat relevant for the infrastructure of free
    systems). There’s no cabal involved, no “standardization” scheme, no
    project vetting, nothing. It’s just a nice, free, reliable place to
    have your repository. In that regard it’s a bit like SourceForge,
    github, kernel.org, just not commercial and without over-the-top
    requirements, and hence a good place to keep our stuff.

    So yes, we host our stuff at fdo, but the assumption implied by
    this myth, that there is a group of people who meet and then agree
    on what future free systems should look like, is entirely bogus.

  10. Myth: systemd is not UNIX.

    There’s certainly some truth in that. systemd’s sources do not
    contain a single line of code originating from original UNIX. However,
    we derive inspiration from UNIX, and thus there’s a ton of UNIX in
    systemd. For example, the UNIX idea of “everything is a file” finds
    reflection in that in systemd all services are exposed at runtime in a
    kernel file system, the cgroupfs. Then, one of the original
    features of UNIX was multi-seat support, based on built-in terminal
    support. Text terminals are hardly the state of the art in how you
    interface with your computer these days, however. With systemd we
    brought native multi-seat support back, but this time with full
    support for today’s hardware,
    covering graphics, mice, audio, webcams and more, and all that fully
    automatic, hotplug-capable and without configuration. In fact the
    design of systemd as a suite of integrated tools that each have their
    individual purposes but when used together are more than just the sum
    of the parts, that’s pretty much at the core of UNIX philosophy. Then,
    the way our project is handled (i.e. maintaining much of the core OS
    in a single git repository) is much closer to the BSD model (which is
    a true UNIX, unlike Linux) of doing things (where most of the core OS
    is kept in a single CVS/SVN repository) than things on Linux ever
    were.
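
    As a concrete illustration of the “everything is a file” point: each
    running service shows up as a directory in the cgroup file system,
    and you can inspect it with ordinary tools. A rough sketch (the exact
    mount point and the service name are only examples and may differ
    between versions and distributions):

    # List all active services as exposed in the cgroup tree
    ls /sys/fs/cgroup/systemd/system/

    # Show the PIDs that belong to one particular service
    cat /sys/fs/cgroup/systemd/system/crond.service/tasks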

    Ultimately, UNIX is something different for everybody. For us
    systemd maintainers it is something we derive inspiration from. For
    others it is a religion, and much like the other world religions there
    are different readings and understandings of it. Some define UNIX
    based on specific pieces of code heritage, others see it just as a set
    of ideas, others as a set of commands or APIs, and even others as a
    definition of behaviours. Of course, it is impossible to ever make all
    these people happy.

    Ultimately the question whether something is UNIX or not matters
    very little. Being technically excellent is hardly exclusive to
    UNIX. For us, UNIX is a major influence (heck, the biggest one), but
    we also have other influences. Hence in some areas systemd will be
    very UNIXy, and in others a little bit less.

  11. Myth: systemd is complex.

    There’s certainly some truth in that. Modern computers are complex
    beasts, and the OS running on them will hence have to be complex
    too. However, systemd is certainly not more complex than prior
    implementations of the same components. Rather, it’s simpler, and
    has less redundancy (see above). Moreover, building a simple OS based
    on systemd involves far fewer packages than a traditional Linux
    did. Fewer packages make it easier to build your system, and get rid
    of interdependencies and of much of the differing behaviour of every
    component involved.

  12. Myth: systemd is bloated.

    Well, bloated certainly has many different definitions. But in
    most definitions systemd is probably the opposite of bloat. Since
    systemd components share a common code base, they tend to share much
    more code for common code paths. Here’s an example: in a traditional
    Linux setup, sysvinit, start-stop-daemon, inetd, cron, dbus, all
    implemented a scheme to execute processes with various configuration
    options in a certain, hopefully clean environment. On systemd the code
    paths for all of this, for the configuration parsing as well as the
    actual execution, are shared. This means less code, fewer places for
    mistakes, less memory and cache pressure, and is thus a very good
    thing. And as a side-effect you actually get a ton more functionality
    for it…

    As mentioned above, systemd is also pretty modular. You can choose
    at build time which components you need, and which you don’t
    need. People can hence specifically choose the level of “bloat” they
    want.

    When you build systemd, it only requires three dependencies: glibc,
    libcap and dbus. That’s it. It can make use of more dependencies, but
    these are entirely optional.

    So, yeah, whichever way you look at it, it’s really not
    bloated.

  13. Myth: systemd being Linux-only is not nice to the BSDs.

    Completely wrong. The BSD folks are pretty much uninterested in
    systemd. If systemd was portable, this would change nothing, they
    still wouldn’t adopt it. And the same is true for the other Unixes in
    the world. Solaris has SMF, BSD has their own “rc” system, and they
    always maintained it separately from Linux. The init system is very
    close to the core of the entire OS. And these other operating systems
    hence define themselves among other things by their core
    userspace. The assumption that they’d adopt our core userspace if we
    just made it portable, is completely without any foundation.

  14. Myth: systemd being Linux-only makes it impossible for Debian to adopt it as default.

    Debian supports non-Linux kernels in their distribution. systemd
    won’t run on those. Is that a problem though, and should that hinder
    them from adopting systemd as default? Not really. The folks who ported
    Debian to these other kernels were willing to invest time in a massive
    porting effort, they set up test and build systems, and patched and
    built numerous packages for their goal. The maintenance of both a
    systemd unit file and a classic init script for the packaged services
    is a negligible amount of work compared to that, especially since
    those scripts more often than not exist already.

  15. Myth: systemd could be ported to other kernels if its maintainers just wanted to.

    That is simply not true. Porting systemd to other kernels is not
    feasible. We just use too many Linux-specific interfaces. For a few
    one might find replacements on other kernels, some features one might
    want to turn off, but for most this is not really possible. Here’s a
    small, far from comprehensive list: cgroups, fanotify, umount2(),
    /proc/self/mountinfo
    (including notification), /dev/swaps (same),
    udev, netlink,
    the structure of /sys, /proc/$PID/comm,
    /proc/$PID/cmdline, /proc/$PID/loginuid, /proc/$PID/stat,
    /proc/$PID/session, /proc/$PID/exe, /proc/$PID/fd, tmpfs, devtmpfs,
    capabilities, namespaces of all kinds, various prctl()s, numerous
    ioctls,
    the mount() system call and its semantics, selinux, audit,
    inotify, statfs, O_DIRECTORY, O_NOATIME, /proc/$PID/root, waitid(),
    SCM_CREDENTIALS, SCM_RIGHTS, mkostemp(), /dev/input, ...

    And no, if you look at this list and pick out the few where you can
    think of obvious counterparts on other kernels, then think again, and
    look at the others you didn’t pick, and the complexity of replacing
    them.

  16. Myth: systemd is not portable for no reason.

    Nonsense! We use the Linux-specific functionality because we need
    it to implement what we want. Linux has so many features that
    UNIX/POSIX didn’t have, and we want to empower the user with
    them. These features are incredibly useful, but only if they are
    actually exposed in a friendly way to the user, and that’s what we do
    with systemd.

  17. Myth: systemd uses binary configuration files.

    No idea who came up with this crazy myth, but it’s absolutely not
    true. systemd is configured pretty much exclusively via simple text
    files. A few settings you can also alter with the kernel command line
    and via environment variables. There’s nothing binary in its
    configuration (not even XML). Just plain, simple, easy-to-read text
    files.

  18. Myth: systemd is a feature creep.

    Well, systemd certainly covers more ground than it used to. It’s
    not just an init system anymore, but the basic userspace building
    block to build an OS from. However, we carefully make sure to keep most of
    the features optional. You can turn a lot off at compile time, and
    even more at runtime. Thus you can choose freely how much feature
    creeping you want.

  19. Myth: systemd forces you to do something.

    systemd is not the mafia. It’s Free Software, you can do with it
    whatever you want, and that includes not using it. That’s pretty much
    the opposite of “forcing”.

  20. Myth: systemd makes it impossible to run syslog.

    Not true. We carefully made sure when we introduced the journal
    that all data is also passed on to any syslog daemon
    running. In fact, if anything changed, it is that syslog now gets
    more complete data than it got before, since we now cover early
    boot stuff as well as STDOUT/STDERR of any system service.

  21. Myth: systemd is incompatible.

    We try very hard to provide the best possible compatibility with
    sysvinit. In fact, the vast majority of init scripts should work just
    fine on systemd, unmodified. However, there actually are indeed a few
    incompatibilities, but we try to document these
    and explain what to do about them. Ultimately every system
    that is not actually sysvinit itself will have a certain amount of
    incompatibilities with it since it will not share the exact same code
    paths.

    It is our goal to ensure that differences between the various
    distributions are kept at a minimum. That means unit files usually
    work just fine on a different distribution than the one you wrote them
    on, which is a big improvement over classic init scripts, which are
    very hard to write in a way that they run on multiple Linux distributions, due to
    numerous incompatibilities between them.

  22. Myth: systemd is not scriptable, because of its D-Bus use.

    Not true. Pretty much every single D-Bus interface systemd provides
    is also available in a command line tool, for example in systemctl,
    loginctl,
    timedatectl,
    hostnamectl,
    localectl
    and suchlike. You can easily call these tools from shell scripts; they
    open up pretty much the entire API from the command line with
    easy-to-use commands.

    That said, D-Bus actually has bindings for almost any scripting
    language this world knows. Even from the shell you can invoke
    arbitrary D-Bus methods with dbus-send
    or gdbus. If
    anything, this improves scriptability due to the good support of D-Bus
    in the various scripting languages.
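
    For instance, here are a few sketches of what such scripting can look
    like from a shell script (the unit names are only examples):

    # Query a service’s state with the command line tools
    systemctl show --property=ActiveState crond.service

    # List the current login sessions
    loginctl list-sessions

    # Or talk to the very same API directly over D-Bus
    dbus-send --system --print-reply \
        --dest=org.freedesktop.systemd1 \
        /org/freedesktop/systemd1 \
        org.freedesktop.systemd1.Manager.ListUnits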

  23. Myth: systemd requires you to use some arcane configuration
    tools instead of allowing you to edit your configuration files
    directly.

    Not true at all. We offer some configuration tools, and using them
    gets you a bit of additional functionality (for example, command line
    completion for all settings!), but there’s no need at all to use
    them. You can always edit the files in question directly if you wish,
    and that’s fully supported. Of course, sometimes you need to explicitly
    make a daemon reload its configuration after editing it,
    but that’s pretty much true for most UNIX services.
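
    As a rough sketch (the unit name is made up for the example):

    # Edit the unit file directly with your editor of choice
    vi /etc/systemd/system/example.service

    # Then tell systemd to re-read its configuration and restart the service
    systemctl daemon-reload
    systemctl restart example.service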

  24. Myth: systemd is unstable and buggy.

    Certainly not according to our data. We have been monitoring the
    Fedora bug tracker (and some others) closely for a long long time. The
    number of bugs is very low for such a central component of the OS,
    especially if you discount the numerous RFE bugs we track for the
    project. We are pretty good at keeping systemd out of the list of
    blocker bugs of the distribution. We have a relatively fast
    development cycle with mostly incremental changes to keep quality and
    stability high.

  25. Myth: systemd is not debuggable.

    False. Some people try to imply that the shell is a good
    debugger. Well, it isn’t really. In systemd we provide you with actual
    debugging features instead. For example: interactive debugging,
    verbose tracing, the ability to mask any component during boot, and
    more. Also, we provide documentation for it.

    It’s certainly well debuggable; we needed that for our own
    development work, after all. But we’ll grant you one thing: it uses
    different debugging tools, ones we believe are more appropriate for
    the purpose.
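
    To give a flavour, here is a hedged sketch of the kind of debugging
    knobs available (check the documentation of your systemd version;
    these are only examples):

    # Boot with verbose tracing by appending to the kernel command line:
    #   systemd.log_level=debug systemd.log_target=kmsg

    # See which services took how long during boot
    systemd-analyze blame

    # Temporarily mask a unit so that it cannot be started at all
    systemctl mask example.service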

  26. Myth: systemd makes changes for change’s sake.

    Very much untrue. We pretty much exclusively have technical
    reasons for the changes we make, and we explain them in the various
    pieces of documentation, wiki pages, blog articles, mailing list
    announcements. We try hard to avoid making incompatible changes, and
    if we do we try to document the why and how in detail. And if you
    wonder about something, just ask us!

  27. Myth: systemd is a Red-Hat-only project, the private property
    of some smart-ass developers, who use it to push their views to the
    world.

    Not true. Currently, there are 16 hackers with commit powers to the
    systemd git tree. Of these 16 only six are employed by Red Hat. The 10
    others are folks from ArchLinux, from Debian, from Intel, even from
    Canonical, Mandriva, Pantheon and a number of community folks with
    full commit rights. And they frequently commit big stuff, major
    changes. Then, there are 374 individuals with patches in our tree, and
    they too came from a number of different companies and backgrounds,
    and many of those have way more than one patch in the tree. The
    discussions about where we want to take systemd are done in the open,
    on our IRC channel (#systemd on freenode, you are always
    welcome), on our mailing list, and on public hackfests (such
    as our next one in Brno, you are invited). We regularly attend
    various conferences, to collect feedback, to explain what we are doing
    and why, like few others do. We maintain blogs, engage in social
    networks (we actually have some pretty interesting content on Google+,
    and our Google+ Community is pretty alive, too), and try really hard
    to explain the why and the how of what we do, and to listen to
    feedback and figure out where the current issues are (for example,
    from that feedback we compiled this list of often heard myths about
    systemd…).

    What most systemd contributors probably share is a rough idea of what a
    good OS should look like, and the desire to make it happen. However,
    by the very nature of the project being Open Source and rooted in the
    community, systemd is just what people want it to be, and if it’s not
    what they want then they can drive the direction with patches and
    code, and if that’s not feasible, then there are numerous other
    options to use, too; systemd is never exclusive.

    One goal of systemd is to unify the dispersed Linux landscape a
    bit. We try to get rid of many of the more pointless differences of
    the various distributions in various areas of the core OS. As part of
    that we sometimes adopt schemes that were previously used by only one
    of the distributions and push it to a level where it’s the default of
    systemd, trying to gently push everybody towards the same set of basic
    configuration. This is never exclusive though: distributions can
    continue to deviate from that if they wish; however, if they end up
    using the well-supported default their work becomes much easier and
    they might gain a feature or two. Now, as it turns out, more
    frequently than not we actually adopted schemes that were Debianisms
    rather than Fedoraisms/Redhatisms as the scheme best supported by
    systemd. For example, systems running systemd now generally store
    their hostname in /etc/hostname, something that used to be
    specific to Debian and now is used across distributions.

    One thing we’ll grant you though: we sometimes can be
    smart-asses. We try to be prepared whenever we open our mouths, in
    order to be able to back up what we claim with facts. That might make
    us appear as smart-asses.

    But in general, yes, some of the more influential contributors of
    systemd work for Red Hat, but they are in the minority, and systemd is
    a healthy, open community with different interests, different
    backgrounds, just unified by a few rough ideas where the trip should
    go, a community where code and its design counts, and certainly not
    company affiliation.

  28. Myth: systemd doesn’t support /usr split from the root directory.

    Nonsense. Since its beginnings systemd has supported the
    --with-rootprefix= option to its configure script,
    which allows you to tell systemd to neatly split up the stuff needed
    for early boot and the stuff needed for later on. All this logic is
    fully present and we keep it up-to-date right there in systemd’s build
    system.
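
    As a rough sketch of what that looks like at build time (the exact
    flags may vary between systemd versions):

    # An empty root prefix places the early-boot tools directly in the
    # root file system (assumption; check ./configure --help for your version)
    ./configure --prefix=/usr --with-rootprefix=
    make && make install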

    Of course, we still don’t think that actually
    booting with /usr unavailable is a good idea, but we
    support this just fine in our build system. This won’t fix the
    inherent problems of the scheme that you’ll encounter all across the
    board, but you can’t blame that on systemd, because in systemd we
    support this just fine.

  29. Myth: systemd doesn’t allow you to replace its components.

    Not true, you can turn off and replace pretty much any part of
    systemd, with very few exceptions. And those exceptions (such as
    journald) generally allow you to run an alternative side by side to
    it, while cooperating nicely with it.

  30. Myth: systemd’s use of D-Bus instead of sockets makes it intransparent.

    This claim is already contradictory in itself: D-Bus uses sockets
    as transport, too. Hence whenever D-Bus is used to send something
    around, a socket is used for that too. D-Bus is mostly a standardized
    serialization of messages to send over these sockets. If anything this
    makes it more transparent, since this serialization is well
    documented, understood and there are numerous tracing tools and
    language bindings for it. This is very much unlike the usual
    homegrown protocols the various classic UNIX daemons use to
    communicate locally.

Hmm, did I write I just wanted to debunk a “few” myths? Maybe these
were more than just a few… Anyway, I hope I managed to clear up a
couple of misconceptions. Thanks for your time.

Footnotes

[1] For example, systemd-detect-virt,
systemd-tmpfiles and
systemd-udevd are among them.

[2] Also, we are trying to do our little part on maybe
making this better. By exposing boot-time performance of the firmware
more prominently in systemd’s boot output we hope to shame the
firmware writers into cleaning up their stuff.

[3] And anyways, guess which project includes a library “libnih” — Upstart or systemd?[4]

[4] Hint: it’s not systemd!

The Most Awesome, Least-Advertised Fedora 17 Feature

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/multi-seat.html

There’s one feature in the upcoming Fedora 17 release that is
immensely useful but very little known, since its feature page
‘ckremoval’
does not explicitly refer to it in its name: true
automatic multi-seat support for Linux.

A multi-seat computer is a system that offers not only one local
seat for a user, but multiple, at the same time. A seat refers to a
combination of a screen, a set of input devices (such as mice and
keyboards), and maybe an audio card or webcam, as an individual local
workplace for a user. A multi-seat computer can drive an entire class
room of seats with only a fraction of the cost in hardware, energy,
administration and space: you only have one PC, which usually has more
than enough CPU power to drive 10 or more workplaces. (In fact, even a
netbook is fast enough to drive a couple of seats!) Automatic
multi-seat refers to an entirely automatically managed seat setup:
whenever a new seat is plugged in a new login screen immediately
appears — without any manual configuration — and when the seat is
unplugged all user sessions on it are removed without delay.

In Fedora 17 we added this functionality to the low-level user and
device tracking of systemd, replacing the previous ConsoleKit logic
that lacked support for automatic multi-seat. With all the ground work
done in systemd, udev and the other components of our plumbing layer
the last remaining bits were surprisingly easy to add.

Currently, the automatic multi-seat logic works best with the USB
multi-seat hardware from Plugable that you can buy cheaply on Amazon
(US). These devices require exactly zero configuration with the
new scheme implemented in Fedora 17: just plug them in at any time,
login screens pop up on them, and you have your additional
seats. Alternatively you can also assemble your seat manually with a
few easy loginctl attach
commands, from any kind of hardware you might have lying
around. To get a full seat you need multiple graphics cards, keyboards
and mice: one set for each seat. (Later on we’ll probably have a graphical
setup utility for additional seats, but that’s not a pressing issue we
believe, as the plug-n-play multi-seat support with the Plugable
devices is so awesomely nice.)
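
A rough sketch of what the manual assembly looks like (the seat name
and the sysfs device path are only examples; pick the devices you
actually want to assign):

# See which seats and devices logind currently knows about
loginctl list-seats
loginctl seat-status seat0

# Assign a graphics device (then keyboard, mouse, …) to a new seat
loginctl attach seat1 /sys/devices/pci0000:00/0000:00:02.0/drm/card1

# Undo all manual assignments again
loginctl flush-devices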

Plugable provided us for free with hardware for testing
multi-seat. They are also involved with the upstream development of
the USB DisplayLink driver for Linux. Due to their positive
involvement with Linux we can only recommend buying their
hardware. They are good guys, and support Free Software the way all
hardware vendors should! (And besides that, their hardware is also
nicely put together. For example, in contrast to most similar vendors
they actually assign proper vendor/product IDs to their USB hardware
so that we can easily recognize their hardware when plugged in to set
up automatic seats.)

Currently, all this magic is only implemented in the GNOME stack
with the biggest component getting updated being the GNOME Display
Manager. On the Plugable USB hardware you get a full GNOME Shell
session with all the usual graphical gimmicks, the same way as on any
other hardware. (Yes, GNOME 3 works perfectly fine on simpler graphics
cards such as these USB devices!) If you are hacking on a different
desktop environment, or on a different display manager, please have a
look at the
multi-seat documentation
we put together, and particularly at
our short piece about writing
display managers
which are multi-seat capable.

If you work on a major desktop environment or display manager and
would like to implement multi-seat support for it, but lack the
aforementioned Plugable hardware, we might be able to provide you with
the hardware for free. Please contact us directly, and we might be
able to send you a device. Note that we don’t have unlimited devices
available, hence we’ll probably not be able to pass hardware to
everybody who asks, and we will pass the hardware preferably to people
who work on well-known software or otherwise have contributed good
code to the community already. Anyway, if in doubt, ping us, and
explain to us why you should get the hardware, and we’ll consider you!
(Oh, and this not only applies to display managers, if you hack on some other
software where multi-seat awareness would be truly useful, then don’t
hesitate and ping us!)

Phoronix has this
story about this new multi-seat
support which is quite interesting and
full of pictures. Please have a look.

Plugable started a Pledge drive
to lower the price of the Plugable USB multi-seat terminals
further. It’s full of pictures (and a video showing all this in action!), and uses the code we now make
available in Fedora 17 as a base. Please consider pledging a few
bucks.

Recently David Zeuthen added
multi-seat support to udisks
as well. With this in place, a user
logged in on a specific seat can only see the USB storage plugged into
his individual seat, but does not see any USB storage plugged into any
other local seat. With this in place we closed the last missing bit of
multi-seat support in our desktop stack.

With this code in Fedora 17 we cover the big use cases of
multi-seat already: internet cafes, class rooms and similar
installations can provide PC workplaces cheaply and easily without any
manual configuration. Later on we want to build on this and make this
useful for different uses too: for example, the ability to get a login
screen as easily as plugging in a USB connector makes this not useful
only for saving money in setups for many people, but also in embedded
environments (consider monitoring/debugging screens made available via
this hotplug logic) or servers (get trivially quick local access to
your otherwise head-less server). To be truly useful in these areas we
need one more thing though: the ability to run a simple getty
(i.e. text login) on the seat, without necessarily involving a
graphical UI.

The well-known X successor Wayland already comes out of the box with multi-seat
support based on this logic.

Oh, and BTW, as Ubuntu appears to be “focussing” on “clarity” “in the
cloud” now ;-), and chose Upstart instead of systemd, this feature
won’t be available in Ubuntu any time soon. That’s (one detail of) the
price Ubuntu has to pay for choosing to maintain its own (largely
legacy, such as ConsoleKit) plumbing stack.

Multi-seat has a long history on Unix. Since the earliest days Unix
systems could be accessed by multiple local terminals at the same
time. Since then local terminal support (and hence multi-seat)
gradually moved out of view in computing. Very few machines these
days have more than one seat; the concept of terminals survived almost
exclusively in the context of PTYs (i.e. fully virtualized API
objects, disconnected from any real hardware seat) and VCs (i.e. a
single virtualized local seat), but hardly in any other way (well,
server setups still use serial terminals for emergency remote access,
but they almost never have more than one serial terminal). All that we
do in systemd is based on the ideas originally brought forward in
Unix; with systemd we now try to bring back a number of the good ideas
of Unix that have been lost along the roadside since the old days. For
example, in true Unix style we already started to expose the concept
of a service in the file system (in
/sys/fs/cgroup/systemd/system/), something where on Linux the
(often misunderstood) “everything is a file” mantra previously
fell short. With automatic multi-seat support we bring back support
for terminals, but updated with all the features of today’s desktops:
plug and play, zero configuration, full graphics, and not limited to
input devices and screens, but extending to all kinds of devices, such
as audio, webcams or USB memory sticks.

Anyway, this is all for now; I’d like to thank everybody who was
involved with making multi-seat work so nicely and natively on the
Linux platform. You know who you are! Thanks a ton!

Linux Plumbers Conference/Gnome Summit Recap

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/lpc2010-recap.html

Last week LPC and GS 2010 took place in Cambridge,
MA. Like in previous years, LPC showed again that — at least for me — it is one of
the most relevant Linux conferences in existence, if not the single most
relevant one.

Here’s a terse, far from comprehensive report of the different discussions I took
part in with various folks at the conference, in no particular order:

The Boot and Init
track led by Kay Sievers (Suse) was a great success. We had
exciting talks which I think helped quite a bit in clearing a few things up,
and hopefully will help us in consolidating the full Linux boot process among all
the components involved. We had talks covering everything from the BIOS boot,
to initrds, graphical boot splashes and systemd. Kay
Sievers and I spoke about systemd, also covering the state of it in the Fedora
and openSUSE distributions. Gustavo Barbieri (ProFUSION, Gentoo) and Michael
Biebl (Debian) gave interesting talks about systemd adoption in their
respective distributions. I was particularly interested in the various
statistics Michael showed about SysV/LSB init script usage in Debian, because
this gives an idea of how much work we have in front of us in the long run. A
longer discussion about the future of initrds and the logic necessary to find
the root file system on boot was quite enlightening. I think this track was
helpful to increase the unification and consolidation of the way Linux systems
boot up and are maintained during runtime.

Kay and I and some other folks sat down with Arjan van de Ven (Intel), to
talk about the prospects of systemd in Meego. The discussions were very
positive. In particular Arjan had some great suggestions regarding use of the
Simple Boot Flag
in systemd (expect this in one of the next versions) and
readahead. Before systemd can find adoption in Meego we’d have to add a small
number of features to systemd first, most of them should be easy to add.

Similarly, I sat down with Martin Pitt and James Hunt (both Canonical) and
discussed systemd in relation to Ubuntu. I think we managed to clear a lot of
things up, and have a good chance to improve cooperation between Ubuntu and
systemd in relation to APIs and maybe even more.

We talked to Thomas Gleixner regarding userspace notifications when the
wallclock time jumps relative to the monotonic clock. This is important to
systemd so that we can schedule calendar jobs similar to cron, but without
having to wake up periodically to check whether the wallclock time changed
relative to the monotonic clock so that we can recalculate the next
point in time a calendar event is triggered. There has been previous work in
this area in the kernel world, but nothing got merged. Thomas’ suggestion for how to
add this facility should be much easier to implement than anything proposed so far.

I also tried to talk Andreas Grünbacher into supporting file system
user extended attributes in various virtual file systems such as procfs,
cgroupfs, sysfs and tmpfs. I hope I convinced him that this would be a good
idea, since this would allow setting externally accessible attributes to all
kinds of kernel objects, such as processes and devices. This would not only
have uses in systemd (where we could easily store all meta information systemd
needs to know about a service in the cgroupfs via xattrs, so that systemd could
even crash or go away at any time and we still can read all runtime information
necessary beyond mere cgrouping from the file system when systemd comes to live
again) but also in the desktop environments, so that we could for example
attach the human readable application name, an icon or a desktop file to the
processes currently running, in a simple way where the data we attach follows
the lifecycle of the process itself.

The Audio track
went really well, too. I was particularly excited about Pierre-Louis Bossart’s
(Intel) plans regarding AC3 (and other codecs) support in PulseAudio, and the simplicity of his
approach. Also great was hearing about Laurent Pinchart’s project to expose
audio and video device routing to userspace. Finally, I really enjoyed David
Henningsson’s and Luke Yelavich’s (both Canonical) talk regarding tracking down audio bugs on
Ubuntu. I was really impressed by the elaborate tools they created to test
audio drivers on users’ machines. Pretty cool stuff. Maybe this can be extended
into a test suite for driver writers, because the current approach for driver
writers (i.e. “If PulseAudio works correctly, your driver is correct”) doesn’t
really scale (although I like the idea and take it as a compliment…). I also
liked the timechart profiling results Pierre showed me that he generated for
PulseAudio. Seems PulseAudio is behaving quite nicely these days.

Together with Harald Hoyer I got a demo of David Zeuthen’s disk assembly
daemon (stc), which makes RAID/MD/LVM assembly more dynamic. Great stuff, and I
think we convinced him to leave actual mounting of file systems to systemd
instead of doing it himself.

Harald and I also hashed out a few things to make integration between dracut
and systemd nicer (i.e. passing along profiling information between the two,
and information regarding the root fsck).

I also hope I convinced Ray Strode to make Plymouth actively listen to udev
for notifications about DRM devices, so that further synchronization between
udev and plymouth won’t be necessary, which both makes things more robust and a
little bit faster.

Kay and I talked to Greg Kroah-Hartman regarding the brokenness of
VT_WAITEVENT in the kernel TTY layer, and discussed what to do about this. After returning from the US Kay now
did the necessary hacking work to provide a minimal sysfs based solution that
allows userspace to query which TTYs /dev/console and
/dev/tty0 currently point, and get notifications when this changes.
This should allow us to greatly simplify ConsoleKit and make it possible to
add console-triggered activation to systemd (think: getty gets started the
moment you switch to its virtual terminal, not already at boot).

I also spent some time discussing the upcoming deadline scheduling kernel
logic with Dario, Dhaval and Tommaso regarding its possible use in PulseAudio.
I believe deadline scheduling is a useful tool to hand out real-time scheduling
to applications securely. As an easy path to supporting deadline scheduling in
PulseAudio I suggested patching RealtimeKit to optionally use deadline
scheduling for its clients. This would magically teach PA (and other clients) to
use deadline scheduling without further patching in the clients.

At GNOME Summit I sat down with Ryan Lortie and Will Thompson to discuss
the future of the D-Bus session bus and how we can move to a machine/user bus
instead in a nice way. We managed to come to a nice agreement here, and this
should enable us to introduce systemd for session management soonishly. Now we
only need to convince the other folks having stakes in D-Bus that what we
discussed is actually a good idea; expect more about this soon on dbus-devel.
Ryan and I also hashed out our remaining differences regarding the exact
semantics of XDG_RUNTIME_DIR, the result of which you can already
see on the XDG mailing list. Ryan already did the GLib work to introduce
XDG_RUNTIME_DIR and systemd has already supported it unofficially for a few
versions.

I quite appreciate how Michael Meeks quoted me in his final
keynote. 😉

There was a lot of other stuff going on at the conference, and what I
wrote above is in no way complete. And of course, besides all the technical
stuff, it was great meeting all the good Linux folks again, especially my
colleagues from Red Hat.

I am still amazed how systemd is received so positively and with open arms
all across the board. It’s particularly amazing that systemd at this point in
time has already been adopted by various companies in the automotive and
aviation industry.

systemd for Administrators, Part III

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/systemd-for-admins-3.html

Here’s the third installment of my ongoing
series about systemd for administrators.

How Do I Convert A SysV Init Script Into A systemd Service File?

Traditionally, Unix and Linux services (daemons) are started
via SysV init scripts. These are Bourne Shell scripts, usually
residing in a directory such as /etc/rc.d/init.d/ which when
called with one of a few standardized arguments (verbs) such as
start, stop or restart controls,
i.e. starts, stops or restarts the service in question. For starts
this usually involves invoking the daemon binary, which then forks a
background process (more precisely daemonizes). Shell scripts
tend to be slow, needlessly hard to read, very verbose and
fragile. Although they are immensely flexible (after all, they are just
code) some things are very hard to do properly with shell scripts,
such as ordering parallelized execution, correctly supervising processes
or just configuring execution contexts in all detail. systemd provides
compatibility with these shell scripts, but due to the shortcomings
pointed out it is recommended to install native systemd service files
for all daemons installed. Also, in contrast to SysV init scripts,
which have to be adjusted to the distribution, systemd service files
are compatible with any kind of distribution running systemd (of which
there are more and more these days…). What follows is a terse guide on how
to take a SysV init script and translate it into a native systemd
service file. Ideally, upstream projects should ship and install
systemd service files in their tarballs. If you have successfully
converted a SysV script according to the guidelines it might hence be
a good idea to submit the file as patch to upstream. How to prepare a
patch like that will be discussed in a later installment, suffice to
say at this point that the daemon(7)
manual page shipping with systemd contains a lot of useful information
regarding this.

So, let’s jump right in. As an example we’ll convert the init
script of the ABRT daemon into a systemd service file. ABRT is a
standard component of every Fedora install, and is an acronym for
Automatic Bug Reporting Tool, which pretty much describes what it
does, i.e. it is a service for collecting crash dumps. I have uploaded its SysV script
here.

The first step when converting such a script is to read it
(surprise surprise!) and distill the useful information from the
usually pretty long script. In almost all cases the script consists of
mostly boilerplate code that is identical or at least very similar in
all init scripts, and usually copied and pasted from one to the
other. So, let’s extract the interesting information from the script
linked above:

  • A description string for the service is “Daemon to detect
    crashing apps“. As it turns out, the header comments include a
    redundant number of description strings, some of them describing not
    so much the actual service as the init script to start it. systemd services
    include a description too, and it should describe the service and not
    the service file.
  • The LSB header[1] contains dependency
    information. systemd, due to its design around socket-based activation,
    usually needs no (or very few) manually configured
    dependencies. (For details regarding socket activation see the original
    announcement blog post.) In this case the dependency on
    $syslog (which encodes that abrtd requires a syslog daemon),
    is the only valuable information. While the header lists another
    dependency ($local_fs) this one is redundant with systemd as
    normal system services are always started with all local file systems
    available.
  • The LSB header suggests that this service should be started in
    runlevels 3 (multi-user) and 5 (graphical).
  • The daemon binary is /usr/sbin/abrtd

And that’s already it. The entire remaining content of this
115-line shell script is simply boilerplate or otherwise redundant
code: code that deals with synchronizing and serializing startup
(i.e. the code regarding lock files) or that outputs status messages
(i.e. the code calling echo), or simply parsing of the verbs (i.e. the
big case block).

From the information extracted above we can now write our systemd service file:

[Unit]
Description=Daemon to detect crashing apps
After=syslog.target

[Service]
ExecStart=/usr/sbin/abrtd
Type=forking

[Install]
WantedBy=multi-user.target

A little explanation of the contents of this file: The
[Unit] section contains generic information about the
service. systemd not only manages system services, but also devices,
mount points, timers, and other components of the system. The generic
term for all these objects in systemd is a unit, and the
[Unit] section encodes information about it that might be
applicable not only to services but also to the other unit types
systemd maintains. In this case we set the following unit settings: we
set the description string and configure that the daemon shall be
started after Syslog[2], similar to what is encoded in the
LSB header of the original init script. For this Syslog dependency we
create a dependency of type After= on a systemd unit
syslog.target. The latter is a special target unit in systemd
and is the standardized name to pull in a syslog implementation. For
more information about these standardized names see systemd.special(7). Note
that a dependency of type After= only encodes the suggested
ordering, but does not actually cause syslog to be started when abrtd
is — and this is exactly what we want, since abrtd actually works
fine even without syslog being around. However, if both are started
(and usually they are) then the order in which they are is controlled
with this dependency.

The next section is [Service] which encodes information
about the service itself. It contains all those settings that apply
only to services, and not the other kinds of units systemd maintains
(mount points, devices, timers, …). Two settings are used here:
ExecStart= takes the path to the binary to execute when the
service shall be started up. And with Type= we configure how
the service notifies the init system that it finished starting up. Since
traditional Unix daemons do this by returning to the parent process
after having forked off and initialized the background daemon we set
the type to forking here. That tells systemd to wait until
the start-up binary returns and then consider the processes still
running afterwards to be the daemon’s processes.

The final section is [Install]. It encodes information
about what the suggested installation should look like, i.e. under
which circumstances and by which triggers the service shall be
started. In this case we simply say that this service shall be started
when the multi-user.target unit is activated. This is a
special unit (see above) that basically takes the role of the classic
SysV Runlevel 3[3]. The setting WantedBy= has
little effect on the daemon during runtime. It is only read by the
systemctl enable command, which is the recommended way to
enable a service in systemd. This command will simply ensure that our
little service gets automatically activated as soon as
multi-user.target is requested, which it is on all normal
boots[4].

And that’s it. Now we already have a minimal working systemd
service file. To test it we copy it to
/etc/systemd/system/abrtd.service and invoke systemctl
daemon-reload. This will make systemd take notice of it, and now
we can start the service with it: systemctl start
abrtd.service. We can verify the status via systemctl status
abrtd.service. And we can stop it again via systemctl stop
abrtd.service. Finally, we can enable it, so that it is activated
by default on future boots with systemctl enable
abrtd.service.
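
Put together as a sequence of shell commands, the workflow from the
paragraph above looks roughly like this:

# Install the unit file and make systemd take notice of it
cp abrtd.service /etc/systemd/system/
systemctl daemon-reload

# Start, inspect and stop the service
systemctl start abrtd.service
systemctl status abrtd.service
systemctl stop abrtd.service

# Enable it so that it is activated by default on future boots
systemctl enable abrtd.service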

The service file above, while sufficient and basically a 1:1
translation (feature- and otherwise) of the SysV init script still has room for
improvement. Here it is a little bit updated:

[Unit]
Description=ABRT Automated Bug Reporting Tool
After=syslog.target

[Service]
Type=dbus
BusName=com.redhat.abrt
ExecStart=/usr/sbin/abrtd -d -s

[Install]
WantedBy=multi-user.target

So, what did we change? Two things: we improved the description
string a bit. More importantly however, we changed the type of the
service to dbus and configured the D-Bus bus name of the
service. Why did we do this? As mentioned classic SysV services
daemonize after startup, which usually involves double forking
and detaching from any terminal. While this is useful and necessary
when daemons are invoked via a script, this is unnecessary (and slow)
as well as counterproductive when a proper process babysitter such as
systemd is used. The reason for that is that the forked off daemon
process usually has little relation to the original process started by
systemd (after all the daemonizing scheme’s whole idea is to remove
this relation), and hence it is difficult for systemd to figure out
after the fork is finished which process belonging to the service is
actually the main process and which processes might just be
auxiliary. But that information is crucial to implement advanced
babysitting, i.e. supervising the process, automatic respawning on
abnormal termination, collecting crash and exit code information and
suchlike. In order to make it easier for systemd to figure out the
main process of the daemon we changed the service type to
dbus. The semantics of this service type are appropriate for
all services that take a name on the D-Bus system bus as the last step of
their initialization[5]. ABRT is one of those. With this setting systemd
will spawn the ABRT process, which will no longer fork (this is
configured via the -d -s switches to the daemon), and systemd
will consider the service fully started up as soon as
com.redhat.abrt appears on the bus. This way the process
spawned by systemd is the main process of the daemon, systemd has a
reliable way to figure out when the daemon is fully started up and
systemd can easily supervise it.

And that’s all there is to it. We have a simple systemd service
file now that encodes in 10 lines more information than the original
SysV init script encoded in 115. And even now there’s a lot of room
left for further improvement utilizing more features systemd
offers. For example, we could set Restart=restart-always to
tell systemd to automatically restart this service when it dies. Or,
we could use OOMScoreAdjust=-500 to ask the kernel to please
leave this process around when the OOM killer wreaks havoc. Or, we
could use CPUSchedulingPolicy=idle to ensure that abrtd
processes crash dumps in background only, always allowing the kernel
to give preference to whatever else might be running and needing CPU
time.
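
To make that concrete, these optional settings would simply be added to
the [Service] section of the unit file shown above, roughly like this
(using the setting names from this post; note that newer systemd
versions spell the restart setting Restart=always):

[Service]
Type=dbus
BusName=com.redhat.abrt
ExecStart=/usr/sbin/abrtd -d -s
Restart=restart-always
OOMScoreAdjust=-500
CPUSchedulingPolicy=idle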

For more information about the configuration options mentioned
here, see the respective man pages systemd.unit(5),
systemd.service(5),
systemd.exec(5). Or,
browse all of systemd’s man pages.

Of course, not all SysV scripts are as easy to convert as this
one. But gladly, as it turns out the vast majority actually are.

That’s it for today, come back soon for the next installment in our series.

Footnotes

[1] The LSB header of init scripts is a convention of
including meta data about the service in comment blocks at the top of
SysV init scripts and is
defined by the Linux Standard Base
. This was intended to
standardize init scripts between distributions. While most
distributions have adopted this scheme, the handling of the headers
varies greatly between the distributions, and in fact still makes it
necessary to adjust init scripts for every distribution. As such the LSB spec
never kept the promise it made.

[2] Strictly speaking, this dependency does not even have to
be encoded here, as it is redundant in a system where the Syslog
daemon is socket activatable. Modern syslog systems (for example
rsyslog v5) have been patched upstream to be socket-activatable. If
such a syslog service is used, configuration of the
After=syslog.target dependency is redundant and
implicit. However, to maintain compatibility with syslog services that
have not been updated we include this dependency here.

[3] At least how it used to be defined on Fedora.

[4] Note that in systemd the graphical bootup
(graphical.target, taking the role of SysV runlevel 5) is an
implicit superset of the console-only bootup
(multi-user.target, i.e. like runlevel 3). That means hooking
a service into the latter will also hook it into the
former.

[5] Actually the majority of services of the default Fedora
install now take a name on the bus after startup.

On IDs

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/ids.html

When programming software that cooperates with software running on behalf of
other users, other sessions or other computers it is often necessary to work with
unique identifiers. These can be bound to various hardware and software objects
as well as lifetimes. Often, when people look for such an ID to use they pick
the wrong one because the semantics and lifetime of the IDs are not clear. Here’s a
short, non-comprehensive list of IDs accessible on Linux and how you should or
should not use them.

Hardware IDs

  1. /sys/class/dmi/id/product_uuid: The main board product UUID, as
    set by the board manufacturer and encoded in the BIOS DMI information. It may
    be used to identify a mainboard and only the mainboard. It changes when the
    user replaces the main board. Also, often enough BIOS manufacturers write bogus
    serials into it. In addition, it is x86-specific. Access for unprivileged users
    is forbidden. Hence it is of little general use.
  2. CPUID/EAX=3 CPU serial number: A CPU UUID, as set by the
    CPU manufacturer and encoded on the CPU chip. It may be used to identify a CPU
    and only a CPU. It changes when the user replaces the CPU. Also, most modern
    CPUs don’t implement this feature anymore, and older computers tend to disable
    this option by default, controllable via a BIOS Setup option. In addition, it
    is x86-specific. Hence this too is of little general use.
  3. /sys/class/net/*/address: One or more network MAC addresses, as
    set by the network adapter manufacturer and encoded on some network card
    EEPROM. It changes when the user replaces the network card. Since network
    cards are optional and there may be more than one, the availability of this
    ID is not guaranteed, and you might have more than one to choose from. On
    virtual machines the MAC addresses tend to be random. Hence this too is of
    little general use.
  4. /sys/bus/usb/devices/*/serial: Serial numbers of various USB
    devices, as encoded in the USB device EEPROM. Most devices don’t have a serial
    number set, and if they do, it is often bogus. If the user replaces their USB
    hardware or plugs it into another machine these IDs may change or appear in
    other machines. Hence this too is of little use.

There are various other hardware IDs available, many of which you may
discover via the ID_SERIAL udev property of various devices, such as hard disks
and similar. They all have in common that they are bound to specific
(replaceable) hardware, are not universally available, are often filled with
bogus data and are random in virtualized environments. In other words: don’t
use them and don’t rely on them for identification unless you really know what
you are doing; in general they do not guarantee what you might hope they
guarantee.

Software IDs

  1. /proc/sys/kernel/random/boot_id: A random ID that is regenerated
    on each boot. As such it can be used to identify the local machine’s current
    boot. It’s universally available on any recent Linux kernel. It’s a good and
    safe choice if you need to identify a specific boot on a specific booted
    kernel.
  2. gethostname(), /proc/sys/kernel/hostname: A non-random ID
    configured by the administrator to identify a machine in the network. Often
    this is not set at all or is set to some default value such as
    localhost and not even unique in the local network. In addition, it
    might change during runtime, for example based on updated DHCP
    information. As such it is almost entirely useless for anything but
    presentation to the user. It has very weak semantics and relies on correct
    configuration by the administrator. Don’t use this to identify machines in a
    distributed environment. It won’t work unless centrally administered, which
    makes it useless in a globalized, mobile world. It has no place in
    automatically generated filenames that shall be bound to specific hosts. Just
    don’t use it, please. It’s really not what many people think it is.
    gethostname() is standardized in POSIX and hence portable to other
    Unixes.
  3. IP Addresses returned by SIOCGIFCONF or the respective Netlink APIs: These
    tend to be dynamically assigned and often enough only valid on local networks
    or even only the local links (i.e. 192.168.x.x style addresses, or even
    169.254.x.x/IPv4LL). Unfortunately they hence have little use outside of
    networking.
  4. gethostid(): Returns a supposedly unique 32-bit identifier for the
    current machine. The semantics of this are not clear. On most machines this
    simply returns a value based on a local IPv4 address. On others it is
    administrator controlled via the /etc/hostid file. Since the semantics
    of this ID are not clear, and most often it is just a value derived from the
    IP address, it is almost always the wrong choice to use. On top of that, 32
    bits are not particularly many. On the other hand this is standardized in POSIX and hence
    portable to other Unixes. It’s probably best to ignore this value and if people
    don’t want to ignore it they should probably symlink /etc/hostid to
    /var/lib/dbus/machine-id or something similar.
  5. /var/lib/dbus/machine-id: An ID identifying a specific Linux/Unix
    installation. It does not change if hardware is replaced, and unlike the
    hardware IDs above it remains reliable in virtualized environments. This
    value has clear semantics and is considered
    part of the D-Bus API. It is supposedly globally unique and portable to all
    systems that have D-Bus. On Linux, it is universally available, given that
    almost all non-embedded and even a fair share of the embedded machines ship
    D-Bus now. This is the recommended way to identify a machine, possibly with a
    fallback to the host name to cover systems that still lack D-Bus. If your
    application links against libdbus, you may access this ID with
    dbus_get_local_machine_id(); if not, you can read it directly from the file
    system (see the sketch after this list).
  6. /proc/self/sessionid: An ID identifying a specific Linux login
    session. This ID is maintained by the kernel and part of the auditing logic. It
    is uniquely assigned to each login session during a specific system boot,
    shared by each process of a session, even across su/sudo and cannot be changed
    by userspace. Unfortunately some distributions have so far failed to set things
    up properly for this to work (Hey, you, Ubuntu!), and this ID is always
    (uint32_t) -1 for them. But there’s hope they get this fixed
    eventually. Nonetheless it is a good choice for a unique session identifier on
    the local machine and for the current boot. To make this ID globally unique it
    is best combined with /proc/sys/kernel/random/boot_id.
  7. getuid(): An ID identifying a specific Unix/Linux user. This ID is
    usually automatically assigned when a user is created. It is not unique across
    machines and may be reassigned to a different user if the original user was
    deleted. As such it should be used only locally and with the limited validity
    in time in mind. To make this ID globally unique it is not sufficient to
    combine it with /var/lib/dbus/machine-id, because the same ID might be
    used for a different user that is created later with the same UID. Nonetheless
    this combination is often good enough. It is available on all POSIX systems.
  8. ID_FS_UUID: an ID that identifies a specific file system in the
    udev tree. It is not always clear how these serials are generated but this
    tends to be available on almost all modern disk file systems. It is not
    available for NFS mounts or virtual file systems. Nonetheless this is often a
    good way to identify a file system, and in the case of the root directory even
    an installation. However, due to the weakly defined generation semantics, the
    D-Bus machine ID is generally preferable.
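
To make the recommendations above concrete, here is a minimal sketch (the helper name and error handling are purely illustrative) that reads the D-Bus machine ID, the boot ID and the audit session ID from the file system and combines them with the UID:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read a single-line ID file and strip the trailing newline. */
static int read_id(const char *path, char *buf, size_t len) {
        FILE *f = fopen(path, "r");
        if (!f)
                return -1;
        if (!fgets(buf, (int) len, f)) {
                fclose(f);
                return -1;
        }
        fclose(f);
        buf[strcspn(buf, "\n")] = 0;
        return 0;
}

int main(void) {
        char machine[64], boot[64], session[64];

        if (read_id("/var/lib/dbus/machine-id", machine, sizeof(machine)) < 0)
                strcpy(machine, "unknown");
        if (read_id("/proc/sys/kernel/random/boot_id", boot, sizeof(boot)) < 0)
                strcpy(boot, "unknown");
        if (read_id("/proc/self/sessionid", session, sizeof(session)) < 0)
                strcpy(session, "unknown");

        /* machine+boot identifies this boot of this installation; adding the
         * session ID and the UID narrows it down to one login session of one user. */
        printf("machine=%s boot=%s session=%s uid=%lu\n",
               machine, boot, session, (unsigned long) getuid());
        return 0;
}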

Generating IDs

Linux offers a kernel interface to generate UUIDs on demand, by reading from
/proc/sys/kernel/random/uuid. This is a very simple interface to
generate UUIDs. That said, the logic behind UUIDs is unnecessarily complex and
often it is a better choice to simply read 16 bytes or so from
/dev/urandom.
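
As a minimal sketch, both approaches look like this in C:

#include <stdio.h>
#include <string.h>

int main(void) {
        char uuid[40];
        unsigned char id[16];
        FILE *f;

        /* Ask the kernel to generate a fresh UUID. */
        f = fopen("/proc/sys/kernel/random/uuid", "r");
        if (f) {
                if (fgets(uuid, sizeof(uuid), f)) {
                        uuid[strcspn(uuid, "\n")] = 0;
                        printf("uuid: %s\n", uuid);
                }
                fclose(f);
        }

        /* Or simply read 16 bytes of randomness and use them as an ID. */
        f = fopen("/dev/urandom", "r");
        if (f) {
                if (fread(id, 1, sizeof(id), f) == sizeof(id)) {
                        printf("random id: ");
                        for (size_t i = 0; i < sizeof(id); i++)
                                printf("%02x", id[i]);
                        printf("\n");
                }
                fclose(f);
        }
        return 0;
}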

Summary

And the gist of it all: Use /var/lib/dbus/machine-id! Use
/proc/self/sessionid! Use /proc/sys/kernel/random/boot_id!
Use getuid()! Use /dev/urandom!
And forget about the
rest, in particular the host name, or the hardware IDs such as DMI. And keep in
mind that you may combine the aforementioned IDs in various ways to get
different semantics and validity constraints.

How to Version D-Bus Interfaces Properly and Why Using / as Service Entry Point Sucks

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/versioning-dbus.html

So you are designing a D-Bus interface and want to make it future-proof. Of
course, you thought about versioning your stuff. But you wonder how to do that
best. Here are a few things I learned about versioning D-Bus APIs which might
be of general interest:

Version your interfaces! This one is pretty obvious. No explanation
needed. Simply include the interface version in the interface name as a suffix,
i.e. the initial release should use org.foobar.AwesomeStuff1, and if
you make changes you should introduce org.foobar.AwesomeStuff2, and so
on, possibly dropping the old interface.

When should you bump the interface version? Generally, I’d recommend only
bumping when doing incompatible changes, such as function call signature
changes. This of course requires clients to handle the
org.freedesktop.DBus.Error.UnknownMethod error properly for each function
you add to an existing interface. That said, in a few cases it might make sense
to bump the interface version even without breaking compatibility of the calls.
(e.g. in case you add something to an interface that is not directly visible in
the introspection data)
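
For illustration, here is a minimal sketch using plain libdbus of how a client might handle that error; the service, interface and method names are made up:

#include <dbus/dbus.h>

/* Sketch: try a method that only exists in AwesomeStuff2; if the peer replies
 * with UnknownMethod, fall back to the older AwesomeStuff1 interface. */
DBusMessage *call_with_fallback(DBusConnection *conn) {
        DBusError err;
        DBusMessage *msg, *reply;

        dbus_error_init(&err);

        msg = dbus_message_new_method_call("org.foobar.AwesomeService1",
                                           "/org/foobar/AwesomeStuff1",
                                           "org.foobar.AwesomeStuff2",
                                           "GetFancyData");
        reply = dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
        dbus_message_unref(msg);

        if (!reply && dbus_error_has_name(&err, DBUS_ERROR_UNKNOWN_METHOD)) {
                /* The peer only implements the old interface, use that instead. */
                dbus_error_free(&err);
                msg = dbus_message_new_method_call("org.foobar.AwesomeService1",
                                                   "/org/foobar/AwesomeStuff1",
                                                   "org.foobar.AwesomeStuff1",
                                                   "GetData");
                reply = dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
                dbus_message_unref(msg);
        }

        if (!reply)
                dbus_error_free(&err);

        return reply;
}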

Version your services! This one is almost as obvious. When you
completely rework your D-Bus API, introducing a new service name might be a
good idea. The best way to do this is by simply bumping the version in the service name. Hence,
call your service org.foobar.AwesomeService1 right from the beginning
and then bump the version if you reinvent the wheel. And don’t forget that you
can acquire more than one well-known service name on the bus, so even if you
rework everything you can keep compatibility. (Example: the BlueZ 3 to BlueZ 4 switch)

Version your ‘entry point’ object paths! This one is far from
obvious. The reasons why object paths should be versioned are purely technical,
not philosophical: for signals sent from a service, D-Bus overwrites the
originating service name with the unique name (e.g. :1.42) even if you
fill in a well-known name (e.g. org.foobar.AwesomeService1). Now,
let’s say your application registers two well-known service names, let’s say
two versions of the same service, versioned like mentioned above. And you have
two objects — one on each of the two service names — that implement a generic
interface and share the same object path: for the client there will be no way
to figure out to which service name the signals sent from this object path
belong. And that’s why you should make sure to use versioned and hence
different paths for both objects. i.e. start with
/org/foobar/AwesomeStuff1 and then bump to
/org/foobar/AwesomeStuff2 and so on. (Also see David’s comments about this.)

When should you bump the object path version? Probably only when you
bump the service name it belongs to. What matters is to version the ‘entry point’
object path; objects below that don’t need explicit versioning.

In summary: For good D-Bus API design you should version all three: D-Bus interfaces, service names and ‘entry point’ object paths.
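
Putting the three rules together, a hypothetical service following this scheme (all names below are made up) might define and acquire its versioned names roughly like this:

#include <dbus/dbus.h>

/* Hypothetical versioned names: the service name, the entry point object path
 * and the interface all carry the same version suffix. */
#define AWESOME_SERVICE    "org.foobar.AwesomeService1"
#define AWESOME_PATH       "/org/foobar/AwesomeStuff1"
#define AWESOME_INTERFACE  "org.foobar.AwesomeStuff1"

int acquire_service_name(DBusConnection *conn) {
        DBusError err;
        int r;

        dbus_error_init(&err);

        /* Acquire the versioned well-known name; a later, incompatible rework
         * would acquire org.foobar.AwesomeService2, possibly in addition. */
        r = dbus_bus_request_name(conn, AWESOME_SERVICE,
                                  DBUS_NAME_FLAG_DO_NOT_QUEUE, &err);
        if (dbus_error_is_set(&err)) {
                dbus_error_free(&err);
                return -1;
        }

        return r == DBUS_REQUEST_NAME_REPLY_PRIMARY_OWNER ? 0 : -1;
}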

And don’t forget: nobody gets API design right the first time. So even if
you think your D-Bus API is perfect: version things right from the beginning
because later on it might turn out you were not quite as bright as you thought
you were.

A corollary from the reasoning behind versioning object paths as described
above is that using / as entry point object path for your service is a
bad idea. It makes it very hard to implement more than one service or service
version on a single D-Bus connection. Again: Don’t use / as entry
point object path. Use something like /org/foobar/AwesomeStuff!

A Sixfold Announcement

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/sixfold-announcement.html

Let’s have a small poll here: what is the most annoying feature of a modern
GNOME desktop? You got three options to choose from:

  1. Event sounds, if they are enabled
  2. Event sounds, if they are enabled
  3. Event sounds, if they are enabled

Difficult choice, right?

In my pursuit to make this choice a little bit less difficult, I’d like to draw your attention to the following six announcements:

Announcement Number One: The XDG Sound Theming Specification

Following closely the mechanisms of the XDG Icon Theme Specification, I may now
announce to you the XDG Sound Theme Specification, which will hopefully be
established as the future standard
for better event sound theming for free desktops. This project was started by
Patryk Zawadzki and is now maintained by Marc-André Lureau.

Announcement Number Two: The XDG Sound Naming Specification

If we have a Sound Theming Specification, then we also need an XDG Sound Naming
Specification, again drawing heavily from the original XDG Icon Naming
Specification. It’s based on some older Bango work
(which seems to be a defunct project these days), and is also maintained by
Monsieur Lureau. The list of defined sounds is hopefully much more complete
than any previous work in this area for free desktops.

Announcement Number Three: The freedesktop Sound Theme

Of course, what would the mentioned two standards be worth if there wasn’t a
single implementation of them? So here I may now announce to you the first
(rubbish) version of the XDG freedesktop Sound Theme. It’s basically just a
tarball with a number of symlinks
linking to the old gnome-audio event sounds. It’s only a very small
subset of the entire list of XDG sound names. My hope is that this initial
release will spark community contributions for a better, higher quality default
sound theme for free desktops. If you are some kind of musician or audio
technician I am happy to take your submissions!

Announcement Number Four: The libcanberra Event Sound API

Ok, we now have those two specs, and an example theme, what else is missing
to make this stuff a success? Absolutely right, an actual implementation of the
sound theming logic! And this is what libcanberra is.
It is a very small and lean implementation of the specification. However, it is
also very powerful, and can be used in a much more elaborate way than previous
APIs. It’s all about the central function called ca_context_play()
which takes a NULL-terminated list of string properties for the sound you want
to generate. What does this look like?

{
        ca_context *c = NULL;

        /* Create a context for the event sounds for your application */
        ca_context_create(&c);

        /* Set a few application-global properties */
        ca_context_change_props(c,
                                CA_PROP_APPLICATION_NAME, "An example",
                                CA_PROP_APPLICATION_ID, "org.freedesktop.libcanberra.Test",
                                CA_PROP_APPLICATION_ICON_NAME, "libcanberra-test",
                                NULL);

        /* ... */

        /* Trigger an event sound */
        ca_context_play(c, 0,
                        CA_PROP_EVENT_ID, "button-pressed", /* The XDG sound name */
                        CA_PROP_MEDIA_NAME, "The user pressed the button foobar",
                        CA_PROP_EVENT_MOUSE_X, "555",
                        CA_PROP_EVENT_MOUSE_Y, "666",
                        CA_PROP_WINDOW_NAME, "Foobar Dialog",
                        CA_PROP_WINDOW_ICON_NAME, "libcanberra-test-foobar-dialog",
                        CA_PROP_WINDOW_X11_DISPLAY, ":0",
                        CA_PROP_WINDOW_X11_XID, "4711",
                        NULL);

        /* ... */

        ca_context_destroy(c);
}

So, the idea is pretty simple, it’s all built around those sound event
properties. A few you initialize globally for your application, and some you
pass each time you actually want to trigger a sound. The properties listed
above are only a subset of the default ones that are defined. They can be
extended at any time. Why is it good to attach all this information to those
event sounds? First, for a11y reasons, where visual feedback in addition to
audible feedback might be advisable. And then, if the underlying sound system
knows which window triggered the event it can take per-window volumes or other
settings into account. If we know that the sound event was triggered by a mouse
event, then the sound system could position the sound in space: i.e. if you
click a button on the left side of the screen, the event sound will come more
out of your left speaker, and if you click on the right, it will be positioned
nearer to the right speaker. The more information the underlying audio system
has about the event sound the fancier ‘earcandy’ it can do to enhance your user
experience with all kinds of audio effects.

The library is thread-safe and brings no dependencies besides Ogg Vorbis (and of
course a libc) and whatever the used backend requires. The library can
support multiple different backends. Either you can compile a single one
directly into the libcanberra.so library, or you can bind them at
runtime via shared objects. Right now, libcanberra supports ALSA, PulseAudio and a null backend. The library is
designed to be portable, but only supports Linux right now. The idea is to
translate the XDG sound names into the sounds that are native to the local
platform (i.e. to whatever API Windows or MacOS use natively for sound events).

Besides all that fancy property stuff it can also do implicit on-demand
caching of samples in the sound server, cancel currently playing sounds,
notify an application when a sound has finished playing, and other features.
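
As a rough sketch, and assuming the caching, cancellation and completion-notification entry points as documented in later libcanberra releases (ca_context_cache(), ca_context_play_full(), ca_context_cancel()) which may differ in this very first version, using those features looks roughly like this:

#include <canberra.h>
#include <stdint.h>
#include <stdio.h>

static void finished(ca_context *c, uint32_t id, int error_code, void *userdata) {
        printf("sound %u finished: %s\n", (unsigned) id, ca_strerror(error_code));
}

void example(ca_context *c) {
        ca_proplist *p = NULL;

        /* Upload ("cache") the sample in the sound server so that later
         * playbacks are cheap. */
        ca_context_cache(c, CA_PROP_EVENT_ID, "button-pressed", NULL);

        /* Play it, asking to be notified when it has finished. */
        ca_proplist_create(&p);
        ca_proplist_sets(p, CA_PROP_EVENT_ID, "button-pressed");
        ca_context_play_full(c, 42, p, finished, NULL);
        ca_proplist_destroy(p);

        /* ... and cancel that same sound again, should it still be playing. */
        ca_context_cancel(c, 42);
}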

My hope is that this piece of core desktop technology can be shared by both
GNOME and the KDE world.

Check out the (complete!) documentation!

Download libcanberra 0.1 now!

Read the README now!

Announcement Number Five: The libcanberra-gtk Sound Event Binding for Gtk+

If you compile libcanberra with Gtk+ support (optional), then you’ll get an
additional library libcanberra-gtk which provides a couple of
functions to simplify event sound generation from Gtk+ programs. It will
maintain a global libcanberra context, and provides a few functions that will
automatically fill in quite a few properties for you, so that you don’t have to
fill them in manually. What does that look like? Deadly simple:

{
        /* Trigger an event sound from a GtkWidget, will automatically fill in CA_PROP_WINDOW_xxx */
        ca_gtk_play_for_widget(GTK_WIDGET(w), 0,
                               CA_PROP_EVENT_ID, "foobar-event",
                               CA_PROP_EVENT_DESCRIPTION, "foobar event happened",
                               NULL);

        /* Alternatively, trigger an event sound from a GdkEvent, will also fill in CA_PROP_EVENT_MOUSE_xxx */
        ca_gtk_play_for_event(gtk_get_current_event(), 0,
                              CA_PROP_EVENT_ID, "waldo-event",
                              CA_PROP_EVENT_DESCRIPTION, "waldo event happened",
                              NULL);
}

Simple? Yes, deadly simple.

Check out the (complete!) documentation!

Announcement Number Five (continued): the libcanberra-gtk-module Gtk+ Module

Okay, the example code for libcanberra-gtk is already very simple. Can we make
it even shorter? Yes!

If you compile libcanberra with Gtk+ support, then you will also get a new
GtkModule which will automatically hook into all kinds of events inside a Gtk+
program and generate sound events from them. You can have sounds when you press
a button, when you pop up a menu or window, or when you select an item from a
list box. It’s all done automatically, no further change in the program is
necessary. It works very similarly to the old sound event code in libgnomeui, but
is far less ugly, much more complete and, most importantly, works for all Gtk+
programs, not just those which link against libgnomeui. To activate this feature,
GTK_MODULES=libcanberra-gtk-module must be set in the environment. So, just for
completeness’ sake, here’s what the example code for using this feature in your
program looks like:

{
}

Yes, indeed. No code changes necessary. You get all those fancy UI sounds for free. Awesome? Awesome!

Of course, if you use custom widgets, or need more than just the simplest
audio feedback for input, you should link against libcanberra-gtk yourself and add
ca_gtk_play_for_widget() and ca_gtk_play_for_event() calls to
your code, at the right places.

Announcement Number Six: My GUADEC talk

You want to know more about all this fancy new sound event world order? Then
make sure to attend my talk at GUADEC 2008 in Istanbul!

Ok, that’s enough announcements for now. If you want to discuss or
contribute to the two specs, then please join the XDG mailing list.
If you want to contribute to libcanberra, you are invited to join the
libcanberra mailing list.

Of course these six announcements won’t bring a happy ending to the GNOME sound
event story just like that. We still need better sounds, and better integration
into applications. But just think of how high quality the sound events on e.g.
MacOS X are, and you can see (or hear) what I hope to get for the free desktops
as well. Also, my hope is that since we now have a decent localization
infrastructure for our sounds in place, we can make speech sound events more
popular, and thus sound events much more useful, i.e. have a nice girl’s voice
telling you “Your disc finished burning!” instead of some annoying
nobody-knows-what-it-means bing sound. I am one of those who usually have their
event sounds disabled all the time. My hope is that in a few months’ time I
won’t have any reason to do so anymore.