systemd for Administrators, Part XI

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/inetd.html

Here’s the eleventh installment of my ongoing series on systemd for Administrators:

Converting inetd Services

In a previous episode of this series I covered how to convert a SysV
init script to a systemd unit file. In this story I hope to explain
how to convert inetd services into systemd units.

Let’s start with a bit of background. inetd has a long tradition as
one of the classic Unix services. As a superserver it listens on
an Internet socket on behalf of another service and then activates that
service on an incoming connection, thus implementing an on-demand
socket activation system. This allowed Unix machines with limited
resources to provide a large variety of services, without the need to
run processes and invest resources for all of them all of the
time. Over the years a number of independent implementations of inetd
have been shipped on Linux distributions, the most prominent being the
ones based on BSD inetd and xinetd. While inetd used to be installed
on most distributions by default, it nowadays is used only for very
few selected services and the common services are all run
unconditionally at boot, primarily for (perceived) performance
reasons.

One of the core features of systemd (and Apple’s launchd for that
matter) is socket activation, a scheme pioneered by inetd, however
back then with a different focus. Systemd-style socket activation focusses on
local sockets (AF_UNIX), not so much Internet sockets (AF_INET), even
though both are supported. More importantly, socket
activation in systemd is not primarily about the on-demand aspect that
was key in inetd, but more on increasing parallelization (socket
activation allows starting clients and servers of the socket at the
same time), simplicity (since the need to configure explicit
dependencies between services is removed) and robustness (since
services can be restarted or may crash without loss of connectivity of the
socket). However, systemd can also activate services on-demand when
connections are incoming, if configured that way.

Socket activation of any kind requires support in the services
themselves. systemd provides a very simple interface that services may
implement to provide socket activation, built around sd_listen_fds(). As such
it is already a very minimal, simple scheme. However, the
traditional inetd interface is even simpler. It allows passing only a
single socket to the activated service: the socket fd is simply
duplicated to STDIN and STDOUT of the process spawned, and that’s
already it. In order to provide compatibility systemd optionally
offers the same interface to processes, thus taking advantage of the
many services that already support inetd-style socket activation, but not yet
systemd’s native activation.
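
As an illustration (a hypothetical sketch, not from the original post): a service written against the inetd interface needs no socket code at all, since the accepted connection simply is its standard input and output. The protocol logic below just reads and writes ordinary file objects:

```python
import sys

def handle_connection(infile=sys.stdin, outfile=sys.stdout):
    # inetd-style activation: the connection socket has been duplicated
    # to STDIN/STDOUT by the superserver, so we can treat the connection
    # as plain file objects. This toy protocol echoes back each line.
    for line in infile:
        outfile.write('echo: ' + line)
        outfile.flush()
```

When run under inetd (or under systemd with StandardInput=socket, as shown later in this article), fd 0 and fd 1 are the accepted connection; when run on a terminal the same code talks to the user.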

Before we continue with a concrete example, let’s have a look at
three different schemes to make use of socket activation:

  1. Socket activation for parallelization, simplicity,
    robustness:
    sockets are bound during early boot and a singleton
    service instance to serve all client requests is immediately started
    at boot. This is useful for all services that are very likely used
    frequently and continuously, and hence starting them early and in
    parallel with the rest of the system is advisable. Examples: D-Bus,
    Syslog.
  2. On-demand socket activation for singleton services: sockets
    are bound during early boot and a singleton service instance is
    executed on incoming traffic. This is useful for services that are
    seldom used, where it is advisable to save the resources and time at
    boot and delay activation until they are actually needed. Example: CUPS.
  3. On-demand socket activation for per-connection service
    instances:
    sockets are bound during early boot and for each
    incoming connection a new service instance is instantiated and the
    connection socket (and not the listening one) is passed to it. This is
    useful for services that are seldom used, and where performance is not
    critical, i.e. where the cost of spawning a new service process for
    each incoming connection is limited. Example: SSH.

The three schemes provide different performance characteristics. After
the service finishes starting up the performance provided by the first two
schemes is identical to a stand-alone service (i.e. one that is
started without a super-server, without socket activation), since the
listening socket is passed to the actual service, and code paths from
then on are identical to those of a stand-alone service and all
connections are processed exactly the same way as they are in a
stand-alone service. On the other hand, performance of the third scheme
is usually not as good: since for each connection a new service needs
to be started the resource cost is much higher. However, it also has a
number of advantages: for example client connections are better
isolated and it is easier to develop services activated this way.

For systemd the first scheme is the primary focus, however the
other two schemes are supported as well. (In fact, the blog story in
which I covered the necessary code changes for systemd-style socket
activation was about a service of the second type, i.e. CUPS.) inetd
primarily focusses on the third scheme, however the second scheme is
supported too. (The first one isn’t. Presumably due to its focus on the
third scheme inetd got its somewhat unfair reputation for being
“slow”.)

So much for the background; let’s cut to the beef now and show how an
inetd service can be integrated into systemd’s socket
activation. We’ll focus on SSH, a very common service that is widely
installed and used but on the vast majority of machines probably not
started more often than once an hour on average (and usually much
less). SSH has supported inetd-style activation for a long time,
following the third scheme mentioned above. Since it is started only
every now and then, and only with a limited number of connections at
the same time, it is a very good candidate for this scheme, as the extra
resource cost is negligible: if made socket-activatable SSH is
basically free as long as nobody uses it. And as soon as somebody logs
in via SSH it will be started, and the moment he or she disconnects all
its resources are freed again. Let’s find out how to make SSH
socket-activatable in systemd, taking advantage of the provided inetd
compatibility!

Here’s the configuration line used to hook up SSH with classic inetd:

ssh stream tcp nowait root /usr/sbin/sshd sshd -i

And the same as an xinetd configuration fragment:

service ssh {
        socket_type = stream
        protocol = tcp
        wait = no
        user = root
        server = /usr/sbin/sshd
        server_args = -i
}

Most of this should be fairly easy to understand, as these two
fragments express very much the same information. The non-obvious
parts: the port number (22) is not configured in the inetd configuration,
but indirectly via the service database in /etc/services: the
service name is used as lookup key in that database and translated to
a port number. This indirection via /etc/services has long been
part of Unix tradition, though it has been falling out of
fashion, and the newer xinetd hence optionally allows configuration
with explicit port numbers. The most interesting setting here is the
not very intuitively named nowait (resp. wait=no)
option. It configures whether a service is of the second
(wait) resp. third (nowait) scheme mentioned
above. Finally the -i switch is used to enable inetd mode in
SSH.
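
The /etc/services lookup that inetd performs can be reproduced from Python's standard library (a sketch; on minimal systems the services database may be absent, hence the fallback):

```python
import socket

def service_port(name, proto='tcp'):
    # Resolve a service name via the /etc/services database, the same
    # indirection classic inetd uses to map "ssh" to port 22.
    try:
        return socket.getservbyname(name, proto)
    except OSError:
        # No services database available on this system
        return None
```

On a typical Linux box, service_port('ssh') returns 22.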

The systemd translation of these configuration fragments is the
following two units. First: sshd.socket is a unit encapsulating
information about a socket to listen on:

[Unit]
Description=SSH Socket for Per-Connection Servers

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

Most of this should be self-explanatory. A few notes:
Accept=yes corresponds to nowait. It’s hopefully
better named, referring to the fact that for nowait the
superserver calls accept() on the listening socket, whereas for
wait this is the job of the executed
service process. WantedBy=sockets.target is used to ensure that when
enabled this unit is activated at boot at the right time.

And here’s the matching service file [email protected]:

[Unit]
Description=SSH Per-Connection Server

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket

This too should be mostly self-explanatory. Interesting is
StandardInput=socket, the option that enables inetd
compatibility for this service. StandardInput= may be used to
configure what STDIN of the service should be connected to for this
service (see the man page for details). By setting it to socket we make sure
to pass the connection socket here, as expected in the simple inetd
interface. Note that we do not need to explicitly configure
StandardOutput= here, since by default the setting from
StandardInput= is inherited if nothing else is
configured. Important is the “-” in front of the binary name. This
ensures that the exit status of the per-connection sshd process is
forgotten by systemd. Normally, systemd will store the exit status of
all service instances that die abnormally. SSH will sometimes die
abnormally with an exit code of 1 or similar, and we want to make sure
that this doesn’t cause systemd to keep around information for
numerous previous connections that died this way (until this
information is forgotten with systemctl reset-failed).

[email protected] is an instantiated service, as described in the
preceding installment of this series. For each incoming connection systemd
will instantiate a new instance of [email protected], with the
instance identifier named after the connection credentials.

You may wonder why in systemd the configuration of an inetd service
requires two unit files instead of one. The reason for this is that to
simplify things we want to make sure that the relation between live
units and unit files is obvious, while at the same time we can order
the socket unit and the service units independently in the dependency
graph and control the units as independently as possible. (Think: this
allows you to shut down the socket independently from the instances,
and each instance individually.)

Now, let’s see how this works in real life. If we drop these files
into /etc/systemd/system we are ready to enable the socket and
start it:

# systemctl enable sshd.socket
ln -s '/etc/systemd/system/sshd.socket' '/etc/systemd/system/sockets.target.wants/sshd.socket'
# systemctl start sshd.socket
# systemctl status sshd.socket
sshd.socket - SSH Socket for Per-Connection Servers
	  Loaded: loaded (/etc/systemd/system/sshd.socket; enabled)
	  Active: active (listening) since Mon, 26 Sep 2011 20:24:31 +0200; 14s ago
	Accepted: 0; Connected: 0
	  CGroup: name=systemd:/system/sshd.socket

This shows that the socket is listening, and so far no connections
have been made (Accepted: will show you how many connections
have been made in total since the socket was started,
Connected: how many connections are currently active.)

Now, let’s connect to this from two different hosts, and see which services are now active:

$ systemctl --full | grep ssh
[email protected]:22-172.31.0.4:47779.service  loaded active running       SSH Per-Connection Server
[email protected]:22-172.31.0.54:52985.service loaded active running       SSH Per-Connection Server
sshd.socket                                   loaded active listening     SSH Socket for Per-Connection Servers

As expected, there are now two service instances running, for the
two connections, and they are named after the source and destination
address of the TCP connection as well as the port numbers. (For
AF_UNIX sockets the instance identifier will carry the PID and UID of
the connecting client.) This allows us to individually introspect or
kill specific sshd instances, in case you want to terminate the
session of a specific client:

# systemctl kill [email protected]:22-172.31.0.4:47779.service

And that’s probably already most of what you need to know for
hooking up inetd services with systemd and how to use them afterwards.

In the case of SSH it is probably a good idea for most
distributions, in order to save resources, to default to this kind of
inetd-style socket activation, but to provide a stand-alone unit file
for sshd as well which can be enabled optionally. I’ll soon file a
wishlist bug about this against our SSH package in Fedora.

A few final notes on how xinetd and systemd compare feature-wise,
and whether xinetd is fully obsoleted by systemd. The short answer
here is that systemd does not provide the full xinetd feature set and
that it does not fully obsolete xinetd. The longer answer is a bit
more complex: if you look at the multitude of options
xinetd provides you’ll notice that systemd does not compare. For
example, systemd does not come with built-in echo,
time, daytime or discard servers, and never
will include those. TCPMUX is not supported, and neither are RPC
services. However, you will also find that most of these are either
irrelevant on today’s Internet or fell out of fashion in other ways. The
vast majority of inetd services do not directly take advantage of
these additional features. In fact, none of the xinetd services
shipped on Fedora make use of these options. That said, there are a
couple of useful features that systemd does not support, for example
IP ACL management. However, most administrators will probably agree
that firewalls are the better solution for these kinds of problems and
on top of that, systemd supports ACL management via tcpwrap for those
who indulge in retro technologies like this. On the other hand systemd
also provides numerous features xinetd does not provide,
starting with the individual control of instances shown above, or the
more expressive configurability of the execution context for the
instances. I believe that what systemd provides is
quite comprehensive, comes with little legacy cruft, and should provide
you with everything you need. And if there’s something systemd does
not cover, xinetd will always be there to fill the void as
you can easily run it in conjunction with systemd. For the
majority of uses systemd should cover what is necessary, and allows
you to cut down on the components required to build your system. In
a way, systemd brings back the functionality of classic Unix inetd and
turns it again into a centerpiece of a Linux system.

And that’s all for now. Thanks for reading this long piece. And
now, get going and convert your services over! Even better, do this
work in the individual packages upstream or in your distribution!

systemd for Administrators, Part X

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/instances.html

Here’s the tenth installment of my ongoing series on systemd for Administrators:

Instantiated Services

Most services on Linux/Unix are singleton services: there’s
usually only one instance of Syslog, Postfix, or Apache running on a
specific system at the same time. On the other hand some select
services may run in multiple instances on the same host. For example,
an Internet service like the Dovecot IMAP service could run in
multiple instances on different IP ports or different local IP
addresses. A more common example that exists on all installations is
getty, the mini service that runs once for each TTY and
presents a login prompt on it. On most systems this service is
instantiated once for each of the first six virtual consoles
tty1 to tty6. On some servers depending on
administrator configuration or boot-time parameters an additional
getty is instantiated for a serial or virtualizer console. Another
common instantiated service in the systemd world is fsck, the
file system checker that is instantiated once for each block device
that needs to be checked. Finally, in systemd socket activated
per-connection services (think classic inetd!) are also implemented
via instantiated services: a new instance is created for each incoming
connection. In this installment I hope to explain a bit how systemd
implements instantiated services and how to take advantage of them as
an administrator.

If you followed the previous episodes of this series you are
probably aware that services in systemd are named according to the
pattern foobar.service, where foobar is an
identification string for the service, and .service simply a
fixed suffix that is identical for all service units. The definition files
for these services are searched for in /etc/systemd/system
and /lib/systemd/system (and possibly other directories) under this name. For
instantiated services this pattern is extended a bit: the service name becomes
foobar@quux.service where foobar is the
common service identifier, and quux the instance
identifier. Example: [email protected] is the serial
getty service instantiated for ttyS2.

Service instances can be created dynamically as needed. Without
further configuration you may easily start a new getty on a serial
port simply by invoking a systemctl start command for the new
instance:

# systemctl start [email protected]

If a command like the above is run systemd will first look for a
unit configuration file by the exact name you requested. If this
service file is not found (and usually it isn’t if you use
instantiated services like this) then the instance id is removed from
the name and a unit configuration file by the resulting
template name is searched for. In other words, in the above example,
if the precise [email protected] unit file cannot
be found, [email protected] is loaded instead. This unit
template file will hence be common for all instances of this
service. For the serial getty we ship a template unit file in systemd
(/lib/systemd/system/[email protected]) that looks
something like this:

[Unit]
Description=Serial Getty on %I
BindTo=dev-%i.device
After=dev-%i.device systemd-user-sessions.service

[Service]
ExecStart=-/sbin/agetty -s %I 115200,38400,9600
Restart=always
RestartSec=0

(Note that the unit template file we actually ship along with
systemd for the serial gettys is a bit longer. If you are interested,
have a look at the actual file, which includes additional directives for compatibility with
SysV, to clear the screen and remove previous users from the TTY
device. To keep things simple I have shortened the unit file to the
relevant lines here.)

This file looks mostly like any other unit file, with one
distinction: the specifiers %I and %i are used at
multiple locations. At unit load time %I and %i are
replaced by systemd with the instance identifier of the service. In
our example above, if a service is instantiated as
[email protected] the specifiers %I and
%i will be replaced by ttyUSB0. If you introspect
the instantiated unit with systemctl status
[email protected] you will see these replacements
having taken place:

$ systemctl status [email protected]
[email protected] - Getty on ttyUSB0
	  Loaded: loaded (/lib/systemd/system/[email protected]; static)
	  Active: active (running) since Mon, 26 Sep 2011 04:20:44 +0200; 2s ago
	Main PID: 5443 (agetty)
	  CGroup: name=systemd:/system/[email protected]/ttyUSB0
		  └ 5443 /sbin/agetty -s ttyUSB0 115200,38400,9600

And that is already the core idea of instantiated services in
systemd. As you can see systemd provides a very simple templating
system, which can be used to dynamically instantiate services as
needed. To make effective use of this, a few more notes:

You may instantiate these services on-the-fly via
.wants/ symbolic links in the file system. For example, to
make sure the serial getty on ttyUSB0 is started
automatically at every boot, create a symlink like this:

# ln -s /lib/systemd/system/[email protected] /etc/systemd/system/getty.target.wants/serial-getty@ttyUSB0.service

systemd will instantiate the symlinked unit file with the
instance name specified in the symlink name.

You cannot instantiate a unit template without specifying an
instance identifier. In other words systemctl start
[email protected]
will necessarily fail since the instance
name was left unspecified.

Sometimes it is useful to opt-out of the generic template
for one specific instance. For these cases make use of the fact that
systemd always searches first for the full instance file name before
falling back to the template file name: make sure to place a unit file
under the fully instantiated name in /etc/systemd/system and
it will override the generic templated version for this specific
instance.
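
The lookup order described in the last two paragraphs can be sketched like this (a hypothetical illustration, not systemd's actual code):

```python
def resolve_unit(name, unit_files):
    # The exact name wins, so a fully instantiated unit file placed in
    # /etc/systemd/system overrides the template for that one instance.
    if name in unit_files:
        return name
    # Otherwise strip the instance identifier and try the template name,
    # e.g. [email protected] falls back to [email protected].
    if '@' in name and '.' in name:
        prefix = name.split('@', 1)[0]
        suffix = name.rsplit('.', 1)[1]
        template = prefix + '@.' + suffix
        if template in unit_files:
            return template
    return None
```

For example, with only the template on disk, [email protected] resolves to [email protected]; drop a file named [email protected] into /etc/systemd/system and that exact name is used instead.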

The unit file shown above uses %i at some places and
%I at others. You may wonder what the difference between
these specifiers are. %i is replaced by the exact characters
of the instance identifier. For %I on the other hand the
instance identifier is first passed through a simple unescaping
algorithm. In the case of a simple instance identifier like
ttyUSB0 there is no effective difference. However, if the
device name includes one or more slashes (“/”), these cannot be
part of a unit name (or Unix file name). Before such a device name can
be used as instance identifier it needs to be escaped so that “/”
becomes “-” and most other special characters (including “-“) are
replaced by “\xAB” where AB is the ASCII code of the character in
hexadecimal notation[1]. Example: to refer to a USB serial port by its
bus path we want to use a port name like
serial/by-path/pci-0000:00:1d.0-usb-0:1.4:1.1-port0. The
escaped version of this name is
serial-by\x2dpath-pci\x2d0000:00:1d.0\x2dusb\x2d0:1.4:1.1\x2dport0. %I
will then refer to the former, %i to the latter. Effectively this
means %i is useful wherever it is necessary to refer to other
units, for example to express additional dependencies. On the other
hand %I is useful for usage in command lines, or inclusion in
pretty description strings. Let’s check how this looks with the above unit file:

# systemctl start 'serial-getty@serial-by\x2dpath-pci\x2d0000:00:1d.0\x2dusb\x2d0:1.4:1.1\x2dport0.service'
# systemctl status 'serial-getty@serial-by\x2dpath-pci\x2d0000:00:1d.0\x2dusb\x2d0:1.4:1.1\x2dport0.service'
serial-getty@serial-by\x2dpath-pci\x2d0000:00:1d.0\x2dusb\x2d0:1.4:1.1\x2dport0.service - Serial Getty on serial/by-path/pci-0000:00:1d.0-usb-0:1.4:1.1-port0
	  Loaded: loaded (/lib/systemd/system/[email protected]; static)
	  Active: active (running) since Mon, 26 Sep 2011 05:08:52 +0200; 1s ago
	Main PID: 5788 (agetty)
	  CGroup: name=systemd:/system/[email protected]/serial-by\x2dpath-pci\x2d0000:00:1d.0\x2dusb\x2d0:1.4:1.1\x2dport0
		  └ 5788 /sbin/agetty -s serial/by-path/pci-0000:00:1d.0-usb-0:1.4:1.1-port0 115200 38400 9600

As we can see, while the instance identifier is the escaped
string, the command line and the description string actually use the
unescaped version, as expected.
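
The escaping algorithm can be sketched in a few lines (a simplified illustration; systemd's real implementation handles additional corner cases, such as a leading dot):

```python
def escape_instance(name):
    # '/' maps to '-'; alphanumerics, ':', '_' and '.' pass through;
    # everything else (including '-') becomes \xAB, where AB is the
    # ASCII code of the character in hexadecimal notation.
    out = []
    for ch in name:
        if ch == '/':
            out.append('-')
        elif ch.isalnum() or ch in ':_.':
            out.append(ch)
        else:
            out.append('\\x%02x' % ord(ch))
    return ''.join(out)
```

Applied to serial/by-path/pci-0000:00:1d.0-usb-0:1.4:1.1-port0 this yields exactly the escaped name shown above.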

(Side note: there are more specifiers available than just
%i and %I, and many of them are actually
available in all unit files, not just templates for service
instances. For more details see the man page, which includes a full
list and terse explanations.)

And at this point this shall be all for now. Stay tuned for a
follow-up article on how instantiated services are used for
inetd-style socket activation.

Footnotes

[1] Yup, this escaping algorithm doesn’t really result in
particularly pretty escaped strings, but then again, most escaping
algorithms don’t help readability. The algorithm we used here is
inspired by what udev does in a similar case, with one change. In the
end, we had to pick something. If you plan to comment on the
escaping algorithm please also mention where you live so that I can
come around and paint your bike shed yellow with blue stripes. Thanks!

Distributed WPA PSK Audit

Post Syndicated from RealEnder original https://alex.stanev.org/blog/?p=291

The authorization algorithm for WPA/WPA2 networks is not a closely guarded secret and is fairly well documented. What makes it hard to “crack” are the 4096 SHA1 iterations + HMAC-SHA1 and the minimum key length of 8 bytes, which is a heavy task even for a modern CPU. Users often assume that simply setting “some password” makes their wireless network secure enough.
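
The key derivation referred to above (a sketch for illustration, not from the original post) is PBKDF2: 4096 rounds of HMAC-SHA1 over the passphrase, with the SSID as salt, producing a 256-bit pairwise master key. In Python this is a single standard-library call:

```python
import hashlib

def wpa_psk(passphrase, ssid):
    # WPA/WPA2 PMK: PBKDF2-HMAC-SHA1, SSID as salt,
    # 4096 iterations, 32-byte (256-bit) output
    return hashlib.pbkdf2_hmac('sha1', passphrase.encode(),
                               ssid.encode(), 4096, 32)

# IEEE 802.11i test vector: passphrase "password", SSID "IEEE"
pmk = wpa_psk('password', 'IEEE')
```

The 4096 iterations are precisely what makes brute-forcing slow on a CPU and what makes GPU offloading attractive.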

Offloading the computation to a graphics card (GPU) speeds the process up considerably, thanks to the specific architecture of this kind of hardware. Both commercial and free (pyrit) implementations of such attacks exist, and quite recently oclHashcat-plus gained this kind of support as well.

For auditing wireless networks, or if you simply want to check whether someone has captured and submitted WPA handshakes from your network for cracking, I created the Distributed WPA PSK auditor. The service accepts WPA handshakes in libpcap format and maintains specially crafted, uniquely filtered wordlists. The server itself does not perform the cracking; that is done by users who, using help_crack, automatically download the next packet capture and wordlist and test for a match. The script is written in python and is cross-platform. At the moment aircrack-ng is supported on Windows, and pyrit and aircrack-ng on posix systems. You can see the discovered PSK only for networks whose information you uploaded yourself, after first generating a unique key. More information can be found on the site itself.

The idea for all this came from sorbo, who maintains wpa.darkircop.org. DWPA was written from scratch and has quite a few differences and improvements: better wordlist management, a cross-platform help_crack, wordlists specially built with wlc, an authorization scheme for access to the discovered PSKs, and much more.

Once I clean up the source, I will release it as open source. In the meantime, you can help by donating CPU/GPU time via help_crack.

timeout waiting for input from local during Draining Input

Post Syndicated from Laurie Denness original https://laur.ie/blog/2011/09/timeout-waiting-for-input-from-local-during-draining-input/

I experienced this today, very frustrating; sendmail all locked up and outputting this bizarre “timeout waiting for input from local during Draining Input” error into the logs.

tl;dr: figure out what sendmail is waiting for.

In my case, it was stuck on procmail. But why? Turns out the local mailbox (a user that runs a lot of crons) had hit 3GB, at which point it didn’t seem to be accepting any more email into that inbox. Moving that file out of the way and allowing a new one to be created caused the queue to be flushed instantly.

systemd US Tour Dates

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/us-tour-dates.html

Kay Sievers, Harald Hoyer and I will tour the US in the coming weeks. If you
have any questions on systemd, udev or dracut (or any of the related
technologies), then please do get in touch with us on the following occasions:

Linux Plumbers Conference, Santa Rosa, CA, Sep 7-9th
Google, Googleplex, Mountain View, CA, Sep 12th
Red Hat, Westford, MA, Sep 13-14th

As usual LPC is going to rock, so make sure to be there!

How to Write syslog Daemons Which Cooperate Nicely With systemd

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/syslog.html

I just finished putting together a text on the systemd wiki explaining what
to do to write a syslog service that is nicely integrated with systemd, and
does all the right things. It’s supposed to be a checklist for all syslog
hackers:

Read it now.

rsyslog already implements everything on this list afaics, and that’s
pretty cool. If other implementations want to catch up, please consider
following these recommendations, too.

I put this together since I have changed systemd 35 to set
StandardOutput=syslog as default, so that all stdout/stderr of all
services automatically ends up in syslog. And since that change requires some
(minimal) changes to all syslog implementations I decided to document this all
properly (if you are curious: they need to set StandardOutput=null to
opt out of this default in order to avoid logging loops).
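
In a syslog daemon's own unit file the opt-out mentioned above would look something like this (a hypothetical fragment in unit-file syntax; the binary name and flag are placeholders, only the StandardOutput= line reflects the recommendation):

```ini
[Service]
ExecStart=/usr/sbin/example-syslogd -n
# Do not route our own stdout/stderr back into syslog,
# which would create a logging loop
StandardOutput=null
```
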

Anyway, please have a peek and comment if you spot a mistake or
something I forgot. Or if you have questions, just ask.

PagerdutyPHP: Scripts for the Pagerduty API

Post Syndicated from Laurie Denness original https://laur.ie/blog/2011/08/pagerdutyphp-scripts-for-the-pagerduty-api/

As much as most of us would love to not have to do it, most people reading this now will have to be on call at some point. It sucks, but Pagerduty makes it a little easier to manage when your team starts to grow.

Whilst we still have Nagios sending to all contacts directly (a personal preference) we still rely on Pagerduty for emergency pages from the rest of the company, and to arrange who is on call when (their calendar is pretty good for us, allows for exceptions etc).

We’re also a user of the IRC bot “irccat” which, briefly explained, allows input/output to scripts from an IRC chat.

I wanted to combine the two for a long time, and when Pagerduty released their API to access schedule data it wasn’t long before we had a command that allows anyone in the company to ask irccat who is on call and until when.

I’ve finally got around to releasing this today: a “library” of useful Pagerduty API functions (pagerduty.php) (note: currently it has just two, to see who is on call for a given schedule; pull requests for additional useful functions please!) and, more importantly, pagerdutycron.php, a script to run on an interval that will then either broadcast the new person on call in IRC, and/or send an email.

As usual, I’ve stuck the code on Github: https://github.com/lozzd/PagerdutyPHP

Desktop Summit 2011

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/08/21/desktop-summit.html

I realize nearly ten days after the end of a conference is a bit late
to blog about it. However, I needed some time to recover my usual
workflow, having attended two conferences almost
back-to-back, OSCON 2011 and Desktop Summit. (The strain of the
back-to-back conferences, BTW, made it impossible for me to attend
LinuxCon North America 2011, although I’ll be at LinuxCon Europe. I
hope next year’s summer conference schedule is not so tight.)

This was my first Desktop Summit, as I was unable to attend the first
one in Gran Canaria two years ago. I must admit, while it might be a bit
controversial to say so, that I felt the conference was still like two
co-located conferences rather than one conference. I got a chance to
speak to my KDE colleagues about various things, but I ended up mostly
attending GNOME talks and therefore felt more like I was at GUADEC than
at a Desktop Summit for most of the time.

The big exception to that, however, was in fact the primary reason I
was at Desktop Summit this year: to participate in a panel discussion
with Mark Shuttleworth and Michael Meeks
(who
gave the panel a quick one-sentence summary on his blog
). That was a
plenary session and the room was filled with KDE and GNOME developers
alike, all of whom seemed very interested in the issue.

Photo of The CAA/CLA panel discussion at Desktop Summit 2011.

The panel format was slightly frustrating — primarily due to
Mark’s insistence that we all make very long open statements —
although Karen Sandler
nevertheless did a good job moderating it and framing the
discussion.

I get the impression most of the audience was already pretty well informed about all of our positions, although I think I shocked some by finally saying clearly in a public forum (other than identi.ca) that I have been lobbying the FSF to make copyright assignment for FSF-assigned projects optional rather than mandatory. Nevertheless, we were cast well into our three roles: Mark, who wants broad licensing control over projects his company sponsors so he can control the assets (and possibly sell them); Michael, who has faced so many troubles in the OpenOffice.org/LibreOffice debacle that he believes inbound=outbound can be The Only Way; and me, who believes that copyright assignment is useful for non-profits that promise to work in the public good and to enforce the GPL, but is otherwise a Bad Thing.

Lydia tells me that the videos will be available eventually from Desktop Summit, and I’ll update this blog post when they are so folks can watch the panel. I encourage everyone concerned about the issue of rights transfers from individual developers to entities (be they via copyright assignment or other broad CLA means) to watch the video once it’s available. For the moment, Jake Edge’s LWN article about the panel is a pretty good summary.

My favorite moment of the panel, though, was when Shuttleworth claimed he was but a distant observer of Project Harmony. Karen, as moderator, quickly pointed out that he was billed as Project Harmony’s originator in the panel materials. It’s disturbing that Shuttleworth thinks he can get away with such a claim: it’s a matter of public record that Amanda Brock (Canonical, Ltd.’s General Counsel) initiated Project Harmony, led it for most of its early drafts, and then Canonical Ltd. paid Mark Radcliffe (a lawyer who represents companies that violate the GPL) to finish the drafting.
I suppose Shuttleworth’s claim is narrowly true (if misleading) since
his personal involvement as an individual was only
tangential, but his money and his staff were clearly central: even now,
it’s led by his employee, Allison Randal. If you run the company that
runs a project, it’s your project: after all, doesn’t that fit clearly
with Shuttleworth’s suppositions about why he should be entitled to be
the recipient of copyright assignments and broad CLAs in the first
place?

The rest of my time at Desktop Summit was more as an attendee than a speaker. Since I’m not a desktop or GUI developer by any means, I mostly went to talks and learned what others had to teach. I was delighted, however, that no fewer than six people came up to me and said they really liked this blog. It’s always good to be told that something you put a lot of volunteer work into is valuable to at least a few people, and fortunately everyone on the Internet is famous to at least six people. 🙂

Sponsored by the GNOME Foundation!

Meanwhile, I want to thank the GNOME Foundation for sponsoring my trip to Desktop Summit 2011, as they did last year for GUADEC 2010. Given my own work and background, I’m very appreciative of a non-profit with limited resources providing travel funding for conferences. It’s a big expense, and I’m thankful that the GNOME Foundation has funded my trips to their annual conference.

BTW, while we await the videos from Desktop Summit, there’s some “proof” you can see that I attended Desktop Summit, as I appear in the group photo, although you’ll need to view the hi-res version, scroll to the lower right of the image, and find me. I’m in the second/third (depending on how you count) row back, 2-3 from the right, and two to the left of Lydia Pintscher.

Finally, I did my best to live-dent from Desktop Summit 2011. That might be of interest to some as well, for example, if you want to dig back and see what folks said in some of the talks I attended. There were also two threads after the panel that may be of interest.


How to Behave Nicely in the cgroup Trees

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/pax-cgroups.html

The Linux cgroup hierarchies of the various kernel controllers are a shared resource, and recently many components of Linux userspace have started making use of them. To keep the various programs from stepping on each other’s toes while manipulating this shared resource, we have put together a list of recommendations. Programs following these guidelines should work together nicely without interfering with other users of the hierarchies.
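
To make the spirit of such etiquette concrete, here is a minimal, hypothetical sketch (my illustration, not a quote from the guidelines; the function and group names are made up): a program confines itself to a private sub-cgroup beneath a directory delegated to it, and migrates a process there by writing the PID to `cgroup.procs`, rather than manipulating the top of the hierarchy.

```python
import os

def join_subgroup(cgroup_root, group_name, pid):
    """Create a private sub-cgroup under cgroup_root and move pid into it.

    Only the caller's own subtree is touched, never the hierarchy root,
    so other users of the cgroup tree are left undisturbed.
    """
    group_dir = os.path.join(cgroup_root, group_name)
    # In cgroupfs, creating a directory creates the cgroup.
    os.makedirs(group_dir, exist_ok=True)
    # Writing a PID to cgroup.procs migrates that process into the group.
    with open(os.path.join(group_dir, "cgroup.procs"), "w") as f:
        f.write(str(pid))
    return group_dir
```

A caller that had, say, `/sys/fs/cgroup/cpu/myapp` delegated to it might invoke `join_subgroup("/sys/fs/cgroup/cpu/myapp", "worker-1", os.getpid())`; the exact paths and delegation rules are of course system-specific.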

These guidelines are available in the systemd wiki. I’d be very interested in feedback, and would like to ask you to ping me in case we forgot something or left something too vague.

And please, if you are writing software that interfaces with the cgroup tree, consider following these recommendations. Thank you.
