Tag Archives: launch

WinConn

Post Syndicated from RealEnder original http://alex.stanev.org/blog/?p=302

I have been using Linux as my primary working environment for years.
The word “primary” in that sentence gives away that every now and then I still have to fire up a Windows or Mac machine to run a number of applications, either because of poor compatibility or simply because they are specialized software written only for that particular operating system.
For years the solution was to connect to a remote machine over VNC/RDP/whatever and get the job done as quickly as possible, before the differences in the interface became irritating and productivity dropped. The options for seamless integration (using the remote applications side by side with the rest of the working environment) were limited and hard to configure and use: SeamlessRDP in rdesktop, the Seamless mode of VirtualBox, VMware Fusion, and so on.
Thanks to the progress of the FreeRDP project, we now have a free implementation of RemoteApp. The missing piece of the puzzle is an application that makes configuring remote applications easy.
To that end, I have spent the last few days working on WinConn, an open-source graphical manager for RemoteApp applications. On its page you can see the obligatory screenshots and a video of it in action, and install it from my PPA on Ubuntu.
The development of WinConn coincided with the Ubuntu App Showdown, a three-week contest for writing Ubuntu applications. If you like it, after July 10th you will be able to vote for it through the rating system in the Ubuntu Software Centre. And of course, I welcome all suggestions for improvements, bug reports, and so on in Launchpad.

systemd for Administrators, Part XI

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/inetd.html

Here’s the eleventh installment of my ongoing series on systemd for Administrators:

Converting inetd Services

In a previous episode of this series I covered how to convert a SysV
init script to a systemd unit file. In this story I hope to explain
how to convert inetd services into systemd units.

Let’s start with a bit of background. inetd has a long tradition as
one of the classic Unix services. As a superserver it listens on an
Internet socket on behalf of another service and activates that
service when a connection comes in, thus implementing an on-demand
socket activation system. This allowed Unix machines with limited
resources to provide a large variety of services without the need to
run processes and invest resources for all of them all of the
time. Over the years a number of independent implementations of inetd
have been shipped on Linux distributions, the most prominent being the
ones based on BSD inetd and xinetd. While inetd used to be installed
on most distributions by default, nowadays it is used only for a few
selected services, and the common services are all run unconditionally
at boot, primarily for (perceived) performance reasons.

One of the core features of systemd (and Apple’s launchd, for that
matter) is socket activation, a scheme pioneered by inetd, though back
then with a different focus. Systemd-style socket activation focuses
on local sockets (AF_UNIX), not so much Internet sockets (AF_INET),
even though both are supported. More importantly, socket activation in
systemd is not primarily about the on-demand aspect that was key in
inetd, but about increasing parallelization (socket activation allows
starting the clients and the server of a socket at the same time),
simplicity (the need to configure explicit dependencies between
services disappears) and robustness (services can be restarted or may
crash without loss of connectivity on the socket). However, systemd
can also activate services on demand when connections come in, if
configured that way.

Socket activation of any kind requires support in the services
themselves. systemd provides a very simple interface that services may
implement to support socket activation, built around sd_listen_fds(),
so it is already a very minimal, simple scheme. However, the
traditional inetd interface is even simpler: it allows passing only a
single socket to the activated service, and the socket fd is simply
duplicated to STDIN and STDOUT of the spawned process, and that’s
already it. In order to provide compatibility, systemd optionally
offers the same interface to processes, thus taking advantage of the
many services that already support inetd-style socket activation but
not yet systemd’s native activation.
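
As an illustration (my own sketch, not code from this article), a
systemd-native socket-activated service might look like this in C. It
assumes a socket unit with Accept=no, so the listening socket itself
is handed over, and the echo behavior is made up for the example:

/* Minimal sketch of a systemd-native socket-activated echo service
 * (illustrative only). Link against libsystemd (on older systems,
 * libsystemd-daemon). */
#include <systemd/sd-daemon.h>
#include <sys/socket.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
        /* sd_listen_fds() returns how many sockets systemd passed us;
         * they start at SD_LISTEN_FDS_START (i.e. fd 3). */
        int n = sd_listen_fds(0);
        if (n != 1) {
                fprintf(stderr, "Expected exactly one socket from systemd.\n");
                return 1;
        }
        int listen_fd = SD_LISTEN_FDS_START;

        /* With Accept=no we receive the listening socket and call
         * accept() ourselves, just like a stand-alone daemon would. */
        for (;;) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn < 0)
                        continue;
                char buf[4096];
                ssize_t len = read(conn, buf, sizeof(buf));
                if (len > 0)
                        write(conn, buf, len); /* trivial echo, then hang up */
                close(conn);
        }
}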

Before we continue with a concrete example, let’s have a look at
three different schemes to make use of socket activation:

  1. Socket activation for parallelization, simplicity,
    robustness:
    sockets are bound during early boot and a singleton
    service instance to serve all client requests is immediately started
    at boot. This is useful for all services that are very likely used
    frequently and continuously, and hence starting them early and in
    parallel with the rest of the system is advisable. Examples: D-Bus,
    Syslog.
  2. On-demand socket activation for singleton services: sockets
    are bound during early boot and a singleton service instance is
    executed on incoming traffic. This is useful for services that are
    seldom used, where it is advisable to save the resources and time at
    boot and delay activation until they are actually needed. Example: CUPS.
  3. On-demand socket activation for per-connection service
    instances:
    sockets are bound during early boot and for each
    incoming connection a new service instance is instantiated and the
    connection socket (and not the listening one) is passed to it. This is
    useful for services that are seldom used, and where performance is not
    critical, i.e. where the cost of spawning a new service process for
    each incoming connection is limited. Example: SSH.

The three schemes have different performance characteristics. After
the service has finished starting up, the performance of the first two
schemes is identical to that of a stand-alone service (i.e. one that
is started without a super-server and without socket activation),
since the listening socket is passed to the actual service and the
code paths from then on are identical to those of a stand-alone
service; all connections are processed exactly the same way as in a
stand-alone service. The performance of the third scheme, on the other
hand, is usually not as good: since a new service instance needs to be
started for each connection, the resource cost is much higher. However,
it also has a number of advantages: for example, client connections
are better isolated and it is easier to develop services activated
this way.

For systemd, the first scheme is the primary focus, but the other two
schemes are supported as well. (In fact, the blog story in which I
covered the necessary code changes for systemd-style socket activation
was about a service of the second type, i.e. CUPS.) inetd primarily
focuses on the third scheme, though the second scheme is supported
too. (The first one isn’t. Presumably due to the focus on the third
scheme, inetd got its somewhat unfair reputation for being
“slow”.)

So much for the background; let’s cut to the chase now and show how an
inetd service can be integrated into systemd’s socket activation.
We’ll focus on SSH, a very common service that is widely installed and
used, but that on the vast majority of machines is probably not
started more often than once an hour on average (and usually much
less). SSH has supported inetd-style activation for a long time,
following the third scheme mentioned above. Since it is started only
every now and then, and only with a limited number of connections at
the same time, it is a very good candidate for this scheme, as the
extra resource cost is negligible: if made socket-activatable, SSH is
basically free as long as nobody uses it. And as soon as somebody logs
in via SSH it will be started, and the moment he or she disconnects
all its resources are freed again. Let’s find out how to make SSH
socket-activatable in systemd, taking advantage of the provided inetd
compatibility!

Here’s the configuration line used to hook up SSH with classic inetd:

ssh stream tcp nowait root /usr/sbin/sshd sshd -i

And the same as an xinetd configuration fragment:

service ssh {
        socket_type = stream
        protocol = tcp
        wait = no
        user = root
        server = /usr/sbin/sshd
        server_args = -i
}

Most of this should be fairly easy to understand, as the two
fragments express very much the same information. The non-obvious
part: the port number (22) is not configured in the inetd
configuration itself, but indirectly via the service database in
/etc/services: the service name is used as a lookup key in that
database and is translated to a port number. This indirection via
/etc/services has long been part of Unix tradition, but has been
falling out of fashion, and the newer xinetd hence optionally allows
configuration with explicit port numbers. The most interesting setting
here is the not very intuitively named nowait (resp. wait=no)
option. It configures whether a service follows the second (wait) or
the third (nowait) scheme mentioned above. Finally, the -i switch is
used to enable inetd mode in SSH.
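
As an aside, programs can perform the same /etc/services lookup
through the standard getservbyname() call. The following small C
sketch (my addition, purely illustrative) shows the name-to-port
indirection in action:

/* Illustrative sketch: resolve a service name to a port number via
 * /etc/services, the same indirection classic inetd relies on. */
#include <netdb.h>
#include <arpa/inet.h>
#include <stdio.h>

int main(void) {
        struct servent *se = getservbyname("ssh", "tcp");
        if (se == NULL) {
                fprintf(stderr, "ssh/tcp not found in /etc/services\n");
                return 1;
        }
        /* s_port is stored in network byte order. */
        printf("ssh/tcp resolves to port %d\n", ntohs(se->s_port));
        return 0;
}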

The systemd translation of these configuration fragments is the
following two units. First, sshd.socket is a unit encapsulating
information about the socket to listen on:

[Unit]
Description=SSH Socket for Per-Connection Servers

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

Most of this should be self-explanatory. A few notes: Accept=yes
corresponds to nowait. It is hopefully better named, referring to the
fact that for nowait the superserver calls accept() on the listening
socket, whereas for wait this is the job of the executed service
process. WantedBy=sockets.target ensures that, when enabled, this unit
is activated at boot at the right time.
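
To make the nowait/Accept=yes semantics concrete, here is a rough C
sketch of the classic per-connection superserver pattern: accept the
connection, then spawn the service with the connection socket wired to
STDIN and STDOUT. This illustrates the general technique only; it is
not systemd’s actual implementation:

/* Rough illustration of nowait/Accept=yes superserver behavior;
 * not systemd's actual code. */
#include <sys/socket.h>
#include <signal.h>
#include <unistd.h>

void serve_nowait(int listen_fd) {
        signal(SIGCHLD, SIG_IGN); /* auto-reap children, avoid zombies */
        for (;;) {
                int conn = accept(listen_fd, NULL, NULL);
                if (conn < 0)
                        continue;
                if (fork() == 0) {
                        /* Child: wire the connection to STDIN/STDOUT,
                         * then run the service in inetd mode. */
                        dup2(conn, STDIN_FILENO);
                        dup2(conn, STDOUT_FILENO);
                        close(conn);
                        execl("/usr/sbin/sshd", "sshd", "-i", (char *)NULL);
                        _exit(1); /* only reached if exec failed */
                }
                close(conn); /* parent keeps only the listening socket */
        }
}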

And here’s the matching service file sshd@.service:

[Unit]
Description=SSH Per-Connection Server

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket

This too should be mostly self-explanatory. Interesting here is
StandardInput=socket, the option that enables inetd compatibility for
this service. StandardInput= may be used to configure what the STDIN
of the service should be connected to (see the man page for
details). By setting it to socket we make sure to pass the connection
socket here, as expected by the simple inetd interface. Note that we
do not need to explicitly configure StandardOutput=, since by default
the setting from StandardInput= is inherited if nothing else is
configured. Also important is the “-” in front of the binary name. It
ensures that the exit status of the per-connection sshd process is
forgotten by systemd. Normally, systemd stores the exit status of all
service instances that die abnormally. SSH will sometimes die
abnormally with an exit code of 1 or similar, and we want to make sure
that this doesn’t cause systemd to keep around information for
numerous previous connections that died this way (until this
information is forgotten with systemctl reset-failed).
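
Seen from the service side, the inetd-style interface that
StandardInput=socket provides requires nothing special: the program
just reads and writes its standard file descriptors. The minimal echo
service below is again my own illustrative sketch, not code from the
article:

/* Minimal sketch of an inetd-style service: the superserver (or
 * systemd with StandardInput=socket) hands us the connection on
 * STDIN/STDOUT, so fds 0 and 1 talk directly to the client. */
#include <unistd.h>

int main(void) {
        char buf[4096];
        ssize_t len;

        /* Echo everything back until the client closes the connection. */
        while ((len = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
                if (write(STDOUT_FILENO, buf, len) != len)
                        return 1;
        return 0;
}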

sshd@.service is an instantiated service, as described in the
preceding installment of this series. For each incoming connection,
systemd will instantiate a new instance of sshd@.service, with the
instance identifier named after the connection credentials.

You may wonder why the systemd configuration of an inetd service
requires two unit files instead of one. The reason is that, to keep
things simple, we want the relation between live units and unit files
to be obvious, while at the same time the socket unit and the service
units can be ordered independently in the dependency graph and
controlled as independently as possible. (Think: this allows you to
shut down the socket independently of the instances, and to control
each instance individually.)

Now, let’s see how this works in real life. If we drop these files
into /etc/systemd/system we are ready to enable the socket and
start it:

# systemctl enable sshd.socket
ln -s '/etc/systemd/system/sshd.socket' '/etc/systemd/system/sockets.target.wants/sshd.socket'
# systemctl start sshd.socket
# systemctl status sshd.socket
sshd.socket - SSH Socket for Per-Connection Servers
	  Loaded: loaded (/etc/systemd/system/sshd.socket; enabled)
	  Active: active (listening) since Mon, 26 Sep 2011 20:24:31 +0200; 14s ago
	Accepted: 0; Connected: 0
	  CGroup: name=systemd:/system/sshd.socket

This shows that the socket is listening, and so far no connections
have been made (Accepted: will show you how many connections
have been made in total since the socket was started,
Connected: how many connections are currently active.)

Now, let’s connect to this from two different hosts, and see which services are now active:

$ systemctl --full | grep ssh
sshd@172.31.0.52:22-172.31.0.4:47779.service  loaded active running       SSH Per-Connection Server
sshd@172.31.0.52:22-172.31.0.54:52985.service loaded active running       SSH Per-Connection Server
sshd.socket                                   loaded active listening     SSH Socket for Per-Connection Servers

As expected, there are now two service instances running, one for
each of the two connections, and they are named after the source and
destination addresses of the TCP connection as well as the port
numbers. (For AF_UNIX sockets the instance identifier carries the PID
and UID of the connecting client.) This allows us to individually
introspect or kill specific sshd instances, in case you want to
terminate the session of a specific client:

# systemctl kill sshd@172.31.0.52:22-172.31.0.4:47779.service

And that’s probably already most of what you need to know about
hooking up inetd services with systemd and using them afterwards.

In the case of SSH, it would probably be a good idea for most
distributions, in order to save resources, to default to this kind of
inetd-style socket activation, but to provide a stand-alone unit file
for sshd as well that can be enabled optionally. I’ll soon file a
wishlist bug about this against our SSH package in Fedora.

A few final notes on how xinetd and systemd compare feature-wise, and
on whether xinetd is fully obsoleted by systemd. The short answer is
that systemd does not provide the full xinetd feature set and hence
does not fully obsolete xinetd. The longer answer is a bit more
complex: if you look at the multitude of options xinetd provides,
you’ll notice that systemd does not compare. For example, systemd does
not come with built-in echo, time, daytime or discard servers, and
never will. TCPMUX is not supported, and neither are RPC
services. However, you will also find that most of these are either
irrelevant on today’s Internet or have otherwise fallen out of
fashion. The vast majority of inetd services do not directly take
advantage of these additional features; in fact, none of the xinetd
services shipped on Fedora make use of these options. That said, there
are a couple of useful features that systemd does not support, for
example IP ACL management. However, most administrators will probably
agree that firewalls are the better solution for these kinds of
problems, and on top of that, systemd supports ACL management via
tcpwrap for those who indulge in retro technologies like this. On the
other hand, systemd also provides numerous features xinetd does not,
starting with the individual control of instances shown above and the
more expressive configurability of the execution context for the
instances. I believe that what systemd provides is quite
comprehensive, comes with little legacy cruft, and should provide you
with everything you need. And if there’s something systemd does not
cover, xinetd will always be there to fill the void, as you can easily
run it in conjunction with systemd. For the majority of uses, systemd
should cover what is necessary and allows you to cut down on the
components required to build your system. In a way, systemd brings
back the functionality of classic Unix inetd and turns it again into a
centerpiece of a Linux system.

And that’s all for now. Thanks for reading this long piece. And
now, get going and convert your services over! Even better, do this
work in the individual packages upstream or in your distribution!

Will Nokia Ever Realize Open Source Is Not a Panacea?

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/08/18/open-source-not-panacea.html

I was pretty sure there was something wrong with the whole thing in
the fall of 2009, when they first asked me. A Nokia employee contacted
me to ask if I’d be willing to be a director of the Symbian Foundation
(or so I thought that’s what they were asking; read on). I wrote them
a thoughtful response explaining my then-current concerns about
Symbian:

  1. the poor choice of the Eclipse Public License for the eventual
     code,

  2. the fact that Symbian couldn’t be built in any software freedom
     system environment, and

  3. the fact that the Symbian source code that had been released thus
     far didn’t actually run on any existing phones.

I nevertheless offered to serve as a director for one year, and I would
resign at that point if the problems that I’d listed weren’t
resolved.

I figured that was quite a laundry list. I also figured that they
probably wouldn’t be interested anyway once they saw my list.
Amusingly, they still were. But then, I realized what was really going
on.

In response to my laundry list, I got back a rather disturbing
response that revealed a misunderstanding on my part. I wasn’t being
invited to join the board of the Symbian Foundation. They had asked me
instead to serve as a Director of a small USA entity (which they
heralded as Symbian DevCo) that would then be permitted one
Representative in the Symbian Foundation itself, which was, in turn, a
trade association controlled by dozens of proprietary software
companies.

In fact, this Nokia employee said that they planned to channel all
individual developers toward this Symbian DevCo in the USA, and that
this would be the only voice those developers would have in the
direction of Symbian. It would be one tiny voice against the dozens of
proprietary software companies that controlled the real Symbian
Foundation, a trade association.

Anyone who has worked in the non-profit sector, or even contributed to
any real software freedom project, can see what’s deeply wrong
there. However, my response wasn’t to refuse. I wrote back and said
clearly why this was failing completely to create a software freedom
community that could survive vibrantly. I pointed out the way the
Linux community is structured: the Linux Foundation is a trade
association for companies, and while it does fund Linus’ salary, it
doesn’t control his activities or those of any other developer.
Meanwhile, the individual Linux developers hold all the real
authority: from community structure, to licensing, to holding
copyrights, to technical decision-making. I pointed out that if they
wanted Symbian to succeed, they should emulate Linux as much as they
could. I suggested Nokia immediately change the whole structure to put
developers in charge of the project, and create a path for Symbian
DevCo to ultimately become the primary organization in charge of the
codebase, while the Symbian Foundation could remain the trade
association, roughly akin to the Linux Foundation. I offered to help
them do that.

You might guess that I never got a reply to that email. It was thus no
surprise to me in the least what happened to Symbian after that:

In December 2010 (nearly 13 months to the day after my email exchange
described above), the Symbian Foundation shut down all its websites.

In February 2011, Nokia announced its partnership with Microsoft to
prefer Windows Phone 7 on its phones.

In April 2011, Nokia announced that Symbian would no longer be
available as Free Software.

In June 2011, Nokia announced that another consulting company would
take over proprietary development of Symbian.

So, within 17 months of the Symbian Foundation’s inquiry asking me to
help run Symbian DevCo, the (Open Source) Symbian project was canceled
entirely, the codebase was once again proprietary (with a few of the
old code dumps floating around on other sites), and the Symbian
Foundation consists of only a single webpage filled with double-speak.

Of course, even if Nokia had tried its hardest to build an actual
software freedom community, Symbian still had a good chance of
failing, as I pointed out in March 2010. But if Nokia had actually
tried to release control and let developers have some authority,
Symbian might have had a fighting chance as Free Software. As it
turned out, Nokia threw some code over the wall, gave all the power
over its fate to a bunch of proprietary software companies, and then
hung it all out to dry. It’s a shining example of how to liberate
software in a way that guarantees its deprecation in short order.

Of course, we now know that during all this time, Nokia was busy
preparing a backroom deal that would end its
always-burgeoning-but-never-complete affiliation with software freedom
by making a deal with Microsoft to control the future of Nokia. It’s a
foolish decision for software freedom; whether it’s a good business
decision surely isn’t for me to judge. (After all, I haven’t worked in
the for-profit sector for fifteen years for a reason.)

It’s true that I’ve always given a hard time to Maemo (and to MeeGo as
well). Those involved from inside Nokia spent the last six months
telling me that MeeGo is run by completely different people at Nokia,
and Nokia did recently launch yet another MeeGo-based product. I’ve meanwhile
gotten the impression that Nokia is one of those companies whose
executives are more like wealthy Romans who like to pit their champions
against each other in the arena to see who wins; Nokia’s various
divisions appear to be in constant competition with each other. I
imagine someone running the place has read too much Ayn Rand.

Of course, it now seems that MeeGo hasn’t, in Nokia’s view,
“survived as the fittest”. I learned today (thanks to jwildeboer)
that, in Elop’s words, there is no returning to MeeGo, even if the N9
turns out to be a hit. Nokia’s commitment to Maemo/MeeGo, while it did
last at least four years or so, is now gone too, as they begin their
march to Microsoft’s funeral dirge. Yet another FLOSS project that
Nokia got serious about, coordinated poorly, and ultimately gave up
on.

Considering Nokia’s bad trajectory led me to think about how Open
Source companies tend to succeed. I’ve noticed something interesting,
which I’ve confirmed by talking to a lot of employees of successful
Open Source companies. The successful ones, those that get something
useful done for software freedom while also making some cash (i.e.,
the true promise of Open Source), let the developers run the software
projects themselves. Such companies don’t relegate the developers to a
small non-profit that has to lobby dozens of proprietary software
companies to actually make an impact. They don’t throw code over the
wall; rather, they fund developers who make their own decisions about
what to do in the software. Ultimately, smart Open Source companies
treat software freedom development the way R&D should be treated: fund
it, see what comes out, and try to build a business model once
something’s already working. Companies like Nokia, by contrast,
constantly put their carts in front of all the horses and wonder why
those horses whinny loudly at them but don’t write any code.

Open Source slowly became a fad during the DotCom era, and it strangely
remains such. A lot of companies follow fads, particularly when they
can’t figure what else to do. The fad becomes a quick-fix solution. Of
course, for those of us that started as volunteers and enthusiasts in
1991 or earlier, software freedom isn’t some new attraction at
P. T. Barnum’s circus. It’s a community where we belong and collaborate
to improve society. Companies are welcome to join us for the ride, but
only if they put developers and users in charge.

Meanwhile, my personal postscript to my old conversation with Nokia
arrived in my inbox late in May 2011. I received an extremely vague
email from a lawyer at Nokia. She wanted very badly to figure out how
to quickly dump some software project (she wouldn’t tell me which one)
into the Software Freedom Conservancy. Of course, I’m sure this lawyer
knew nothing about the history of the Symbian project wooing me for
directorship of Symbian DevCo, or all the other history of why
“throwing code over the wall” into a non-profit is rarely known
to work, particularly for Nokia. I sent her a response explaining all
the problems with her request, and, true to Nokia’s style, she didn’t
even bother to respond to thank me for my time.

I can’t wait to see what project Nokia dumps over the wall next and
then, in another 17 months (or, if they really want to lead us on,
four years), decides to proprietarize or abandon because, they’ll say,
this open-sourcing thing just doesn’t work. Yet so many companies make
money with it. The short answer is: Nokia, you keep doing it wrong!

Update (2011-08-24): Boudewijn Rempt argued another side of this
question. He says the Calligra suite is a counterexample of Nokia
getting a FLOSS project right. I don’t know enough about Calligra to
agree or disagree.

Project Harmony (and “Next Generation Contributor Agreements”) Considered Harmful

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/07/07/harmony-harmful.html

Update on 2014-06-10: While this article is about a specific series
of attempts to “unify” CLAs and ©AAs into a single set of documents,
the issues raised below cover the gamut of problems encountered in
many CLAs and ©AAs in common use today in FLOSS projects. Even though
it appears that both Project Harmony and its reincarnation, the Next
Generation Contributor Agreements, have failed, CLAs and ©AAs are
increasing in popularity among FLOSS projects, and developers should
take action to oppose these agreements for their projects.

Update on 2013-09-05: Project Harmony was recently relaunched under
the name the Next Generation of Contributor Agreements. AFAICT, it’s
been publicly identified as the same initiative, and its funding comes
from the same person. I’ve verified that everything I say below still
applies to their current drafts, available from the Contributor
Agreements project. I also emailed these comments to the leaders of
that project before it started, but they wouldn’t respond to my policy
questions.

Much advertising is designed to convince us to buy or use something
that we don’t need. When I hear someone droning on about some new,
wonderful thing, I have to worry that these folks are actually trying
to market something to me.

Very soon, you’re likely to see a marketing blitz for this thing
called Project Harmony (which just released its 1.0 version of
document templates). Even the name itself is marketing: it’s not
actually descriptive, but is so named to market a “good feeling”
about the project before you even know what it is. (It’s also got a
serious namespace collision, including with a project already in the
software freedom community.)

Project Harmony markets itself as fixing something that our community
doesn’t really consider broken. Project Harmony is a set of document
templates, primarily promulgated and mostly drafted by corporate
lawyers, that entice developers to give control of their software work
over to companies.

My analysis below is primarily about how these agreements are
problematic for individual developers. An analysis of the agreements in
light of companies or organizations using them between each other may
reach the same or different conclusions; I just haven’t done that
analysis in detail, so I don’t know what the outcome would be.

[ BTW, I’m aware that I’ve failed to provide a
TL;DR version of this article.
I tried twice to write one and ultimately decided that I can’t. Simply
put, these issues are complex, and I had to draw on a decade of software
freedom licensing, policy, and organizational knowledge to fully
articulate what’s wrong with the Project Harmony agreements. I realize that sounds
like an “it was hard to write, so it should be hard to read”
justification, but I just don’t know how to summarize these
Gordian problems in a pithy way. I nevertheless hope developers will
take the time to read this before they sign a Project Harmony agreement,
or — indeed — any CLA or ©AA. ]

Copyright Assignment That Lacks Real Assurances

First of all, about half of Project Harmony is copyright assignment
agreements (©AAs). Assigning copyright completely gives the work over
to someone else. Once the ©AA is signed, the work ceases to belong to
the assignor. It’s as if that work was done by the assignee. There is
admittedly some value to copyright assignment,
particularly if developers want to ensure that the
GPL or other copyleft is
enforced on their work and they don’t have time to do it themselves.
(Although developers can also designate an enforcement agent to do that on their
behalf even if they don’t assign copyright, so even that necessity is
limited.)

One must immensely trust an assignee organization. Personally, I’ve
only ever assigned some of my copyrights to one organization in my
life: the Free Software Foundation,
because FSF is the
only organization I ever encountered that is institutionally committed
to
DTRT’ing with
copyrights in a manner similar to my personal moral beliefs.

First of all, as I’ve written about before, FSF’s ©AA makes all
sorts of promises back to the assignor. Second, FSF is institutionally committed
to the GPL and
enforcing GPL in a way
that advances FSF’s non-profit advocacy mission for software freedom.
All of this activity fits my moral principles, so I’ve been willing to
sign FSF’s ©AAs.

Yet, I’ve nevertheless met many developers who refuse to sign
FSF’s ©AAs. While many such developers like the GPL, they don’t
necessarily agree with the FSF’s moral positions. Indeed, in many
cases, developers are completely opposed to assigning copyright to
anyone, FSF or otherwise. For
example, Linus
Torvalds, founder of Linux, has often stated on record
that
he never wanted to do copyright assignments, for several reasons:
[he] think[s] they are nasty and wrong personally, and [he]’d hate all
the paperwork, and [he] thinks it would actually detract from the
development model.

Obviously, my position is not as radical as Linus’; I do think
©AAs can sometimes be appropriate. But, I also believe that
developers should never assign copyright to a company or to an
organization whose moral philosophy doesn’t fit well with their own.

FSF, for its part, spells out its moral position in its ©AA itself.
As I’ve mentioned elsewhere, and as Groklaw recently covered in
detail, FSF’s ©AA makes various legally binding promises to
developers who sign it. Meanwhile, Project Harmony’s ©AAs, while
they put forward a few options that look vaguely acceptable (although
they have problems of their own discussed below), make no such promises
mandatory. I have often pointed Harmony’s drafters to the terms that
FSF has proposed should be mandatory in any for-profit company’s
©AA, but Harmony’s drafters have refused to incorporate these
assurances as a required part of Harmony’s agreements. (Note that
such assurances would still be required for the CLA options as well;
see below for details on why.)

Regarding ©AAs, I’d like to note finally that FSF does not require
©AAs for all GNU packages. This confusion is so common that I’d like
to draw attention to it, even though it’s only a tangential point in
this context. FSF’s ©AA is only mandatory, to my knowledge, on those
GNU packages where either (a) FSF employees developed the first
versions or (b) the original developers themselves asked to assign
copyright to FSF upon their project joining GNU. In all other cases,
FSF assignment is optional. Some GNU projects, such as GNOME, have
their own positions regarding ©AAs that differ radically from
FSF’s.
I seriously doubt that companies who adopt Project Harmony’s agreement
will ever be as flexible on copyright assignment as FSF, nor will any of
the possible Project Harmony options be acceptable to GNOME’s existing
policy.

Giving Away Rights to Give Companies Warm Fuzzies?

Project Harmony, however, claims that the important part isn’t its
©AA, but its Contributor License Agreement (CLA). To briefly consider
the history of Free Software CLAs, note that the Apache CLA was likely
the first CLA used in the Free Software community. The Apache Software
Foundation has always been heavily influenced by IBM and other
companies, and such companies have generally sought the “warm
fuzzies” of getting every contributor to formally assent to a
complex legal document that asserts various assurances about the code
and gives certain powers to the company.

The main point of a CLA (and a somewhat valid one) is to ensure that
the developers have verified their right to contribute the code under
the specified copyright license. Both the Apache CLA and Project
Harmony’s CLA go to great lengths to require developers
to agree that they know the contribution is theirs. In fact, if a
developer signs one of these CLAs, the developer makes a formal
contract with the entity (usually a for-profit
company) that the developer knows for sure that the contribution is
licensed under the specified license. The developer then takes on all
liability if that fact is in any way incorrect or in dispute!

Of course, shifting away all liability about the origins of the code is
a great big “warm fuzzy” for the company’s lawyers. Those
lawyers know that they can now easily sue an individual
developer for breach of contract if the developer was wrong
about the code. If the company redistributes some developer’s code and
ends up in an infringement suit where the company has to pay millions of
dollars, they can easily come back and sue the
developer [0]. The company would argue in
court that the developer breached the CLA. If this possible outcome
doesn’t immediately worry you as an individual developer
signing a Project Harmony CLA for your
FLOSS contribution, it should.

“Choice of Law” & Contractual Arrangement Muddies Copyright Claims

Apache’s CLA
doesn’t have a choice of law clause, which is preferable in my opinion.
Most lawyers just love a “choice of law” clause for
various reasons. The biggest reason is that it means the rules that
apply to the agreement are the ones with which the lawyers are most
familiar, and the jurisdiction for disputes will be the local
jurisdiction of the company, not of the developer. In addition, lawyers
often pick particular jurisdictions that are very favorable to their
client and not as favorable to the other signers.

Unfortunately, all of Project Harmony’s drafts include a
“choice of law” clause [1]. I expect that the drafters will
argue in response that the jurisdiction is a configuration variable.
However, the problem is that the company decides the binding of
that variable, which almost always won’t be the binding that an
individual developer prefers. The term will likely be
non-negotiable at that point, even though it was configurable in the
template.

Not only that, but imagine a much more likely scenario about the CLA:
the company fails to use the outbound license they promised. For
example, suppose they promised the developers it’d be
AGPL’d forever
(although, no such option actually exists in Project Harmony, as
described below!), but then the company releases proprietarized
versions. The developers who signed the CLA are still copyright
holders, so they can enforce under copyright law, which, by itself,
would allow the developers to enforce under the laws in whatever jurisdiction suits
them (assuming the infringement is happening in that jurisdiction, of
course).

However, by signing a CLA with a “choice of law” clause,
the developers agreed to whatever jurisdiction is stated in that CLA.
The CLA has now turned what would otherwise be a mundane copyright
enforcement action operating purely under the developer’s local copyright law into a contract
dispute between the developers and the company under the chosen
jurisdiction’s laws. Obviously that agreement might include AGPL and/or GPL by reference,
but the claim of copyright infringement due to violation of GPL is now
muddied by the CLA contract that the developers signed, wherein the
developers granted some rights and permission beyond GPL to the
company.

Even worse, if the developer does bring action in their own
jurisdiction, that jurisdiction’s courts are forced to interpret the
laws of another place. This leads to highly variable and confusing results.

Problems for Individual Copyright Enforcement Against Third-Parties

Furthermore, even though individual developers still hold the
copyrights, the Project Harmony CLAs grant many transferable rights and
permissions to the CLA recipient (again, usually a company).
Even if the reasons for requiring that were noble, it
introduces a bundle of extra permissions that can be passed along to
other entities.

Suddenly, what was once a simple copyright enforcement action for a
developer discovering a copyleft violation becomes a question: Did
this violating entity somehow receive special permissions from the
CLA-collecting entity? Violators will quickly become aware of this
defense. While the defense may not have merit (i.e., the CLA recipient
may not even know the violator), it introduces confusion. Most legal
proceedings involving software are already confusing enough for courts
due to the complex technology involved. Adding something like this will
just cause trouble and delays, further taxing our already minimally
funded community copyleft enforcement efforts.

Inbound=Outbound Is All You Need

Meanwhile, the whole CLA question actually is but one fundamental
consideration: Do we need this? Project Harmony’s answer is clear: its
proponents claim that there is mass confusion about CLAs and no
standardization, and therefore Project Harmony must give a standard set
of agreements that embody all the options that are typically used.

Yet, Project Harmony has purposely refused to offer the simplest and
most popular option of all, which my colleague Richard Fontana (a
lawyer at Red Hat who also opposes Project Harmony) last year dubbed
inbound=outbound. Specifically, the default agreement in the
overwhelming majority of FLOSS projects is simply this: each
contributor agrees to license each contribution using the project’s
specified copyright license (or a license compatible with the
project’s license).

No matter how you dice Project Harmony, the other contractual
problems described above make true inbound=outbound impossible, because
the CLA recipient is never actually bound formally by the project’s
license itself. Meanwhile, even under its best configuration, Project
Harmony can’t adequately approximate inbound=outbound. Specifically,
Project Harmony attempts to limit outbound licensing with its §
2.3 (called Outbound License). However, all the copyleft
versions of this template include a clause that says: We [the CLA
recipient] agree to license the Contribution … under terms of the
… licenses which We are using on the Submission Date for the
Material. Yet, there is no way for the contributor to
reliably verify what licenses are in use privately by the entity
receiving the CLA. If the entity is already engaged in, for example, a
proprietary
relicensing business model
at the Submission Date, then the
contributor grants permission for such relicensing on the new
contribution, even if the rest of § 2.3 promises copyleft. This is
not a hypothetical: there have been many cases where it was unclear
whether or not a company was engaged in proprietary relicensing, and
then later it was discovered that they had been privately doing so for
years. As written, therefore, every configuration of Project Harmony’s
§ 2.3 is useless to prevent proprietarization.

Even if that bug were fixed, the closest Project Harmony gets to
inbound=outbound is restricting the CLA version to “FSF’s list of
‘recommended copyleft licenses’”. However, this
category makes no distinction between
the AGPL and GPL,
and furthermore ultimately grants FSF power over relicensing (as FSF can
change its
list of recommended copylefts
at will). If the contributors are
serious about the AGPL, then Project Harmony cannot
assure their changes stay AGPL’d. Furthermore,
contributors must trust the FSF in perpetuity, even more
than is already needed with the -or-later options in the existing
FSF-authored licenses. I’m all for trusting the FSF myself in most
cases. However, because I prefer plain AGPLv3-or-later for my code,
Project Harmony is completely unable to accommodate my licensing
preferences to even approximate an AGPL version of inbound=outbound
(even if I ignored the numerous problems already discussed).

Meanwhile, the normal, mundane, and already widely used
inbound=outbound practice is simple, effective, and doesn’t mix in
complicated contract disputes and control structures with the project’s
governance. In essence, for most FLOSS projects, the copyright license
of the project serves as the Constitution of the project, and doesn’t
mix in any other complications. Project Harmony seeks to give warm
fuzzies to lawyers at the expense of offloading liability, annoyance,
and extra hoop-jumping onto developers.

Linux Hackers Ingeniously Trailblazed inbound=outbound

Almost exactly 10 years ago today, I recall distinctly attending the
USENIX 2001 Linux BoF session. At that session, Ted Ts’o
and I had a rather lively debate; I claimed that FSF’s ©AA assured
legal certainty of the GNU codebase, but that Linux had no such
assurance. (BTW, even I was confused in those days and thought
all GNU packages required FSF’s ©AA.) Ted explained, in his usual
clear and bright manner, that such heavy-handed methods shouldn’t be
needed to give legal certainty to the GPL and that the Linux community
wanted to find an alternative.

I walked away skeptically shaking my head. I remember thinking: Ted
just doesn’t get it. But I was wrong; he did get it. In
fact, many of the core Linux developers did. Three years to the month
after that public conversation with Ted, the Developer’s Certificate
of Origin (DCO) became the official required way to handle the
“CLA issue” for Linux, and it remains the policy of Linux
today. (See item 12 in Linux’s Documentation/SubmittingPatches
file.)

The DCO,
in fact, is the only CLA any FLOSS project ever needs! It implements
inbound=outbound in a simple and straightforward way, without giving
special powers over to any particular company or entity. Developers
keep their own copyright and they unilaterally attest to their right to
contribute and the license of the contribution. (Developers can even
sign a ©AA with some other entity, such as the FSF, if they wish.)
The DCO also gives a simple methodology (i.e.,
the Signed-off-by: tag) for developers to so attest.
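
Concretely, a contributor attests by appending a trailer of this form
to the end of each commit message (git commit -s adds it
automatically); the name and address here are, of course, placeholders:

Signed-off-by: Jane Developer <jane.developer@example.org>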

I admit that I once scoffed at the (what I then considered
naïve) simplicity of the DCO when compared to FSF’s ©AA.
Yet, I’ve been since convinced that the Linux DCO clearly accomplishes
the primary job and simultaneously fits how most developers like to
work. ©AAs have their place, particularly when the developers
find a trusted organization that aligns with their personal moral code
and will enforce copyleft for them. However, for CLAs, the Linux DCO
gets the important job done and tosses aside the pointless and
pro-corporate stuff.

Frankly, if I have to choose between making things easy for developers
and making them easy for corporate lawyers, I’m going to choose the
former every time: developers actually write the code, while, most of
the time, companies’ legal departments just get in our way. The FLOSS
community needs just enough
CYA stuff to get by; the DCO
shows what’s actually necessary, as opposed to what corporate
attorneys wish they could get developers to do.

What about Relicensing?

Admittedly, Linux’s DCO does not allow for relicensing wholesale of the
code by some single entity; it’s indeed the reason a Linux switch to GPLv3
will be an arduous task of public processes to ensure permission to make
the change. However, it’s important to note that the Linux
culture believes in GPLv2-only as a moral foundation and
principle of their community. It’s not a principle I espouse; most of my
readers know
that my
preferred software license is AGPLv3-or-later
. However, that’s the
point here: inbound=outbound is the way a FLOSS community
implements their morality; Project Harmony seeks to remove community
license decision-making from most projects.

Meanwhile, I’m all for the “-or-later” brand of relicensing
permission; GPL, LGPL and AGPL have left this as an option for
community choice since GPLv1 was published in the late 1980s. Projects
declare themselves GPLv2-or-later or LGPLv3-or-later, or even
(GPLv1-or-later|Artistic) (à la Perl 5) to identify their culture and
relicensing permissions. While it would sometimes be nice to have a
broad post-hoc relicensing authority, the price for that is steep:
abandonment of community clarity regarding what terms define their
software development culture.
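
For instance, a project declares itself GPLv2-or-later simply by using
the standard per-file license notice, which already carries the
relicensing permission inside it:

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or (at
your option) any later version.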

An Anti-Strong-Copyleft Bias?

Even worse, Project Harmony remains biased against some of the more
fine-grained versions of copyleft culture. For example, Allison
Randal, who is heavily involved with Project Harmony, argued on Linux
Outlaws Episode 204 that Most developers who contribute under a
copyleft license — they’d be happy with any copyleft license —
AGPL, GPL, LGPL. Yet there are well-stated reasons why developers
might pick GPL rather than LGPL.
Thus, giving a for-profit company (or non-profit that doesn’t
necessarily share the developers’ values) unilateral decision-making
power to relicense GPL’d works under LGPL or other weak copyleft
licenses is ludicrous.

In its 1.0 release, Project Harmony attempted to add a “strong
copyleft only” option. It doesn’t actually work, of course, for
the various reasons discussed in detail above. But even so, this
solution is just one option among many, and is not required as a default
when a project is otherwise copylefted.

Finally, it’s important to realize that the GPLv3, AGPLv3, and
LGPLv3 already offer a “proxy option”; projects can name someone
to decide the -or-later question at a later time.
So, for those projects that use any of the set { LGPLv3-only,
AGPLv3-only, GPLv3-only, GPLv2-or-later, GPLv1-or-later, or
LGPLv2.1-or-later }, the developers already have mechanisms to
move to later versions of the license with ease — by specifying a
proxy. There is no need for a CLA to accomplish that task in the GPL
family of licenses, unless the goal is to erode stronger copylefts into
weaker copylefts.

This is No Creative Commons, But Even If It Were, Is It Worth Emulation?

Project Harmony’s proponents love to compare the project
to Creative Commons, but the
comparison isn’t particularly apt. Furthermore, I’m not convinced the
FLOSS community should emulate the
CC license suite wholesale,
as some of the aspects of the CC structure are problematic when imported
back into FLOSS licensing.

First of all, Larry Lessig (who is widely considered a visionary)
started the CC licensing suite to bootstrap a Free Culture movement
modeled on the software freedom movement (which he spent a decade
studying). However, Lessig made some moral compromises in an attempt
to build a bridge to the “some rights reserved” mentality. As
such, many of the CC licenses — notably those that include the
non-commercial (NC) or no-derivatives (ND) terms — are considered
overly restrictive of freedom and are therefore shunned by Free
Culture activists and software freedom advocates alike.

Over nearly a decade, such advocates have slowly begun to convince
copyright holders to avoid CC’s NC and ND options, but CC’s own
continued promulgation of those options lends them undue legitimacy.
Thus, CC and Project Harmony make the same mistake: they act amorally in
an attempt to build a structure of licenses/agreements that tries to
bridge a gulf in understanding between a
FaiF community and those
only barely dipping their toe in that community. I chose the word
amoral, as
I often do
, to note a situation where important moral principles
exist, but the primary actors involved seek to remove morality from the
considerations under the guise of leaving decision-making to the
“magic of the marketplace”. Project Harmony is repeating
the mistake of the CC license suite that the Free Culture community has
spent a decade (and counting) cleaning up.

Conclusions

Please note that IANAL and
TINLA. I’m just a
community- and individual-developer-focused software freedom policy
wonk who has some grave concerns about how these Project Harmony
Agreements operate. I can’t give you a fine-grained legal analysis,
because I’m frankly only an amateur when it comes to the law, but
I am an expert in software freedom project policy. In that
vein — corporate attorney endorsements notwithstanding — my
opinion is that Project Harmony should be abandoned entirely.

In fact, the distinction between policy and legal expertise actually
shows the root of the problem with Project Harmony. It’s a system of
documents designed by a committee primarily comprised of corporate
attorneys, yet it’s offered up as if it’s a FLOSS developer consensus.
Indeed, Project Harmony itself was initiated by Amanda Brock, a
for-profit corporate attorney for Canonical, Ltd., who remains involved
in its drafting. Canonical, Ltd. later hired Mark Radcliffe (a big law
firm attorney who has defended GPL violators) to draft the alpha
revisions of the document, and Radcliffe remains
involved in the process. Furthermore, the primary drafting process was
done secretly in closed meetings dominated by corporate attorneys
until the documents were almost complete; the process was not made
publicly open to the FLOSS community until April 2011. The 1.0
documents differ little from the drafts that were released in April
2011, and thus remain to this day primarily documents drafted in secrecy
by corporate attorneys who have only a passing familiarity with software
freedom culture.

Meanwhile, I’ve asked Project Harmony’s advocates many times who is
in charge of Project Harmony now, and no one can give me a straight
answer. One is left to wonder who decides final draft approval and
what process exists to prevent or permit text in the drafts. The
process, which once was conducted in secrecy, now appears to be in
chaos, because it was opened up too late for fundamental problems to be
resolved.

A few developers are indeed actively involved in Project Harmony. But
Project Harmony is not something that most developers requested; it was
initiated by companies who would like to convince developers to
passively adopt overreaching CLAs and ©AAs. To me, the whole
Project Harmony process feels like a war of attrition to convince
developers to accept something that they don’t necessarily want with
minimal dissent. In short, the need for Project Harmony has not been
fully articulated to developers.

Finally, I ask, what’s really broken here? The industry has been
steadily and widely adopting GNU and Linux for years. GNU, for its
part, has FSF assignments in place for many of its earlier projects,
but the later projects (GNOME, in particular) have either been against
both ©AAs and CLAs entirely, or are mostly indifferent to them and use
inbound=outbound. Linux, for its part, uses the DCO, which does the job
of handling the urgent and important parts of a CLA without getting in
developers’ way and without otherwise forcing extra liabilities onto
the developers or handing over important licensing decisions (including
copyleft-weakening ones) to a single (usually for-profit) entity.

In short, Project Harmony is a design-flawed solution looking for a
problem.

Further Reading

Richard Fontana’s The Trouble With Harmony, Part I
Richard Fontana’s The Trouble With Harmony, Part II
Dave Neary’s Harmony Agreements reach 1.0
OpenStack community acrimony regarding their CLA and contributors’ desire to end it
Simon Phipps’ Out Of Tune With Community
Martin Gräßlin’s Why I would not sign a Harmony Agreement
Michael Meeks’ Some Thoughts on Copyright Assignment
Dave Neary’s Copyright assignment and other barriers to entry
My [Proprietary Relicensing] is the New Shareware
RMS’ When a company asks for your copyright
Brett Smith’s The FSF and Project Harmony
Jos Poortvliet’s Harmony 1.0 is out


Jos Poortvliet’s The issue of bringing harmony to copyright assignment
Simon Phipps’ Balancing transparency and privacy
GNOME Policy on Copyright Assignment
GNOME Foundation Guidelines on Copyright Assignment
Amanda Brock’s Project Harmony looks to improve contribution agreements
Allison Randal’s Harmony 1.0 Reflections
Project Harmony Agreements Mailing List Archives
Harmony Agreement Drafts
Richard Fontana’s slides from his Contribution Policies for Open Source Projects talk
Mark J. Wielaard’s Trusting companies with your code…
Jed Brown cited this article on 2014-08-29 when arguing against the openmpi project’s CLA.

[0] Project Harmony advocates will likely claim that their § 5,
“Consequential Damage Waiver”, protects developers adequately. I
note that it explicitly leaves out, for example, statutory damages for
copyright infringement. Also, some types of damages cannot be waived
(which is why that section shouts at the reader TO THE MAXIMUM EXTENT
PERMITTED BY APPLICABLE LAW). Note my discussion of jurisdictions in
the main text of this article, and consider the fact that the CLA
recipient will obviously select a jurisdiction where the fewest
possible damages can be waived. Finally, note that the OR US part of
that § 5 is optionally available, and surely corporate attorneys will
use it, which means that if they violate the agreement, even if they
break their promise to keep the code copylefted, there’s basically no
way for you to get any damages from them.

[1] Note: Earlier versions of this blog post conflated slightly
“choice of venue” with “choice of law”. The wording
has been cleaned up to address this problem. Please comment or email me
if you believe it’s not adequately corrected.

systemd for Developers II

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/socket-activation2.html

It has been way too long since I posted the first
episode
of my systemd for Developers series. Here’s finally the
second part. Make sure you read the first episode of the series before
you start with this part since I’ll assume the reader grokked the wonders
of socket activation.

Socket Activation, Part II

This time we’ll focus on adding socket activation support to real-life
software, more specifically the CUPS printing server. Most current Linux
desktops run CUPS by default these days, since printing is so basic that it’s a
must have, and must just work when the user needs it. However, most desktop
CUPS installations probably don’t actually see more than a handful of print
jobs each month. Even if you are a busy office worker you’ll unlikely to
generate more than a couple of print jobs an hour on your PC. Also, printing is
not time critical. Whether a job takes 50ms or 100ms until it reaches the
printer matters little. As long as it is less than a few seconds the user
probably won’t notice. Finally, printers are usually peripheral hardware: they
aren’t built into your laptop, and you don’t always carry them around plugged
in. That all together makes CUPS a perfect candidate for lazy activation:
instead of starting it unconditionally at boot we just start it on-demand, when
it is needed. That way we can save resources, at boot and at runtime. However,
this kind of activation needs to take place transparently, so that the user
doesn’t notice that the print server was not actually running yet when he tried
to access it. To achieve that we need to make sure that the print server is
started as soon at least one of three conditions hold:

  1. A local client tries to talk to the print server, for example because
    a GNOME application opened the printing dialog which tries to enumerate
    available printers.
  2. A printer is being plugged in locally, and it should be configured and
    enabled and then optionally the user be informed about it.
  3. At boot, when there’s still an unprinted print job lurking in the queue.

Of course, the desktop is not the only place where CUPS is used. CUPS can also
run on small and big dedicated print servers. In that case the amount of print jobs
is substantially higher, and CUPS should be started right away at boot. That
means that (optionally) we still want to start CUPS unconditionally at boot and
not delay its execution until when it is needed.

Putting this all together, we need four kinds of activation to make CUPS work
well in all situations at minimal resource usage: socket based activation (to
support condition 1 above), hardware based activation (to support condition 2),
path based activation (for condition 3) and finally boot-time activation (for
the optional server usecase). Let’s focus on these kinds of activation in more
detail, and in particular on socket-based activation.

Socket Activation in Detail

To implement socket-based activation in CUPS we need to make sure that when
sockets are passed from systemd these are used to listen on instead of binding
them directly in the CUPS source code. Fortunately this is relatively easy to
do in the CUPS sources, since it already supports launchd-style socket
activation, as it is used on MacOS X (note that CUPS is nowadays an Apple
project). That means the code already has all the necessary hooks to add
systemd-style socket activation with minimal work.

To begin with our patching session we check out the CUPS sources.
Unfortunately CUPS is still stuck in unhappy Subversion country and not using
git yet. In order to simplify our patching work our first step is to use
git-svn to check it out locally in a way we can access it with the
usual git tools:

git svn clone http://svn.easysw.com/public/cups/trunk/ cups

This will take a while. After the command finishes we use the wonderful
git grep to look for all occurrences of the word “launchd”, since
that’s probably where we need to add the systemd support too. This reveals scheduler/main.c
as the main source file which implements launchd interaction.

Browsing through this file we notice that two functions are primarily
responsible for interfacing with launchd, the appropriately named
launchd_checkin() and launchd_checkout() functions. The
former acquires the sockets from launchd when the daemon starts up, the latter
terminates communication with launchd and is called when the daemon shuts down.
systemd’s socket activation interfaces are much simpler than those of launchd.
Due to that we only need an equivalent of the launchd_checkin() call,
and do not need a checkout function. Our own function
systemd_checkin() can be implemented very similarly to
launchd_checkin(): we look at the sockets we got passed and try to map
them to the ones configured in the CUPS configuration. If we got more sockets
passed than configured in CUPS we implicitly add configuration for them. If the
CUPS configuration includes definitions for more listening sockets those will
be bound natively in CUPS. That way we’ll very robustly listen on all ports
that are listed in either systemd or CUPS configuration.

Our function systemd_checkin() uses sd_listen_fds() from
sd-daemon.c to acquire the file descriptors. Then, we use
sd_is_socket() to map the sockets to the right listening configuration
of CUPS, in a loop for each passed socket. The loop corresponds very closely to
the loop from launchd_checkin(), but is a lot simpler. Our patch so far looks like this.
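
To give a feel for the shape of that loop, here is a minimal sketch of
what such a systemd_checkin() might look like. Only sd_listen_fds(),
sd_is_socket() and SD_LISTEN_FDS_START are the real sd-daemon.h
interfaces; the cupsd_* helpers are hypothetical stand-ins for the
CUPS-internal listener bookkeeping:

#include <sys/socket.h>
#include "sd-daemon.h"

static void systemd_checkin(void)
{
        int n, fd;

        /* How many sockets did systemd pass to us, if any? */
        n = sd_listen_fds(0);
        if (n <= 0)
                return; /* Not socket activated; bind sockets ourselves. */

        for (fd = SD_LISTEN_FDS_START; fd < SD_LISTEN_FDS_START + n; fd++) {
                /* We only care about listening stream sockets here. */
                if (sd_is_socket(fd, AF_UNSPEC, SOCK_STREAM, 1) <= 0)
                        continue;

                /* Hypothetical CUPS-internal helpers: match the fd against
                 * a configured listener, or implicitly add configuration
                 * for it if none exists. */
                if (!cupsd_match_listener(fd))
                        cupsd_add_listener(fd);
        }
}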

Before we can test our patch, we add sd-daemon.c
and sd-daemon.h
as drop-in files to the package, so that sd_listen_fds() and
sd_is_socket() are available for use. After a few minimal changes to
the Makefile we are almost ready to test our socket activated version
of CUPS. The last missing step is creating two unit files for CUPS, one for the
socket (cups.socket), the
other for the service (cups.service). To make things
simple we just drop them in /etc/systemd/system and make sure systemd
knows about them, with systemctl daemon-reload.
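
For reference, a minimal pair of unit files along these lines might
look as follows; the socket path and the inclusion of the IPP port 631
are assumptions on my part (the actual files linked above are
authoritative), and the -f switch simply keeps cupsd in the foreground:

cups.socket:

[Socket]
ListenStream=/var/run/cups/cups.sock
ListenStream=631

[Install]
WantedBy=sockets.target

cups.service:

[Service]
ExecStart=/usr/sbin/cupsd -f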

Now we are ready to test our little patch: we start the socket with
systemctl start cups.socket. This will bind the socket, but won’t
start CUPS yet. Next, we simply invoke lpq to test whether CUPS is
transparently started, and yup, this is exactly what happens. We’ll get the
normal output from lpq as if we had started CUPS at boot already, and
if we then check with systemctl status cups.service we see that CUPS
was automatically spawned by our invocation of lpq. Our test
succeeded, socket activation worked!
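
In other words, the whole test boils down to three commands (output
omitted here):

# systemctl start cups.socket
$ lpq
# systemctl status cups.service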

Hardware Activation in Detail

The next trigger is hardware activation: we want to make sure that CUPS is
automatically started as soon as a local printer is found, regardless of whether
that happens as hotplug during runtime or as coldplug during
boot. Hardware activation in systemd is done via udev rules. Any udev device
that is tagged with the systemd tag can pull in units as needed via
the SYSTEMD_WANTS= environment variable. In the case of CUPS we don’t
even have to add our own udev rule to the mix, we can simply hook into what
systemd already does out-of-the-box with rules shipped upstream. More
specifically, it ships a udev rules file with the following lines:

SUBSYSTEM=="printer", TAG+="systemd", ENV{SYSTEMD_WANTS}="printer.target"
SUBSYSTEM=="usb", KERNEL=="lp*", TAG+="systemd", ENV{SYSTEMD_WANTS}="printer.target"
SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ENV{ID_USB_INTERFACES}=="*:0701??:*", TAG+="systemd", ENV{SYSTEMD_WANTS}="printer.target"

This pulls in the target unit printer.target as soon as at least
one printer is plugged in (supporting all kinds of printer ports). All we now
have to do is make sure that our CUPS service is pulled in by
printer.target and we are done. By placing a WantedBy=printer.target
line in the [Install] section of the service file, a
Wants dependency is created from printer.target to
cups.service as soon as the latter is enabled with systemctl
enable. The indirection via printer.target provides us with a
simple way to use systemctl enable and systemctl disable to
manage hardware activation of a service.

Path-based Activation in Detail

To ensure that CUPS is also started when there is a print job still queued
in the printing spool, we write a simple cups.path that
activates CUPS as soon as we find a file in /var/spool/cups.
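
A minimal sketch of such a path unit might look like this; the exact
glob is my assumption (queued jobs show up as d* data files in the
spool directory):

[Path]
PathExistsGlob=/var/spool/cups/d*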

Boot-based Activation in Detail

Well, starting services on boot is obviously the most boring and well-known
way to spawn a service. This entire exercise was about making this unnecessary,
but we still need to support it for explicit print server machines. Since those
are probably the exception and not the common case we do not enable this kind
of activation by default, but leave it to the administrator to add it when
he deems it necessary, with a simple command (ln -s
/lib/systemd/system/cups.service
/etc/systemd/system/multi-user.target.wants/
to be precise).

So, now we have covered all four kinds of activation. To finalize our patch
we take a closer look at the [Install] section of cups.service, i.e.
the part of the unit file that controls how systemctl enable
cups.service
and systemctl disable cups.service will hook the
service into/unhook the service from the system. Since we don’t want to start
cups at boot we do not place WantedBy=multi-user.target in it like we
would do for those services. Instead we just place an Also= line that
makes sure that cups.path and cups.socket are
automatically also enabled if the user asks to enable cups.service
(they are enabled according to the [Install] sections in those unit
files).
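
Put together, the [Install] section of cups.service would look roughly
like this sketch:

[Install]
Also=cups.socket cups.path
WantedBy=printer.target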

As last step we then integrate our work into the build system. In contrast
to SysV init scripts systemd unit files are supposed to be distribution
independent. Hence it is a good idea to include them in the upstream tarball.
Unfortunately CUPS doesn’t use Automake, but Autoconf with a set of handwritten
Makefiles. This requires a bit more work to get our additions integrated, but
is not too difficult either. And this is how our final patch looks,
after we committed our work and ran
git format-patch -1 on it to generate a pretty git patch.

The next step of course is to get this patch integrated into the upstream
and Fedora packages (or whatever other distribution floats your boat). To make
this easy I have prepared a
patch for Tim that makes the necessary packaging changes for Fedora 16
, and
includes the patch intended for upstream linked above. Of course, ideally the
patch is merged upstream, however in the meantime we can already include it in
the Fedora packages.

Note that CUPS was particularly easy to patch since it already supported
launchd-style activation, patching a service that doesn’t support that yet is
only marginally more difficult. (Oh, and we have no plans to offer the complex
launchd API as compatibility kludge on Linux. It simply doesn’t translate very
well, so don’t even ask… ;-))

And that finishes our little blog story. I hope this quick walkthrough of how to add
socket activation (and the other forms of activation) to a package was
interesting to you, and will help you do the same for your own packages. If you
have questions, our IRC channel #systemd on freenode and
our mailing list
are available, and we are always happy to help!

Identi.ca Weekly Summary

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/06/26/identica-weekly.html

Identi.ca Summary, 2011-06-19 through 2011-06-26

The conversation that I mentioned last week about GPL for Javascript
libraries continued in a new thread this week. The thread was rather
long:

@fontana rather strangely argued that no one should use GPL for
Javascript; this seemed like a generally anti-copyleft position to me,
and @fontana went on further to say he’s now anti-copyleft in some
situations, when it relates to proprietary relicensing.

I pointed out, using OpenFOAM as an example, that being against
illegitimate use of otherwise good things doesn’t mean you need to be
universally against the thing.

There was a subthread discussing how GPL requirements work with
Javascript, but the subthread diverged into a discussion of CLAs and
Fedora, wherein @fontana strangely said that multiple copyright holders
won’t solve the proprietary relicensing problem.

@fontana asked for an example of a GPL’d Javascript library with
multiple copyright holders (i.e., one that isn’t using a proprietary
relicensing business model). I’d much appreciate it if someone can
look for an example of a GPL’d Javascript library matching the
criteria @fontana describes; I haven’t had time to look. I offered
@fontana a prop bet on this, regardless.

Finally, in the same thread, @jasonriedy mentioned the so-called Lisp
LGPL, which I said seemed unnecessary now that we have LGPLv3.

I noted that I wrote a blog post on OpenFOAM.

I complained about the (lack of a) USA healthcare system.

@fontana and I had a discussion about crossposting on identi.ca.

I ack’ed that @fabsh had launched the oggcast, rantofabkuhn.

The biggest news this week was that @kaz is now Executive Director of
the GNOME Foundation, although the thread discussing it on identi.ca
was rather short. OTOH, @fontana asked if @kaz would be required to
use GNOME 3.

The thread about @allisonrandal’s appearance on Linux Outlaws
continued:

@allisonrandal claimed to have not said that those who chose strong
copyleft were just as happy with weak copyleft relicensing. I found the
exact place where she said that in the LO 204 ogg file, wherein she
says at 36:15 and 37:30:

Part of that reason is that when a developer develops code they want
their code to be used. They may have a general philosophy that
they want used. Most developers who contribute under a copyleft license
— they’d be happy with any copyleft license — AGPL,
GPL, LGPL — they think — that’s my
“set”. …

You’re using GPL and we’re using LGPL, so we can’t use your code.
Hmmm, we can’t do that! … this just doesn’t fit the way
developers think! We want our code to be used — and we’re happy
to have — if I said GPL, it’s probably true that I’m happy to have
it under LGPL as well. It’s just too much work [without Harmony] to
make that happen.

@allisonrandal, @fontana and I debated the differences between strong
and weak copyleft in a subthread.

A subthread discussed who the leadership of Harmony is. I asked for a
definitive place where I can find who the decision-makers of Harmony
are, and no one answered this, but @fontana made some speculations.
@allisonrandal claims that Harmony has no leadership (I wondered but
didn’t dent: should people really be adopting important documents from
a group with no leadership?). Also, @fabsh pointed out that he doubted
that it was without leaders. @fontana pointed out that SFLC was not
previously leader of Harmony; @allisonrandal says she thought they were
and yet SFLC claims they weren’t. I ended the subthread by asking
again how Harmony’s governance works and got no response.

In a subthread, @allisonrandal reiterated that FSF was wrong to change
the terms of GPL with GPLv3 (which she’d previously stated in the LO
interview). I still believe her position on this ironically contradicts
the plans of Harmony, which seeks to empower companies to change
licenses unilaterally. (Why should companies have a right to change a
license, but FSF shouldn’t?)

I pointed out to @allisonrandal that GPLv2 already specified inside the
license plans for GPLv3. @allisonrandal said in response that FSF
updating GPL wasn’t helpful to Free Software developers. She further
claimed that FSF’s update to GPLv3 constituted Manifest Destiny, which
I disputed.

The conversation on that sub-thread descended into a discussion of
@allisonrandal’s culturally relativistic attitude toward Free
Software, wherein @allisonrandal admitted she’s primarily a cultural
relativist.

Finally, there was a subthread discussing how one can be pro-copyleft,
believe that proprietary software is morally wrong, but also not
believe permissive licensing is morally wrong. I would think such is
obvious and well established by, for example, RMS’ writings since
1984, but we nevertheless rehashed that old debate. In this subthread,
I did point out that Harmony is biased against copyleft, and therefore
is not merely an amoral proposition of all options, as @allisonrandal
has claimed. (Oh, and this dent of mine in that thread was redented a
bit.) I favorited and nearly redented @mlinksva’s contribution to the
subthread.

@fontana linked to a Harmony list post wherein @allisonrandal attempts
to make an 11th-hour effort to remove anti-strong-copyleft parts of
Harmony.

There was a rather pointlessly lengthy thread about accents, mostly my
Balmur accent (or an adjusted version thereof). That discussion bled
over onto another thread that started when I left @fontana a voicemail
in a thick Balmur accent.

@fontana doesn’t like it that I call Hitler a “dude”, even
though I said evil dude.

I was a guest on FLOSS Weekly on Wednesday. @joncruz mentioned he
enjoyed the show.

I mentioned again to @mcgrof my copyleft-by-guilt theory of OpenBSD,
which I’d previously mentioned publicly, and which @chromatic found
amusing.

FSF intern @williamtheaker is working this summer on some historical
GPLv3 data-gathering.

@fontana started a thread on a Fedora list and on identi.ca about the
Gilligan’s Island copyright of the Fedora website. This was previously
discussed in two threads about a month ago, wherein I coined the phrase
“Gilligan’s Island copyright”. @fontana gave me credit on the
Fedora thread for coining the phrase. I’m working on a more complete
blog post on Gilligan’s Island copyright.

dneary’s blog post made me think of an old boss.

There was a discussion of my reasons for phoning @fontana.

My beloved, pretty neat, plastic $2 travel soap dish (tray/holder) that
I got in 1991 is now cracked.

@kraai is registered to donate bone marrow. I’m considering it.

I’m continuing to work on some patches for GNU Bash.

Some people apparently want a @bkuhn GPL enforcement action figure.

systemd for Developers I

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/socket-activation.html

systemd
not only brings improvements for administrators and users, it also
brings a (small) number of new APIs with it. In this blog story (which might
become the first of a series) I hope to shed some light on one of the
most important new APIs in systemd:

Socket Activation

In the original blog
story about systemd
I tried to explain why socket activation is a
wonderful technology to spawn services. Let’s reiterate the background
here a bit.

The basic idea of socket activation is not new. The inetd
superserver was a standard component of most Linux and Unix systems
since time began: instead of spawning all local Internet services
already at boot, the superserver would listen on behalf of the
services and whenever a connection would come in an instance of the
respective service would be spawned. This allowed relatively weak
machines with few resources to offer a big variety of services at the
same time. However it quickly got a reputation for being somewhat
slow: since daemons would be spawned for each incoming connection a
lot of time was spent on forking and initialization of the services
— once for each connection, instead of once for them all.

Spawning one instance per connection was how inetd was primarily
used, even though inetd actually understood another mode: on the first
incoming connection it would notice this via poll() (or
select()) and spawn a single instance for all future
connections. (This was controllable with the
wait/nowait options.) That way the first connection
would be slow to set up, but subsequent ones would be as fast as with
a standalone service. In this mode inetd would work in a true
on-demand mode: a service would be made available lazily when it was
required.
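
For illustration, classic inetd.conf lines for the two modes might look
like this (the exact service names and daemon paths vary from system to
system):

# nowait: fork one in.ftpd instance per incoming connection
ftp   stream  tcp  nowait  root  /usr/sbin/in.ftpd   in.ftpd
# wait: hand the socket to a single in.talkd instance for all traffic
talk  dgram   udp  wait    root  /usr/sbin/in.talkd  in.talkd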

inetd’s focus was clearly on AF_INET (i.e. Internet) sockets. As
time progressed and Linux/Unix left the server niche and became
increasingly relevant on desktops, mobile and embedded environments
inetd was somehow lost in the troubles of time. Its reputation for
being slow, and the fact that Linux’ focus shifted away from only
Internet servers made a Linux machine running inetd (or one of its newer
implementations, like xinetd) the exception, not the rule.

When Apple engineers worked on optimizing the MacOS boot time they
found a new way to make use of the idea of socket activation: they
shifted the focus away from AF_INET sockets towards AF_UNIX
sockets. And they noticed that on-demand socket activation was only
part of the story: much more powerful is socket activation when used
for all local services including those which need to be started
anyway on boot. They implemented these ideas in launchd, a central building
block of modern MacOS X systems, and probably the main reason why
MacOS is so fast booting up.

But, before we continue, let’s have a closer look what the benefits
of socket activation for non-on-demand, non-Internet services in
detail are. Consider the four services Syslog, D-Bus, Avahi and the
Bluetooth daemon. D-Bus logs to Syslog, hence on traditional Linux
systems it would get started after Syslog. Similarly, Avahi requires
Syslog and D-Bus, hence would get started after both. Finally
Bluetooth is similar to Avahi and also requires Syslog and D-Bus but
does not interface at all with Avahi. Since in a traditional
SysV-based system only one service can be in the process of getting
started at a time, the following serialization of startup would take
place: Syslog → D-Bus → Avahi → Bluetooth (Of course, Avahi and
Bluetooth could be started in the opposite order too, but we have to
pick one here, so let’s simply go alphabetically.). To illustrate
this, here’s a plot showing the order of startup beginning with system
startup (at the top).

Parallelization plot

Certain distributions tried to improve this strictly serialized
start-up: since Avahi and Bluetooth are independent from each other,
they can be started simultaneously. The parallelization is increased,
the overall startup time slightly smaller. (This is visualized in the
middle part of the plot.)

Socket activation makes it possible to start all four services
completely simultaneously, without any kind of ordering. Since the
creation of the listening sockets is moved outside of the daemons
themselves we can start them all at the same time, and they are able
to connect to each other’s sockets right-away. I.e. in a single step
the /dev/log and /run/dbus/system_bus_socket sockets
are created, and in the next step all four services are spawned
simultaneously. When D-Bus then wants to log to syslog, it just writes
its messages to /dev/log. As long as the socket buffer does
not run full it can go on immediately with what else it wants to do
for initialization. As soon as the syslog service catches up it will
process the queued messages. And if the socket buffer runs full then
the client logging will temporarily block until the socket is writable
again, and continue the moment it can write its log messages. That
means the scheduling of our services is entirely done by the kernel:
from the userspace perspective all services are run at the same time,
and when one service cannot keep up the others needing it will
temporarily block on their request but go on as soon as these
requests are dispatched. All of this is completely automatic and
invisible to userspace. Socket activation hence allows us to
drastically parallelize start-up, enabling simultaneous start-up of
services which previously were thought to strictly require
serialization. Most Linux services use sockets as communication
channel. Socket activation allows starting of clients and servers of
these channels at the same time.

But it’s not just about parallelization. It offers a number of
other benefits:

  • We no longer need to configure dependencies explicitly. Since the
    sockets are initialized before all services they are simply available,
    and no userspace ordering of service start-up needs to take place
    anymore. Socket activation hence drastically simplifies configuration
    and development of services.
  • If a service dies its listening socket stays around, not losing a
    single message. After a restart of the crashed service it can continue
    right where it left off.
  • If a service is upgraded we can restart the service while keeping
    around its sockets, thus ensuring the service is continuously
    responsive. Not a single connection is lost during the upgrade.
  • We can even replace a service during runtime in a way that is
    invisible to the client. For example, all systems running systemd
    start up with a tiny syslog daemon at boot which passes all log
    messages written to /dev/log on to the kernel message
    buffer. That way we provide reliable userspace logging starting from
    the first instant of boot-up. Then, when the actual rsyslog daemon is
    ready to start we terminate the mini daemon and replace it with the
    real daemon. And all that while keeping around the original logging
    socket and sharing it between the two daemons and not losing a single
    message. Since rsyslog flushes the kernel log buffer to disk after
    start-up all log messages from the kernel, from early-boot and from
    runtime end up on disk.

For another explanation of this idea consult the original blog
story about systemd
.

Socket activation has been available in systemd since its
inception. On Fedora 15 a number of services have been modified to
implement socket activation, including Avahi, D-Bus and rsyslog (to continue with the example above).

systemd’s socket activation is quite comprehensive. Not only classic
sockets are supported but related technologies as well:

  • AF_UNIX sockets, in the flavours SOCK_DGRAM, SOCK_STREAM and SOCK_SEQPACKET; both in the filesystem and in the abstract namespace
  • AF_INET sockets, i.e. TCP/IP and UDP/IP; both IPv4 and IPv6
  • Unix named pipes/FIFOs in the filesystem
  • AF_NETLINK sockets, to subscribe to certain kernel features. This
    is currently used by udev, but could be useful for other
    netlink-related services too, such as audit.
  • Certain special files like /proc/kmsg or device nodes like /dev/input/*.
  • POSIX Message Queues

A service capable of socket activation must be able to receive its
preinitialized sockets from systemd, instead of creating them
internally. For most services this requires (minimal)
patching. However, since systemd actually provides inetd compatibility
a service working with inetd will also work with systemd — which is
quite useful for services like sshd for example.

So much about the background of socket activation; let’s now have a
look at how to patch a service to make it socket activatable. Let’s start
with a theoretical service foobard. (In a later blog post we’ll focus on a
real-life example.)

Our little (theoretical) service includes code like the following for
creating sockets (most services include code like this in one way or
another):

/* Source Code Example #1: ORIGINAL, NOT SOCKET-ACTIVATABLE SERVICE */
...
union {
        struct sockaddr sa;
        struct sockaddr_un un;
} sa;
int fd;

fd = socket(AF_UNIX, SOCK_STREAM, 0);
if (fd < 0) {
        fprintf(stderr, "socket(): %m\n");
        exit(1);
}

memset(&sa, 0, sizeof(sa));
sa.un.sun_family = AF_UNIX;
strncpy(sa.un.sun_path, "/run/foobar.sk", sizeof(sa.un.sun_path));

if (bind(fd, &sa.sa, sizeof(sa)) < 0) {
        fprintf(stderr, "bind(): %m\n");
        exit(1);
}

if (listen(fd, SOMAXCONN) < 0) {
        fprintf(stderr, "listen(): %m\n");
        exit(1);
}
...

A socket activatable service may use the following code instead:

/* Source Code Example #2: UPDATED, SOCKET-ACTIVATABLE SERVICE */
...
#include "sd-daemon.h"
...
int fd;

if (sd_listen_fds(0) != 1) {
        fprintf(stderr, "No or too many file descriptors received.\n");
        exit(1);
}

fd = SD_LISTEN_FDS_START + 0;
...

systemd might pass you more than one socket (based on
configuration, see below). In this example we are interested in one
only. sd_listen_fds()
returns how many file descriptors are passed. We simply compare that
with 1, and fail if we got more or less. The file descriptors systemd
passes to us are inherited one after the other beginning with fd
#3. (SD_LISTEN_FDS_START is a macro defined to 3). Our code hence just
takes possession of fd #3.

As you can see this code is actually much shorter than the
original. This of course comes at the price that our little service
with this change will no longer work in a non-socket-activation
environment. With minimal changes we can adapt our example to work nicely
both with and without socket activation:

/* Source Code Example #3: UPDATED, SOCKET-ACTIVATABLE SERVICE WITH COMPATIBILITY */
...
#include "sd-daemon.h"
...
int fd, n;

n = sd_listen_fds(0);
if (n > 1) {
        fprintf(stderr, "Too many file descriptors received.n");
        exit(1);
} else if (n == 1)
        fd = SD_LISTEN_FDS_START + 0;
else {
        union {
                struct sockaddr sa;
                struct sockaddr_un un;
        } sa;

        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
                fprintf(stderr, "socket(): %mn");
                exit(1);
        }

        memset(&sa, 0, sizeof(sa));
        sa.un.sun_family = AF_UNIX;
        strncpy(sa.un.sun_path, "/run/foobar.sk", sizeof(sa.un.sun_path));

        if (bind(fd, &sa.sa, sizeof(sa)) < 0) {
                fprintf(stderr, "bind(): %m\n");
                exit(1);
        }

        if (listen(fd, SOMAXCONN) < 0) {
                fprintf(stderr, "listen(): %m\n");
                exit(1);
        }
}
...

With this simple change our service can now make use of socket
activation but still works unmodified in classic environments. Now,
let’s see how we can enable this service in systemd. For this we have
to write two systemd unit files: one describing the socket, the other
describing the service. First, here’s foobar.socket:

[Socket]
ListenStream=/run/foobar.sk

[Install]
WantedBy=sockets.target

And here’s the matching service file foobar.service:

[Service]
ExecStart=/usr/bin/foobard

If we place these two files in /etc/systemd/system we can
enable and start them:

# systemctl enable foobar.socket
# systemctl start foobar.socket

Now our little socket is listening, but our service is not running
yet. If we now connect to /run/foobar.sk the service will be
automatically spawned, for on-demand service start-up. With a
modification of foobar.service we can start our service
already at startup, thus using socket activation only for
parallelization purposes, not for on-demand auto-spawning anymore:

[Service]
ExecStart=/usr/bin/foobard

[Install]
WantedBy=multi-user.target

And now let’s enable this too:

# systemctl enable foobar.service
# systemctl start foobar.service

Now our little daemon will be started at boot or on-demand,
whichever comes first. It can be started fully in parallel with its
clients, and when it dies it will be automatically restarted when it
is used the next time.
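
If you want to convince yourself of the on-demand behaviour, poke the
socket and then check that the service got pulled up. For example (socat
is just one convenient client here; any program connecting to
/run/foobar.sk would do):

# socat - UNIX-CONNECT:/run/foobar.sk
# systemctl status foobar.service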

A single .socket file can include multiple ListenXXX stanzas, which
is useful for services that listen on more than one socket. In this
case all configured sockets will be passed to the service in the exact
order they are configured in the socket unit file. Also,
you may configure various socket settings in the .socket
files.
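
As an illustration, a hypothetical variant of foobar.socket that listens
both on our AF_UNIX socket and on a TCP port, and sets a couple of those
socket options, might look like this (the port number and option values
are made up):

[Socket]
ListenStream=/run/foobar.sk
ListenStream=4242
Backlog=64
SocketMode=0600

[Install]
WantedBy=sockets.target

Our service would then be passed two file descriptors, in exactly this
order: fd #3 for the AF_UNIX socket, fd #4 for the TCP socket.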

In real life it’s a good idea to include description strings in
these unit files; to keep things simple we’ve left them out of our
example. Speaking of real life: our next installment will cover an
actual real-life example. We’ll add socket activation to the CUPS
printing server.

The sd_listen_fds() function call is defined in sd-daemon.h
and sd-daemon.c. These
two files are currently drop-in .c sources which projects should
simply copy into their source tree. Eventually we plan to turn this
into a proper shared library, however using the drop-in files allows
you to compile your project in a way that is compatible with socket
activation even without any compile time dependencies on
systemd. sd-daemon.c is liberally licensed, should compile
fine on the most exotic Unixes and the algorithms are trivial enough
to be reimplemented with very little code if the license should
nonetheless be a problem for your project. sd-daemon.c
contains a couple of other API functions besides
sd_listen_fds() that are useful when implementing socket
activation in a project. For example, there’s sd_is_socket()
which can be used to distinguish and identify particular sockets when
a service gets passed more than one.
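
As a rough sketch of how that might look for the hypothetical two-socket
foobar.socket above (sd_is_socket_unix() and sd_is_socket_inet() are the
matching helpers from sd-daemon.h):

...
#include "sd-daemon.h"
...
int i, n, unix_fd = -1, inet_fd = -1;

n = sd_listen_fds(0);
for (i = 0; i < n; i++) {
        int fd = SD_LISTEN_FDS_START + i;

        /* Is this the listening AF_UNIX stream socket? */
        if (sd_is_socket_unix(fd, SOCK_STREAM, 1, "/run/foobar.sk", 0) > 0)
                unix_fd = fd;
        /* Or the listening TCP socket on port 4242? */
        else if (sd_is_socket_inet(fd, AF_INET, SOCK_STREAM, 1, 4242) > 0)
                inet_fd = fd;
}
...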

Let me point out that the interfaces used here are in no way bound
directly to systemd. They are generic enough to be implemented in
other systems as well. We deliberately designed them to be as simple
and minimal as possible to make it easy for others to adopt similar
schemes.

Stay tuned for the next installment. As mentioned, it will cover a
real-life example of turning an existing daemon into a
socket-activatable one: the CUPS printing service. However, I hope
this blog story might already be enough to get you started if you plan
to convert an existing service into a socket-activatable one. We
invite everybody to convert upstream projects to this scheme. If you
have any questions join us on #systemd on freenode.

Questioning The Original Analysis On The Bionic Debate

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/03/18/bionic-debate.html

I was hoping to avoid having to comment further on this problematic
story. I figured a comment as a brief identi.ca statement was enough
when it was just a story on the Register. But, it’s now hit a major
tech news outlet, and I feel that, given that I’m typically
the first person everyone in the Free Software world comes to ask if
something is a GPL violation, I’m going to get asked about this soon, so
I might as well preempt the questions with a blog post, so I can answer
any questions about it with this URL.

In short, the question is: Does Bionic (the Android/Linux default C
library developed by Google) violate the GPL by importing
“scrubbed” headers from Linux? For those of you seeking
the TL;DR version: You can
stop now if you expect me to answer this question; I’m not going to. I’m
just going to show that the apparent original analysis material that started
this brouhaha is a speculative hypothesis which would require much more
research to amount to anything of note.

Indeed, the kind of work needed to answer these questions typically
requires the painstaking work of a talented developer working very
closely with legal counsel. I’ve done analysis like this before for
other projects. The only one I can easily talk about publicly is the
ath5k situation. (If you want to hear more on that, you can listen to
an old oggcast where I discussed this with Karen Sandler, or read
papers that were written on the subject back where I used to work.)

Anyway, most of what’s been written about this subject of the Linux
headers in Bionic has been poorly drafted speculation. I
suppose some will say this blog post is no better, since I am not
answering any questions, but my primary goal here is to draw attention
that absolutely no one, as near as I can tell, has done the incredibly
time consuming work to figure out anything approaching a definitive
answer! Furthermore, the original article that launched this debate
(Naughton’s paper, “The Bionic Library: Did Google Work Around the
GPL?”) is merely a position paper for a research project yet
to be done.

Naughton’s full paper gives some examples that would make a good
starting point for a complete analysis. It’s disturbing, however, that
his paper is presented as if it’s a complete analysis. At best, his
paper is a position statement of a hypothesis that then needs the actual
experiment to figure things out. That rigorous research (as I keep
reiterating) is still undone.

To his credit, Naughton does admit that only the kind of analysis I’m
talking about would yield a definitive answer. You have to get almost
all the way through his paper to get to:

Determining copyrightability is thus a fact-specific, case-by-case
exercise. … Certainly, sorting out what is and isn’t subject to
GPLv2 in Bionic would require at least a file-by-file, and most likely
line-by-line, analysis of Bionic — a daunting task[.]

Of course, in that statement, Naughton makes the mistake of subtly
including an assumption in the hypothesis: he fails to acknowledge clearly
that it’s entirely possible the set of GPLv2-covered work found in Bionic
could be the empty set; he hasn’t shown it’s not the empty set (even
notwithstanding his very cursory analysis of a few files).

Yet, even though Naughton admits full analysis (that he hasn’t done) is
necessary, he nevertheless later makes sweeping conclusions:

The 750 Linux kernel header files … define a complex overarching
structure, an application programming interface, that is thoughtfully and
cleverly designed, and almost assuredly protected by copyright.

Again, this is a hypothesis that would have to be tested and proved with
evidence generated by the careful line-by-line analysis Naughton himself
admits is necessary. Yet, he doesn’t acknowledge that fact in his
conclusions, leaving his readers (and IMO he’s expecting to dupe lots of
readers unsophisticated on these issues) with the impression he’s shown
something he hasn’t. For example, one of my first questions would be
whether or not Bionic uses only parts of Linux headers that are required
by specification to write POSIX programs, a question that Naughton doesn’t
even consider.

Finally, Naughton moves from the merely shoddy analysis to completely
alarmist speculation with:

But if Google is right, if it has succeeded in removing all copyrightable
material from the Linux kernel headers, then it has unlocked the Linux
kernel from the restrictions of GPLv2. Google can now use the
“clean” Bionic headers to create a non-GPL’d fork of the Linux
kernel, one that can be extended under proprietary license terms. Even if
Google does not do this itself, it has enabled others to do so. It also
has provided a useful roadmap for those who might want to do the same
thing with other GPLv2-licensed [sic] programs, such as databases.

If it turns out that Google has succeeded in making sure that the GPLv2
does not apply to Bionic, then Google’s success is substantially more
narrow. The success would be merely the extraction of the
non-copyrightable facts that any C library needs to know about Linux to
make a binary run when Linux happens to be the kernel underneath. Now, it
should be duly noted that there already exist two libraries under the LGPL
that have already implemented that (namely, glibc, and uClibc — the
latter of which Naughton’s cursory research apparently didn’t even turn
up). As it stands, anyone who wants to write user-space applications on a
Linux-based system already can; there are multiple C library choices
available under the weak copyleft license, LGPL. What Google, for its
part, believes it has succeeded at is making a permissively licensed third
alternative, which is an outcome that would be no surprise to us who have
seen something like it done twice before.

In short, everyone opining here seems to be conflating a lot of issues.
There are many ways to interface with Linux. Many people, including me,
believe quite strongly that there is no way to make a subprogram in
kernel space (such as a device driver) without the terms of the GPLv2
applying to it. But writing a device driver is a specialized task
that’s very different from what most Linux users do. Most developers
who “use Linux” — by which they typically mean write a
user space program that runs on a GNU/Linux operating system — have
(at most) weak copyleft (LGPL) terms to follow due to glibc or uClibc.
I admit that I sometimes feel chagrin that proprietary applications can
be written for GNU/Linux (and other Linux-based) systems, but that was a
strategic decision that RMS made (correctly) at the start of the GNU
project, one that the Linux project, for its part, has also always
sought.

I’m quite sure no one — including hard-core copyleft advocates
like me — expects nor seeks the GPLv2 terms to apply to programs
that interface with Linux solely as user-space programs that
runs on an operating system that uses Linux as its kernel. Thus, I’d
guess that even if it turned out that Google made some mistakes
in this regard for Bionic, we’d all work together to rectify those
mistakes so that the outcome everyone intended could occur.

Moreover, to compare the specifics of this situation to other types of
so-called “copyleft circumvention techniques” is just
link-baiting that borders on trolling. Google wasn’t seeking to
circumvent the GPL at all; they were seeking to write and/or adapt a
permissively licensed library that replaced an LGPL’d one. I’m of
course against that task on principle (I think Google should have just
used glibc and/or uClibc and required LGPL-compliance by applications).
But, to deny that it’s possible to rewrite a C library for Linux under a
license that isn’t GPLv2 would also immediately imply the (incorrect)
conclusion that uClibc and glibc are covered by the GPLv2, and we are
all quite sure they aren’t; even Naughton himself admits that (regarding
glibc).

Google may have erred; no one actually knows for sure at this time.
But the task they sought to do has been done before and everyone
intended it to be permitted. The worst mistake of which we might ultimately accuse
Google is inadvertently taking a copyright-infringing short-cut. If
someone actually does all the research to prove that Google did so, I’d
easily offer a 1,000-to-1 bet to anyone that such a copyright
infringement could be cleared up easily, that Bionic would still work as
a permissively licensed C library for Linux, and the implications of the
whole thing wouldn’t go beyond: “It’s possible to write your own C
library for Linux that isn’t covered by the GPLv2” — a fact
which we’ve all known for a decade and a half anyway.

Update (2011-03-20):
Many people,
including slashdot,
have been linking to
this comment
by RMS on LKML
about .h files. It’s important to look carefully at
what RMS is saying. Specifically, RMS says that sometimes #include’ing a
.h file creates a copyright derivative work, and sometimes it doesn’t; it
depends on the details. Then, RMS goes on to discuss some rules of thumb
that can help determine the outcome of the question. The details are what
matters; and those are, as I explain in the main post above, what requires
careful analysis done jointly and in close collaboration between a
developer and a lawyer. There is no general rule of thumb that always
immediately leads one to the right answer on this question.

Software Freedom Is Elementary, My Dear Watson.

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/03/01/watson.html

I’ve watched the game show, Jeopardy!, regularly since its Trebek-hosted
relaunch on 1984-09-10. I even remember distinctly the Final Jeopardy
question that night as “This date is the first day of the new
millennium.” At the age of 11, I got the answer wrong, falling for
the incorrect “What is 2000-01-01?”, but I recalled this memory
eleven years ago during the debates regarding when the millennium
turnover happened.

I had periods of life where I watched Jeopardy! only
rarely, but in recent years (as I’ve become more of a student of games
(in part, because of poker)), I’ve watched Jeopardy! almost
nightly over dinner with my wife. I’ve learned that I’m unlikely to
excel as a Jeopardy! player myself because (a) I read slowly
and (b) my recall of facts, while reasonably strong, is not
instantaneous. I thus haven’t tried out for the show, but I’m
nevertheless a fan of strong players.

Jeopardy! isn’t my only spectator game. Right after
college, even though I’m a worse-than-mediocre chess player, I watched
with excitement
as Deep
Blue
played and defeated Kasparov. Kasparov has disputed the
results and how much humans were actually involved, but even so, such
interference was minimal (between matches) and the demonstration still
showed computer algorithmic mastery of chess.

Of course, the core algorithms that Deep Blue used were well known and
often implemented. I learned α-β pruning in my undergraduate
AI course and it was clear that a sufficiently fast computer, given a
few strong heuristics, could beat most any full-information game with a
reasonable branching factor. And, computers typically do these days.

I suppose I never really thought about the issues of Deep Blue being
released as Free Software. First, because I was not as involved with
Free Software then as I am now, and also, as near as anyone could tell,
Deep Blue’s software was probably not useful for anything other than
playing chess, and its primary power was in its ability to go very deep
(hence the name, I guess) in the search tree. In short, Deep Blue was
primarily a hardware, not a software, success story.

It was, nevertheless, impressive, and last month I saw the next
installment in this IBM story. I watched with interest as IBM’s
Watson defeated two champion Jeopardy! players. Ken
Jennings, for one, even welcomed our new computer overlords.

Watson beating Jeopardy! is, frankly, a lot more
innovative than Deep Blue beating chess. Most don’t know this about me,
but I came very close to focusing my career on PhD work in Natural
Language Processing; I believe fundamentally it’s the area of AI most in
need of attention and research. Watson is a shining example of success
in modern NLP, and I actually believe some of the IBM hype about
how Watson’s
technology can be applied elsewhere, such as medical information
systems. Indeed, IBM has announced
a deal with Columbia University Medical Center to adapt the system for
medical diagnostics. (Perhaps Watson’s next TV appearance will be
on House.)

This all sounds great to most people, but to me, my real concern is the
freedom of the software. We’ve shown in the software freedom community
that to advance software and improve it, sharing the software is
essential. Technology locked up in a vaulted cave doesn’t allow all the
great minds to collaborate. Just as we don’t lock up libraries so that
only the guilded overlords have access, neither should the best software
technology be locked away as proprietary.

Indeed, Eric Brown, at his Linux Foundation End User Linux Summit
talk, told us that Watson relied
heavily on the publicly available software freedom codebase, such as
GNU/Linux, Hadoop, and other
FLOSS
components. They clearly couldn’t do their work without building upon the
work we shared with IBM, yet IBM apparently ignores its moral obligation to
reciprocate.

So, I just point-blank asked Brown why Watson is proprietary. Of
course, I long ago learned to never ask a confrontational question from
the crowd at a technical talk without knowing what the answer is likely to
be. Brown answered in the way I expected: “We’re working with
Universities to provide a framework for their research.” I followed
up asking when he would actually release the sources and what the
license would be. He dodged the question, and instead speculated about what
licenses IBM sometimes likes to use when it does choose to release code;
he did not indicate if Watson’s sources would ever be released. In
short, the answer from IBM is clear: Watson’s general ideas
will be shared with academics, but the source code won’t be.

This point is precisely one of the reasons I didn’t pursue a career in
academic Computer Science. Since most jobs — including
professorships at Universities — for PhDs in Computer Science
require that any code written be kept proprietary, most
Computer Science researchers have convinced themselves that code doesn’t
matter; only publishing ideas does. This belief is so pervasive that I
knew something like this would be Brown’s response to my query. (I was
so sure, in fact, that I wrote almost this entire blog post before I asked
the question.)

I’d easily agree that publishing papers is better than the technology
being only a trade secret. At least we can learn a little bit about the
work. But in all but the purely theoretical areas of Computer
Science, code is written to exemplify, test, and exercise the
ideas. Merely publishing papers and not the code is akin to a chemist
publishing final results but nothing about the methodologies or raw
data. Science, in such cases, is unverifiable and unreproducible. If
we accepted such in fields other than CS, we’d have accepted the idea
that cold
fusion was discovered in 1989
.

I don’t think I’m going to convince IBM to release Watson’s sources as
Free Software. What I do hope is that perhaps this blog post convinces
a few more people that we just shouldn’t accept that Computer Science is
advanced by researchers who give us flashy demos and code-less
research papers. I, for one, welcome our computer overlords…but only
if I can study and modify their source code.

In Defense of Bacon

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/11/16/bacon.html

Jono Bacon is currently being
criticized
for the manner in which
he launched
an initiative called OpenRespect.Org. Much of
this criticism is unfair, and I decided to write briefly here in support
of Jono, because he’s a victim of a type of mistreatment that I’ve
experienced myself, so I have particularly strong empathy for his
situation.

To be clear, I’m not even a supporter of Jono’s OpenRespect.Org
initiative myself. I think there are others who are doing good work in
this area already (for
example, various
efforts
around getting women involved in Free Software have long recognized and
worked on the issue, since mutual respect is an essential part of having a
more diverse community). Also, I felt that Jono’s initiative was
slanted toward encouraging people to respect all actions by
companies, some of which don’t advance Free Software.
I commented
on Jono’s blog
to share my criticisms of the initiative when he was
still formulating it. In short, I think the wording of the current
statement on OpenRespect.org seems to indicate people should accept
anyone else’s choice as equally moral. As someone who views software
freedom as a moral issue, and thus views the development and distribution of
proprietary software as an immoral act, I have a problem with such a
mandate, although I nevertheless strive to be respectful in pursuit of
that view. I would hate to be declared disrespectful merely because I
believe in the morality of software freedom.

Yet, despite the fact that I disagree with some of the details of
Jono’s initiative, I believe most of the criticisms have been unfair.
First and foremost, we should
take Jono at his word
that this initiative is his own and not one undertaken on behalf of
Canonical, Ltd. I doubt Jono would dispute that his work at Canonical,
Ltd. inspired him to think about these issues, but that doesn’t mean
that everything he does on his own time on his own website is a
Canonical, Ltd. activity.

Indeed, I’ve personally been similarly attacked for items I’ve said on
this blog of my own, which of course does not represent the views of any
of my employers (past or present) nor any organizations with which I
have volunteer affiliations. When I have things to say on those topics,
I have other fora to post officially, as does Jono.

So, I’ve experienced first-hand what Jono is currently experiencing:
namely, that people ignore disclaimers precisely to attack someone who
has an opinion that they don’t like. By conflating your personal
opinions with those of your employer’s, people subtly discredit you
— for example, by using your employment relationship to put
inappropriate pressure on you to change your positions. I’m very sad to
see that this same thing I’ve been a victim of is now happening to Jono,
too. I couldn’t just watch it happen without making a statement of
solidarity and pointing out that such treatment is unfair.

Even if we don’t agree with the OpenRespect.org initiative (and I
don’t, for reasons stated above), there is no one to blame but Jono
himself, as he’s told us clearly this isn’t a Canonical initiative, and
I’ve seen no evidence that shows the situation is otherwise.

I do note that there are other criticisms raised, such as whether or
not Jono reached out in the best possible way to others during the
launch, or whether others thought they’d be involved when it turned out
to be a unilateral initiative. All of that, of course, is something
that’s reparable (as is my primary complaint above, too), so on those
fronts, we should just give our criticism and ask Jono to change it.
That’s what I did on my issue. He chose not to take my advice, which is
his prerogative. My response thereafter was simply to not support the
initiative.

To the extent we don’t have enough respect in the FLOSS community,
here’s an easy place to improve: we should take people at their word
until we have evidence to believe otherwise. Jono says OpenRespect.org
is his own thing; we should believe him. We shouldn’t insist that
everything someone says is on behalf of their employer, even if they
have a spokesperson role. People have a right to be something more
than automatons for their bosses.

Disclosure: I did not tell Jono I was going to write
this post, but after it was completely written, I gave him the chance to
make a binary decision about whether I posted it publicly or not. Since
you’re reading this, he obviously answered 1.

Comments on Perens’ Comments on Software Patents

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/11/15/perens-on-patents.html

Bruce Perens and I often disagree
about lots of things. However, I urge everyone to read what Bruce
wrote this weekend about software patents. I’m very glad he’s looking
deep into recent events surrounding this issue; I haven’t had the time
to do so myself because I’ve been so busy with
the launch of my full-time work at Conservancy this fall.

Despite my current focus on getting Conservancy ramped up with staff,
so it can do more of its work, I nevertheless still remain frightfully
concerned about the impact of software patents on the future of software
freedom, and I support any activities that seek to make sure that software
patent threats do not stand in the way of software freedom. Bruce and I
have always agreed about this issue: software patents should end, and
while individuals with limited means can’t easily make that happen
themselves, we must all work to raise awareness and public opinion against
all patenting of software.

Specifically, I’m really glad that Bruce has mentioned the issue of
lobbying against software
patents. Post-Bilski,
it’s become obvious that software patents can only be ended with
legislative change. In the USA, sadly, the only way to do this
effectively is through lobbying. Therefore, I’ve called on businesses
(such as Google and Red Hat) that have been targets of software patent
litigation, to fund lobbying efforts to end software patents; such funding
would simultaneously help themselves as well as software freedom.
Unfortunately, as far as I’m aware, no companies have stepped forward to
fund such an effort, and they instead seem to spend their patent-related
resources on getting more software patents of their own. Meanwhile,
individual, not-for-profit Free Software developers simply don’t have the
resources to do this lobbying work ourselves.

Nevertheless, there are still a few things individual developers can do
in the meantime against software patents. I wrote a
complete list of suggestions after Bilski; I just reread it and confirmed all
of the suggestions listed there are still useful.

Canonical, Ltd. Finally On Record: Seeking Open Core

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/10/17/shuttleworth-admits-it.html

I’ve written before about my deep skepticism regarding the true motives of Canonical, Ltd.’s advocacy and demands for for-profit corporate copyright assignment without promises to adhere to copyleft. I’ve often asked Canonical employees, including Jono Bacon, Amanda Brock, Jane Silber, Mark Shuttleworth himself, and, in the comments of this very blog post, Matt Asay, to explain (a) why exactly they demand copyright assignment on their projects, rather than merely having contributors agree to the GNU GPL formally (like projects such as Linux do), and (b) why, having received a contributor’s copyright assignment, Canonical, Ltd. refuses to promise to keep the software copylefted and never proprietarize it (FSF, for example, has always done the latter in its assignments). When I ask these questions of Canonical, Ltd. employees, they invariably artfully change the subject.

I’ve actually been asking these questions for at least a year and a half, but I really began to get worried earlier this year when Mark Shuttleworth falsely claimed that Canonical, Ltd.’s copyright assignment was no different than the FSF’s copyright assignment. That event made it clear to me that there was a job of salesmanship going on: Canonical, Ltd. was trying to sell the community something it doesn’t want or need, and trying to reuse the good name of other people and organizations to do it.

Since that interview in February, Canonical, Ltd. has launched a manipulatively named product called “Project Harmony”. They market this product as a “summit” of sorts, purported to have no determined agenda other than to discuss the issue of contributor agreements and copyright assignment and come to a community consensus on it. Their goal, however, was merely to get community members to lend their good names to the process. Indeed, Canonical, Ltd. has often attempted to use the involvement of good people to make it seem as if Canonical, Ltd.’s agenda is endorsed by many. In fact, FSF recently distanced itself from the process because of Canonical, Ltd.’s actions in this regard; Simon Phipps had similarly distanced himself before that.

Nevertheless, it seems Canonical, Ltd. now believes that they’ve succeeded in their sales job, because they’ve now confessed their true motive. In an IRC Q&A session last Thursday [0], Shuttleworth finally admits that his goal is to increase the amount of “Open Core” activity. Specifically, Shuttleworth says at 15:21 (and following):

[C]ompare Qt and Gtk, Qt has a contribution agreement, Gtk doesn’t, for a
while, back in the bubble, Sun, Red Hat, Ximian and many other companies
threw money at Gtk and it grew and improved very quickly but, then they
lost interest, and it has stagnated. Qt was owned by Trolltech it was open
source (GPL) but because of the contribution agreement they had many
options including proprietary licensing, which is just fine with me
alongside the GPL and later, because they owned Qt completely, they were
an attractive acquisition for Nokia, All in all, the Qt ecosystem has
benefitted and the Gtk ecosystem hasn’t.

It takes some careful analysis to parse what’s going on here. First of all, Shuttleworth is glossing over a lot of complicated Qt history. Qt started with a non-FaiF license (QPL), which later became a GPL-incompatible Free Software license. After a few years of this oddball, license-proliferation-style software freedom license, Trolltech stumbled upon the “Open Core” model (likely inspired by MySQL AB) and switched to GPL. When Nokia bought Trolltech, Nokia itself discovered that full-on “Open Core” was bad for the code base, and (as I heralded at the time) relicensed the codebase to LGPL (the same license used by Gtk). A few months after that, Nokia abandoned copyright assignment completely for Qt as well! (I.e., Shuttleworth is just wrong on this point entirely.) In fact, Shuttleworth, rather than supporting his pro-Open-Core argument, actually gave the prime example of Nokia/Trolltech’s lesson learned: “don’t do an Open-Core-style contributor agreement, you’ll regret it”. (RMS also recently published a good essay on this subject.)

Furthermore, Shuttleworth completely ignores plenty of historical angst in communities that rely on Qt, which often had difficulty getting bugfixes upstream and faced other such challenges when dealing with a for-profit-controlled “Open Core” library. (These were, in fact, among the reasons Nokia gave in May 2009 for the change in policy.) Indeed, if the proprietary relicensing business is what made Trolltech such a lucrative acquisition for Nokia, why did they abandon the business model entirely within four months of the acquisition?

Shuttleworth’s “lucrative acquisition” point does have some validity, though. Namely, “Open Core” makes wealthy, profit-driven types (e.g., VCs) drool. Meanwhile, people like me, Simon Phipps, NASA’s Chris Kemp, John Mark Walker, Tarus Balog and many others are either very skeptical about “Open Core” or dead-set against it. The reason it’s meeting with so much opposition is that “Open Core” is a VC-friendly way to control all the copyright “assets” while pretending to actually have the goal of building an Open Source community. The real goal of “Open Core”, of course, is a bait-and-switch move. (Details on that are beyond the scope of this post and well covered in the links I’ve given.)

As to Shuttleworth’s argument of Gtk stagnation: after my trip this past summer to GUADEC, I’m quite convinced that the GNOME community is extremely healthy. Indeed, as Dave Neary’s GNOME Census shows, the GNOME codebases are well contributed to by various corporate entities and (more importantly) volunteers. For-profit corporate folks like Shuttleworth and his executives tend not to like communities where a non-profit (in this case, the GNOME Foundation) shepherds a project and keeps the multiple for-profit interests at bay. In fact, he dislikes this so much that when GNOME was recently documenting its long-standing copyright policies, he sent Silber to the GNOME Advisory Board (the first and only time Canonical, Ltd. sent such a high-profile person to the Advisory Board) to argue against the long-standing GNOME community preference for no copyright assignment on its projects [1]. Silber’s primary argument was that it was unreasonable for individual contributors to even ask to keep their own copyrights, since Canonical, Ltd. puts in the bulk of the work on their projects that require copyright assignment. Her argument was, in other words, an anti-software-freedom equality argument: a for-profit company is more valuable to the community than the individual contributor. Fortunately, the GNOME Foundation didn’t fall for this, and continued its work with Intel to get the Clutter codebase free of copyright assignment (work that has since succeeded). It’s also particularly ironic that, a few months later, Neary showed that the very company making that argument contributes 22% less to the GNOME codebase than the volunteers Silber once argued don’t contribute enough to warrant keeping their copyrights.

So, why have Shuttleworth and his staff been on a year-long campaign to
convince everyone to embrace “Open Core” and give up all
their rights that copyleft provides? Well, in the same IRC log (at
15:15) I quoted above, Shuttleworth admits that he has some work
left to do to make Canonical, Ltd. profitable. And therein lies the
connection: Shuttleworth admits Canonical, Ltd.’s profitability is a
major goal (which is probably obvious). Then, in his next answer, he
explains at great length how lucrative and important “Open
Core” is. We should accept “Open Core”, Shuttleworth
argues, merely because it’s so important that Canonical, Ltd. be
profitable.

Shuttleworth’s argument reminds me of a story that Michael Moore (who famously made the documentary Roger and Me, and has since made other documentaries) told at a book-signing in the mid-1990s. Moore said (I’m paraphrasing from memory here, BTW):

Inevitably, I end up on planes next to some corporate executive. They
look at me a few times, and then say: Hey, I know you, you’re Roger
Moore [audience laughs]. What I want to know, is what the hell have you
got against profit? What’s wrong with profit, anyway? The
answer I give is simple: There’s nothing wrong with profit at all. The
question I’m raising is: What lengths are acceptable to achieve profit?
We all agree that we can’t exploit child labor and other such things, even
if that helps profitability. Yet, once upon a time, these sorts of
horrible policies were acceptable for corporations. So, my point is that
we still need more changes to balance the push for profit with what’s
right for workers.

I quote this at length to make it abundantly clear: I’m not opposed to
Canonical, Ltd. making a profit by supporting software freedom. I’m
glad that Shuttleworth has contributed a non-trivial part of his
personal wealth to start a company that employs many excellent
FLOSS
developers (and even sometimes lets those developers work on upstream
projects). But the question really is: Are the values of software
freedom worth giving up merely to make Canonical, Ltd. profitable?
Should we just accept proprietary network services like UbuntuOne, integrated into nearly every menu of the desktop, as reasonable merely because they might help Canonical, Ltd. make a few bucks? Do we think we should abandon copyleft’s
assurances of fair treatment to all, and hand over full
proprietarization powers on GPL’d software to for-profit companies,
merely so they can employ a few FLOSS developers to work primarily on
non-upstream projects?

I don’t think so. I’m often critical of Red Hat, but one thing they do
get right in this regard is a healthy encouragement of their developers
to start, contribute to, and maintain upstream projects that live in the
community rather than inside Red Hat. Red Hat currently allows its
engineers to keep their own copyrights and license them under whatever
license the upstream project uses, binding them to the terms of the
copyleft licenses (when the upstream project is copylefted). For
projects generated inside Red Hat, after experimenting with the sorts of CLAs that I’m complaining about, they learned from the mistake and corrected it (although, unfortunately, Red Hat hasn’t universally corrected the problem). For the most part,
Red Hat encourages outside contributors to give under their own
copyright under the outbound license Red Hat chose for its projects
(some of which are also copylefted). Red Hat’s newer policies have some
flaws (details of which are beyond the scope of this post), but it’s
orders of magnitude better than the copyright assignment intimidation
tactics that other companies, like Canonical, Ltd., now employ.

So, don’t let a friendly name like “Harmony” fool you. Our community has some key infrastructure, such as the copyleft itself, that actually keeps us harmonious. Contributor agreements aren’t created equal, and therefore we should oppose the idea that contributor and assignment agreements should be set to the lowest common denominator to enable a for-profit corporate land-grab that Shuttleworth and other “Open Core” proponents seek. I also strongly advise the organizations and individuals who are assisting Canonical, Ltd. in this goal to stop immediately, particularly now that Shuttleworth has announced his “Open Core” plans.

Update (2010-10-18): In comments, many people have,
quite correctly, argued that I have not proved that Canonical,
Ltd. has plans to go “Open Core” with their
copyright-assigned copyleft products. Such comments are correct; I
intended this article to be an opinion piece, not a logical proof. I
further agree that without absolute proof, the title of
this blog post is an exaggeration. (I didn’t change it, as that seemed
disingenuous after the fact).

Anyway, to be clear, the only thing the chain of events described above proves is that Canonical, Ltd. wants “Open Core” as a possibility for the future. That part is
trivially true: if they didn’t want to reserve the possibility, they’d
simply make a promise-back to keep the software as Free Software in
their assignment. The only reason not to make an
FSF-style promise-back is that you want to reserve the
possibility of proprietary relicensing.

Meanwhile, even though I cannot construct a logical proof of it, I
still believe the only possible explanation for this 1+ year marketing
campaign described above is that Canonical, Ltd. is moving toward
“Open Core” for those projects on which they are the sole
copyright holder. I have asked others to offer alternative
explanations of why Canonical, Ltd. is carrying out this campaign: I
agree that there could exist another logical explanation other than the
one I’ve presented. If someone can come up with one, then I would be
happy to link to it here.

Finally, if Canonical, Ltd. comes out with a statement that they’ll
switch to using FSF’s promise-back in their assignments, I will be very
happy to admit I was wrong. The outcome I want is for individual
developers to be treated right by corporations in control of particular
codebases; I would much rather that happen than be correct in my
opinions.

[0] I originally credited OMG Ubuntu as publishing Shuttleworth’s comments as an interview. Their reformatting of his comments temporarily confused me, and I thought they’d done an interview. Thanks to @gotunandan, who pointed this out.

[1] Ironically, the debate had nothing to do with a Canonical, Ltd. codebase, since their contributions amount to so little (1%) of the GNOME codebase anyway. The debate was about the Clutter/Intel situation, which has since been resolved.

Responses Not In the Identica Thread:

Alex Hudson’s blog post
Discussion on Hacker News
LWN comments
Matt Aslett’s response, and my response to him
Ingolf Schaefer’s blog post, which only allows comments with a Google Account, so I comment below instead (to be clear, I’m not criticizing Ingolf’s choice of Google-account-to-comment, especially since I make everyone who wants to comment here sign up for identi.ca ;):
Ingolf, you noted that you’d rather I not try to read between the lines
to deduce that proprietary relicensing and/or “Open Core” is
where Canonical, Ltd.’s marketing is leading. I disagree; I think it’s
useful to consider what seems a likely end-outcome here. My primary
goal is to draw attention to it now in hopes of preventing it from
happening. My best possible outcome is that I get proved wrong, and
Canonical makes a promise-back in their assignment and/or CLA.

Meanwhile, I don’t think they can go “Open Core” and/or pursue proprietary relicensing for all of Ubuntu, as you are saying. They aren’t the sole copyright holder in most of Ubuntu. The places where they can pursue these options are Launchpad, pbuilder, upstart, and the other projects that require a CLA and/or assignment.

I don’t know for sure that they’ll do this, as I say above; I can deduce no other explanation. As I keep saying, if someone else has another possible explanation for the Canonical, Ltd. behavior I list above, I’m happy to link to it here. I can’t see any other reason; they’d surely have made an FSF-style promise-back in their CLA by now if they didn’t want to hold proprietarization open as a possibility.

Leaving Last.fm

Post Syndicated from Laurie Denness original https://laur.ie/blog/2010/09/leaving-last-fm/

I’ve spent 3.43 years at Last.fm, which seems almost like a lifetime. For a long time, I couldn’t ever imagine leaving; every morning I would wake up excited to go and face new challenges and do fascinating new things. In the last 6-12 months so much has changed, as Last.fm gradually slips from being a startup to being a company that, for better or for worse, has to make some money. I will certainly think twice before working for a company that has anything to do with the music industry… it’s a pain of a situation.

I’ve babysat the wonderful creation that is Last.fm through launches (both expected and unexpected), crashes (always unexpected), overheatings (and break-ins, and power failures… all the kinds of things that should never happen to a datacentre) and plenty of blood, sweat and tears.

It’s been an amazing experience, working with some of the most amazing people I have ever met (some of whom have come and gone), but it’s time for me to help another startup by getting up at 4am to fix databases and tackle exciting scaling questions.

And that will be Etsy; another website that has an awesome product that I love, plenty of traffic, graphs that point upwards, and a bunch of guys who are passionate and have an awesome method of working. I’m really excited about getting involved and learning things again, as well as enabling a different group of passionate users to go about their day-to-day business. I’ll still be in London, but popping to NY on occasion.

Let’s hope the next 3.43 years will be just as exciting.

systemd for Administrators, Part II

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/systemd-for-admins-2.html

Here’s the second installment of my ongoing series about systemd for administrators.

Which Service Owns Which Processes?

On most Linux systems the number of processes that are running by
default is substantial. Knowing which process does what and where it belongs becomes increasingly difficult. Some services even maintain
a couple of worker processes which clutter the “ps” output with
many additional processes that are often not easy to recognize. This is
further complicated if daemons spawn arbitrary 3rd-party processes, as
Apache does with CGI processes, or cron does with user jobs.

A slight remedy for this is often the process inheritance tree, as
shown by “ps xaf”. However, this is usually not reliable, as processes whose parents die get reparented to PID 1, and hence all information about inheritance gets lost. If a process “double forks”, it thus loses its relationship to the process that started it. (This actually is
supposed to be a feature and is relied on for the traditional Unix
daemonizing logic.) Furthermore processes can freely change their names
with PR_SET_NAME or by patching argv[0], thus making
it harder to recognize them. In fact they can play hide-and-seek with
the administrator pretty nicely this way.
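
To see this reparenting for yourself, here is a minimal shell demonstration (the PID shown is of course hypothetical): a background process whose parent subshell exits right away gets reparented to PID 1, so the inheritance tree no longer tells you which shell actually started it:

$ (sleep 60 &)              # the subshell exits at once, orphaning sleep
$ ps xaf | grep '[s]leep'   # sleep now hangs directly off PID 1, not our shell
 4711 ?        S      0:00 sleep 60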

In systemd we place every process that is spawned in a control group named after its service. Control groups (or cgroups) at their most basic are simply groups of processes that can be arranged in a hierarchy and labelled individually. When processes spawn other processes, these children are automatically made members of the parent’s cgroup. Leaving a cgroup is not possible for unprivileged processes. Thus, cgroups can be used as an effective way to label processes after the service they belong to and to be sure that the service cannot escape from the label, regardless of how often it forks or renames itself. Furthermore, this can be used to safely kill a service and all processes it created, again with no chance of escaping.
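
As an aside, you don’t even need special tooling to check which cgroup a given process was placed in; the kernel exposes this in /proc. Here’s a sketch of what that looks like on a system of this vintage, reusing NetworkManager’s PID from the listing below (the hierarchy number at the start of the line may differ on your system):

$ cat /proc/1171/cgroup     # 1171 is NetworkManager in the example below
1:name=systemd:/systemd-1/NetworkManager.service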

In today’s installment I want to introduce you to two commands you
may use to relate systemd services and processes. The first one is the well-known ps command, which has been updated to show cgroup information alongside the other process details. And this is how it
looks:

$ ps xawf -eo pid,user,cgroup,args
  PID USER     CGROUP                              COMMAND
    2 root     -                                   [kthreadd]
    3 root     -                                    \_ [ksoftirqd/0]
[...]
 4281 root     -                                    \_ [flush-8:0]
    1 root     name=systemd:/systemd-1             /sbin/init
  455 root     name=systemd:/systemd-1/sysinit.service /sbin/udevd -d
28188 root     name=systemd:/systemd-1/sysinit.service  \_ /sbin/udevd -d
28191 root     name=systemd:/systemd-1/sysinit.service  \_ /sbin/udevd -d
 1096 dbus     name=systemd:/systemd-1/dbus.service /bin/dbus-daemon --system --address=systemd: --nofork --systemd-activation
 1131 root     name=systemd:/systemd-1/auditd.service auditd
 1133 root     name=systemd:/systemd-1/auditd.service  \_ /sbin/audispd
 1135 root     name=systemd:/systemd-1/auditd.service      \_ /usr/sbin/sedispatch
 1171 root     name=systemd:/systemd-1/NetworkManager.service /usr/sbin/NetworkManager --no-daemon
 4028 root     name=systemd:/systemd-1/NetworkManager.service  \_ /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-wlan0.pid -lf /var/lib/dhclient/dhclient-7d32a784-ede9-4cf6-9ee3-60edc0bce5ff-wlan0.lease -cf /var/run/nm-dhclient-wlan0.conf wlan0
 1175 avahi    name=systemd:/systemd-1/avahi-daemon.service avahi-daemon: running [epsilon.local]
 1194 avahi    name=systemd:/systemd-1/avahi-daemon.service  \_ avahi-daemon: chroot helper
 1193 root     name=systemd:/systemd-1/rsyslog.service /sbin/rsyslogd -c 4
 1195 root     name=systemd:/systemd-1/cups.service cupsd -C /etc/cups/cupsd.conf
 1207 root     name=systemd:/systemd-1/mdmonitor.service mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid
 1210 root     name=systemd:/systemd-1/irqbalance.service irqbalance
 1216 root     name=systemd:/systemd-1/dbus.service /usr/sbin/modem-manager
 1219 root     name=systemd:/systemd-1/dbus.service /usr/libexec/polkit-1/polkitd
 1242 root     name=systemd:/systemd-1/dbus.service /usr/sbin/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -B -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
 1249 68       name=systemd:/systemd-1/haldaemon.service hald
 1250 root     name=systemd:/systemd-1/haldaemon.service  \_ hald-runner
 1273 root     name=systemd:/systemd-1/haldaemon.service      \_ hald-addon-input: Listening on /dev/input/event3 /dev/input/event9 /dev/input/event1 /dev/input/event7 /dev/input/event2 /dev/input/event0 /dev/input/event8
 1275 root     name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-rfkill-killswitch
 1284 root     name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-leds
 1285 root     name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-generic-backlight
 1287 68       name=systemd:/systemd-1/haldaemon.service      \_ /usr/libexec/hald-addon-acpi
 1317 root     name=systemd:/systemd-1/abrtd.service /usr/sbin/abrtd -d -s
 1332 root     name=systemd:/systemd-1/getty@.service/tty2 /sbin/mingetty tty2
 1339 root     name=systemd:/systemd-1/getty@.service/tty3 /sbin/mingetty tty3
 1342 root     name=systemd:/systemd-1/getty@.service/tty5 /sbin/mingetty tty5
 1343 root     name=systemd:/systemd-1/getty@.service/tty4 /sbin/mingetty tty4
 1344 root     name=systemd:/systemd-1/crond.service crond
 1346 root     name=systemd:/systemd-1/getty@.service/tty6 /sbin/mingetty tty6
 1362 root     name=systemd:/systemd-1/sshd.service /usr/sbin/sshd
 1376 root     name=systemd:/systemd-1/prefdm.service /usr/sbin/gdm-binary -nodaemon
 1391 root     name=systemd:/systemd-1/prefdm.service  \_ /usr/libexec/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 --force-active-vt
 1394 root     name=systemd:/systemd-1/prefdm.service      \_ /usr/bin/Xorg :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-f2KUOh/database -nolisten tcp vt1
 1495 root     name=systemd:/user/lennart/1             \_ pam: gdm-password
 1521 lennart  name=systemd:/user/lennart/1                 \_ gnome-session
 1621 lennart  name=systemd:/user/lennart/1                     \_ metacity
 1635 lennart  name=systemd:/user/lennart/1                     \_ gnome-panel
 1638 lennart  name=systemd:/user/lennart/1                     \_ nautilus
 1640 lennart  name=systemd:/user/lennart/1                     \_ /usr/libexec/polkit-gnome-authentication-agent-1
 1641 lennart  name=systemd:/user/lennart/1                     \_ /usr/bin/seapplet
 1644 lennart  name=systemd:/user/lennart/1                     \_ gnome-volume-control-applet
 1646 lennart  name=systemd:/user/lennart/1                     \_ /usr/sbin/restorecond -u
 1652 lennart  name=systemd:/user/lennart/1                     \_ /usr/bin/devilspie
 1662 lennart  name=systemd:/user/lennart/1                     \_ nm-applet --sm-disable
 1664 lennart  name=systemd:/user/lennart/1                     \_ gnome-power-manager
 1665 lennart  name=systemd:/user/lennart/1                     \_ /usr/libexec/gdu-notification-daemon
 1670 lennart  name=systemd:/user/lennart/1                     \_ /usr/libexec/evolution/2.32/evolution-alarm-notify
 1672 lennart  name=systemd:/user/lennart/1                     \_ /usr/bin/python /usr/share/system-config-printer/applet.py
 1674 lennart  name=systemd:/user/lennart/1                     \_ /usr/lib64/deja-dup/deja-dup-monitor
 1675 lennart  name=systemd:/user/lennart/1                     \_ abrt-applet
 1677 lennart  name=systemd:/user/lennart/1                     \_ bluetooth-applet
 1678 lennart  name=systemd:/user/lennart/1                     \_ gpk-update-icon
 1408 root     name=systemd:/systemd-1/console-kit-daemon.service /usr/sbin/console-kit-daemon --no-daemon
 1419 gdm      name=systemd:/systemd-1/prefdm.service /usr/bin/dbus-launch --exit-with-session
 1453 root     name=systemd:/systemd-1/dbus.service /usr/libexec/upowerd
 1473 rtkit    name=systemd:/systemd-1/rtkit-daemon.service /usr/libexec/rtkit-daemon
 1496 root     name=systemd:/systemd-1/accounts-daemon.service /usr/libexec/accounts-daemon
 1499 root     name=systemd:/systemd-1/systemd-logger.service /lib/systemd/systemd-logger
 1511 lennart  name=systemd:/systemd-1/prefdm.service /usr/bin/gnome-keyring-daemon --daemonize --login
 1534 lennart  name=systemd:/user/lennart/1        dbus-launch --sh-syntax --exit-with-session
 1535 lennart  name=systemd:/user/lennart/1        /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
 1603 lennart  name=systemd:/user/lennart/1        /usr/libexec/gconfd-2
 1612 lennart  name=systemd:/user/lennart/1        /usr/libexec/gnome-settings-daemon
 1615 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfsd
 1626 lennart  name=systemd:/user/lennart/1        /usr/libexec//gvfs-fuse-daemon /home/lennart/.gvfs
 1634 lennart  name=systemd:/user/lennart/1        /usr/bin/pulseaudio --start --log-target=syslog
 1649 lennart  name=systemd:/user/lennart/1         \_ /usr/libexec/pulse/gconf-helper
 1645 lennart  name=systemd:/user/lennart/1        /usr/libexec/bonobo-activation-server --ac-activate --ior-output-fd=24
 1668 lennart  name=systemd:/user/lennart/1        /usr/libexec/im-settings-daemon
 1701 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfs-gdu-volume-monitor
 1707 lennart  name=systemd:/user/lennart/1        /usr/bin/gnote --panel-applet --oaf-activate-iid=OAFIID:GnoteApplet_Factory --oaf-ior-fd=22
 1725 lennart  name=systemd:/user/lennart/1        /usr/libexec/clock-applet
 1727 lennart  name=systemd:/user/lennart/1        /usr/libexec/wnck-applet
 1729 lennart  name=systemd:/user/lennart/1        /usr/libexec/notification-area-applet
 1733 root     name=systemd:/systemd-1/dbus.service /usr/libexec/udisks-daemon
 1747 root     name=systemd:/systemd-1/dbus.service  \_ udisks-daemon: polling /dev/sr0
 1759 lennart  name=systemd:/user/lennart/1        gnome-screensaver
 1780 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfsd-trash --spawner :1.9 /org/gtk/gvfs/exec_spaw/0
 1864 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfs-afc-volume-monitor
 1874 lennart  name=systemd:/user/lennart/1        /usr/libexec/gconf-im-settings-daemon
 1903 lennart  name=systemd:/user/lennart/1        /usr/libexec/gvfsd-burn --spawner :1.9 /org/gtk/gvfs/exec_spaw/1
 1909 lennart  name=systemd:/user/lennart/1        gnome-terminal
 1913 lennart  name=systemd:/user/lennart/1         \_ gnome-pty-helper
 1914 lennart  name=systemd:/user/lennart/1         \_ bash
29231 lennart  name=systemd:/user/lennart/1         |   \_ ssh tango
 2221 lennart  name=systemd:/user/lennart/1         \_ bash
 4193 lennart  name=systemd:/user/lennart/1         |   \_ ssh tango
 2461 lennart  name=systemd:/user/lennart/1         \_ bash
29219 lennart  name=systemd:/user/lennart/1         |   \_ emacs systemd-for-admins-1.txt
15113 lennart  name=systemd:/user/lennart/1         \_ bash
27251 lennart  name=systemd:/user/lennart/1             \_ empathy
29504 lennart  name=systemd:/user/lennart/1             \_ ps xawf -eo pid,user,cgroup,args
 1968 lennart  name=systemd:/user/lennart/1        ssh-agent
 1994 lennart  name=systemd:/user/lennart/1        gpg-agent --daemon --write-env-file
18679 lennart  name=systemd:/user/lennart/1        /bin/sh /usr/lib64/firefox-3.6/run-mozilla.sh /usr/lib64/firefox-3.6/firefox
18741 lennart  name=systemd:/user/lennart/1         \_ /usr/lib64/firefox-3.6/firefox
28900 lennart  name=systemd:/user/lennart/1             \_ /usr/lib64/nspluginwrapper/npviewer.bin --plugin /usr/lib64/mozilla/plugins/libflashplayer.so --connection /org/wrapper/NSPlugins/libflashplayer.so/18741-6
 4016 root     name=systemd:/systemd-1/sysinit.service /usr/sbin/bluetoothd --udev
 4094 smmsp    name=systemd:/systemd-1/sendmail.service sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
 4096 root     name=systemd:/systemd-1/sendmail.service sendmail: accepting connections
 4112 ntp      name=systemd:/systemd-1/ntpd.service /usr/sbin/ntpd -n -u ntp:ntp -g
27262 lennart  name=systemd:/user/lennart/1        /usr/libexec/mission-control-5
27265 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-haze
27268 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-logger
27270 lennart  name=systemd:/user/lennart/1        /usr/libexec/dconf-service
27280 lennart  name=systemd:/user/lennart/1        /usr/libexec/notification-daemon
27284 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-gabble
27285 lennart  name=systemd:/user/lennart/1        /usr/libexec/telepathy-salut
27297 lennart  name=systemd:/user/lennart/1        /usr/libexec/geoclue-yahoo

(Note that this output is shortened; I have removed most of the kernel threads here, since they are not relevant in the context of this blog story.)

In the third column you see the cgroup systemd assigned to each
process. You’ll find that the udev processes are in the
name=systemd:/systemd-1/sysinit.service cgroup, which is
where systemd places all processes started by the
sysinit.service service, which covers early boot.

My personal recommendation is to set the shell alias psc
to the ps command line shown above:

alias psc='ps xawf -eo pid,user,cgroup,args'

With this, the service information of processes is just four keypresses away!
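
And since the cgroup column encodes the service name, a plain grep over that output is often all it takes to list every process belonging to a single service. A quick example, reusing the sshd entry from the listing above:

$ psc | grep sshd.service
 1362 root     name=systemd:/systemd-1/sshd.service /usr/sbin/sshd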

A different way to present the same information is the
systemd-cgls tool we ship with systemd. It shows the cgroup
hierarchy in a pretty tree. Its output looks like this:

$ systemd-cgls
+    2 [kthreadd]
[...]
+ 4281 [flush-8:0]
+ user
| \ lennart
|   \ 1
|     +  1495 pam: gdm-password
|     +  1521 gnome-session
|     +  1534 dbus-launch --sh-syntax --exit-with-session
|     +  1535 /bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session
|     +  1603 /usr/libexec/gconfd-2
|     +  1612 /usr/libexec/gnome-settings-daemon
|     +  1615 /usr/libexec/gvfsd
|     +  1621 metacity
|     +  1626 /usr/libexec//gvfs-fuse-daemon /home/lennart/.gvfs
|     +  1634 /usr/bin/pulseaudio --start --log-target=syslog
|     +  1635 gnome-panel
|     +  1638 nautilus
|     +  1640 /usr/libexec/polkit-gnome-authentication-agent-1
|     +  1641 /usr/bin/seapplet
|     +  1644 gnome-volume-control-applet
|     +  1645 /usr/libexec/bonobo-activation-server --ac-activate --ior-output-fd=24
|     +  1646 /usr/sbin/restorecond -u
|     +  1649 /usr/libexec/pulse/gconf-helper
|     +  1652 /usr/bin/devilspie
|     +  1662 nm-applet --sm-disable
|     +  1664 gnome-power-manager
|     +  1665 /usr/libexec/gdu-notification-daemon
|     +  1668 /usr/libexec/im-settings-daemon
|     +  1670 /usr/libexec/evolution/2.32/evolution-alarm-notify
|     +  1672 /usr/bin/python /usr/share/system-config-printer/applet.py
|     +  1674 /usr/lib64/deja-dup/deja-dup-monitor
|     +  1675 abrt-applet
|     +  1677 bluetooth-applet
|     +  1678 gpk-update-icon
|     +  1701 /usr/libexec/gvfs-gdu-volume-monitor
|     +  1707 /usr/bin/gnote --panel-applet --oaf-activate-iid=OAFIID:GnoteApplet_Factory --oaf-ior-fd=22
|     +  1725 /usr/libexec/clock-applet
|     +  1727 /usr/libexec/wnck-applet
|     +  1729 /usr/libexec/notification-area-applet
|     +  1759 gnome-screensaver
|     +  1780 /usr/libexec/gvfsd-trash --spawner :1.9 /org/gtk/gvfs/exec_spaw/0
|     +  1864 /usr/libexec/gvfs-afc-volume-monitor
|     +  1874 /usr/libexec/gconf-im-settings-daemon
|     +  1882 /usr/libexec/gvfs-gphoto2-volume-monitor
|     +  1903 /usr/libexec/gvfsd-burn --spawner :1.9 /org/gtk/gvfs/exec_spaw/1
|     +  1909 gnome-terminal
|     +  1913 gnome-pty-helper
|     +  1914 bash
|     +  1968 ssh-agent
|     +  1994 gpg-agent --daemon --write-env-file
|     +  2221 bash
|     +  2461 bash
|     +  4193 ssh tango
|     + 15113 bash
|     + 18679 /bin/sh /usr/lib64/firefox-3.6/run-mozilla.sh /usr/lib64/firefox-3.6/firefox
|     + 18741 /usr/lib64/firefox-3.6/firefox
|     + 27251 empathy
|     + 27262 /usr/libexec/mission-control-5
|     + 27265 /usr/libexec/telepathy-haze
|     + 27268 /usr/libexec/telepathy-logger
|     + 27270 /usr/libexec/dconf-service
|     + 27280 /usr/libexec/notification-daemon
|     + 27284 /usr/libexec/telepathy-gabble
|     + 27285 /usr/libexec/telepathy-salut
|     + 27297 /usr/libexec/geoclue-yahoo
|     + 28900 /usr/lib64/nspluginwrapper/npviewer.bin --plugin /usr/lib64/mozilla/plugins/libflashplayer.so --connection /org/wrapper/NSPlugins/libflashplayer.so/18741-6
|     + 29219 emacs systemd-for-admins-1.txt
|     + 29231 ssh tango
|     \ 29519 systemd-cgls
\ systemd-1
  + 1 /sbin/init
  + ntpd.service
  | \ 4112 /usr/sbin/ntpd -n -u ntp:ntp -g
  + systemd-logger.service
  | \ 1499 /lib/systemd/systemd-logger
  + accounts-daemon.service
  | \ 1496 /usr/libexec/accounts-daemon
  + rtkit-daemon.service
  | \ 1473 /usr/libexec/rtkit-daemon
  + console-kit-daemon.service
  | \ 1408 /usr/sbin/console-kit-daemon --no-daemon
  + prefdm.service
  | + 1376 /usr/sbin/gdm-binary -nodaemon
  | + 1391 /usr/libexec/gdm-simple-slave --display-id /org/gnome/DisplayManager/Display1 --force-active-vt
  | + 1394 /usr/bin/Xorg :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-f2KUOh/database -nolisten tcp vt1
  | + 1419 /usr/bin/dbus-launch --exit-with-session
  | \ 1511 /usr/bin/gnome-keyring-daemon --daemonize --login
  + getty@.service
  | + tty6
  | | \ 1346 /sbin/mingetty tty6
  | + tty4
  | | \ 1343 /sbin/mingetty tty4
  | + tty5
  | | \ 1342 /sbin/mingetty tty5
  | + tty3
  | | \ 1339 /sbin/mingetty tty3
  | \ tty2
  |   \ 1332 /sbin/mingetty tty2
  + abrtd.service
  | \ 1317 /usr/sbin/abrtd -d -s
  + crond.service
  | \ 1344 crond
  + sshd.service
  | \ 1362 /usr/sbin/sshd
  + sendmail.service
  | + 4094 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue
  | \ 4096 sendmail: accepting connections
  + haldaemon.service
  | + 1249 hald
  | + 1250 hald-runner
  | + 1273 hald-addon-input: Listening on /dev/input/event3 /dev/input/event9 /dev/input/event1 /dev/input/event7 /dev/input/event2 /dev/input/event0 /dev/input/event8
  | + 1275 /usr/libexec/hald-addon-rfkill-killswitch
  | + 1284 /usr/libexec/hald-addon-leds
  | + 1285 /usr/libexec/hald-addon-generic-backlight
  | \ 1287 /usr/libexec/hald-addon-acpi
  + irqbalance.service
  | \ 1210 irqbalance
  + avahi-daemon.service
  | + 1175 avahi-daemon: running [epsilon.local]
  + NetworkManager.service
  | + 1171 /usr/sbin/NetworkManager --no-daemon
  | \ 4028 /sbin/dhclient -d -4 -sf /usr/libexec/nm-dhcp-client.action -pf /var/run/dhclient-wlan0.pid -lf /var/lib/dhclient/dhclient-7d32a784-ede9-4cf6-9ee3-60edc0bce5ff-wlan0.lease -cf /var/run/nm-dhclient-wlan0.conf wlan0
  + rsyslog.service
  | \ 1193 /sbin/rsyslogd -c 4
  + mdmonitor.service
  | \ 1207 mdadm --monitor --scan -f --pid-file=/var/run/mdadm/mdadm.pid
  + cups.service
  | \ 1195 cupsd -C /etc/cups/cupsd.conf
  + auditd.service
  | + 1131 auditd
  | + 1133 /sbin/audispd
  | \ 1135 /usr/sbin/sedispatch
  + dbus.service
  | +  1096 /bin/dbus-daemon --system --address=systemd: --nofork --systemd-activation
  | +  1216 /usr/sbin/modem-manager
  | +  1219 /usr/libexec/polkit-1/polkitd
  | +  1242 /usr/sbin/wpa_supplicant -c /etc/wpa_supplicant/wpa_supplicant.conf -B -u -f /var/log/wpa_supplicant.log -P /var/run/wpa_supplicant.pid
  | +  1453 /usr/libexec/upowerd
  | +  1733 /usr/libexec/udisks-daemon
  | +  1747 udisks-daemon: polling /dev/sr0
  | \ 29509 /usr/libexec/packagekitd
  + dev-mqueue.mount
  + dev-hugepages.mount
  \ sysinit.service
    +   455 /sbin/udevd -d
    +  4016 /usr/sbin/bluetoothd --udev
    + 28188 /sbin/udevd -d
    \ 28191 /sbin/udevd -d

(This too is shortened, the same way)

As you can see, this command shows the processes by their cgroup
and hence service, as systemd labels the cgroups after the
services. For example, you can easily see that the auditing service
auditd.service spawns three individual processes:
auditd, audispd and sedispatch.

If you look closely you will notice that a number of processes have
been assigned to the cgroup /user/1. At this point let’s simply leave it at this: systemd not only maintains services in cgroups, but user session processes as well. In a later installment we’ll discuss in more detail what this is about.

So much for now, come back soon for the next installment!

May They Make Me Superfluous

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/08/10/may-they-make-me-superfluous.html

The Linux Foundation announced today their own FLOSS license compliance program, which included the launch of a few software tools under a modified BSD license. They also have offered some training courses for those who want to learn how to comply.

If this Linux Foundation (LF) program is successful, I may get
something I’ve wished for since the first enforcement I ever worked on
back in late 1998: I’d like to never do GPL enforcement again. I admit
I talk a lot about GPL enforcement. It’s indeed been a major center of
my work for twelve years, but I can’t say I’ve ever
really liked doing it.

By contrast, I have been hoping for years that someone would
eventually come along and “put me out of the enforcement business”.
Someday, I dream of opening up the <[email protected]> folder and
having no new violation reports (BTW, those dreams usually become
real-life nightmares, as I typically get two new violation reports
each week). I also wish for the day that I don’t have a backlogged
queue of 200 or more GPL violations where neither source nor an offer
for source has been provided. I hate that it takes so much time to
resolve violations because of the sheer volume that exists.

I got into GPL enforcement so heavily, frankly, because so few others
were doing it. To this day, there are basically three groups even
bothering to enforce GPL on behalf of the community:
Conservancy (with enforcement efforts led by me), FSF (with
enforcement efforts led by Brett Smith), and gpl-violations.org (with
enforcement efforts led by Harald Welte). Generally, GPL enforcement
has been a relatively lonely world for a long time, mainly because
it’s boring, tedious and patience-trying work that only the most
dedicated (masochistic?) want to spend their time doing.

There are a dozen very important software-freedom-advancing
activities that I’d rather spend my time doing. But as long as people
don’t respect the freedom of software users and ignore the important
protections of copyleft, I have to continue doing GPL enforcement. Any
effort like LF’s is very welcome, provided that it reduces the number of
violations.

Of course, LF (as GPL educators) and Brett, Harald, and I (as GPL
enforcers) will share the biggest obstacle: getting communication going
with the actual violators. Fact is, people who know the LF exists or have
heard of the GPL are likely to already be in compliance. When I find a
new violation, it’s nearly always someone who doesn’t even know what’s
going on, and often doesn’t even realize what their engineering team put
into their firmware. If LF can reach these companies before they end up as
a violation report emailed to me, I’ll be as glad as can be. But it’s a
tall order.

I do have a few minor criticisms of LF’s program. First, I believe
the directory of FLOSS Compliance Officers should be made publicly
available. I think FLOSS Compliance Officers at companies should make
themselves publicly known in the software freedom community so they
can be contacted directly. As LF currently has it set up, you have to
make a request of the LF to put you in touch with a company’s
compliance officer.

Second, I admit I’d have liked to have been actively engaged in LF’s
process of forming this program. But, I presume that they wanted as
much distance as possible from the world’s most prolific GPL
enforcer, and I can understand that. (I suppose there’s a good
cop/bad cop metaphor you could make here, but I don’t like to think
of myself as the GPL police.) I did offer to help LF on this back in
April when they announced it at the Linux Collaboration Summit, but
they haven’t been in touch. Nevertheless, I’ll hopefully meet with LF
folks on Thursday at LinuxCon about their program. Also, I was
invited a few months ago by Martin Michlmayr to join one subset of
the project, the SPDX working group, and I’ve been giving it time
whenever I can.

But, as I said, those are only minor complaints. The program as a
whole looks like it might do some good. I hope companies take advantage
of it, and more importantly, I hope LF can reach out to the companies
who don’t know LF’s name yet but have BusyBox/Linux embedded in their
products.

Please, LF, help free me from the grind of GPL enforcement work. I
remain committed to enforcing GPL until there are no violations left,
but if LF can actually bring about an end to GPL violations sooner
rather than later, I’ll be much obliged. In a year, if I have an empty
queue of GPL violations, I’ll call LF’s program an unmitigated
success and gladly move on to other urgent work to advance software
freedom.