Tag Archives: hacking

How to handle "I want to be a security guy" with an easy assignment

Post Syndicated from David original http://feedproxy.google.com/~r/DevilsAdvocateSecurity/~3/6gQIEf5LNoM/how-to-handle-i-want-to-be-security-guy.html

As the manager of a security team I’m often approached by technologists who are interested in information security. Their reasons range from a long-term interest in the subject to simply wanting a change of pace, or thinking that the grass may just be greener in infosec.

Over the years I’ve developed a simple list of things that I tell people who express an interest:

1. Get a copy of Hacking Exposed. Anything recent will do, and a good alternative is Counter Hack Reloaded.
2. Skim the book, and read anything that catches your eye. Don’t try to read it cover to cover, unless you really find that you want to.
3. Come back and talk to me once you’ve done that, and we’ll talk about what you found interesting.

It’s a very simple process – but I’ve found it immensely valuable. Those who are really interested, and who will put the time into the effort, will buy the book and come back with questions and comments. A certain percentage will get the book and realize that information security isn’t really what they want to do, or that they need or want to know more before they tackle a career in security. A final group are interested, but not enough to take the step to follow up.

Once you have an interested candidate, the conversation or conversations you can have next are far more interesting. Hopefully you’ve read the book yourself, as you’ll be answering questions and often providing references to deeper resources on the topics that interest them. Favorite resources for follow-up activities include:

- OWASP – particularly WebGoat and Mutillidae
- Investigation of vulnerability scanners like Nikto and Nessus
- Exploration of tools like Metasploit and the BeEF browser exploitation framework using DVL or a similar vulnerable OS
- SANS courses like SANS 401 and 501

A whole range of options exists once you start to have the conversation – but you can be certain you’re having it with someone who is interested enough to follow up, and who has helped you identify what they’ll have some passion for.

You’re Living in the Past, Dude!

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/08/05/living-in-the-past.html

At the 2000 Usenix
Technical Conference
(which was the primary “generalist”
conference for Free Software developers in those days), I met Miguel De
Icaza for the third time in my life. In those days, he’d just started
Helix Code (anyone else remember what Ximian used to be called?) and was
still president of the GNOME Foundation. To give you some context:
Bonobo was a centerpiece of new and active GNOME development then.

Out of curiosity and a little excitement about GNOME, I asked Miguel if
he could show me how to get the GNOME 1.2 running on my laptop. Miguel
agreed to help, quickly taking control of the keyboard and frantically
typing and editing my sources.list.

Debian potato was the just-becoming-stable release in those days, and
of course, I was still running potato (this was before my experiment with running things from testing began).

After a few minutes hacking on my keyboard, Miguel realized that I
wasn’t running woody, Debian’s development release. Miguel looked at
me and said: “You aren’t running woody; I can’t make GNOME run on this thing. There’s nothing I can do for you. You’re living in the past, dude!” (Those who know Miguel IRL can easily imagine how he’d sound saying this.)

So, I’ve told that story many times for the last eleven years. I
usually tell it for laughs, as it seems an equal-opportunity humorous
anecdote. It pokes some fun at Miguel, at me, at Debian for its release
cycle, and also at GNOME (which has, since its inception, tried
to never live in the past, dude).

Fact is, though, I rather like living in the past, at least
with regard to my computer setup. By way of desktop GUIs, I
used twm well into the
late 1990s, and used fvwm well into
the early 2000s. I switched to sawfish (then sawmill) during the
relatively brief period when GNOME used it as its default window
manager. When Metacity became the default, I never switched because I’d
configured sawfish so heavily.

In fact, the only actual parts of GNOME 2 that I ever used on a daily
basis have been (a) a small unobtrusive panel, (b) dbus (and its related
services), and (c) the Network Manager applet. When GNOME 3 was
released, I had no plans to switch to it, and frankly I still don’t.

I’m not embarrassed that I consistently live in the past; it’s
sort of the point. GNOME 3 isn’t for me; it’s for people who want their
desktop to operate in new and interesting ways. Indeed, it’s (in many
ways) for the people who are tempted to run OSX because its desktop is
different than the usual, traditional, “desktop metaphor”
experience that had been standard since the mid-1990s.

GNOME 3 just wasn’t designed with old-school Unix hackers in mind.
Those of us who don’t believe a computer is any good until we see a
command line aren’t going to be the early adopters who embrace GNOME 3.
For my part, I’ll actually try to avoid it as long as possible, continue
to run my little GNOME 2 panel and sawfish, until slowly, GNOME 3 will
seep into my workflow the way the GNOME 2 panel and sawfish did
when they were current, state-of-the-art GNOME
technologies.

I hope that other old-school geeks will see this distinction: we’re
past the era when every Free Software project is targeted at us hackers
specifically. Failing to notice this will cause us to ignore the deeper
problem software freedom faces. GNOME Foundation’s Executive Director (and my good friend), Karen Sandler, pointed out in her OSCON keynote something that’s bothered her and me for years: the majority of computers at OSCON are Apple hardware running OSX. (In fact, I even noticed Simon Phipps has one now!) That’s the world we’re living in now. Users who
actually know about “Open Source” are now regularly
enticed to give up software freedom for shiny things.

Yes, as you just read, I can snicker as quickly as any old-school command-line geek (just as Linus Torvalds did earlier this week) at the pointlessness of wobbly
windows, desktop cubes, and zoom effects. I could also easily give a
treatise on how I can get work done faster, better, and smarter because
I have the technology of years ago that makes every keystroke
matter.

Notwithstanding that, I’d even love to have the same versatility with
GNOME 3 that I have with sawfish. And, if it turns out GNOME 3’s
embedded Javascript engine will give me the same hackability I prefer
with sawfish, I’ll adopt GNOME 3 happily. But, no matter what, I’ll
always be living in the past, because like every other human, I hate
changing anything, unless it’s strictly necessary or it’s my own
creation and derivation. Humans are like that: no matter who you are,
if it wasn’t your idea, you’re always slow to adopt something new and
change old habits.

Nevertheless, there’s actually nothing wrong with living in the
past — I quite like it myself. However, I’d suggest that care
be taken to not admonish those who make a go at creating the future.
(At the risk of making a conclusion that sounds like a time travel joke,) don’t forget that their future will eventually
become that very past where I and others would prefer to
live.

Linux Plumbers Conference/Gnome Summit Recap

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/lpc2010-recap.html

Last week LPC and GS 2010 took place in Cambridge, MA. As in previous years, LPC showed again that — at least for me — it is one of
the most relevant Linux conferences in existence, if not the single most
relevant one.

Here’s a terse and far-from-comprehensive report of the different discussions I took
part in with various folks at the conference, in no particular order:

The Boot
and Init
track led by Kay Sievers (Suse) was a great success. We had
exciting talks which I think helped quite a bit in clearing a few things up, and which will hopefully help us consolidate the full Linux boot process among all the components involved. We had talks covering everything from the BIOS boot
to initrds, graphical boot splashes and systemd. Kay
Sievers and I spoke about systemd, also covering the state of it in the Fedora
and openSUSE distributions. Gustavo Barbieri (ProFUSION, Gentoo) and Michael
Biebl (Debian) gave interesting talks about systemd adoption in their
respective distributions. I was particularly interested in the various
statistics Michael showed about SysV/LSB init script usage in Debian, because
this gives an idea how much work we have in front of us in the long run. A
longer discussion about the future of initrds and the logic necessary to find
the root file system on boot was quite enlightening. I think this track was
helpful to increase the unification and consolidation of the way Linux systems
boot up and are maintained during runtime.

Kay and I and some other folks sat down with Arjan van de Ven (Intel) to talk about the prospects of systemd in Meego. The discussions were very positive. In particular Arjan had some great suggestions regarding use of the Simple Boot Flag in systemd (expect this in one of the next versions) and readahead. Before systemd can find adoption in Meego we’d have to add a small number of features to systemd first, most of which should be easy to add.

Similarly, I sat down with Martin Pitt and James Hunt (both Canonical) and
discussed systemd in relation to Ubuntu. I think we managed to clear a lot of
things up, and have a good chance to improve cooperation between Ubuntu and
systemd in relation to APIs and maybe even more.

We talked to Thomas Gleixner regarding userspace notifications for when the wallclock time jumps relative to the monotonic clock. This is important to systemd so that we can schedule calendar jobs similar to cron without having to wake up periodically to check whether the wallclock time changed relative to the monotonic clock, and then recalculate the next point in time a calendar event is triggered. There has been previous work in this area in the kernel world, but nothing got merged. Thomas’ suggestion for how to add this facility should be much simpler than anything proposed so far.
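
For illustration, here is a minimal sketch of how such a notification could be consumed from userspace, using the TFD_TIMER_CANCEL_ON_SET timerfd flag that later kernels grew for this kind of use case — treat the flag and its exact semantics as an assumption here, since at the time of writing nothing along these lines had been merged and this is not necessarily what Thomas proposed:

/* Sketch: block until the wallclock (CLOCK_REALTIME) is set, instead of
 * polling it against the monotonic clock. Assumes a kernel and libc that
 * expose TFD_TIMER_CANCEL_ON_SET. */
#include <sys/timerfd.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
	int fd = timerfd_create(CLOCK_REALTIME, 0);
	if (fd < 0) { perror("timerfd_create"); return 1; }

	/* Arm an absolute timer far in the future; we only care about the
	 * cancel-on-clock-set behaviour, not about it ever expiring. */
	struct itimerspec its;
	memset(&its, 0, sizeof(its));
	its.it_value.tv_sec = INT32_MAX;

	if (timerfd_settime(fd, TFD_TIMER_ABSTIME | TFD_TIMER_CANCEL_ON_SET, &its, NULL) < 0) {
		perror("timerfd_settime");
		return 1;
	}

	/* read() fails with ECANCELED when the realtime clock is changed;
	 * that is the point where calendar timers would be recalculated. */
	uint64_t expirations;
	if (read(fd, &expirations, sizeof(expirations)) < 0 && errno == ECANCELED)
		printf("wallclock changed, recalculate calendar events\n");

	close(fd);
	return 0;
}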

I also tried to talk Andreas Grünbacher into supporting file system
user extended attributes in various virtual file systems such as procfs,
cgroupfs, sysfs and tmpfs. I hope I convinced him that this would be a good
idea, since this would allow setting externally accessible attributes to all
kinds of kernel objects, such as processes and devices. This would not only
have uses in systemd (where we could easily store all the meta information systemd needs to know about a service in the cgroupfs via xattrs, so that systemd could crash or go away at any time and we could still read all the runtime information needed beyond mere cgrouping from the file system when systemd comes back to life) but also in desktop environments, so that we could for example attach the human-readable application name, an icon or a desktop file to the processes currently running, in a simple way where the data we attach follows the lifecycle of the process itself.
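
To make the xattr idea a bit more concrete, here is a hedged sketch of what attaching metadata to a cgroup directory could look like if the virtual file systems gained support for user extended attributes as proposed; the cgroup path and attribute name are made up purely for illustration:

/* Sketch: attach metadata to a kernel object (here: a cgroup directory)
 * via a user extended attribute, so any process can read it back later
 * even if the writer has crashed or gone away.
 * The path and attribute name are hypothetical. */
#include <sys/xattr.h>
#include <stdio.h>
#include <string.h>

int main(void) {
	const char *cgroup = "/sys/fs/cgroup/systemd/system/example.service";
	const char *value  = "Example Service";

	/* This fails on current kernels, since cgroupfs does not support
	 * user xattrs yet -- which is exactly the point of the proposal. */
	if (setxattr(cgroup, "user.description", value, strlen(value), 0) < 0) {
		perror("setxattr");
		return 1;
	}

	char buf[256];
	ssize_t n = getxattr(cgroup, "user.description", buf, sizeof(buf) - 1);
	if (n >= 0) {
		buf[n] = '\0';
		printf("description: %s\n", buf);
	}
	return 0;
}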

The Audio track
went really well, too. I was particularly excited about Pierre-Louis Bossart’s
(Intel) plans regarding AC3 (and other codecs) support in PulseAudio, and the simplicity of his
approach. Also great was hearing about Laurent Pinchart’s project to expose
audio and video device routing to userspace. Finally, I really enjoyed David
Henningsson’s and Luke Yelavich’s (both Canonical) talk regarding tracking down audio bugs on
Ubuntu. I was really impressed by the elaborate tools they created to test audio drivers on users’ machines. Pretty cool stuff. Maybe this can be extended
into a test suite for driver writers, because the current approach for driver
writers (i.e. “If PulseAudio works correctly, your driver is correct”) doesn’t
really scale (although I like the idea and take it as a compliment…). I also
liked the timechart profiling results Pierre showed me that he generated for
PulseAudio. Seems PulseAudio is behaving quite nicely these days.

Together with Harald Hoyer I got a demo of David Zeuthen’s disk assembly
daemon (stc), which makes RAID/MD/LVM assembly more dynamic. Great stuff, and I
think we convinced him to leave actual mounting of file systems to systemd
instead of doing it himself.

Harald and I also hashed out a few things to make integration between dracut
and systemd nicer (i.e. passing along profiling information between the two,
and information regarding the root fsck).

I also hope I convinced Ray Strode to make Plymouth actively listen to udev
for notifications about DRM devices, so that further synchronization between
udev and plymouth won’t be necessary, which both makes things more robust and a
little bit faster.

Kay and I talked to Greg Kroah-Hartman regarding the brokenness of VT_WAITEVENT in the kernel TTY layer, and discussed what to do about this. After returning from the US, Kay did the necessary hacking work to provide a minimal sysfs-based solution that allows userspace to query which TTYs /dev/console and /dev/tty0 currently point to, and to get notifications when this changes.
This should allow us to greatly simplify ConsoleKit and make it possible to
add console-triggered activation to systemd (think: getty gets started the
moment you switch to its virtual terminal, not already at boot).
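
As a sketch of how userspace might consume the new interface: a process can read the sysfs attribute to learn the currently active VT and poll() the open file descriptor to be told when it changes. The path used below, /sys/class/tty/tty0/active, and the POLLPRI-based notification are my understanding of the resulting interface and should be treated as assumptions:

/* Sketch: watch the active virtual terminal via sysfs instead of the
 * broken VT_WAITEVENT ioctl. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
	int fd = open("/sys/class/tty/tty0/active", O_RDONLY);
	if (fd < 0) { perror("open"); return 1; }

	for (;;) {
		char vt[32];
		ssize_t n = pread(fd, vt, sizeof(vt) - 1, 0);
		if (n < 0) { perror("pread"); return 1; }
		vt[n] = '\0';
		printf("active VT: %s", vt); /* e.g. "tty2\n" */

		/* sysfs attributes signal changes as POLLPRI/POLLERR events */
		struct pollfd p = { .fd = fd, .events = POLLPRI };
		if (poll(&p, 1, -1) < 0) { perror("poll"); return 1; }
	}
}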

I also spent some time discussing the upcoming deadline scheduling kernel logic with Dario, Dhaval and Tommaso regarding its possible use in PulseAudio. I believe deadline scheduling is a useful tool for handing out real-time scheduling to applications securely. As an easy path to supporting deadline scheduling in
PulseAudio I suggested patching RealtimeKit to optionally use deadline
scheduling for its clients. This would magically teach PA (and other clients) to
use deadline scheduling without further patching in the clients.

At GNOME Summit I sat down with Ryan Lortie and Will Thompson to discuss the future of the D-Bus session bus and how we can move to a machine/user bus instead in a nice way. We managed to come to a good agreement here, and this should enable us to introduce systemd for session management soonish. Now we
only need to convince the other folks having stakes in D-Bus that what we
discussed is actually a good idea, expect more about this soon on dbus-devel.
Ryan and I also hashed out our remaining differences regarding the exact semantics of XDG_RUNTIME_DIR, the result of which you can already see on the XDG mailing list. Ryan already did the GLib work to introduce XDG_RUNTIME_DIR, and systemd has already supported this unofficially for a few versions.

I quite appreciate how Michael Meeks quoted me in his final
keynote. 😉

There was a lot of other stuff going on at the conference, and what I
wrote above is in no way complete. And of course, besides all the technical
stuff, it was great meeting all the good Linux folks again, especially my
colleagues from Red Hat.

I am still amazed at how positively systemd is being received, with open arms all across the board. It’s particularly amazing that systemd at this point in
time has already been adopted by various companies in the automotive and
aviation industry.

At Least Motorola Admits It

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/07/15/motorola-admits.html

I’ve written before about the software freedom issues inherent in Android/Linux. Summarized shortly: the software freedom community
is fortunate that Google released so much code under Free Software
licenses, but since most of the code in the system is Apache-2.0
licensed, we’re going to see a lot of proprietarized,
non-user-upgradable versions. In fact, there’s no Android/Linux
system that’s fully Free Software yet. (That’s
why Aaron Williamson
and I try to keep
the Replicant
project
going. We’ve focused on the HTC Dream and the NexusOne,
since they are the mobile devices closest to working with only Free
Software installed, and because they allow the users to put their own
firmware on the device.)

I was therefore intrigued to discover last night (via mtrausch) a February blog post by Lori Fraleigh of Motorola, wherein Fraleigh clarifies Motorola’s opposition to software freedom for its Android/Linux users:

We [Motorola] understand there is a community of developers interested in
… Android system development … For these developers, we
highly recommend obtaining either a Google ADP1 developer phone or a Nexus
One … At this time, Motorola Android-based handsets are intended for
use by consumers.

I appreciate the fact that Fraleigh and Motorola are honest in their
disdain for software developers. Unlike Apple — who tries to hide
how developer-unfriendly its mobile platform is — Motorola readily
admits that they seek to leave developers as helpless as possible,
refusing to share the necessary tools that developers need to upgrade
devices and to improve themselves, their community, and their software.
Companies like Motorola and Apple both seek to squelch the healthy hacker
tendency to make technology better for everyone. Now that I’ve seen
Fraleigh’s old blog post, I can at least give Motorola credit for
full honesty about these motives.

I do, however, find the implication of Fraleigh’s words revolting.
People who buy
the devices, in Motorola’s view, don’t deserve the right to improve
their technology. By contrast, I believe that software freedom should
be universal and that no one need be a “mere consumer” of
technology. I believe that every technology user is a potential
developer who might have something to contribute but obviously cannot if
that user isn’t given the tools to do so. Sadly, it seems, Motorola
believes the general public has nothing useful to contribute, so the
public shouldn’t even be given the chance.

But, this attitude is always true for proprietary software companies,
so there are actually no revelations on that point. Of more interest is
how Motorola was able to do this, given that Android/Linux (at least
most of it) is Free Software.

Motorola’s ability to take these actions is a consequence of a few
licensing issues. First, most of the Android system is under the
Apache-2.0 license (or, in some cases, an even more permissive license).
These licenses allow Motorola to make proprietary versions of what Google released and sell them without source code or the ability for users to install modified versions. That license decision is lamentable
(but expected, given Google’s goals for Android).

The even more lamentable licensing issue here is regarding Linux’s
license,
the GPLv2.
Specifically, Fraleigh’s post claims:

The use of open source software, such as the Linux kernel … in a
consumer device does not require the handset running such software to be
open for re-flashing. We comply with the licenses, including GPLv2.

I should note that, other than Fraleigh’s assertion quoted above, I
have no knowledge one way or another if Motorola is compliant
with GPLv2 on its
Android/Linux phones. I don’t own one, have no plans to buy one, and
therefore I’m not in receipt of an offer for source regarding the
devices. I’ve also received no reports from anyone regarding possible
non-compliance. In fact, I’d love to confirm their compliance: please
get in touch if you have a Motorola Android/Linux phone and attempted to
install a newly compiled executable of Linux onto your phone.

I’m specifically interested in the installation issue because GPLv2
requires that any binary distribution of Linux (such as one on telephone
hardware) include both the source code itself and the scripts to
control compilation and installation of the executable. So, if
Motorola wrote any helper programs or other software that installs Linux
onto the phones, then such software, under GPLv2, is a required part of
the complete and corresponding source code of Linux and must be
distributed to each buyer of a Motorola Android/Linux phone.

If you’re surprised by that last paragraph, you’re probably not alone.
I find that many are confused regarding this GPLv2 nuance. I believe
the confusion stems from discussions during the
GPLv3
process about this specific
requirement. GPLv3
does indeed expand the requirement for the scripts to control
compilation and installation of the executable into the concept
of Installation Information. Furthermore,
GPLv3’s Installation Information is much more expansive than
merely requiring helper software programs and the like.
GPLv3’s Installation Information includes any material,
such as an authorization key, that is necessary for installation of a
modified version onto the device.

However, merely because GPLv3 expanded installation
information requirements does not lessen GPLv2’s requirement of
such. In fact, in my reading of GPLv2 in comparison to GPLv3, the only
effective difference between the two on this point relates to
cryptographic device lock-down. I do admit that under GPLv2, if you give
all the required installation scripts, you could still use cryptography
to prevent those scripts from functioning without an authorization key.
Some vendors do this, and that’s precisely why GPLv3 is written the way
that it is: we’d observed such lock-down occurring in the field, and
identified that behavior as a bug in GPLv2 that is now closed with
GPLv3.

However, because of all that hype about GPLv3’s new Installation
Information definition, many simply forgot that the GPLv2 isn’t
silent on the issue. In other words, GPLv3’s verbosity on the subject
led people to minimize the important existing requirements of GPLv2
regarding installation information.

As regular readers of this blog know, I’ve spent much of my time for
the last 12 years doing GPL enforcement. Quite often, I must remind
violators that GPLv2 does indeed require the scripts to control
compilation and installation of the executable, and that candidate
source code releases missing the scripts remain in violation of GPLv2.
I sincerely hope that Android/Linux redistributors haven’t forgotten
this.

I have one final and important point to make regarding Motorola’s
February statement: I’ve often mentioned that the mobile industry’s
opposition to GPLv3 and to user-upgradable devices is for their own reasons, and has nothing to do with regulators or other outside entities preventing them from releasing such software. In their
blog post, Motorola tells us quite clearly that the community of
developers interested in … experimenting with Android system
development and re-flashing phones … [should obtain] either a
Google ADP1 developer phone or a Nexus One, both of which are intended
for these purposes. In other words, Motorola tacitly admits that
it’s completely legal and reasonable for the community to obtain such
telephones, and that, in fact, Google sells such devices. Motorola was
not required to put lock-down restrictions in place, rather
they made a choice to prohibit users in this way. On this
point, Google chose to treat its users with respect, allowing them to
install modified versions. Motorola, by contrast, chose to make
Android/Linux as close to Apple’s iPhone as they could get away with
legally.

So, the next time a mobile company tries to tell you that they just
can’t abide by GPLv3 because some third party (the FCC is their frequent
scapegoat) prohibits them, you should call them on their
FUD. Point out
that Google sells phones on the open market that provide
all Installation Information that GPLv3 might require. (In other
words, even if Linux were GPLv3’d, Android/Linux on the NexusOne and HTC
Dream would be a GPLv3-compliant distribution.) Meanwhile, at least one
such company, Motorola, has admitted their solitary reason for avoiding
GPLv3: the company just doesn’t believe users deserve the right to
install improved versions of their software. At least they admit their
contempt for their customers.

Update (same day):
jwildeboer
pointed me to
a few
posts
in the custom ROM and jailbreaking communities about their concerns about
Motorola’s new offering, the Droid-X. Some commenters there point out
that eventually, most phones get jailbroken or otherwise allow user
control. However, the key point of
the CrunchGear
User Manifesto
is a clear and good one: no company or person has
the right to tell you that you may not do what you like with your own
property. This is a point akin, and perhaps essential, to software freedom. It doesn’t really matter whether you can figure out how to hack a device; what’s important is that you not give your money to the
company that prohibits such hacking. For goodness sake, people, why don’t
we all use ADP1’s and NexusOne’s and be done with this?

Updated (2010-07-17): It appears
that cryptographic
lock down on the Droid-X is confirmed
(thanks
to rao for the link). I hope
everyone will boycott all Motorola devices because of this, especially
given that there are Android/Linux devices on the market that
aren’t locked down in this way.

BTW, in Motorola’s answer to Engadget on this,
we see they are again subtly sending FUD that the lock-down is somehow
legally required:

Motorola’s primary focus is the security of our end users and protection
of their data, while also meeting carrier, partner and legal requirements.

I agree the carriers and partners probably want such lock down, but I’d
like to see their evidence that there is a legal restriction that requires
that. They present none.

Meanwhile, they also state that such cryptographic lock-down is the
only way they know how to secure their devices:

Checking for a valid software configuration is a common practice within
the industry to protect the user against potential malicious software
threats.

Pity that Motorola engineers aren’t as clueful as the Google and HTC
engineers who designed the ADP1 and Nexus One.

Tagging Audio Streams

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/tagging-audio.html

So you are hacking an audio application and the audio data you are
generating might eventually end up in PulseAudio before it is played. If that’s the case then please make sure
to read this!

Here’s the quick summary for Gtk+ developers:

PulseAudio can enforce all kinds of policy on sounds. For example, starting
in 0.9.15, we will automatically pause your media player while a phone call is
going on. To implement this, however, we need to know how the stream you are sending to PulseAudio should be categorized: is it music? Is it a movie? Is it game sounds? Is it a phone call stream?

Also, PulseAudio would like to show a nice icon and an application name next
to each stream in the volume control. That requires it to be able to deduce
this data from the stream.

And here’s where you come into the game: please add three lines like the following near the beginning of the main() function of your Gtk+ application:


g_set_application_name(_("Totem Movie Player"));
gtk_window_set_default_icon_name("totem");
g_setenv("PULSE_PROP_media.role", "video", TRUE);

If you do this then the PulseAudio client libraries will be able to figure out the rest for you.

There is more meta information (aka “properties”) you can set for your application or for your streams that is useful to PulseAudio. In case you want to know more about them or you are looking for equivalent code to the above example for non-Gtk+ applications, make sure to read the mentioned page.
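
For non-Gtk+ code, a hedged sketch of two equivalents follows: the first mirrors the environment-variable trick above in plain C, the second attaches the role directly to a stream via the libpulse property-list API; the application-specific strings are invented for the example:

/* Option 1: plain-C equivalent of the g_setenv() call above; the client
 * library picks the property up for all streams of this process. */
#include <stdlib.h>

static void tag_process(void) {
	setenv("PULSE_PROP_media.role", "video", 1);
}

/* Option 2: if you use libpulse directly, set the role on the stream's
 * property list when creating it. */
#include <pulse/pulseaudio.h>

static pa_stream *make_tagged_stream(pa_context *c, const pa_sample_spec *ss) {
	pa_proplist *props = pa_proplist_new();
	pa_proplist_sets(props, PA_PROP_MEDIA_ROLE, "video");
	pa_stream *s = pa_stream_new_with_proplist(c, "Playback", ss, NULL, props);
	pa_proplist_free(props);
	return s;
}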

Thank you!

Oh, and even if your app doesn’t do audio, calling g_set_application_name() and gtk_window_set_default_icon_name() is always a good idea!

Automatic Backtrace Generation

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/automatic-backtrace.html

Ubuntu has Apport. Fedora has nothing. That sucks big time.

Here’s the result of a few minutes of hacking up something similar to Apport based on the awesome (and much underused) Frysk debugging tool kit. It doesn’t post any backtraces on any Internet servers and has no fancy UI — but it automatically dumps a stacktrace of every crashing process on the system to syslog and stores all kinds of data in /tmp/core.*/ for later inspection.

#!/bin/bash
set -e
export PATH=/sbin:/bin:/usr/sbin:/usr/bin
DIR="/tmp/core.$1.$2"
umask 077
mkdir "$DIR"
cat > "$DIR/core"
exec &> "$DIR/dump.log"
set +e
echo "$1" > "$DIR/pid"
echo "$2" > "$DIR/timestamp"
echo "$3" > "$DIR/uid"
echo "$4" > "$DIR/gid"
echo "$5" > "$DIR/signal"
echo "$6" > "$DIR/hostname"
set -x
fauxv "$DIR/core" > "$DIR/auxv"
fexe "$DIR/core" > "$DIR/exe"
fmaps "$DIR/core" > "$DIR/maps"
PKGS=`/usr/bin/fdebuginfo "$DIR/core" | grep "\-\-\-" | cut -d ' ' -f 1 | sort | uniq | grep '^/'| xargs rpm -qf | sort | uniq`
[ "x$PKGS" != x ] && debuginfo-install -y $PKGS
fstack -rich "$DIR/core" > "$DIR/fstack"
set +x
(
	echo "Application `cat "$DIR/exe"` (pid=$1,uid=$3,gid=$4) crashed with signal $5."
	echo "Stack trace follows:"
	cat "$DIR/fstack"
	echo "Auxiliary vector:"
	cat "$DIR/auxv"
	echo "Maps:"
	cat "$DIR/maps"
	echo "For details check $DIR"
) | logger -p local6.info -t "frysk-core-dump-$1"

Copy that into a file $SOMEWHERE/frysk-core-dump. Then do a chmod +x $SOMEWHERE/frysk-core-dump and a chown root:root $SOMEWHERE/frysk-core-dump. Now, tell the kernel that core dumps should be handed to this script:

# echo "|$SOMEWHERE/frysk-core-dump %p %t %u %g %s %h" > /proc/sys/kernel/core_pattern

Finally, increase RLIMIT_CORE to actually enable core dumps. ulimit -c
unlimited is a good idea. This will enable them only for your shell and
everything it spawns. In /etc/security/limits.conf you can enable
them for all users. I haven’t found out yet how to enable them globally
in Fedora though, i.e. for every single process that is started after boot including system daemons.
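
For reference, the kind of entry meant for /etc/security/limits.conf is sketched below; pam_limits applies it at login, so it covers user sessions rather than boot-time daemons, which is exactly the remaining gap mentioned above:

# /etc/security/limits.conf -- allow unlimited core dumps for all users (sketch)
*               soft    core            unlimited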

You can test this by running sleep 4711 and then dumping core with C-\. The stacktrace should appear right away in /var/log/messages.

This script will automatically try to install the debugging symbols for the crashing application via yum. In some cases it hence might take a while until the backtrace appears in syslog.

Don’t forget to install Frysk before trying this script!

You can’t believe how useful this script is. Something crashed and the backtrace is already waiting for you! It’s a bugfixer’s wet dream.

I am a bit surprised, though, that no one else came up with this before me. Or maybe I am just too dumb to use Google properly?

PulseAudio FUD

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/jeffrey-stedfast.html

Jeffrey Stedfast

Jeffrey Stedfast seems to have made it his new hobby to bash PulseAudio. In a series of very negative blog postings he flamed my software, and hence me, in best NotZed-like fashion. Particularly interesting in this case is the fact that he apologized to me privately on IRC for this behaviour shortly after his first posting, when he was criticized on #gnome-hackers — only to continue flaming and bashing in more blog posts shortly after. Flaming is very much part of the Free Software community, I guess. A lot of people do it from time to time (including me). But maybe there are better places for this than Planet Gnome. And maybe doing it for days is not particularly nice. And maybe flaming sucks in the first place anyway.

Regardless of what I think about Jeffrey and his behaviour on Planet Gnome, let’s have a look at his trophies, the five “bugs” he posted:

1. Not directly related to PulseAudio itself. Also, finding errors in code that is related to esd is not exactly the most difficult thing in the world.
2. The same theme.
3. Fixed 3 months ago. It is certainly not my fault that this isn’t available in Jeffrey’s distro.
4. A real, valid bug report. Fixed in git a while back, but not available in any released version. May only be triggered under heavy load or with a bad high-latency scheduler.
5. A valid bug, but not really in PulseAudio. Mostly caused by the fact that the ALSA API and PA API don’t really match 100%.

OK, Jeffrey found a real bug, but I wouldn’t say this is really enough to make all the fuss about. Or is it?

Why PulseAudio?

Jeffrey wrote something about a ‘solution looking for a problem’ when speaking of PulseAudio. While that was certainly not a nice thing to say, it nevertheless tells me one thing: I apparently didn’t manage to communicate well enough why I am doing PulseAudio in the first place. So, why am I doing it then?

There’s so much more a good audio system needs to provide than just the
most basic mixing functionality. Per-application volumes, moving streams
between devices during playback, positional event sounds (i.e. click on the
left side of the screen, have the sound event come out through the left
speakers), secure session-switching support, monitoring of sound playback
levels, rescuing playback streams to other audio devices on hot unplug,
automatic hotplug configuration, automatic up/downmixing stereo/surround,
high-quality resampling, network transparency, sound effects, simultaneous
output to multiple sound devices are all features PA provides right now, and
what you don’t get without it. It also provides the infrastructure for
upcoming features like volume-follows-focus, automatic attenuation of music on
signal on VoIP stream, UPnP media renderer support, Apple RAOP support,
mixing/volume adjustments with dynamic range compression, adaptive volume of
event sounds based on the volume of music streams, jack sensing, switching
between stereo/surround/spdif during runtime, …

And even for the most basic mixing functionality plain ALSA/dmix is not really everlasting happiness. Due to the way it works, all clients are forced to use the same buffering metrics all the time, which means all clients are limited in their wakeup/latency settings. You will burn more CPU than necessary this way, keep the risk of drop-outs unnecessarily high, and still
not be able to make clients with low-latency requirements happy. ‘Glitch-Free’
PulseAudio
fixes all this. Quite frankly I believe that ‘glitch-free’
PulseAudio is the single most important killer feature that should be enough
to convince everyone why PulseAudio is the right thing to do. Maybe people
actually don’t know that they want this. But they absolutely do, especially
the embedded people — if used properly it is a must for power-saving during
audio playback. It’s a pity that you cannot see directly from the user interface how awesome this feature is.[1]
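
To make the buffering point a bit more concrete, here is a hedged sketch of how a client can ask for its own latency target when connecting a playback stream, using the then-new PA_STREAM_ADJUST_LATENCY flag of the libpulse API; the two-second target is an arbitrary example for a music player that wants rare wakeups:

/* Sketch: request a large per-stream latency so PulseAudio can schedule
 * rare wakeups for this client, independent of other clients' settings. */
#include <string.h>
#include <pulse/pulseaudio.h>

static int connect_with_latency(pa_stream *s, const pa_sample_spec *ss) {
	pa_buffer_attr attr;
	memset(&attr, 0xff, sizeof(attr)); /* (uint32_t) -1 == "pick a default" */
	attr.tlength = (uint32_t) pa_usec_to_bytes(2 * PA_USEC_PER_SEC, ss);

	/* ADJUST_LATENCY asks the server to size buffers around this stream's
	 * request instead of one fixed buffering configuration for everyone. */
	return pa_stream_connect_playback(s, NULL, &attr,
	                                  PA_STREAM_ADJUST_LATENCY, NULL, NULL);
}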

PulseAudio provides compatibility with a lot of sound systems/APIs that bare ALSA
or bare OSS don’t provide.

And last but not least, I love breaking Jeffrey’s audio. It’s just soo much fun, you really have to try it! 😉

If you want to know more about why I think that PulseAudio is an important part of the modern Linux desktop audio stack, please read my slides from FOSS.in 2007.

Misconceptions

Many people (like Jeffrey) wonder why we have software mixing at all if you have hardware mixing. The thing is, hardware mixing is a thing of the past; modern soundcards don’t do it anymore. SIMD CPU extensions like SSE have been invented precisely for doing things like mixing in software. Modern sound cards these days are kind of “dumbed down”, high-quality DACs. They don’t do mixing anymore; many modern chips don’t even do volume control anymore. Remember the days when having a Wavetable chip was a killer feature of a sound card? Those days are gone; today wavetable synthesizing is done almost exclusively in software — and that’s exactly what happened to hardware mixing too. And it is good that way. In software it is much easier to do fancier stuff like DRC, which will increase the quality of mixing. And modern CPUs provide all the necessary SIMD command sets to implement this efficiently.

Other people believe that JACK would be a better solution for the problem.
This is nonsense. JACK has been designed for a very different purpose. It is
optimized for low latency inter-application communication. It requires
floating point samples, it knows nothing about channel mappings, it depends on
every client to behave correctly. And so on, and so on. It is a sound server
for audio production. For desktop applications it is however not well suited.
For a desktop, saving power is very important, and one application misbehaving shouldn’t have an effect on other applications’ playback; converting from/to FP all the time is not going to help battery life either. Please understand
that for the purpose of pro audio you can make completely different
compromises than you can do on the desktop. For example, while having
‘glitch-free’ is great for embedded and desktop use, it makes no sense at all
for pro audio, and would only have a drawback on performance. So, please stop
bringing up JACK again and again. It’s just not the right tool for desktop
audio, and this opinion is shared by the JACK developers themselves.

Jeffrey thinks that audio mixing is nothing for userspace — which is basically what OSS4 tries to do: mixing in kernel space. However, the future of PCM audio is floating point, and mixing floating-point samples in kernel space is problematic because (at least on Linux) FP in kernel space is a no-no. Also, the kernel people have made it clear more than once that maths/decoding/encoding like this should happen in userspace. Quite honestly, doing the mixing in kernel space is probably one of the primary reasons why I think that OSS4 is a bad idea. The fancier your mixing gets (i.e. including resampling, upmixing, downmixing, DRC, …), the more difficulty you will have moving such complex, time-intensive code into the kernel.

Not every time your audio breaks is it PulseAudio’s fault alone. For example, the original flame of Jeffrey’s was about the low volume that he
experienced when running PA. This is mostly due to the suckish way we
initialize the default volumes of ALSA sound cards. Most distributions have
simple scripts that initialize ALSA sound card volumes to fixed values like
75% of the available range, without understanding what the range or the
controls actually mean. This is actually a very bad thing to do. Integrated USB speakers, for example, tend to export the full amplification range via the mixer controls. 75% for them is incredibly loud. For other hardware (like
apparently Jeffrey’s) it is too low in volume. How to fix this has been
discussed on the ALSA mailing list, but no final solution has been presented
yet. Nonetheless, the fact that the volume was too low, is completely
unrelated to PulseAudio.

PulseAudio interfaces with lower-level technologies like ALSA on one hand,
and with high-level applications on the other hand. Those systems are not
perfect. Especially closed-source applications tend to do very evil things
with the audio APIs (Flash!) that are very hard to support on virtualized
sound systems such as PulseAudio [2]. However, things are getting better. My list of issues I found in
ALSA
is getting shorter. Many applications have already been fixed.

The reflex “my audio is broken it must be PulseAudio’s fault” is certainly
easy to come up with, but it certainly is not always right.

Also note that — like many areas in Free Software — development of the
desktop audio stack on Linux is a bit understaffed. AFAIK there are only two
people working on ALSA full-time and only me working on PulseAudio and other
userspace audio infrastructure, assisted by a few others who supply code and patches
from time to time, some more and some less.

More Breakage to Come

I now tried to explain why the audio experience on systems with PulseAudio
might not be as good as some people hoped, but what about the future? To be
frank: the next version of PulseAudio (0.9.11) will break even more things.
The ‘glitch-free’ stuff mentioned above uses quite a few features of the
underlying ALSA infrastructure that apparently no one has been using before — and which just don’t work properly yet on all drivers. And there are quite a few drivers around, and I only have a very limited set of hardware to test with. Already I know that some of the most popular drivers (USB and HDA)
do not work entirely correctly with ‘glitch-free’.

So you ask why I plan to release this code knowing that it will break
things? Well, it works on some hardware/drivers properly, and for the others I
know work-arounds to get things to work. And 0.9.11 has been delayed for too
long already. Also I need testing from a bigger audience. And it is not so
much 0.9.11 that is buggy, it is the code it is based on. ‘Glitch-free’ PA 0.9.11 is going to be part of Fedora 10. Fedora has always been more bleeding edge than other distributions. Picking 0.9.11 just like that for an ‘LTS’ release, however, might not be a good idea.

So, please bear with me when I release 0.9.11. Snapshots have already
been available in Rawhide for a while, and hell didn’t freeze over.

The Distributions’ Role in the Game

Some distributions did a better job adopting PulseAudio than others. On the
good side I certainly have to list Mandriva, Debian[3], and
Fedora[4]. OTOH Ubuntu didn’t exactly do a stellar job. They didn’t
do their homework. Adopting PA in a distribution is a fair amount of work,
given that it interfaces with so many different things at so many different
places. The integration with other systems is crucial. The information was all
out there, communicated on the wiki, the mailing lists and on the PA IRC
channel. But if you don’t join and hang around on any of them, then you won’t get the memo. To my surprise, when Ubuntu adopted PulseAudio they moved it into one of their ‘LTS’ releases right away [5]. Which I guess can be called gutsy — against the background that I work for Red Hat and PulseAudio is not part of RHEL at this time. I get a lot of flak from Ubuntu users, and I am pretty sure the vast majority of it is undeserved and not my fault.

Why Jeffrey’s distro of choice (SUSE?) didn’t package pavucontrol 0.9.6
although it was released months ago I don’t know. But there’s certainly no
reason to whine about that to me and bash me for it.

Having said all this — it’s easy to point to other software’s faults or
other people’s failures. So, admitting this, PulseAudio is certainly not
bug-free, far from it. It’s a relatively complex piece of software
(threading, real-time, lock-free, sensitive to timing, …), and every piece of
software has its bugs. In some workloads they might be easier to find than in
others. And I am working on fixing those which are found. I won’t forget any
bug report, but the order and priority in which I work on them is still mostly
up to me, I guess, right? There’s still a lot of work to do in desktop audio;
it will take some time to get things complete and completely right.

Calls for “audio should just work ™” are often heard. But if you don’t
want to stick with a sound system that was state of the art in the ’90s for
all time, then I fear things *will have* to break from time to time. And
Jeffrey, I have no idea what you are actually hacking on. Some people
mentioned something about Evolution. If that’s true, then quite honestly,
“email should just work”, too, shouldn’t it? Evolution is not exactly
famous for its legendary bug-freeness and stability, or did I miss something?
Maybe you should be the one to start making things “just work”, especially
since Evolution has been around for much longer already.

Back to Work

Now that I have responded to Jeffrey’s FUD I think we can all go back to work
and end this flamefest! I wish people would actually try to understand
things before writing an insulting rant — without the slightest clue — but
with words like “clusterfuck”. I’d like to thank all the people who commented
on Jeffrey’s blog and basically already said what I have written here.

So, now I am off to hack on PulseAudio a bit more — or should I say, in
Jeffrey’s words: on my clusterfuck that is an epic fail and that no desktop
user needs?

Footnotes

[1] BTW ‘glitch-free’ is nothing I invented; other OSes have been doing
something like this for quite a while (Vista, Mac OS). On Linux, however,
PulseAudio is the first and only implementation (at least to my knowledge).

[2] In fact, Flash 9 cannot be made to work fully on PulseAudio.
This is because the way Flash destructs its driver backends is racy.
Unfixably racy, from external code. Jeffrey complained about Flash instability
in his second post. This is unfair to PulseAudio, because I cannot fix this.
This is like complaining that X crashes when you use the binary-only
fglrx driver.

[3] To Debian’s standards at least. Since development of Debian is
very distributed, the integration of a system such as PulseAudio is much more
difficult, since it touches so many different packages in the system that are
kind of the private property of a lot of different maintainers with different
views on things.

[4] I maintain the Fedora stuff myself, so I might be a bit biased on this one… 😉

[5] I guess Ubuntu sees that this was a bit too much too early, too.
At least that’s how I understood my invitation to UDS in Prague. Since that
summit I haven’t heard anything from them anymore, though.

Being Smart

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/being-smart.html

Last weekend I set myself the task of writing an ATA S.M.A.R.T. (i.e. hard
disk health monitoring) reader and parser. After spending some time reading
all kinds of T13 and T10 docs and a bit of hacking, I now present to
you the following new software:

  • libatasmart: a lean, small and clean implementation of an ATA S.M.A.R.T.
    reading and parsing library. It’s fairly comprehensive; however, I only
    support a subset of the full S.M.A.R.T. set of functions: those parts which
    made sense to me, not the esoteric stuff. Here’s the API and here’s the
    README.
  • skdump: a little tool that produces output similar to smartctl
    -a, but uses libatasmart.
  • sktest: a little tool for starting/aborting S.M.A.R.T. self-tests, based on
    libatasmart.
  • gnome-disk-health-service: a little wrapper around
    libatasmart that exports its entire functionality via D-Bus, so that
    unprivileged processes can introspect a drive’s health records, including
    temperature, number of bad sectors and suchlike. This is written in Vala,
    which BTW is awesome for doing D-Bus services. Actually, after having done
    this once I really hope I will never have to write a D-Bus server without
    Vala again. I also wrote a Vala .vapi file for libatasmart which is shipped
    in its tarball.
  • gnome-disk-health: a little tool that reads the S.M.A.R.T.
    data from g-d-h-s and presents it in a pretty dialog. Includes support for
    viewing attributes and starting self-tests and stuff. Also written in
    Vala.

Why? You might ask what the point of all this stuff is when
smartmontools already
exists. What I’d like to see on future GNOME desktops is that as soon as a
disk starts to fail, a notification bubble pops up warning the user about this
fact, and suggesting that he make backups and replace the disk. For a tight
integration into the desktop, a S.M.A.R.T. implementation that is small, not
C++, and a library (i.e. embeddable into other software with a sane interface)
is highly preferable. Also, stuff like distribution installers should link
against libatasmart to warn the user about old and defective disks
before he even starts the installation on them. (Hey, anaconda developers! That means you! It’s a tiny library, and all you need to do is a single call: int sk_disk_smart_status(SkDisk *d, SkBool *good);)
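
For illustration, a rough sketch of what such a check might look like follows.
Only sk_disk_smart_status() is quoted above; sk_disk_open(),
sk_disk_smart_read_data() and sk_disk_free() are my assumptions about how the
rest of the libatasmart API fits together:

    /* Hypothetical sketch: warn if a disk reports a bad S.M.A.R.T. status.
     * Only sk_disk_smart_status() is quoted in the post; the other calls are
     * assumed to exist in libatasmart roughly as shown. Link with -latasmart. */
    #include <stdio.h>
    #include <atasmart.h>

    int main(int argc, char *argv[]) {
        SkDisk *d;
        SkBool good;
        const char *dev = argc > 1 ? argv[1] : "/dev/sda";

        if (sk_disk_open(dev, &d) < 0) {
            perror("sk_disk_open");
            return 1;
        }

        /* Read the S.M.A.R.T. data off the disk, then query the overall verdict. */
        if (sk_disk_smart_read_data(d) < 0 ||
            sk_disk_smart_status(d, &good) < 0) {
            perror("smart");
            sk_disk_free(d);
            return 1;
        }

        printf("%s: %s\n", dev, good ? "disk looks healthy" :
               "disk reports imminent failure -- back up your data!");

        sk_disk_free(d);
        return 0;
    }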

Please note that I certainly don’t plan to replace smartmontools.
libatasmart will always implement only a subset of S.M.A.R.T. If you want
the full set of functionality then please refer to smartmontools.

Where’s this going? I plan to fully maintain libatasmart
(including skdump and sktest) for the future. However,
g-d-h and g-d-h-s will probably just bitrot in my repository
— unless someone else wants to pick them up and maintain them. The reason my
further interest in those tools is rather limited is that in the long run we
will hopefully see davidz’s DeviceKit-disks (screenshot)
changed to use this library for health monitoring. Then DK-d will export the
S.M.A.R.T. info on the bus, and a separate daemon would not be necessary anymore.
DK-d provides a single interface for all kinds of health parameters for
storage, including RAID health and suchlike. I thus think this is the way
forward, and not g-d-h-s. (That should, of course, not hinder anyone from
stepping up and taking over maintainership of g-d-h/g-d-h-s if he wants to.
There might be good reasons for doing so. Maybe because you need something to
do, or because you want a S.M.A.R.T. solution for the desktop now, and don’t
want to wait until DeviceKit gets pushed into all the distros.)

So, here’s where you can get this stuff:

git://git.0pointer.de/libatasmart.git
git://git.0pointer.de/gnome-disk-health.git

Browse the GIT repos.

I will roll a 0.1 tarball of libatasmart soon. I’d be thankful if people could
run skdump on their disks and check whether its output is basically the same
as smartctl -a’s — especially people with big-endian machines.

Of course the most important part of a software announcement is always the screenshot:

Smart-Ass!

return -ETOOMANYDOTS;

Playing with Cairo

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/fring.html

Play around with Cairo: Check!

One thing that has been sitting on my TODO list for a very long
time was playing around with Cairo. No longer! Yesterday I spent a
little time hacking on a Cairo-based equivalent of KDE’s Filelight (which
BTW is one of the two programs that KDE has but GNOME really lacks,
the other being KCacheGrind). The
result after two hours is this:

Fring Screenshot

This screenshot shows the development tree of my Syrep tool.

This tool definitely has nicer anti-aliased graphics than
Filelight, doesn’t it? The source code is here: fring.py. Anyone
interested in turning this into a proper GNOME application?
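
For the curious, a Filelight-style chart is essentially a stack of annular
sectors. fring.py draws them via Cairo’s Python bindings; the following is a
small, self-contained C sketch of the same Cairo calls, with radii, angles and
colours made up purely for illustration:

    /* Minimal sketch of the Filelight/fring drawing primitive: fill one
     * annular sector (a ring segment) with Cairo and write it to a PNG.
     * Compile with: cc ring.c $(pkg-config --cflags --libs cairo) -lm */
    #include <math.h>
    #include <cairo.h>

    /* Fill the ring segment between radius r1 and r2, from angle a1 to a2
     * (radians), centered at (cx, cy). */
    static void ring_segment(cairo_t *cr, double cx, double cy,
                             double r1, double r2, double a1, double a2) {
        cairo_new_path(cr);
        cairo_arc(cr, cx, cy, r2, a1, a2);          /* outer edge, forward  */
        cairo_arc_negative(cr, cx, cy, r1, a2, a1); /* inner edge, backward */
        cairo_close_path(cr);
        cairo_fill(cr);
    }

    int main(void) {
        cairo_surface_t *s = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 256);
        cairo_t *cr = cairo_create(s);

        /* White background. */
        cairo_set_source_rgb(cr, 1, 1, 1);
        cairo_paint(cr);

        /* One "directory": a quarter of the innermost ring ... */
        cairo_set_source_rgb(cr, 0.8, 0.3, 0.2);
        ring_segment(cr, 128, 128, 40, 70, 0, M_PI / 2);

        /* ... and one of its "children" on the next ring out. */
        cairo_set_source_rgb(cr, 0.3, 0.5, 0.8);
        ring_segment(cr, 128, 128, 70, 100, 0, M_PI / 4);

        cairo_surface_write_to_png(s, "fring-sketch.png");
        cairo_destroy(cr);
        cairo_surface_destroy(s);
        return 0;
    }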

Introducing the Polypaudio Volume Control

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/pavucontrol.html

The result of a few hours of hacking:

pavucontrol screenshot

pavucontrol can not only control the volume of the hardware devices of the Polypaudio sound server but also of all playback streams separately, much like the new Windows Vista volume control application.

Get the Polypaudio Volume Control while it is hot.

On a side note I released updated versions of both the Polypaudio Volume Meter and the Polypaudio Manager which are compatible with Polypaudio 0.8.

Introducing nss-myhostname

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/nss-myhostname.html

I have been doing a lot of embedded Linux work lately. The machines we use configure their hostname depending on some external configuration options. They boot from a CF card, which is mostly mounted read-only. Since the hostname changes often but we wanted to use sudo, we had a problem: sudo requires the local host name to be resolvable using gethostbyname(). On Debian this is usually done by patching /etc/hosts correctly. Unfortunately that file resides on a read-only partition. Instead of hacking some ugly symlink-based solution I decided to fix it the right way and wrote a tiny NSS module which does nothing more than map the hostname to the IP address 127.0.0.2 (and back). (That IP address is on the loopback device, but is not identical to localhost.)
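
To make the mechanism concrete, here is a rough, hypothetical sketch of the
forward-lookup half of such a module. The real nss-myhostname also handles the
reverse mapping, further entry points and proper error handling; the buffer
layout below is my own simplification:

    /* Hypothetical sketch of the core of an NSS module like nss-myhostname:
     * resolve the machine's own hostname to 127.0.0.2. This only shows the
     * gethostbyname hook glibc dispatches to. Build as a shared object named
     * libnss_myhostname.so.2 and add "myhostname" to the hosts line in
     * /etc/nsswitch.conf. */
    #include <nss.h>
    #include <netdb.h>
    #include <errno.h>
    #include <string.h>
    #include <strings.h>
    #include <unistd.h>
    #include <sys/socket.h>

    enum nss_status _nss_myhostname_gethostbyname_r(
            const char *name,
            struct hostent *result,
            char *buffer, size_t buflen,
            int *errnop, int *h_errnop) {

        char hn[256];
        size_t l;
        char **r_aliases, **r_addr_list, *r_addr, *r_name;

        if (gethostname(hn, sizeof(hn)-1) < 0) {
            *errnop = errno;
            *h_errnop = NO_RECOVERY;
            return NSS_STATUS_UNAVAIL;
        }
        hn[sizeof(hn)-1] = 0;

        /* Only answer queries for the local hostname. */
        if (strcasecmp(name, hn) != 0) {
            *errnop = ENOENT;
            *h_errnop = HOST_NOT_FOUND;
            return NSS_STATUS_NOTFOUND;
        }

        /* Carve the answer out of the caller-supplied buffer:
         * empty alias list | address list | raw IPv4 address | name string */
        l = strlen(name) + 1;
        if (buflen < 3 * sizeof(char*) + 4 + l) {
            *errnop = ERANGE;
            *h_errnop = NETDB_INTERNAL;
            return NSS_STATUS_TRYAGAIN;
        }

        r_aliases = (char**) buffer;            /* just a NULL terminator */
        r_aliases[0] = NULL;

        r_addr_list = r_aliases + 1;            /* one address plus NULL  */
        r_addr = (char*) (r_addr_list + 2);
        r_name = r_addr + 4;

        r_addr[0] = 127; r_addr[1] = 0; r_addr[2] = 0; r_addr[3] = 2;  /* 127.0.0.2 */
        r_addr_list[0] = r_addr;
        r_addr_list[1] = NULL;
        memcpy(r_name, name, l);

        result->h_name = r_name;
        result->h_aliases = r_aliases;
        result->h_addrtype = AF_INET;
        result->h_length = 4;
        result->h_addr_list = r_addr_list;

        return NSS_STATUS_SUCCESS;
    }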

Get nss-myhostname while it is hot!

BTW: This tool I wrote is pretty useful on embedded machines too, and certainly easier to use than setterm -dump 1 -file /dev/stdout | fold -w 80. And it does color too. And looping. And is much cooler anyway.
