Im Zentrum der Macht (“At the Center of Power”)

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/photos/im-zentrum-der-macht.html

The Government District in Berlin, with the Reichstag and the offices of the members of the Bundestag:

Im Zentrum der Macht

The Diana Temple in the Hofgarten in Munich:

Hofgarten

The Königsplatz in Munich:

Königsplatz

The Residenz in Munich:

Residenz

View from the tower of Old St. Peter in Munich:

St. Peter

Green pastures of Hamburg-Wohldorf:

Wohldorfer Feld

All my panoramic photos. (Warning! Page contains a lot of oversized, badly scaled images.)

Re: Avahi – what happened on Solaris..?

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/project-indiana-part2.html

In response to Darren Kenny:

  • On Linux (and FreeBSD) nss-mdns has been providing decent low-level
    integration of mDNS at the nsswitch level for ages (see the
    nsswitch.conf sketch after this list). In fact it even predates Avahi
    by a few months. Porting it to Solaris would have been almost trivial.
    And Sun engineers even asked about nss-mdns, so I am quite sure that
    Sun knew about this.
  • You claim that our C API was internal? I wonder who told you that. I
    definitely did not. The API has been available on the Avahi web site
    for ages and is relatively well documented [1]. I wonder how anyone
    could ever come to the conclusion that it was “internal”. Regarding
    API stability: yes, I said that we make no guarantees about API
    stability — but I also said it was a top priority for us to keep the
    API compatible. I think that is the best you can get from any project
    of the Free Software community. If there is something in an API that
    we later learn is irrecoverably broken or stupid by design, then we
    take the freedom to replace it or remove it entirely. Oh, and even
    Sun does things like that in Java. Just think of the Java 1.x
    java.lang.Thread.stop() API.
  • nss-mdns does not make any use of D-Bus. It never did, it never will.
  • GNOME never formally made the decision to go Avahi AFAIK. It’s just what
    everyone uses because it is available on all distributions. Also, a lot of GNOME software
    can also be compiled against HOWL/Bonjour.
  • Implementing the Avahi API on top of the Bonjour API is just crack.
    For a crude comparison: this is like implementing a POSIX
    compatibility layer on top of the DOS API. Crack. Just crack. There
    is a lot of functionality you can *never* emulate in any reasonable
    way on top of the current Bonjour API: properly integrated IPv4+IPv6
    support, AVAHI_BROWSER_ALL_FOR_NOW, the fact that the Avahi API is
    transaction-based, all the different flag definitions, and a lot
    more. From a technical perspective emulating Avahi on top of Bonjour
    is not feasible, while the other way round is perfectly feasible.
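
To make the nsswitch-level integration from the first point concrete:
nss-mdns is typically hooked into the resolver through the hosts line of
/etc/nsswitch.conf, roughly like this (module names and ordering vary
slightly between distributions):

    hosts: files mdns_minimal [NOTFOUND=return] dns

With a line like that, every application that resolves host names through
the libc resolver transparently gets .local lookups, without linking
against any mDNS library at all, which is exactly why this integration
point matters.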

Let’s also not forget that Avahi comes with a Bonjour compatibility layer,
which gets almost any Bonjour app working on top of Avahi. And in contrast
to your Avahi-on-top-of-Bonjour stuff, it is not inherently borked. Yes,
our Bonjour compatibility layer is not perfect, but any incompatibility
that is still left should be very easy to fix. And the API of that layer
is of course as much set in stone as the upstream Bonjour API. Oh, and you
wouldn’t have to run two daemons instead of just one. And you would only
need to ship and maintain a single mDNS package. Oh, and the compatibility
layer would only be needed for the few remaining applications that still
use Bonjour exclusively, and not by the majority of applications.
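
To illustrate what that compatibility layer buys you, here is a minimal
sketch of a service browser written purely against the Bonjour dns_sd.h
API. Code like this should build unchanged against Apple’s mDNSResponder
or against Avahi’s avahi-compat-libdns_sd; the "_http._tcp" service type
is an arbitrary choice and the error handling is trimmed for brevity:

    /* browse.c: a minimal DNS-SD browser using only the Bonjour API.
     * Build against Avahi's compatibility layer with e.g.:
     *   cc browse.c $(pkg-config --cflags --libs avahi-compat-libdns_sd)
     */
    #include <dns_sd.h>
    #include <stdint.h>
    #include <stdio.h>

    static void browse_reply(DNSServiceRef ref, DNSServiceFlags flags,
                             uint32_t ifindex, DNSServiceErrorType err,
                             const char *name, const char *type,
                             const char *domain, void *userdata) {
        (void) ref; (void) ifindex; (void) userdata;
        if (err != kDNSServiceErr_NoError)
            return;
        /* kDNSServiceFlagsAdd distinguishes services appearing from
         * services vanishing. */
        printf("%s %s.%s%s\n",
               (flags & kDNSServiceFlagsAdd) ? "NEW" : "GONE",
               name, type, domain);
    }

    int main(void) {
        DNSServiceRef ref;

        /* Browse for HTTP servers announced via mDNS/DNS-SD. */
        if (DNSServiceBrowse(&ref, 0, 0, "_http._tcp", NULL,
                             browse_reply, NULL) != kDNSServiceErr_NoError) {
            fprintf(stderr, "DNSServiceBrowse() failed\n");
            return 1;
        }

        /* Dispatch replies to the callback, one event at a time. */
        while (DNSServiceProcessResult(ref) == kDNSServiceErr_NoError)
            ;

        DNSServiceRefDeallocate(ref);
        return 0;
    }

The reverse direction, emulating the richer Avahi API on top of this one,
is the part that cannot work, for the reasons listed above.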

So, in effect you chose Bonjour because of its API and added some Avahi-ish
API on top, and this all is totally crackish. If you had done it the other
way round you would have gotten both APIs as well, but the overall solution
would not have been totally crackish. And let’s not forget that Avahi is
much more complete than Bonjour. (Maybe except wide-area support,
Federico!).

Anyway, my original rant was not about the way Sun makes its decisions but
just about the fact that your Avahi-to-Bonjour bridge is … crack! And that
is what it remains.

Wow, six times crack in a single article.

Footnotes:

[1] For a Free Software API at least.

Project Indiana

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/project-indiana.html

Dear Sun Microsystems,

I wonder if the mythical “Project Indiana” consists of patches like these,
which among other strange things make the Avahi daemon just a frontend to
the Apple Bonjour daemon. Given that Avahi is a superset of Bonjour in
both functionality and API, this is just so ridiculous — I haven’t seen
such a monstrous crack in quite a while.

Sun, you don’t get it, do you? That way you will only reach the
crappiness, bugginess and brokenness of Windows, not the power and
usability of Linux.

Oh, and please rename that “fork” of Avahi to something completely
different — because it actually is exactly that: something completely
different than Avahi.

Love,
     Lennart

Virtually Reluctant

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2007/06/12/virtually-reluctant.html

Way back when User
Mode Linux (UML)
was the “only way” the Free Software
world did anything like virtualization, I was already skeptical.
Those of us who lived through the coming of age of Internet security
— with a remote root exploit for every day of the week —
became obsessed with the chroot and its ultimate limitations. Each
possible upgrade to a better, more robust virtual environment was met
with suspicion on the security front. I joined the many who doubted
that you could truly secure a machine that offered disjoint services
provisioned on the same physical machine. I’ve recently revisited
this position. I won’t say that Xen has completely changed my mind,
but I am open-minded enough again to experiment.

For more than a decade, I have used chroots as a mechanism to segment a
service that needed to run on a given box. In the old days
of ancient BINDs and sendmails, this was often the best we could do
when living with a program we didn’t fully trust to be clean of
remotely exploitable bugs.

I suppose those days gave us all a rather strange sense of computer
security. I constantly have the sense that two services running on the
same box always endanger each other in some fundamental way. It therefore
took me a while before I was comfortable with the resurgence of
virtualization.

However, what ultimately drew me in was the simple fact that modern
hardware is just too darn fast. It’s tough to get a machine these
days that isn’t ridiculously overpowered for most tasks you put in
front of it. CPUs sit idle; RAM sits empty. We should make more
efficient use of the hardware we have.

Even with that reality, I might have given up if it wasn’t so easy. I
found a good link about Debian on Xen, a useful entry in the Xen Wiki, and
some good network and LVM examples. I also quickly learned how to use
RAID/LVM together for disk redundancy inside Xen instances. I even got
bonded ethernet working with some help to add additional network
redundancy.

So, one Saturday morning, I headed into the office, and left that
afternoon with two virtual servers running. It helped that Xen 3.0 is
packaged properly for recent Ubuntu versions, and a few obvious
apt-get installs get you what you need on edgy and
feisty. In fact, I only struggled (and only just a bit) with the
network, but quickly discovered two important facts:

  • VIF network routing, in my opinion, is a bit easier to configure and
    more stable than VIF bridging, even if routing is a bit slower.
  • sysctl -w net.ipv4.conf.DEVICE.proxy_arp=1 is needed to make the
    network routing down into the instances work properly (see the
    configuration sketch after this list).
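
For reference, here is a rough sketch of what that routed setup amounts to
in configuration. The file paths match a stock Xen 3.0 install, but DEVICE
is a placeholder for your Dom0’s outward-facing interface; treat this as
an illustration, not a tested recipe:

    # /etc/xen/xend-config.sxp: switch from bridged to routed networking
    (network-script network-route)
    (vif-script    vif-route)

    # /etc/sysctl.conf: make the proxy-ARP setting survive reboots;
    # replace DEVICE with the interface the instances are routed through
    net.ipv4.conf.DEVICE.proxy_arp = 1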

I’m not completely comfortable yet with the security of virtualization.
Of course, locking down the Dom0 is absolutely essential, because there
lie the keys to your virtual kingdom. I lock it down with iptables so that
only SSH from a few trusted hosts comes in, and even services as
fundamental as DNS can only be had from a few trusted places. But, I still
find myself imagining ways people can bust through the instance kernels
and find their way to the hypervisor.

I’d really love to see a strong line-by-line code audit of the
hypervisor and related utilities to be sure we’ve got something we can
trust. However, in the meantime, I certainly have been sold on the
value of this approach, and am glad it’s so easy to set up.

DMI-based Autoloading of Linux Kernel Modules

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/dmi-based-module-autoloading.html

So, you’ve always been annoyed by the fact that you have to load all those
laptop, i2c, hwmon, hdaps Linux kernel modules manually without having spiffy
udev doing that work for you automagically? No more! I just sent a patch to LKML which adds DMI/SMBIOS-based
module autoloading to the Linux kernel.

Hopefully this patch will be integrated into Linus’ kernel shortly. As
soon as that happens udev will automatically recognize your laptop/mainboard
model and load the relevant modules.

Module maintainers, please add MODULE_ALIAS lines to your kernel modules
to make sure that they are autoloaded using this new mechanism as soon as
it gets committed to Linus’ kernel.
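
To give an idea of what such a line looks like, here is a hedged sketch;
the vendor and product strings are made up for illustration, and the
pattern assumes the dmi: modalias scheme, matching on the system vendor
(svn) and product name (pn) fields while wildcarding everything else:

    /* Hypothetical example for a laptop driver: have udev auto-load
     * this module whenever the DMI/SMBIOS data reports a matching
     * system vendor and product name. */
    MODULE_ALIAS("dmi*:svn*ExampleVendor*:pn*ExampleBook1234*");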

For a fully automatically configured system, only ACPI-DSDT-based module
autoloading is still missing, i.e. loading the “battery” module only when
an ACPI battery is actually around.

Tools for Investigating Copyright Infringement

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2007/05/08/infringement.html

Nearly all software developers know that software is covered by
copyright. Many know that copyright covers the expression of an idea
fixed in a medium (such as a series of bytes), and that the copyright
rules govern the copying, modifying and distributing of the work.
However, only a very few have considered the questions that arise when
trying to determine if one work infringes the copyright of
another.

Indeed, in the world of software freedom, copyright is seen as a system
we have little choice but to tolerate. Many Free Software developers
dislike the copyright system we have, so it is little surprise that
developers want to spend minimal time thinking about it.
Nevertheless, the copyright system is the foremost legal framework
that governs software1, and we have to
live within it for the moment.

My fellow developers have asked me for years what constitutes copyright
infringement. In turn, for years, I have asked the lawyers I worked with
to give me guidelines to pass on to the Free Software development
community. I’ve discovered that it’s difficult to adequately describe the
nature of copyright infringement to software developers. While it is easy
to give pathological examples of obvious infringement (such as taking
someone’s work, removing their copyright notices and distributing it as
your own), it quickly becomes difficult to give definitive answers, in
many real-world situations, as to whether some particular activity
constitutes infringement.

In fact, in nearly every GPL enforcement case that I’ve worked on in my
career, the fact that infringement had occurred was never in dispute. The
typical GPL violator started with a work under GPL, made some
modifications to a small portion of the codebase, and then distributed the
whole work in binary form only. It is virtually impossible to act in that
way and still not infringe the original copyright.

Usually, the cases of “hazy” copyright infringement come up
the other way around: when a Free Software program is accused of
infringing the copyright of some proprietary work. The most famous
accusation of this nature came from Darl McBride and his colleagues at
SCO, who claimed that something called “Linux” infringed
his company’s rights. We now know that there was no copyright
infringement (BTW, whether McBride meant to accuse the GNU/Linux
operating system or the kernel named Linux, we’ll never actually
know). However, the SCO situation educated the Free Software
community that we must strive to answer quickly and definitively when
such accusations arise. The burden of proof is usually on the
accuser, but being able to make a preemptive response to even the hint
of an allegation is always advantageous when fighting FUD in the court
of public opinion.

Finally, issues of “would-be” infringement detection come
up for companies during due diligence work. Ideally, there should be
an easy way for companies to confirm which parts of their systems are
derivatives of Free Software systems, which would make compliance with
licenses easy. A few proprietary software companies provide this
service; however there should be readily available Free Software tools
(just as there should be for all tasks one might want to perform with a
computer).

It is not so easy to create such tools. Copyright infringement is not
trivially defined; in fact, most non-trivial situations require a
significant amount of both technical and legal judgement. Software
tools cannot make a legal conclusion regarding copyright infringement.
Rather, successful tools will guide an expert’s analysis of a
situation. Such systems will immediately identify the rarely-found
obvious indications of infringement, bring to the forefront facts that
need an exercise of judgement, and leave everything else in the
background.

In this multi-part series of blog entries, I will discuss the state of the
art in these Free Software systems for infringement analysis and what
plans our community should make for the creation of Free systems that
address this problem.

1 Copyright is the legal
system that non-lawyers usually identify most readily as governing
software, but the patent system (unfortunately) also governs software
in many countries, and many non-Free Software licenses (and a few of
the stranger Free Software ones) also operate under contract law as
well as copyright law. Trade secrets are often involved with software
as well. Nevertheless, in the Software Freedom world, copyright is
the legal system of primary attention on a daily basis.

Walnut Hills, AP Computer Science, 1998-1999

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2007/05/05/walnut-hills-1998.html

I taught AP Computer Science at Walnut Hills High School in Cincinnati,
OH during the 1998-1999 school year.

I taught this course because:

  • They were desperate for a teacher. The rather incompetent teacher who
    was scheduled to teach the course quit (actually, frighteningly
    enough, she got a higher paying and higher ranking job in a nearby
    school system) a few weeks before the school year was to start.
  • The environment was GNU/Linux using GCC‘s C++ compiler. I went to the
    job interview because a mother of someone in the class begged me to
    go, but I was going to walk out as soon as I saw I’d have to teach on
    Microsoft (which I assumed it would be). My jaw literally dropped when
    I saw what was actually there: the students had built their own lab,
    which even got covered in the Cincinnati Post. I was quite amazed that
    some of the most brilliant high school students I’ve ever seen were
    assembled there in one classroom.

It became quite clear to me that I owed it to these students to teach the
course. They’d discovered Free Software before the boom, and built their
own lab despite the designated CS teacher obviously knowing a hell of a
lot less about the field than they did. There wasn’t a person qualified
and available, in my view, in all of Cincinnati to teach the class. High
school teacher wages are traditionally pathetic. So, I joined the
teachers’ union and took the job.

Doing this work delayed my thesis and graduation from the Master’s
program at University of Cincinnati for yet another year, but it was
worth doing. Even almost a decade later, it ranks in my mind on the
top ten list of great things I’ve done in my life, even despite all
the exciting Free Software work I’ve been involved with in my
positions at the FSF and the Software Freedom Conservancy.

I am exceedingly proud of what my students have accomplished. It’s
clear to me that somehow we assembled an incredibly special group of
Computer Science students; many of them have gone on to make
interesting contributions. I know they didn’t always like that I
brought my Free Software politics into the classroom, but I think we
had a good year, and their excellent results on that AP exam showed
it. Here are a few of my students from that year who have a public
online life:

Ben Barker
Ben Cooper
Coleman Kane
Carl McTague
Bill Nagel
Shimon Rura

If you were my student at Walnut Hills and would like a link here, let
me know and I’ll add one.

On Using Hugin

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/photos/hugin.html

By popular request, here are a few suggestions on how to make the best use of Hugin for stitching your panoramas. You probably should have read some of the tutorials at Hugin’s web site before reading these suggestions.

Use manual exposure settings on your camera. On Canon cameras this means
you should be using the “M” mode. Make sure to choose an exposure time and
aperture such that the entire range you plan to take photos of is well
exposed. If you don’t know how to use the “M” mode of your camera, you
should probably be reading an introduction to photography now. The reason
for setting exposure values manually is that you want the same exposure
settings on all photos of your series.

Disable automatic white balance mode. You probably should have done that
anyway. “Semi-automatic” white balance mode is probably OK (i.e. selecting
the white balance from one of the pre-defined profiles, such as “Daylight”,
“Cloudy”, …)

Also manually set the ISO level. You probably should be doing that anyway.

Using autofocus is probably OK.

Try not to move around too much while taking the photo series; Hugin doesn’t like that too much. It’s OK to move a little, but you should do all the shots for your panorama from a single point, not while moving along a circle, line, or even a Bézier curve.

When doing 360° panoramas it is almost guaranteed (at least in northern
countries) that you will have the sun as back light somewhere. That will
overexpose the panorama in that direction and lower the contrast in that
area. To work against this, you might want to do your panorama shots at
noon in summer, when the sun is near its zenith. Gray-scaling the shot and
doing some other kind of post-processing might be a way to ease this
problem.

To work against chromatic aberration it is a good idea to use large overlap areas, and to do your shots in “landscape” rather than “portrait” orientation (so that only the center of each image is used in the final image).

Running hugin/enblend on an encrypted $HOME (like I do) won’t make you particularly happy.

Pass -m 256 to enblend. At least on my machine (with limited RAM and dm-crypt) things are a lot faster that way.
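
For example, an invocation might look like this (the input file names are
placeholders for whatever Hugin’s remapping step produced):

    enblend -m 256 -o panorama.tif remapped0000.tif remapped0001.tif

Here -m caps the amount of memory enblend uses for its image cache, which
is what helps on machines with little RAM.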

Sometimes moving things (e.g. people) show up twice (or even more times) in the resulting panorama. Sometimes that is funny, sometimes it is not. To remove them, open the separate TIFF files in Gimp before feeding them into enblend, and cut away the things you want to remove from all but one of these images. Then pass them on to enblend.

If, no matter how many control points you set in Hugin, the images just don’t fit together, you should probably run “Optimize Everything” instead of just “Optimize Positions”.

When doing your shots, make sure to hold the camera at the same height the whole time, to avoid having to cut away too much of the image in the final post-processing. This is sometimes quite difficult, especially if you have images with no clear horizon.

Remember that you can set horizontal and vertical lines as control points
in Hugin! That is good for straightening things out and making sure that
vertical things are actually vertical in the resulting panorama.
