Tag Archives: Jobs

I Think I Just Got Patented.

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/02/02/took-our-jobs.html

I could not think of anything but the South Park quote, They took our jobs!,
when I read Black Duck’s announcement today of their patent, Resolving License
Dependencies For Aggregations of Legally-Protectable Content.

I’ve read through the patent, from the point of view of someone skilled
in this particular art. In fact, I’m specifically skilled in two
distinct arts related to this patent: computer programming and Free
Software license compatibility analysis. It’s from that perspective
that I took a look at this patent.

(BTW, the thing to always remember about reading patents is that the
really significant part isn’t the abstract, which often contains
pie-in-the-sky prose about what the patent covers. The claims are the
real details of the so-called “invention”.)

So, when I look closely at these claims, I am appalled to discover this
patent claims, as a novel invention, things that I’ve done regularly,
with a mix of my brain and a computer, since at least 1999. I quickly
came to the conclusion that this is yet another stupid patent granted by
the USPTO that it would be better to just ignore.

Indeed, ever since Amazon’s one-click patent, I’ve hated the inundation
of “look what stupid patent was granted today” Slashdot
items. I think it’s a waste of time, generally speaking, since the
USPTO is granting many stupid software patents every single day. If we
spend our time gawking and saying how stupid they are, we don’t get any
real work done.

But, the (likely obvious) reason this caught my attention is that the
patent covers activities I’ve done regularly for so long. It gives me
this sick feeling in my stomach to read someone else claiming as an
invention something I’ve done and considered quite obvious for more than
a decade.

I’m not a patent agent (nor do I want to be — spending a week of
my life studying for a silly exam to get some credential hasn’t been
attractive to me since I got my Master’s degree), but honestly, I can’t
see how this patented process isn’t obvious to everyone skilled in the
arts of FLOSS license evaluation and computer programming. Indeed, the
process described is so simple-minded that, in my view, it isn’t even worth
writing a dedicated software system to do it. With a few one-off 10-line Perl
programs and a few greps, I’ve had a computer assist me with processes like
this one many times since the late 1990s.
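
To give a flavor of what I mean, here is a purely illustrative shell sketch of that kind of one-off scan (the src/ directory and the license-name patterns are made up for the example; this is not any tool Black Duck or anyone else ships):

    # One-off scan: list files that mention common license names, so a human
    # can review the combinations by hand.  Paths and patterns are illustrative.
    grep -rilE 'General Public License|Apache License|BSD license' src/ > license-mentions.txt

    # Rough per-license-family file counts, as a quick overview before manual review.
    for pat in 'General Public License' 'Apache License' 'BSD license'; do
        printf '%s: ' "$pat"
        grep -rilE "$pat" src/ | wc -l
    done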

I do feel some shame that I’ve now contributed to the “hey,
everyone, let’s gawk at this silly pointless surely-invalid
patent” rant. I guess that I have new sympathy for website
designers who were so personally offended regarding the Amazon one-click
patent. I can now confirm first-hand: it does really feel different
when the patent claims seem close to an activity you’ve engaged in
yourself for many years prior to the patent application. That’s when the
horribleness of the software patent system really starts to hit home.

The saddest part, though, is that Black Duck again shows itself as a
company whose primary goal is to prey on people’s fear of software
freedom. They make proprietary software and acquire software patents
with the primary goal of scaring people into buying stuff they
probably don’t need. I’ve spent a lot more time working regularly on
FLOSS license compliance than anyone who has ever worked at Black Duck.
Simply put, coming into (and staying in) compliance is a much simpler
process than they say, and can be done easily without the use of
overpriced proprietary analysis of codebases.

Avahi/Zeroconf patch for distcc updated

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/avahi-distcc.html

I finally found the time to sit down and update my venerable Avahi/Zeroconf
patch for distcc. A patched distcc automatically discovers suitable compiler
servers on the local network, without the need to configure them manually.
(Announcement).

Here’s a quick HOWTO for using a patched distcc like this:

  • Make sure to start distccd (the server) with the new
    --zeroconf switch. This will make it announce its services on the
    network.
  • Edit your $HOME/.distcc/hosts and add +zeroconf. This
    magic string enables Zeroconf support in the client, i.e. it is expanded
    to the list of available, suitable distcc servers on your LAN.
  • Now set $CC to distcc gcc globally for your login
    sessions. This will tell all well-behaved build systems to use distcc
    for compilation (the kernel is one notable exception where this doesn’t work).
    Even better than setting $CC to distcc gcc is setting it to
    ccache distcc gcc, which enables ccache in addition to distcc, e.g. stick
    something like this in your ~/.bash_profile: export CC="ccache distcc gcc"
  • And finally, use make -j `distcc -j` instead of plain make
    to enable parallel building with the right number of concurrent processes.
    Setting $MAKEFLAGS properly is an alternative, but it is suboptimal if the
    evaluation is only done once at login time. (A consolidated shell sketch of
    these steps follows this list.)
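
Putting the steps together, here is a minimal shell sketch of the whole setup (the subnet passed to --allow, the daemon invocation, and the use of bash with ~/.bash_profile are illustrative assumptions; adjust them to your environment):

    # On each machine that should act as a compile server: announce via Zeroconf.
    # The --allow subnet is an illustrative assumption.
    distccd --daemon --zeroconf --allow 192.168.0.0/24

    # On the client: let distcc discover servers via Zeroconf.
    mkdir -p ~/.distcc
    echo '+zeroconf' >> ~/.distcc/hosts

    # Make well-behaved build systems use ccache + distcc in login sessions.
    echo 'export CC="ccache distcc gcc"' >> ~/.bash_profile

    # Build with a job count matching the discovered servers' CPUs.
    make -j `distcc -j`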

If this doesn’t work for you, then it is a good idea to run distcc
--show-hosts to get a list of discovered distcc servers. If this list
isn’t complete, then this is most likely due to mismatching GCC versions or
architectures. To check whether that’s the case, use avahi-browse -r
_distcc._tcp and compare the values of the cc_machine and
cc_version fields. Please note that different Linux distributions use
different GCC machine strings, which is expected since GCC is usually patched
quite a bit on the different distributions. This means that a Fedora distcc
(the client) will not find a Debian distccd (the server) and vice
versa. But again: that’s a feature, not a bug.
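
For example, a quick way to compare those fields across all discovered servers (avahi-browse’s -r resolves the services and -t makes it exit after the initial dump; the grep pattern is only an illustration, and the exact output layout may vary between Avahi versions):

    # Show resolved distcc services with their hostnames and TXT records,
    # which carry the cc_machine and cc_version fields.
    avahi-browse -r -t _distcc._tcp | grep -E 'hostname|cc_'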

The new -j and --show-hosts options for distcc are useful for non-zeroconf setups, too.

The patch will automatically discover the number of CPUs on remote machines
and make use of that information to better distribute jobs.

In short: Zeroconf support in distcc is totally hot, everyone should have it!

For more information, have a look at the announcement of my original
patch from 2004
(at that time for the historic HOWL Zeroconf daemon), or read the new
announcement linked above.

Distribution packagers! Please merge this new patch into your packages! It
would be a pity to withhold Zeroconf support in distcc from your users any
longer!

Unfortunately, Fedora doesn’t include any distcc packages. Someone (who’s not
me ;-)) should change that.

You like this patch? Then give me a kudo on ohloh.net. Now that I’ve earned a golden 10 (after kicking Larry Ewing from position 64; ha, take that, Mr. Ewing!), I need to make sure I don’t fall into silver oblivion again. 😉