At Least Motorola Admits It

Post Syndicated from Bradley M. Kuhn original

I’ve written before about the software freedom issues inherent in
Android/Linux. Summarized shortly: the software freedom community
is fortunate that Google released so much code under Free Software
licenses, but since most of the code in the system is Apache-2.0
licensed, we’re going to see a lot of proprietarized,
non-user-upgradable versions. In fact, there’s no Android/Linux
system that’s fully Free Software yet. (That’s
why Aaron Williamson and I try to keep
the Replicant project
going. We’ve focused on the HTC Dream and the Nexus One,
since they are the mobile devices closest to working with only Free
Software installed, and because they allow the users to put their own
firmware on the device.)

I was therefore intrigued to discover last night
(via mtrausch) a February blog post
by Lori Fraleigh, wherein Fraleigh clarifies Motorola’s opposition to
software freedom for its Android/Linux users:

We [Motorola] understand there is a community of developers interested in
… Android system development … For these developers, we
highly recommend obtaining either a Google ADP1 developer phone or a Nexus
One … At this time, Motorola Android-based handsets are intended for
use by consumers.

I appreciate the fact that Fraleigh and Motorola are honest in their
disdain for software developers. Unlike Apple — who tries to hide
how developer-unfriendly its mobile platform is — Motorola readily
admits that they seek to leave developers as helpless as possible,
refusing to share the necessary tools that developers need to upgrade
devices and to improve themselves, their community, and their software.
Companies like Motorola and Apple both seek to squelch the healthy hacker
tendency to make technology better for everyone. Now that I’ve seen
Fraleigh’s old blog post, I can at least give Motorola credit for
full honesty about these motives.

I do, however, find the implication of Fraleigh’s words revolting.
People who buy
the devices, in Motorola’s view, don’t deserve the right to improve
their technology. By contrast, I believe that software freedom should
be universal and that no one need be a “mere consumer” of
technology. I believe that every technology user is a potential
developer who might have something to contribute but obviously cannot if
that user isn’t given the tools to do so. Sadly, it seems, Motorola
believes the general public has nothing useful to contribute, so the
public shouldn’t even be given the chance.

But, this attitude is always true for proprietary software companies,
so there are actually no revelations on that point. Of more interest is
how Motorola was able to do this, given that Android/Linux (at least
most of it) is Free Software.

Motorola’s ability to take these actions is a consequence of a few
licensing issues. First, most of the Android system is under the
Apache-2.0 license (or, in some cases, an even more permissive license).
These licenses allow Motorola to make proprietary versions of what
Google released and sell them without source code or the ability for
users to install modified versions. That license decision is lamentable
(but expected, given Google’s goals for Android).

The even more lamentable licensing issue here regards Linux’s license,
the GPLv2.
Specifically, Fraleigh’s post claims:

The use of open source software, such as the Linux kernel … in a
consumer device does not require the handset running such software to be
open for re-flashing. We comply with the licenses, including GPLv2.

I should note that, other than Fraleigh’s assertion quoted above, I
have no knowledge one way or another if Motorola is compliant
with GPLv2 on its
Android/Linux phones. I don’t own one, have no plans to buy one, and
therefore I’m not in receipt of an offer for source regarding the
devices. I’ve also received no reports from anyone regarding possible
non-compliance. In fact, I’d love to confirm their compliance: please
get in touch if you have a Motorola Android/Linux phone and attempted to
install a newly compiled executable of Linux onto your phone.

I’m specifically interested in the installation issue because GPLv2
requires that any binary distribution of Linux (such as one on telephone
hardware) include both the source code itself and the “scripts used to
control compilation and installation of the executable”. So, if
Motorola wrote any helper programs or other software that installs Linux
onto the phones, then such software, under GPLv2, is a required part of
the complete and corresponding source code of Linux and must be
distributed to each buyer of a Motorola Android/Linux phone.
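To make this concrete, here is a purely hypothetical sketch of the kind of installation helper GPLv2 would treat as part of the complete and corresponding source. The script name, the use of fastboot, and the dry-run behavior are all my own assumptions for illustration, not anything Motorola is known to ship:

```shell
#!/bin/sh
# flash-kernel.sh: hypothetical sketch of an installation helper. If a
# vendor shipped Linux with a helper like this, GPLv2 would count it
# among the "scripts used to control compilation and installation of
# the executable" that must accompany the source.
set -eu

IMG="${1:-boot.img}"
[ -f "$IMG" ] || : > "$IMG"   # create a dummy image for this dry run

# Shown as a dry run; a real helper would execute these against a
# device connected in fastboot mode:
echo "fastboot flash boot $IMG"
echo "fastboot reboot"
```

A vendor that keeps a helper like this private, while shipping binaries it installed, would be withholding part of what GPLv2 calls the complete source code.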

If you’re surprised by that last paragraph, you’re probably not alone.
I find that many are confused regarding this GPLv2 nuance. I believe
the confusion stems from discussions during the GPLv3 drafting process
about this specific requirement. GPLv3
does indeed expand the requirement for the “scripts used to control
compilation and installation of the executable” into the concept
of Installation Information. Furthermore,
GPLv3’s Installation Information is much more expansive than
merely requiring helper software programs and the like.
GPLv3’s Installation Information includes any material,
such as an authorization key, that is necessary for installation of a
modified version onto the device.

However, merely because GPLv3 expanded installation
information requirements does not lessen GPLv2’s requirement of
such. In fact, in my reading of GPLv2 in comparison to GPLv3, the only
effective difference between the two on this point relates to
cryptographic device lock-down0. I do admit that under GPLv2, if you give
all the required installation scripts, you could still use cryptography
to prevent those scripts from functioning without an authorization key.
Some vendors do this, and that’s precisely why GPLv3 is written the way
that it is: we’d observed such lock-down occurring in the field, and
identified that behavior as a bug in GPLv2 that is now closed with
GPLv3. (Please see the footnote as to why I think I
previously erred in that now-deleted interpretation.)
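The effect of such cryptographic lock-down can be sketched in a few lines. This is my own illustration, not any vendor’s actual scheme: the device accepts only images whose signature matches a key the vendor never ships, so possession of every GPLv2-required script still doesn’t let you install a modified kernel:

```shell
#!/bin/sh
# Illustration only: a "bootloader" that verifies an HMAC over the
# image against a vendor-held key before accepting it.
set -eu

VENDOR_KEY="secret-held-only-by-the-vendor"   # never shipped to users

sign() {  # sign <image> <key>: HMAC-SHA256 over the image
    openssl dgst -sha256 -hmac "$2" -r "$1" | cut -d' ' -f1
}

bootloader_accepts() {  # <image> <signature>: checks the vendor key only
    [ "$(sign "$1" "$VENDOR_KEY")" = "$2" ]
}

printf 'vendor kernel' > vendor.img
printf 'user-built kernel' > user.img

# The vendor's own image passes; the user's rebuilt one does not,
# even though the user followed the shipped scripts exactly:
bootloader_accepts vendor.img "$(sign vendor.img "$VENDOR_KEY")" \
    && echo "vendor image: accepted"
bootloader_accepts user.img "$(sign user.img "users-own-key")" \
    || echo "user image: rejected"
```

The scripts can be complete and fully functional, and installation still fails without the key; that is the gap GPLv3's Installation Information language addresses.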

However, because of all that hype about GPLv3’s new Installation
Information definition, many simply forgot that GPLv2 isn’t
silent on the issue. In other words, GPLv3’s verbosity on the subject
led people to minimize the important existing requirements of GPLv2
regarding installation information.

As regular readers of this blog know, I’ve spent much of my time for
the last 12 years doing GPL enforcement. Quite often, I must remind
violators that GPLv2 does indeed require the “scripts used to control
compilation and installation of the executable”, and that candidate
source code releases missing the scripts remain in violation of GPLv2.
I sincerely hope that Android/Linux redistributors haven’t forgotten
this.

I have one final and important point to make regarding Motorola’s
February statement: I’ve often mentioned that the mobile industry’s
opposition to GPLv3 and to user-upgradable devices is for
their own reasons, and has nothing to do with regulators or other
outside entities preventing them from releasing such software. In their
blog post, Motorola tells us quite clearly that the community of
developers interested in … experimenting with Android system
development and re-flashing phones … [should obtain] either a
Google ADP1 developer phone or a Nexus One, both of which are intended
for these purposes. In other words, Motorola tacitly admits that
it’s completely legal and reasonable for the community to obtain such
telephones, and that, in fact, Google sells such devices. Motorola was
not required to put lock-down restrictions in place; rather,
they made a choice to prohibit users in this way. On this
point, Google chose to treat its users with respect, allowing them to
install modified versions. Motorola, by contrast, chose to make
Android/Linux as close to Apple’s iPhone as they could get away with.

So, the next time a mobile company tries to tell you that they just
can’t abide by GPLv3 because some third party (the FCC is their frequent
scapegoat) prohibits them, you should call them on their
FUD. Point out
that Google sells phones on the open market that provide
all Installation Information that GPLv3 might require. (In other
words, even if Linux were GPLv3’d, Android/Linux on the Nexus One and HTC
Dream would be a GPLv3-compliant distribution.) Meanwhile, at least one
such company, Motorola, has admitted their solitary reason for avoiding
GPLv3: the company just doesn’t believe users deserve the right to
install improved versions of their software. At least they admit their
contempt for their customers.

Update (same day): I’ve been pointed to a few discussions
in the custom ROM and jailbreaking communities about their concerns about
Motorola’s new offering, the Droid-X. Some commenters there point out
that eventually, most phones get jailbroken or otherwise allow user
control. However, the key point of the CrunchGear
User Manifesto is a clear and good one: no company or person has
the right to tell you that you may not do what you like with your own
device. This is a point akin and perhaps essential to software
freedom. It doesn’t really matter if you can figure out how
to hack a device; what’s important is that you not give your money to the
company that prohibits such hacking. For goodness sake, people, why don’t
we all use ADP1s and Nexus Ones and be done with this?

Updated (2010-07-17): It appears
that cryptographic lock-down on the Droid-X is confirmed
(thanks to rao for the link). I hope
everyone will boycott all Motorola devices because of this, especially
given that there are Android/Linux devices on the market that
aren’t locked down in this way.

BTW, in Motorola’s answer to Engadget on this,
we see they are again subtly sending FUD that the lock-down is somehow
legally required:

Motorola’s primary focus is the security of our end users and protection
of their data, while also meeting carrier, partner and legal requirements.

I agree the carriers and partners probably want such lock down, but I’d
like to see their evidence that there is a legal restriction that requires
that. They present none.

Meanwhile, they also state that such cryptographic lock-down is the
only way they know how to secure their devices:

Checking for a valid software configuration is a common practice within
the industry to protect the user against potential malicious software.

Pity that Motorola engineers aren’t as clueful as the Google and HTC
engineers who designed the ADP1 and Nexus One.

0 Update on 2020-04-09: At the
time I wrote the text above, I was writing for a specific organization where
I worked at the time, who held this position, and I’d cross-posted the blog
post here. I trusted lawyers I spoke to at the time, who insisted that
GPLv2’s failure to mention cryptography meant that “scripts
used to control compilation and installation of the
executable” necessarily did not include items mentioned
explicitly in GPLv3’s Installation Information definition. I believed these
lawyers, and shouldn’t have. Lawyers I’ve talked to since making this post
have taught me that the view stated above lacks nuance. The issue of
cryptographic lock-down in GPLv2, and how to interpret “scripts used to
control … installation” in an age of cryptographic lock-down,
remain an open question of GPL interpretation.

Proprietary Software Licensing Produces No New Value In Society

Post Syndicated from Bradley M. Kuhn original

I sought out the quote below when Chris Dodd paraphrased it on Meet
The Press on 25 April 2010. (I’ve been, BTW, slowly but surely
working on this blog post since that date.) Dodd
was quoting Frank Rich, who wrote the following, referring to the USA
economy (and its recent collapse):

As many have said — though not many politicians in either party
— something is fundamentally amiss in a financial culture that
thrives on “products” that create nothing and produce nothing
except new ways to make bigger bets and stack the deck in favor of the
house. “At least in an actual casino, the damage is contained to
gamblers,” wrote the financial journalist Roger Lowenstein in The
Times Magazine last month. This catastrophe cost the economy eight
million jobs.
I was drawn to this quote for a few reasons. First, as a poker player,
I’ve spent some time thinking about how “empty” the gambling
industry is. Nothing is produced; no value for humans is created; it’s
just exchanging of money for things that don’t actually exist. I’ve
been considering that issue regularly since around 2001 (when I started
playing poker seriously). I ultimately came to a conclusion not too
different from Frank Rich’s point: since there is a certain
“entertainment value”, and since the damage is contained to
those who choose to enter the casino, I’m not categorically against poker
nor gambling in general, nor do I think they are immoral. However, I
also don’t believe gambling has any particular important value in
society, either. In other words, I don’t think people have an
inalienable right to gamble, but I also don’t think there is any moral
reason to prohibit casinos.

Meanwhile, I’ve also spent some time applying this idea of creating
nothing and producing nothing
to the proprietary software
industry. Proprietary licenses, in many ways, are actually not all
that different from these valueless financial transactions.
Initially, there’s no problem: someone writes software and is paid for
it; that’s the way it should be. Creation of new software is an
activity that should absolutely be funded: it creates something new
and valuable for others. However, proprietary licenses are designed
specifically to allow a single act of programming to generate new revenue
over and over again. In this aspect, proprietary licensing is akin to
selling financial derivatives: the actual valuable transaction is
buried well below the non-existent financial construction above it.

I admit that I’m not a student of economics. In fact, I rarely think
of software in terms of economics, because, generally, I don’t want
economic decisions to drive my morality nor that of our society at
large. As such, I don’t approach this question with an academic
economic slant, but rather, from personal economic experience.
Specifically, I learned a simple concept about work when I was young:
workers in our society get paid only for the hours that they
work. To get paid, you have to do something new. You just can’t sit
around and have money magically appear in your bank account for hours
you didn’t work.

I always approached software with this philosophy. I’ve often been
paid for programming, but I’ve been paid directly for the hours I spent
programming. I never even considered it reasonable to be paid again for
programming I did in the past. How is that fair, just, or quite
frankly, even necessary? If I get a job building a house, I can’t get
paid every day someone uses that house. Indeed, even if I built the
house, I shouldn’t get a royalty paid every time the house is resold to
a new owner0. Why
should software work any differently? Indeed, there’s even an argument
that software, since it’s so much more trivial to copy than a
house, should be available gratis to everyone once it’s written the
first time.

I recently heard (for the first time) an old story about a well-known
Open Source company (which no longer exists, in case you’re wondering).
As the company grew larger, the company’s owners were annoyed that
the company could
only bill the clients for the hours they worked. The business
was going well, and they even had more work than they could handle
because of the unique expertise of their developers. The billable rates
covered the cost of the developers’ salaries plus a reasonable
profit margin. Yet, the company executives wanted more; they wanted
to make new money even when everyone was on vacation. In
essence, having all the new, well-paid programming work in the world
wasn’t enough; they wanted the kinds of obscene profits that can only be
made from proprietary licensing. Having learned this story, I’m pretty
glad the company ceased to exist before they could implement
their make money while everyone’s on the beach plan. Indeed, the
first order of business in implementing the company’s new plan was, not
surprisingly, developing some new from-scratch code not covered by GPL
that could be proprietarized. I’m glad they never had time to execute
on that plan.

I’ll just never be fully comfortable with the idea that workers should
get money for work they already did. Work is only valuable if it
produces something new that didn’t exist in the world before the work
started, or solves a problem that had yet to be solved. Proprietary
licensing and financial bets on market derivatives have something
troubling in common: they can make a profit for someone without
requiring that someone to do any new work. Any time a business moves
away from actually producing something new of value for a real human
being, I’ll always question whether the business remains legitimate.

I’ve thus far ignored one key point in the quote that began this post:
“At least in an actual casino, the damage is contained to
gamblers”. Thus, for this “valueless work” idea to
apply to proprietary licensing, I had to consider (a) whether or not the
problem is sufficiently contained, and (b) whether or not software is
akin to a mere entertainment activity, as gambling is.

I’ve pointed out that I’m not opposed to the gambling industry, because
the entertainment value exists and the damage is contained to people who
want that particular entertainment. To avoid the stigma associated with
gambling, I can also make a less politically charged example such as the
local Chuck E. Cheese, a place I quite enjoyed as a child. One’s parent
or guardian goes to Chuck E. Cheese to pay for a child’s entertainment,
and there is some value in that. If someone had issue with Chuck
E. Cheese’s operation, it’d be easy to just ignore it and not take your
children there, finding some other entertainment. So, the question is,
does proprietary software work the same way, and is it therefore not too
harmful?

I think the excuse doesn’t apply to proprietary software for two
reasons. First, the damage is not sufficiently contained, particularly
for widely used software. It is, for example, roughly impossible to get
a job that doesn’t require the employee to use some proprietary
software. Imagine if we lived in a society where you weren’t allowed to
work for a living if you didn’t agree to play Blackjack with a certain
part of your weekly salary? Of course, this situation is not fully
analogous, but the fundamental principle applies: software is ubiquitous
enough in industrialized society that it’s roughly impossible to avoid
encountering it in daily life. Therefore, the proprietary software
situation is not adequately contained, and is difficult for individuals
to avoid.

Second, software is not merely a diversion. Our society has changed
enough that people cannot work effectively in the society without at
least sometimes using software. Therefore, the
“entertainment” part of the containment theory does not apply
either. If citizens are de-facto required to use something to live
productively, it must have different rules and control structures around
it than wholly optional diversions.

Thus, this line of reasoning gives me yet another reason to oppose
proprietary software: proprietary licensing is simply a valueless
transaction. It creates a burden on society and gives no benefit, other
than a financial one to those granted the monopoly over that particular
software program. Unfortunately, there nevertheless remain many who
want that level of control, because one fact cannot be denied: the
profits are larger.

For example, Mårten Mikos recently argued in favor of these sorts of
large profits. He claims that to benefit massively from Open Source
(i.e., to get rich), business models like “Open Core” are
necessary. Mårten’s argument, and indeed most pro-Open-Core arguments,
rely on the following fundamental assumption: for FLOSS to be
legitimate, it must
allow for the same level of profits as proprietary software. This
assumption, in my view, is faulty. It’s always true that you can make
bigger profits by ignoring morality. Factories can easily make more
money by completely ignoring environmental issues; strip mining is
always very profitable, after all. However, as a society, we’ve decided
that the environment is worth protecting, so we have rules that do limit
profit maximization because a more important goal is served.

Software freedom is another principle of this type. While
you can make a profit with community-respecting FLOSS business
models (such as service, support and freely licensed custom
modifications on contract), it’s admittedly a smaller profit than can be
made with Open Core and proprietary licensing. But that greater profit
potential doesn’t legitimize such business models, just as it doesn’t
legitimize strip mining or gambling on financial derivatives.

Update: Based on some feedback that I got, I felt it
was important to make clear that I don’t believe this argument alone can
create a unified theory that shows why software freedom should be an
inalienable right for all software users. This factor of lack of value
that proprietary licensing brings to society is just another one to consider
in a more complete discussion about software freedom.

Update: Glynn Moody wrote a blog
post that quoted from this post extensively and made some interesting
comments on it. There’s some interesting discussion in the blog
comments there on his site; perhaps because so many people hate that I
do blog comments only in a single forum (which I do, BTW, because it’s
the only online forum I’m assured that I’ll actually read and respond
to).
0 I realize that some argue that you can buy a house, then rent it to others,
and evict them if they fail to pay. Some might argue further that owners
of software should get this same rental power. The key difference,
though, is that the house owner can’t really make full use of the house
when it’s being rented. The owner’s right to rent it to others,
therefore, is centered around the idea that the owner loses some of their
personal ability to use the house while the renters are present. This
loss of use never happens with software.

1 Some might be wondering: Ok, so if it’s pure entertainment
software, is it acceptable for it to be proprietary? I have often
said: if all
published and deployed software in the world were guaranteed Free
Software except for video games, I wouldn’t work on the
cause of software freedom anymore. Ultimately, I am not particularly
concerned about the control structures in our culture that exist for pure
entertainment. I suppose there’s some line to be drawn between
art/culture and pure entertainment/diversion, but considerations on
differentiating control structures on that issue are beyond the scope of
this blog post.

Post-Bilski Steps for Anti-Software-Patent Advocates

Post Syndicated from Bradley M. Kuhn original

Lots of people are opining about
the USA
Supreme Court’s ruling in the Bilski case
. Yesterday, I participated in an oggcast
with the folks at SFLC. In that oggcast, Dan Ravicher explained most
of the legal details of Bilski; I could never cover them as well as he
did, and I wouldn’t even try.

Anyway, as a non-lawyer worried about the policy questions, I’m pretty
much only concerned about those forward-looking policy questions.
However, to briefly look back at how our community responded to this
Bilski situation over the last 18 months: it seems similar to what
happened while the Eldred case
was working its way to the Supreme Court. In the months
preceding both Eldred and Bilski, there seemed to be a mass hypnosis that
the Supreme Court would actually change copyright law (Eldred) or patent
law (Bilski) to make it better for freedom of computer users.

In both cases, that didn’t happen. There was admittedly less of that
giddy optimism before Bilski as there was before Eldred, but the ultimate
outcome for computer users is roughly no different in both cases: as we
were with Eldred, we’re left back with the same policy situation we had
before Bilski ever started making its way through the various courts. As
near as I can tell from what I’ve learned, the entire “Bilski
thing” appears to be a no-op. In short, as before, the Patent
Office sometimes can and will deny applications that it determines are
only abstract ideas, and the Supreme Court has now confirmed that the
Patent Office can reject such an application if the Patent Office knows
an abstract idea when it sees it
. Nothing has changed regarding most
patents that are granted every day, including those that read on software.
Those of us that oppose software patents continue to believe that software
algorithms are indeed merely abstract ideas and pure mathematics and
shouldn’t be patentable subject matter. The governmental powers still
seem to disagree with us, or, at least, just won’t comment on that
question.

Looking forward, my largest concern, from a policy
perspective, is that the “patent reform” crowd,
who claim to be the allies of the anti-software-patent folks,
will use this decision to declare that the system works.
Bilski’s patent was ultimately denied, but on grounds that leave us no
closer to abolishing software patents. Patent reformists will
say: Well, invalid patents get denied, leaving space for the valid
ones. Those valid ones, they will say, do and should include
lots of patents that read on software. But only the really good
ideas should be patented, they will insist.

We must not yield to the patent reformists, particularly at a time like
this. (BTW, be sure to read
RMS’ classic and still relevant essay,
Patent Reform Is Not Enough, if you haven’t already.)

Since Bilski has given us no new tools for abolishing software patents,
we must redouble efforts with tools we already have to mitigate the
threat patents pose to software freedom. Here are a few suggestions,
which I think are actually all implementable by the average developer,
that will keep up the fight against software patents, or at least
mitigate their impact:

  • License your software using the AGPLv3, GPLv3, LGPLv3,
    or Apache-2.0. Among the copyleft
    licenses, AGPLv3
    and GPLv3 offer the
    best patent
    protections; LGPLv3
    offers the best among the weak copyleft
    licenses; Apache
    License 2.0
    offers the best patent protections among the permissive
    licenses. These are the licenses we should gravitate toward,
    particularly since multiple companies with software patents are
    regularly attacking Free Software. At least when such companies
    contribute code to projects under these licenses, we know those
    particular codebases will be safe from that particular company’s
    patents.
  • Demand real patent licenses from companies, not mere patent
    promises. Patent promises are not
    enough0. The Free Software
    community deserves to know it has real patent licenses from companies
    that hold patents. At the very least, we should demand unilateral
    patent licenses for all their patents perpetually for all
    possible copylefted code
    (i.e., companies should grant, ahead of
    time, the exact same license that the community would get if the
    company had contributed to a yet-to-exist GPLv3’d
    codebase)1. Note
    further that some companies that claim to be part of the
    FLOSS community, haven’t even given the
    (inadequate-but-better-than-nothing) patent promises.
    For example, BlackDuck holds a patent related to FLOSS, but despite
    saying they would consider at least a patent promise, have failed
    to do even that minimal effort.
  • Support organizations/efforts that work to oppose and end
    software patents
    . In particular, be sure that the efforts
    you support are not merely “patent reform” efforts hidden
    behind anti-software patent rhetoric. There are a few initiatives
    that I’ve recently seen doing work regarding complete abolition of
    software patents; I suggest you support them (with your time or
    dollars).

  • Write your legislators. This never hurts. In the
    USA, it’s unlikely we can convince Congress to change patent law,
    because there are just too many lobbying dollars from those big
    patent-holding companies (e.g., the same ones that wrote
    those nasty briefs in Bilski). But, writing your Senators and
    Congresspeople once a year to remind them of your opposition to
    patents that read on software simply
    can’t hurt, and may theoretically help a tiny bit. Now would be a good
    time to do it, since you can mention how the Bilski decision convinced
    you there’s a need for legislative abolition of software patents.
    Meanwhile, remember, it’s even better if you show up at political
    debates during election season and ask these candidates to oppose
    software patents!
  • Explain to your colleagues why software patents should be
    abolished, particularly if you work in computing
    . Software
    patent abolition is actually a broad spectrum issue across the
    computing industry. Only big and powerful companies benefit from
    software patents. The little guy — even the little guy
    proprietary developer — is hurt by software patents.
    Even if you can’t convince your colleagues who write proprietary
    software that they should switch to writing Free Software,
    you can instead convince them that software patents
    are bad for them personally and for their chances to succeed in
    software. Share the film with them and then discuss the issue with them
    after they’ve viewed it. Blog, tweet, dent, and the like about the
    issue regularly.
  • (added 2010-07-01 on tmarble‘s
    suggestion) Avoid products from pro-software-patent
    companies. This is tough to do, and it’s why I didn’t call
    for an all-out boycott. Most companies that make computers are
    pro-software-patent, so it’s actually tough to buy a computer (or even
    components for one) without buying from a pro-software-patent company.
    However, avoiding the companies who are most aggressive with patent
    aggression is easy: starting with avoiding Apple products is a good
    first step (there are plenty of other reasons to avoid Apple anyway).
    Microsoft would be next on the list, since they specifically use
    software patents to attack FLOSS projects. Those are likely the big
    two to avoid, but always remember that all large companies with
    proprietary software products actively enforce patents, even if they
    don’t file lawsuits. In other words, go with the little guy if you
    can; it’s more likely to be a patent-free zone.
  • If you have a good idea, publish it and make sure the great
    idea is well described in code comments and documentation, and that
    everything is well archived by date
    . I put this one last on
    my list, because it’s more of a help for the software patent
    reformists than it is for the software patent abolitionists.
    Nevertheless, sometimes, patents will get in the way of Free Software,
    and it will be good if there is strong prior art showing that the idea
    was already thought of, implemented, and put out into the world before
    the patent was filed. But, fact is,
    the “valid”
    software patents with no prior art are a bigger threat to software
    . The stronger the patent, the worst the threat, because
    it’s more likely to be innovative, new technology that we want to
    implement in Free Software.

I sat and thought of what else I could add to this list that
individuals can do to help abolish software patents. I was sad that
these were the only six things that
I could collect, but that’s all the more reason to do
these six
things in earnest.
one we’ll win in our lifetimes. It’s also possible abolition of
software patents will take a generation as well. Those of us that seek
this outcome must be prepared for patience and lifelong, diligent work
so that the right outcome happens, eventually.

0 Update: I was asked for a longer write-up on software patent
licenses as compared to mere “promises”. Unfortunately, I
don’t have one, so the best I was able to offer was the interview I did
on Linux Outlaws, Episode 102, about Microsoft’s patent promise. I’ve
also added a TODO to write something up more completely on this
particular issue.

1 I am not
leaving my permissively-license-preferring friends out of this issue
without careful consideration. Specifically, I just don’t think it’s
practical or even fair to ask companies to license their patents for
all permissively-licensed code, since that would be the same as
licensing to everyone, including their proprietary software
competitors. An ahead-of-time perpetual license to practice the
teachings of all the company’s patents under AGPLv3 basically makes
sure that code that’s eternally Free Software will also eternally be
patent-licensed from that company, even if the company never
contributes to the AGPLv3’d codebase. Anyone trying to make
proprietary code that infringed the patent wouldn’t have benefit of
the license; only Free Software users, distributors and modifiers
would have the benefit. If a company supports copyleft generally,
then there is no legitimate reason for the company to refuse such a
broad license for copyleft distributions and deployments.

Addendum on the Brokenness of File Locking

Post Syndicated from Lennart Poettering original

I forgot to mention another central problem in my blog story about file
locking on Linux:

Different machines have access to different features of the same file
system. Here’s an example: let’s say you have two machines in your home LAN.
You want them to share their $HOME directory, so that you (or your family) can
use either machine and have access to all your (or their) data. So you export
/home on one machine via NFS and mount it from the other machine.

So far so good. But what happens to file locking now? Programs on the first
machine see a fully-featured ext3 or ext4 file system, where all kinds of
locking work (even though the API might suck, as mentioned in the earlier blog
story). But what about the other machine? If you set up lockd properly,
then POSIX locking will work on both. If you didn’t, one machine can use POSIX
locking properly and the other cannot. And it gets even worse: as mentioned,
recent NFS implementations on Linux transparently convert client-side BSD
locking into POSIX locking on the server side. Now, if two instances of the
same application use BSD locking on the same file, one on the client and one
on the server, they will end up with two orthogonal locks; although both sides
think they have properly acquired a lock (and they actually did), they will
overwrite each other’s data, because those two locks are independent. (And one
wonders why the NFS developers implemented this brokenness nonetheless…).
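The orthogonality is easy to observe even without NFS, since plain Linux already treats the two lock types independently on local file systems. A minimal sketch (my own illustration, not from the original post; the scratch-file path is arbitrary):

```python
# On a local Linux file system, a BSD flock() lock and a POSIX fcntl()
# lock on the same file do not see each other at all.
import fcntl
import os

path = "/tmp/lock-orthogonality-demo"  # illustrative scratch file
fd1 = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.flock(fd1, fcntl.LOCK_EX)        # take a BSD-style exclusive lock

pid = os.fork()
if pid == 0:                           # child: try a POSIX lock on the same file
    fd2 = os.open(path, os.O_RDWR)
    try:
        fcntl.lockf(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # POSIX fcntl() lock
        os._exit(0)                    # acquired despite the parent's flock()
    except OSError:
        os._exit(1)                    # would mean the two lock types interact

_, status = os.waitpid(pid, 0)
independent = os.WEXITSTATUS(status) == 0
os.close(fd1)
os.unlink(path)
print("BSD and POSIX locks independent:", independent)
```

On a local ext3/ext4 file system the child’s POSIX lock succeeds even though the parent holds a BSD lock on the same file. On a modern Linux NFS client, per the above, the flock() would be converted to a POSIX lock on the server and the outcome would differ.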

This basically means that locking cannot be used unless it is verified that
everyone accessing a file system can make use of the same file system feature
set. If you use file locking on a file system, you should do so only if you are
sufficiently sure that nobody using a broken or weird NFS implementation might
want to access and lock those files as well. And practically that is
impossible. Even if fpathconf() were improved so that it could inform
the caller whether it can successfully apply a file lock to a file, this would
still not give any hint whether the same is true for everybody else accessing
the file. But that is essential when speaking of advisory (i.e. cooperative)
file locking.
And no, this isn’t easy to fix. So again, the recommendation: forget about
file locking on Linux, it’s nothing more than a useless toy.

Also read Jeremy Allison’s (of Samba) take on POSIX file locking. It’s an
interesting read.

On the Brokenness of File Locking

Post Syndicated from Lennart Poettering original

It’s amazing how far Linux has come without providing proper file
locking that works and is usable from userspace. A little overview of why file
locking is still in a very sad state:

To begin with, there’s a plethora of APIs, and all of them are awful:

  • POSIX File locking as available with fcntl(F_SET_LK): the POSIX
    locking API is the most portable one and in theory works across NFS. It can do
    byte-range locking. So much for the good side. On the bad side, there’s a lot
    more: locks are bound to processes, not file descriptors. That means
    that this logic cannot be used in threaded environments unless combined with a
    process-local mutex. This is hard to get right, especially in libraries that do
    not know the environment they are run in, i.e. whether they are used in
    threaded environments or not. The worst part however is that POSIX locks are
    automatically released if a process calls close() on any (!) of
    its open file descriptors for that file. That means that when one part of a
    program locks a file and another by coincidence accesses it too for a short
    time, the first part’s lock will be broken and it won’t be notified about that.
    Modern software tends to load big frameworks (such as Gtk+ or Qt) into memory
    as well as arbitrary modules via mechanisms such as NSS, PAM, gvfs,
    GTK_MODULES, Apache modules, GStreamer modules where one module seldom can
    control what another module in the same process does or accesses. The effect of
    this is that POSIX locks are unusable in any non-trivial program where it
    cannot be ensured that a file that is locked is never accessed by
    any other part of the process at the same time. Example: a user managing
    daemon wants to write /etc/passwd and locks the file for that. At
    the same time in another thread (or from a stack frame further down)
    something calls getpwuid() which internally accesses
    /etc/passwd and causes the lock to be released, the first thread
    (or stack frame) not knowing that. Furthermore, should two threads use the
    locking fcntl()s on the same file, they will interfere with each other’s locks
    and reset each other’s locking ranges and flags. On top of that, locking
    cannot be used on any file that is publicly accessible (i.e. has the R bit set
    for groups/others, i.e. more access bits on than 0600), because that would
    otherwise effectively give arbitrary users a way to indefinitely block
    execution of any process (regardless of the UID it is running under) that wants
    to access and lock the file. This is generally not an acceptable security risk.
    Finally, while POSIX file locks are supposedly NFS-safe, in practice they often
    are not, as there are still many NFS implementations around where locking is
    not properly implemented, and NFS tends to be used in heterogeneous networks. The biggest
    problem about this is that there is no way to properly detect whether file
    locking works on a specific NFS mount (or any mount) or not.
  • The other API for POSIX file locks: lockf() is another API for the
    same mechanism and suffers from the same problems. One wonders why there are
    two APIs for the same messed-up interface.
  • BSD locking based on flock(). The semantics of this kind of
    locking are much nicer than those of POSIX locking: locks are bound to file
    descriptors, not processes. This kind of locking can hence be used safely
    between threads and can even be inherited across fork() and
    exec(). Locks are only automatically broken on the close()
    call for the one file descriptor they were created with (or the last duplicate
    of it). On the other hand, this kind of locking does not offer byte-range
    locking, suffers from the same security problems as POSIX locking, and works
    in even fewer cases on NFS than POSIX locking (i.e. on BSD and Linux < 2.6.12
    it was a NOP returning success). And since BSD locking is not as portable as
    POSIX locking, it is sometimes an unsafe choice. Some OSes even find it funny
    to make flock() and fcntl(F_SET_LK) control the same locks.
    Linux treats them independently — except for the cases where it doesn’t: on
    Linux NFS they are now transparently converted to POSIX locks, too. What a chaos!
  • Mandatory locking is available too. It’s based on the POSIX locking API but
    not portable in itself. It’s dangerous business and should generally be avoided
    in cleanly written software.
  • Traditional lock-file-based file locking. This is how things were done
    traditionally, based around known atomicity guarantees of certain basic file
    system operations. It’s a cumbersome thing, and requires polling of the file
    system to get notifications when a lock is released. Also, on Linux NFS before
    2.6.5 it doesn’t work properly, since O_EXCL isn’t atomic there. And of course
    the client cannot really know what the server is running, so again this
    brokenness is not detectable.
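To make the close() pitfall from the first bullet concrete, here is a minimal sketch (my own, with an arbitrary scratch-file path): a process takes a POSIX lock, then merely opens and closes a second descriptor for the same file, and a second process can immediately steal the lock:

```python
# Closing ANY descriptor for a file silently drops the process's POSIX
# lock on it, so another process can then grab the lock.
import fcntl
import os

path = "/tmp/posix-close-pitfall"      # illustrative scratch file
fd1 = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
fcntl.lockf(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)  # POSIX lock via fcntl()

r, w = os.pipe()
pid = os.fork()
if pid == 0:                           # child: wait, then try to lock
    os.close(w)
    os.read(r, 1)                      # wait until the parent closed fd2
    fd = os.open(path, os.O_RDWR)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        os._exit(0)                    # got the lock: the parent lost it
    except OSError:
        os._exit(1)                    # parent still holds the lock

# Parent: open and close a SECOND, unrelated descriptor for the same file.
fd2 = os.open(path, os.O_RDONLY)
os.close(fd2)                          # this releases the lock held via fd1!

os.close(r)
os.write(w, b"x")                      # let the child try now
_, status = os.waitpid(pid, 0)
lock_was_dropped = os.WEXITSTATUS(status) == 0
os.unlink(path)
print("lock silently dropped by close():", lock_was_dropped)
```

The parent never touched its lock explicitly; the mere close() of an unrelated descriptor released it, exactly the /etc/passwd + getpwuid() scenario described above.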

The Disappointing Summary

File locking on Linux is just broken. The broken semantics of POSIX locking
show that the designers of this API apparently never tried to actually use
it in real software. It smells a lot like an interface that kernel people
thought made sense but in reality doesn’t when you try to use it from
userspace.

Here’s a list of places where you shouldn’t use file locking due to the
problems shown above: If you want to lock a file in $HOME, forget about it, as
$HOME might be on NFS, and locks are generally not reliable there. The same
applies to every other file system that might be shared across the network. If
the file you want to lock is accessible to more than your own user (i.e. an
access mode > 0700), forget about locking; it would allow others to block your
application indefinitely. If your program is non-trivial, or threaded, or uses
a framework such as Gtk+ or Qt or any of the module-based APIs such as NSS,
PAM, … forget about POSIX locking. If you care about portability, don’t use
file locking.

Or to turn this around: the only case where it is kind of safe to use file
locking is in trivial applications where portability is not key, using BSD
locking on a file system you can rely on being local, and on files
inaccessible to others. Of course, that doesn’t leave much, except for private
files in /tmp for trivial user applications.
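That remaining safe case might look like the following sketch (all names are illustrative): BSD flock() on a per-user, mode-0600 file under /tmp, assumed to be a local file system, here used to ensure a single instance of a trivial user application:

```python
# The one "kind of safe" pattern: BSD flock() on a private local file.
import fcntl
import os

lock_path = "/tmp/myapp-%d.lock" % os.getuid()  # illustrative; per-user
fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)  # 0600: not public
have_lock = False
try:
    # BSD lock, non-blocking: raises OSError if another instance holds it.
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    have_lock = True
    # ... critical section: we are the only instance using this lock file ...
finally:
    if have_lock:
        fcntl.flock(fd, fcntl.LOCK_UN)
    os.close(fd)
print("single-instance lock worked:", have_lock)
```

Note that every caveat above still applies the moment the file lands on NFS or becomes readable by other users; this is a sketch of the narrow safe case, not a general-purpose locking recipe.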

Or in one sentence: in its current state Linux file locking is unusable.

And that is a shame.

Update: Check out the follow-up story on this topic.

On IDs

Post Syndicated from Lennart Poettering original

When programming software that cooperates with software running on behalf of
other users, other sessions or other computers, it is often necessary to work
with unique identifiers. These can be bound to various hardware and software
objects as well as lifetimes. Often, when people look for such an ID to use,
they pick the wrong one, because the semantics and lifetime of the IDs are not
clear. Here’s a little (incomplete) list of IDs accessible on Linux and how you
should or should not use them.

Hardware IDs

  1. /sys/class/dmi/id/product_uuid: The main board product UUID, as
    set by the board manufacturer and encoded in the BIOS DMI information. It may
    be used to identify a mainboard and only the mainboard. It changes when the
    user replaces the main board. Also, often enough BIOS manufacturers write bogus
    serials into it. In addition, it is x86-specific. Access for unprivileged users
    is forbidden. Hence it is of little general use.
  2. CPUID/EAX=3 CPU serial number: A CPU UUID, as set by the
    CPU manufacturer and encoded on the CPU chip. It may be used to identify a CPU
    and only a CPU. It changes when the user replaces the CPU. Also, most modern
    CPUs don’t implement this feature anymore, and older computers tend to disable
    this option by default, controllable via a BIOS Setup option. In addition, it
    is x86-specific. Hence this too is of little general use.
  3. /sys/class/net/*/address: One or more network MAC addresses, as
    set by the network adapter manufacturer and encoded in some network card
    EEPROM. They change when the user replaces the network card. Since network
    cards are optional and there may be more than one, the availability of this
    ID is not guaranteed, and you might have more than one to choose from. On
    virtual machines the MAC addresses tend to be random. This too is hence of
    little general use.
  4. /sys/bus/usb/devices/*/serial: Serial numbers of various USB
    devices, as encoded in the USB device EEPROM. Most devices don’t have a serial
    number set, and if they do, it is often bogus. If the user replaces his USB
    hardware or plugs it into another machine, these IDs may change or appear on
    other machines. Hence this too is of little use.

There are various other hardware IDs available, many of which you may
discover via the ID_SERIAL udev property of various devices, such as hard disks
and similar. They all have in common that they are bound to specific
(replaceable) hardware, are not universally available, are often filled with
bogus data, and are random in virtualized environments. In other words: don’t
use them, don’t rely on them for identification unless you really know what you
are doing, and keep in mind that in general they do not guarantee what you
might hope they guarantee.

Software IDs

  1. /proc/sys/kernel/random/boot_id: A random ID that is regenerated
    on each boot. As such it can be used to identify the local machine’s current
    boot. It’s universally available on any recent Linux kernel. It’s a good and
    safe choice if you need to identify a specific boot on a specific booted
    machine.
  2. gethostname(), /proc/sys/kernel/hostname: A non-random ID
    configured by the administrator to identify a machine in the network. Often
    this is not set at all or is set to some default value such as
    localhost and not even unique in the local network. In addition it
    might change during runtime, for example because it changes based on updated
    DHCP information. As such it is almost entirely useless for anything but
    presentation to the user. It has very weak semantics and relies on correct
    configuration by the administrator. Don’t use this to identify machines in a
    distributed environment. It won’t work unless centrally administered, which
    makes it useless in a globalized, mobile world. It has no place in
    automatically generated filenames that shall be bound to specific hosts. Just
    don’t use it, please. It’s really not what many people think it is.
    gethostname() is standardized in POSIX and hence portable to other Unixes.
  3. IP addresses returned by SIOCGIFCONF or the respective Netlink APIs: These
    tend to be dynamically assigned and often enough are only valid on local
    networks or even only the local links (i.e. 192.168.x.x style addresses, or
    even 169.254.x.x/IPv4LL). Unfortunately they hence have little use outside
    of the local network.
  4. gethostid(): Returns a supposedly unique 32-bit identifier for the
    current machine. The semantics of this are not clear. On most machines this
    simply returns a value based on a local IPv4 address. On others it is
    administrator-controlled via the /etc/hostid file. Since the semantics
    of this ID are not clear, and most often it is just a value based on the IP
    address, it is almost always the wrong choice to use. On top of that, 32 bits
    are not particularly a lot. On the other hand, this is standardized in POSIX
    and hence portable to other Unixes. It’s probably best to ignore this value,
    and if people don’t want to ignore it, they should probably symlink
    /etc/hostid to /var/lib/dbus/machine-id or something similar.
  5. /var/lib/dbus/machine-id: An ID identifying a specific Linux/Unix
    installation. It does not change if hardware is replaced. It is not unreliable
    in virtualized environments. This value has clear semantics and is considered
    part of the D-Bus API. It is supposedly globally unique and portable to all
    systems that have D-Bus. On Linux, it is universally available, given that
    almost all non-embedded and even a fair share of the embedded machines ship
    D-Bus now. This is the recommended way to identify a machine, possibly with a
    fallback to the host name to cover systems that still lack D-Bus. If your
    application links against libdbus, you may access this ID with
    dbus_get_local_machine_id(), if not you can read it directly from the file system.
  6. /proc/self/sessionid: An ID identifying a specific Linux login
    session. This ID is maintained by the kernel and part of the auditing logic. It
    is uniquely assigned to each login session during a specific system boot,
    shared by each process of a session, even across su/sudo and cannot be changed
    by userspace. Unfortunately some distributions have so far failed to set things
    up properly for this to work (Hey, you, Ubuntu!), and this ID is always
    (uint32_t) -1 for them. But there’s hope they get this fixed
    eventually. Nonetheless it is a good choice for a unique session identifier on
    the local machine and for the current boot. To make this ID globally unique it
    is best combined with /proc/sys/kernel/random/boot_id.
  7. getuid(): An ID identifying a specific Unix/Linux user. This ID is
    usually automatically assigned when a user is created. It is not unique across
    machines and may be reassigned to a different user if the original user was
    deleted. As such it should be used only locally and with the limited validity
    in time in mind. To make this ID globally unique it is not sufficient to
    combine it with /var/lib/dbus/machine-id, because the same ID might be
    used for a different user that is created later with the same UID. Nonetheless
    this combination is often good enough. It is available on all POSIX systems.
  8. ID_FS_UUID: an ID that identifies a specific file system in the
    udev tree. It is not always clear how these serials are generated but this
    tends to be available on almost all modern disk file systems. It is not
    available for NFS mounts or virtual file systems. Nonetheless this is often a
    good way to identify a file system, and in the case of the root directory even
    an installation. However, due to the weakly defined generation semantics,
    the D-Bus machine ID is generally preferable.
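As a hedged sketch of how the recommended IDs combine, the following reads the machine ID, the boot ID and the UID and glues them into one tag that is unique per installation, per boot, and per user. The /etc/machine-id fallback is my own addition (systemd-based systems ship the same ID there); the other paths are the ones discussed above:

```python
# Combine machine ID + boot ID + UID into one identifier with clear
# semantics, as suggested in the list above.
import os

def read_id(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None                       # e.g. file not present on this system

machine_id = (read_id("/var/lib/dbus/machine-id")
              or read_id("/etc/machine-id")   # my assumed fallback location
              or "unknown-machine")           # stable per installation
boot_id = read_id("/proc/sys/kernel/random/boot_id") or "unknown-boot"
uid = os.getuid()                             # locally unique user

tag = "%s:%s:%d" % (machine_id, boot_id, uid)
print(tag)
```

As the getuid() entry warns, the UID part is only valid for as long as that UID isn’t reassigned, so such a tag is good enough for things like per-user, per-boot runtime files, not for permanent records.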

Generating IDs

Linux offers a kernel interface to generate UUIDs on demand, by reading from
/proc/sys/kernel/random/uuid. This is a very simple interface to
generate UUIDs. That said, the logic behind UUIDs is unnecessarily complex and
often it is a better choice to simply read 16 bytes or so from /dev/urandom.
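Both approaches, side by side, as a small sketch (the /proc path is Linux-specific, as noted; /dev/urandom exists on any modern Unix):

```python
# Kernel-generated UUID vs. plain random bytes.
with open("/proc/sys/kernel/random/uuid") as f:
    uuid_str = f.read().strip()       # a fresh UUID in its 36-char text form

with open("/dev/urandom", "rb") as f:
    raw = f.read(16)                  # 16 random bytes, no UUID formatting

print(uuid_str, len(raw))
```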


And the gist of it all: Use /var/lib/dbus/machine-id! Use
/proc/self/sessionid! Use /proc/sys/kernel/random/boot_id!
Use getuid()! Use /dev/urandom!
And forget about the
rest, in particular the host name, or the hardware IDs such as DMI. And keep in
mind that you may combine the aforementioned IDs in various ways to get
different semantics and validity constraints.

New Ground on Terminology Debate?

Post Syndicated from Bradley M. Kuhn original

These days, I generally try to avoid the well-known terminology
debates in our community. But, if you hang around this FLOSS world of
ours long enough, you just can’t avoid occasionally getting into them.
I found myself in one this afternoon that spanned three identi.ca
threads. I had some new thoughts that I’ve shared today (and even
previously) on my microblog. I thought it might be useful to write them
up in one place rather than scattered across a series of microblog
statements.

I gained my first new insight into the terminology issues when I had
dinner with Larry Wall in early 2001, after my Master’s thesis defense.
It was the first time I talked with him about these issues of
terminology, and he said that it sounded like a good place to apply what
he called the “golden rule of network protocols”: Always be
conservative in what you emit and liberal in what you accept. I’ve
recently realized again that’s a good rule to follow regarding
terminology.

More recently, I’ve realized that the FLOSS community suffers here,
likely due to our high concentration of software developers and
engineers. Precision in communication is a necessary component of the
lives of developers, engineers, computer scientists, or anyone in a
highly technical field. In our originating fields, a lack of precise and
well-understood terminology can cause bridges to collapse or the wrong
software to get installed and crash mission-critical systems.
Calling x by the name y sometimes causes mass confusion
and failure. Indeed, earlier this week, I watched a PBS special, The
Pluto Files, in which Neil deGrasse Tyson discussed the intense debate
about the planetary status of Pluto. I was actually somewhat relieved
that a subtle point
regarding a categorical naming is just as contentious in another area
outside my chosen field. Watching the “what constitutes a
planet” debate showed me that FLOSS hackers are no different than
most other scientists in this regard. We all take quite a bit of pride
in our careful (sometimes pedantic) care in terminology and word choice;
I know I do, anyway.

However, on the advocacy side of software freedom (the part
that isn’t technical), our biggest confusion sometimes stems
from an assumption that other people’s word choice is necessarily as
precise as ours. Consider the phrase “open source”, for
example. When I say “open source”, I am referring quite
exactly to a business-focused, apolitical and (frankly)
amoral0 interest in,
adoption of, and contribution to FLOSS. Those who coined the term
“open source” were right about at least one thing: it’s a
term that fits well with for-profit interests who might otherwise see
software freedom as too political.

However, many non-business users and developers that I talk to quite
clearly express that they are into this stuff precisely because there
are principles behind it: namely, that FLOSS seeks to make a better
world by giving important rights to users and programmers. Often, they
are using the phrase “open source” as they express this. I
of course take the opportunity to say: it’s because those principles
are so important that I talk about software freedom
. Yet, it’s
clear they already meant software freedom as a concept, and
just had some sloppy word choice.

Fact is, most of us are just plain sloppy with language. Precision
isn’t everyone’s forte, and as a software freedom advocate (not a
language usage advocate), I see my job as making sure people have the
concepts right even if they use words that don’t make much sense. There
are times when the word choices really do confuse the concepts, and
there are other times when they don’t. Sometimes, it’s tough to
identify which of the two is occurring. I try to figure it out in each
given situation, and if I’m in doubt, I just simplify to the golden rule
of network protocols.

Furthermore, I try to have faith in our community’s intelligence.
Regardless of how people get drawn into FLOSS: be it from the moral
software freedom arguments or the technical-advantage-only open source
ones, I don’t think people stop listening immediately upon their arrival
in our community. I know this even from my own adoption of software
freedom: I came for the Free as in Price, but I stayed for the Free as
in Freedom. It’s only because I couldn’t afford a SCO Unix license in
1992 that I installed GNU/Linux. But, I learned within just a year why
the software freedom was what mattered most.

Surely, others have a similar introduction to the community: either
drawn in by zero-cost availability or the technical benefits first, but
still very interested to learn about software freedom. My goal is to
reach those who have arrived in the community. I therefore try to speak
almost constantly about software freedom, why it’s a moral issue, and
why I work every day to help either reduce the amount of proprietary
software, or increase the amount of Free Software in the world. My hope
is that newer community members will hear my arguments, see my actions,
and be convinced that a moral and ethical commitment to software freedom
is the long lasting principle worth undertaking. In essence, I seek to
lead by example as much as possible.

Old arguments are a bit too comfortable. We already know how to have
them on autopilot. I admit myself that I enjoy having an old argument
with a new person: my extensive practice often yields an oratorical
advantage. But, that crude drive is too much about winning the argument
and not enough about delivering the message of software freedom.
Occasionally, a terminology discussion is part of delivering that
message, but my terminology-debate toolbox has “use with care”
written on it.

0 Note that here,
too, I took extreme care with my word choice. I mean specifically
merely an absence of any moral code in particular. I do not, by any
stretch, mean immoral.

Where Are The Bytes?

Post Syndicated from Bradley M. Kuhn original

A few years ago, I was considering starting a Free Software project. I
never did start that one, but I learned something valuable in the
process. When I thought about starting this project, I did what I
usually do: ask someone who knows more about the topic than I do. So I
phoned my friend Loïc Dachary, who
has started many Free Software projects, and asked him for advice.

Before I could even describe the idea, Loïc said: you don’t have a
URL. I was taken aback; I said: but I haven’t started yet.
He said: of course you have, you’re talking to me about it, so
you’ve started already. The most important thing you can tell
me, he said, is: Where are the bytes?

Loïc explained further: Most projects don’t succeed. The hardest
part about a software freedom project is carrying it far enough that it
can survive even if its founders quit. Therefore, under Loïc’s
theory, the most important task at the project’s start is to generate
those bytes, in hopes those bytes find their way to a group of
developers who will help keep the project alive.

But, what does he mean by “bytes”? He means, quite simply,
that you have to core dump your thinking, your code, your plans, your
ideas, just about everything on a public URL that everyone can take a
look at. Push bytes. Push them out every time you generate a few.
It’s the only chance your software freedom project has.

The first goal of a software freedom project is to gain developers. No
project can have long-term success without a diverse developer base.
The problem is, the initial development work and project planning too
often ends up trapped in the head of a few developers. It’s human
nature: How can I spend my time telling everyone about what I’m
doing? If I do that, when will I actually do anything?

Successful software freedom project leaders resist this human urge and
do the seemingly counterintuitive thing: they dump their bytes on the
public, even if it slows them down a bit.

This process is even more essential in the network age. If someone
wants to find a program that does a job, the first tool is a search
engine: to find out if someone else has done it yet. Your project’s
future depends completely on every such search helping developers find
your bytes.

In early 2001, I asked Larry Wall which, of all the projects he’d
worked on, was the hardest. His answer was quick: when I was
developing the first version of Perl, Larry said, I felt like I
had to code completely alone and just make it work by myself. Of
course, Larry’s a very talented guy who can make that happen: generate
something by himself that everyone wanted to use. While I haven’t asked
him what he’d do in today’s world if he were charged with a similar
task, I can guess — especially given how public the Perl6 process has
been — that he’d instead use the new network tools, such as DVCS, to
push his bytes early and often and seek to get more developers
involved.

Admittedly, most developers’ first urge is to hide
everything. We’ll release it when it’s ready, is often heard, or
— even worse — Our core team works so well together;
it’ll just slow us down to make things public now. Truth is, this
is a dangerous mixture of fear and narcissism — the very same
drives that lead proprietary software developers to keep things
secret.
Software freedom developers have the opportunity to actually get past
the simple reality of software development: all code sucks, and usually
isn’t complete. Yet, it’s still essential that the community see what’s
going on at every step, from the empty codebase and beyond. When a
project is seen as active, that draws in developers and gives the
project hope of success.

When I was in college, one of the teams in a software engineering class
crashed and burned; their project failed hopelessly. This happened
despite one of the team members spending about half the semester up long
nights, coding by himself, ignoring the other team members. In their
final evaluation, the professor pointed out: Being a software
developer isn’t like being a fighter pilot
. The student, missing
the point, quipped: Yeah, I know, at least a fighter pilot has a
. Truth is, one person, or two people, or even a small team,
aren’t going to make a software freedom project succeed. It’s only
going to succeed when a large community bolsters it and prevents any
single point of failure.

Nevertheless, most software freedom projects are going to fail. But,
there is no shame in pushing out a bunch of bytes, encouraging people to
take a look, and giving up later if it just doesn’t make it. All of
science works this way, and there’s no reason computer science should be
any different. Keeping your project private assures its failure; the
only benefit is that you can hide that you even tried. As my graduate
advisor told me when I was worried my thesis wasn’t a success: a
negative result can be just as compelling as a positive one
. What’s
important is to make sure all results are published and available for
public scrutiny.

When I
started discussing
this idea a few weeks ago
, some argued that early GNU programs
— the founding software of our community — were developed in
private initially. This much is true, but just because GNU developers
once operated that way doesn’t mean it was the right way. We have the
tools now to easily do development in public, so we should. In my view,
today, it’s not really in the spirit of software freedom until the
project, including its design discussions, plans, and prototypes, is
developed entirely in public. Code (regardless of its license) merely
dumped over the wall at intervals deserves to be forked by a community
committed to public development.

Update (2010-06-12): I completely forgot to mention
The Risks of
Distributed Version Control
by Ben Collins-Sussman
, which
is five years old now but still useful. Ben is making a similar
point to mine, and pointing out how some uses of DVCS can cause the
effects that I’m encouraging developers to avoid. I think DVCS is
like any tool: it can be used wrongly. The usage Ben warns about
should be avoided, and DVCS, when used correctly, assists
in the public software development process.

0 Note that pushing code
out to the public in the mid-1990s was substantially more arduous (from a
technological perspective) than it is today. Those of you who don’t
remember shar archives may not realize that. 🙂

Change of Plans

Post Syndicated from Lennart Poettering original

This coming week I’ll give two talks at LinuxTag 2010 at the Berlin
Fair Grounds. One of them was only added to the schedule today, and it
is about systemd. systemd has never been presented in a public talk
before, so make sure to attend this historic moment… ;-). Read what has
been written about systemd so far, so that you can ask the sharpest
questions during my talk.
My second talk is about stuff a little less reported in the press, but
still very interesting: Surround Sound in Gnome.

See you at LinuxTag!


Post Syndicated from RealEnder original

I want to tell you a story.

There is a group of people who know life. They know the mechanics. They know how things get done. Call this one, buy that one a drink, slip something to a third. Usually they hold a position on which one larger decision or another about other people's lives depends. Since everyone knows "how things work", each element of the picture builds a wall around his services. He deliberately leaves a back door, so that the process in which he is the critical point can still be completed. And for each such service there is a corresponding reward.

But what happens when someone operating by the scheme above tries to obtain the services due to him (as a citizen of the People's Republic of Bulgaria?) from the other fortresses? Obviously, he has to look for the back doors. And not just has to: they are all he knows. It doesn't occur to him that someone might actually do the job they are paid for without asking for something extra. So the small fortune (no matter what form it takes) melts away much faster than it was accumulated, and at some point you end up in the red. Especially if few things depend on you. Then you only owe the "Conspiracy", because you rarely get a chance to give to it.

Some people say I'm too young to remember many things that I actually do remember. The above is the system of the "Second Directorate": goods intended for the foreign-trade market (inside and outside COMECON) sold "under the counter" on the domestic one. To "friends". Who will return the favor.

I know how hard it is to define, impose, and execute certain business processes in organizations where the Second Directorate is the First Power. Things don't get done by banging on the table, nor by careful attempts at civilizing and educating. In fact, if your attempt doesn't land you in the Directorate's noose, that alone can count as a moderate success :)

The truth, at least for me, is that everyone working by these schemes should suffer. And suffer precisely from the imperfections of the scheme he has fallen into. Any capable manager who has recognized the problem can create such imperfections: reducing responsibilities, reassigning them to others, changing the process, and whatnot.

Finally, let's mention the others: those who have no inside man in the fortress. Depending on the beast within, they either pass through quickly and without complaint before the portcullis crashes down on their heads, or simply wait for the drawbridge to be raised. Many of them dream of becoming part of the system. Others just curse, smoke, drink, and do other immoral and illegal things (not necessarily in that order).

And you, what do you do?

O’Reilly Book Deal – Get Security and Other Ebooks Cheap Today

Post Syndicated from David original

O’Reilly has a coupon available for today only that makes any one ebook in their store $10. If you’re like me and like to have an electronic edition handy, this is a great deal for books that are updated and searchable. Their security books can be found here. You’ll want to use coupon code “FAVFA”.
