“Open Core” Is the New Shareware

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2009/10/16/open-core-shareware.html

[ I originally wrote the essay below centered around the term
“Open Core”. Despite even saying below that the term is somewhat
meaningless, I later realized it was so problematic that it should be
abandoned entirely in favor of the clearer term “proprietary
relicensing”. However, since this blog post was widely linked to, I’ve
nevertheless left the text as it originally was in October 2009. ]

There has been some debate recently about so-called “Open
Core” business models. Throughout the history of Free Software,
companies have loved to come up with “innovative”
proprietary-like ways to use the FLOSS licensing structures.
Proprietary relicensing, a practice that I believe has proved itself to
have serious drawbacks, was probably the first of these, and now Open
Core is the next step in this direction. I believe the users embracing
these codebases may be ignoring a past they’re condemned to repeat.

Like most buzzwords, Open Core has no real agreed-upon meaning. I’m
using it to describe a business model whereby some middleware-ish system
is released by a single, for-profit entity copyright holder, who
requires copyright-assigned changes back to the company, and that
company sells proprietary add-ons and applications that use the
framework. Often, the model further uses the GPL to forbid anyone but
the copyright-holding company to make such proprietary add-on
applications (i.e., everyone else would have to GPL their applications).
In the current debate, some have proposed that a permissive license
structure can be used for the core instead.

Ultimately, “Open Core” is a glorified shareware situation.
As a user, you get some subset of functionality, and may even get the
four freedoms
with regard to that subset. But, when you want the “good
stuff”, you’ve got to take a proprietary license. And, this is
true whether the Core is GPL’d or permissively licensed. In both cases,
the final story is the same: take a proprietary license or be stuck with
cripple-ware.

This fact remains true whether the Open Core is under a copyleft
license or a permissive one. However, I must admit that a permissive
license is more intellectually honest to the users. When users
encounter a permissive license, they know what they are in for: they may
indeed encounter proprietary add-ons and improvements, either from the
original distributor or a third party. For example, Apple users sadly
know this all too well; Apple loves to build on a permissively licensed
core and proprietarize away. Yet, everyone knows what they’re getting
when they buy Apple’s locked down, unmodifiable, and
programmer-unfriendly products.

Meanwhile, in more typical “Open Core” scenarios, the use
of the GPL is actually somewhat insidious. I’ve written before
about how
the copyleft is a tool, not an end in itself
. Like any tool, it can
be misused or abused. I think using the GPL as a tool for corporate
control over users, while legally permissible, is ignoring the spirit of
the license. It creates two classes of users: those precious few that
can proprietarize and subjugate others, and those that can’t.1

This (ab)use of GPL has
led folks
like Matt Aslett to suggest that the permissive licensing solution

would serve this model better. While I’ve admitted such a change would
be somewhat more intellectually honest, I don’t think it’s the solution
we should strive for. I think Aslett’s
completely right when he argues that GPL’d “Open Core”
became popular because it’s Venture Capitalists’ way of making peace
with freely licensed copyrights. However, heading to an Apple-like
permissive only structure only serves to make more Apple-like companies,
and that’s surely not good for software freedom either. In fact, the
problem is mostly orthogonal to licensing. It’s a community building
problem.

The first move we have to make is simply give up the idea that the best
technology companies are created by VC money. This may be true if your
goal is to create proprietary companies, but the best Free Software
companies are the small ones, 5-10 employees, that do consulting work
and license all their improvements back to a shared codebase. Projects
from low-level technology like Linux and GCC to higher-level technology
like Joomla show that this structure yields popular and vibrant
codebases. The GPL was created to inspire business and community models
like these examples. The VC-controlled proprietary relicensing and
“Open Core” models are manipulations of the licensing
system. (For more on this part of my argument, I suggest my discussions
on
Episode
0x14 of the (defunct) Software Freedom Law Show
.)

I realize that it’s challenging for a community to create these sort of
codebases. The best way to start, if you’re a small business, is to
find a codebase that gets you 40% or so toward your goal and start
contributing to the code with your own copyrights, licensed
under GPL. Having something that gets you somewhere will make it easier
to start your business on a consulting basis without VC, and allow you
to be part of one of these communities instead of trying to create an
“Open Core” community you can exploit with proprietary
licensing. Furthermore, the fact that you hold copyright alongside
others will give you a voice that must be heard in decision-making
processes.

Finally, if you find an otherwise useful
single-corporate-copyright-controlled GPL’d codebase from one of these
“Open Core” companies, there is something simple you can
do:

Fork! In essence, don’t give into pressure by these
companies to assign copyright to them. Get a group of community
developers together and maintain a fork of the codebase. Don’t be mean
about it, and use git or another DVCS to keep tracking branches of the
company’s releases. If enough key users do this and refuse to assign
copyright, the good version will eventually become the community one
rather than the company-controlled one.

My colleague Carlo
Piana points
out a flaw in this plan, saying the ant cannot drive the
elephant
. While I agree with Carlo generally, I also think that
software freedom has historically been a little bit about ants driving
elephants. These semi-proprietary business models are thriving on the
fundamental principle of a proprietary model: keep users from
cooperating to improve the code on which they all depend. It’s a
prisoner’s dilemma that makes each customer afraid to cooperate with the
other for fear that the other will yield to pressure not to cooperate.
As the fictional computer Joshua points out, this is a strange game.
The only winning move is not to play.

The software freedom world is more complex than it once was. Ten years
ago, we advocates could tell people to look for the GPL label and
know that the software would automatically be part of a
freedom-friendly, software sharing community. Not all GPL’d software is
created equal anymore, and while the right to fork remains firmly
intact, whether such forks will survive, and whether the entity
controlling the canonical version can be trusted, are other questions
entirely. The new advice is: judge the freedom of your
codebase not only on its license, but also on the diversity of the
community that contributes to it.


1. I must put a fine point here that the only way
companies can manipulate the GPL in this example is by
demanding full copyright assignment back to the corporate
entity. The GPL itself protects each individual contributor
from such treatment by other contributors, but when there is
only one contributor, those protections evaporate. I must
further note that for-profit corporate assignment differs
greatly from assignment to a non-profit, as non-profit
copyright assignment paperwork typically includes broad legal
assurances that the software will never be proprietarized, and
furthermore, the non-profit’s very corporate existence hinges
on engaging only in activity that promotes the public
good.

Denouncing vs. Advocating: In Defense of the Occasional Denouncement

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2009/10/11/denouncing-v-advocating.html

For the last decade, I’ve regularly seen complaints when we harder-core
software freedom advocates spend some time criticizing proprietary
software in addition to our normal work preserving, protecting and
promoting software freedom. While I think entire campaigns focused on
criticism are warranted in only extreme cases, I do believe that
denouncement of certain threatening proprietary technologies is a
necessary part of the software freedom movement, when done sparingly.

Denouncements are, of course, negative, and in general, negative
tactics are never as valuable as positive ones. Negative campaigns
alienate some people, and it’s always better to talk about the advantages
of software freedom than focus on the negative of proprietary
software.

The place where negative campaigns that denounce are simply necessary,
in my view, is when the practice either (a) will somehow completely
impede the creation of FLOSS or (b) has become, or is becoming,
widespread among people who are otherwise supportive of software
freedom.

I can think quickly of two historical examples of the first type: UCITA
and DRM. UCITA was a State/Commonwealth-level law in the USA that was
proposed to make local laws more consistent regarding software
distribution. Because the implications were so bad
for software freedom (details of which are beyond scope of this post but
can be learned at the link)
, and because it was so unlikely that we
could get the UCITA drafts changed, it was necessary to publicly denounce
the law and hope that it didn’t pass. (Fortunately, it only ever passed
in my home state of Maryland and in Virginia. I am still, probably
pointlessly, careful never to distribute software when I visit my
hometown. 🙂 )

DRM, for its part, posed an even greater threat to software freedom
because its widespread adoption would require proprietarization of all
software that touched any television, movie, music, or book media. There
was also a concerted widespread pro-DRM campaign from USA corporations.
Therefore, grassroots campaigns denouncing DRM are extremely necessary,
even though they are primarily negative in operation.

The second common need for denouncement is when use of a proprietary
software package has become acceptable in the software freedom
community. The most common examples are specific proprietary software
programs that have become (or seem about to become) an “all but
standard” part of the toolset for Free Software developers and
advocates.

Historically, this category included Java, and that’s why there were
anti-Java campaigns in the Free Software community that ran concurrently
with Free Software Java development efforts. The need for the former is
now gone, of course, because the latter efforts were so successful and we
have a fully FaiF Java system. Similarly, denouncement of Bitkeeper was
historically necessary, but is also now moot because of the advent and
widespread popularity of Mercurial, Git, and Bazaar.

Today, there are still a few proprietary programs that quickly rose to
ranks of “must install on my GNU/Linux system” for all but the
hardest-core Free Software advocates. The key examples are Adobe Flash
and Skype. Indeed, much to my chagrin, nearly all of my co-workers at
SFLC insist on using Adobe Flash, and nearly every Free Software developer
I meet at conferences uses it too. And, despite excellent VoIP technology
available as Free Software, Skype has sadly become widely used in our
community as well.

When a proprietary system becomes as pervasive in our community as
these have (or looks like it might), it’s absolutely time for
denouncement. It’s often very easy to forget that we’re relying more and
more heavily on proprietary software. When a proprietary system
effectively becomes the “default” for use on software freedom
systems, it means fewer people will be inspired to write a
replacement. (BTW, contribute to Gnash!) It means that Free Software
advocates will, in direct contradiction of their primary mission, start to
advocate that users install that proprietary software, because it
seems to make the FaiF platform “more useful”.

Hopefully, by now, most of us in the software freedom community agree
that proprietary software is a long term trap that we want to avoid.
However, in the short term, there is always some new shiny thing.
Something that appeals to our prurient desire for software that
“does something cool”. Something that just seems so
convenient that we convince ourselves we cannot live without it, so we
install it. Over time, short term becomes the long term, and suddenly we
have gaping holes in the Free Software infrastructure that only the very
few notice because the rest just install the proprietary thing. For
example, how many of us bother to install Linux Libre,
even long enough to at least know which of our hardware
components needs proprietary software? Even I have to admit I don’t do
this, and probably should.

An old adage of software development is that software is always better
if the developers of it actually have to use the thing from day to day.
If we agree that our goal is ultimately convincing everyone to run only
Free Software (and for that Free Software to fit their needs), then we
have to trailblaze by avoiding running proprietary software ourselves. If
you do run proprietary software, I hope you won’t celebrate the fact or
encourage others to do so. Skype is particularly insidious here, because
it’s a community application. Encouraging people to call you on Skype is
the same as emailing someone a Microsoft Word document: it’s encouraging
someone to install a proprietary application just to work with you.

Finally, I think the only answer to the FLOSS community
celebrating the arrival of some new proprietary program for
GNU/Linux is to denounce it, as a counterbalance to the fervor that such
an announcement causes. My podcast co-host Karen
often calls me the canary in the software coalmine because I am
usually the first to notice something that is bad for the advancement of
software freedom before anyone else does. In playing this role, I often
end up denouncing a few things here and there, although I can still count
on my two hands the times I’ve done so. I agree that advocacy should be
the norm, but the occasional denouncement is also a necessary part of the
picture.

(Note: this blog is part of an ongoing public discussion of a software
program that is not too popular yet, but was heralded widely as a win for
Free Software in the USA. I didn’t mention it by name mainly because I
don’t want to give it more press than it’s already gotten, as it is one
of those programs that is becoming a standard GNU/Linux user
application (at least in the USA), but hasn’t yet risen to the level of
ubiquity of the other examples I give above. Here’s to hoping that it
doesn’t.)

LPC Audio BoF Notes

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/audio-bof-notes.html

Here are some very short notes from the Audio BoF
at the Linux Plumbers
Conference
in Portland two weeks ago. Sorry for the delay!

Biggest issue discussed was audio routing. On embedded devices this gets
more complex each day, and there are a lot of open questions on the desktop,
too. Different DSP scenarios; how do mixer controls match up with PCM
streams and jack sensing? How do we determine which volume control
sliders in the pipeline are the ones we are currently interested in?
How does that relate to policy decisions? In which format should audio
routing be stored?

The ALSA scenario subsystem,
currently being worked on by Liam Girdwood and the folks at SlimLogic
and on its way to being integrated into ALSA proper, will hopefully help
us strip a lot of the routing-related complexity from PulseAudio and
move it into a lower level which naturally knows more about the
hardware’s internal routing.

Does it make sense for some apps to bypass the ALSA userspace layer and
talk to the kernel drivers via ioctl()s directly (i.e., thus not
depending on ALSA’s LISP interpreter, and a lot of other complexities)?
Probably yes, but certainly not in the short term. Salsa? libsydney?

Should the timing deviation estimation/interpolation be moved from
PulseAudio into the kernel? Might be a good idea. Particularly interesting
when we try to monitor not only the system and audio clocks, but the video
output and particularly the video input (i.e. video4linux) clocks, too. A
unified kernel-based timing system has advantages in accuracy, allows better
handling of (pseudo-) atomic timing snapshots, and would centralize timing
handling not only between different applications (PA and JACK) but also
between different subsystems. Problem: current timing stuff in PulseAudio
might be a bit too homegrown for moving it 1:1 into the kernel. Also, depends
on FP. Needs someone to push this. Apple does the clock handling in the
kernel. How does this relate to ALSA’s timer API?

Seems Ubuntu is going to kill OSS pretty soon too, following Fedora’s lead. Yay!

And that’s all I have. Should be the biggest points raised. Ping me if I
forgot something.

Latency Control

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/latency-control.html

#nocomments yes

An often asked question is how to properly talk to PulseAudio from within applications where
latency matters. To answer that question once and for all I’ve written this guide in our
Wiki
that should light things up a little. If you are interested in audio
latency in PA, want to know how to minimize CPU usage and power consumption or
how to maximize drop-out safety, make sure to read this!

Conferences

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/lpc-bluez-maemo-2009.html

Last week I was at the Linux Plumbers Conference in
Portland. Like last year it kicked ass and proved again to be one of the
most relevant Linux developer conferences (if not the most relevant one). I
ran the Audio MC at the conference which was very well attended. The slides
for our four talks in the
track are available online
. (My own slides are probably a bit too terse
for most readers; the interesting stuff was in the talking, not the
reading…) For me the most interesting part was to see to what
degree Nokia actually adopted PulseAudio
in the N900. While I was aware that Nokia was using it, I wasn’t aware that
their use is as comprehensive as it turned out it is. And the industry
support from other companies is really impressive too. After the main track we
had a BoF session, the notes of which I’ll post a bit later. Many
thanks to Paul, Jyri, and Pierre for their great talks. Unfortunately,
Palm, the only manufacturer
who is actually already shipping a phone with PulseAudio didn’t send anyone to
the conference who wanted to talk about that. Let’s hope they’ll eventually
learn that just throwing code over the wall is not how Open Source works.
Maybe they’ll send someone to next year’s LPC in Boston, where I hope to be
able to do the Audio MC again.

Right now I am at the BlueZ Summit in Stuttgart. Among other things we have
been discussing how to improve Bluetooth Audio support in PulseAudio. I
guess one could say that the Bluetooth support in PulseAudio is already
one of its highlights, in fact working better than the support on other
OSes (yay, that’s an area where Linux Audio really shines!). So up next
is better support for
allowing PA to receive A2DP audio, i.e. making PA act as if it was a Headset or
your hifi. Use case: send music from your mobile to your desktop’s hifi
speakers. (Actually this is already supported in current BlueZ/PA
versions, but not easily accessible.) Also, Bluetooth headsets tend to
support AC3 or MP3
decoding natively these days so we should support that in PA too. Codec
handling has been on the TODO list for PA for quite some time, for the SPDIF or
HDMI cases, and Bluetooth Audio is another reason why we really should have
that.

Next week I’ll be at the Maemo Summit in Amsterdam.
Nokia kindly invited me. Unfortunately I was a bit too late to get a proper
talk accepted. That said, I am sure if enough folks are interested we could do
a little ad-hoc BoF and find some place at the venue for it. If you have any
questions regarding PA just talk to me. The N900 uses PulseAudio for all things
audio so I am quite sure we’ll have a lot to talk about.

See you in Amsterdam!

One last thing: Check out Colin’s
work to improve integration of PulseAudio and KDE
!

Skype

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/skype.html

A quick update on Skype: the next Skype version will include native
PulseAudio support. And not only that but they even tag their audio
streams properly
. This enables PulseAudio to do fancy stuff like
automatically pausing your audio playback when you have a phone call. Good job!

In some ways they are now doing a better job with integration into the modern
audio landscape than some Free Software telephony applications!

Unfortunately they didn’t fix the biggest bug though: it’s still not Free
Software!

More Mutrace

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/mutrace2.html

Here’s a list of quick updates on my mutrace mutex profiler since
my initial announcement two weeks ago:

I added some special support for tracking down use of mutexes in
realtime threads. It’s a very simple extension that, if enabled with
--track-rt, checks on each mutex operation whether it is executed by a
realtime thread or not. Example output of a test run can be found in
this announcement on LAD.
Particularly interesting is that you can use this to track down which mutexes
are good candidates for priority inheritance.
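For illustration, here is how a mutex is given the priority-inheritance protocol with plain pthreads; the helper name is mine and nothing here is part of mutrace itself:

```c
#define _GNU_SOURCE
#include <pthread.h>

/* Hypothetical helper (not part of mutrace): create a mutex with the
 * priority-inheritance protocol, so a low-priority thread holding the
 * lock is temporarily boosted while a realtime thread waits on it.
 * Returns 0 on success, an errno-style code otherwise. */
int make_pi_mutex(pthread_mutex_t *m)
{
    pthread_mutexattr_t attr;
    int r;

    if ((r = pthread_mutexattr_init(&attr)) != 0)
        return r;
    if ((r = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT)) != 0) {
        pthread_mutexattr_destroy(&attr);
        return r;
    }
    r = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return r;
}
```

Mutexes that --track-rt flags as taken from realtime threads are exactly the ones where switching to this protocol is worth considering.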

The mutrace tarball now also includes a companion tool matrace
that can be used to track down memory allocation operations in realtime
threads. See the same LAD announcement as above for example output of
this tool.

With help from Boudewijn Rempt I added some compatibility code for
profiling C++/Qt apps with mutrace, which he already used for some interesting
profiling results
on krita.

Finally, after my comments on the locking hotspots in glib’s type system,
Wim Taymans and Edward Hervey worked on turning the mutex-emulated rwlocks
into OS native ones with quite positive results, for more information see this
bug
.

As soon as my review request is fully processed mutrace will be available
in rawhide.

A snapshot tarball of mutrace can be found here
(despite the name of the tarball, that’s just a snapshot, not the real
release 0.1), for all those folks who are afraid of git, or don’t have a
current autoconf/automake/libtool installed.

Oh, and they named a unit after me.

Measuring Lock Contention

Post Syndicated from Lennart Poettering original https://0pointer.net/blog/projects/mutrace.html

When naively profiling multi-threaded applications the time spent waiting
for mutexes is not necessarily visible in the generated output. However lock
contention can have a big impact on the runtime behaviour of applications. On
Linux valgrind’s
drd
can be used to track down mutex contention. Unfortunately running
applications under valgrind/drd slows them down massively, often having the
effect of itself generating many of the contentions one is trying to track
down. Also, due to its slowness, it makes for very time-consuming work.

To improve the situation I have now written a mutex profiler called
mutrace
. In contrast to valgrind/drd it does not virtualize the
CPU instruction set, making it a lot faster. In fact, the hooks mutrace
relies on to profile mutex operations should only minimally influence
application runtime. mutrace is not useful for finding
synchronization bugs; it is solely useful for profiling locks.

Now, enough of this introductory blabla. Let’s have a look at the data
mutrace can generate for you. As an example we’ll look at
gedit as a bit of a prototypical Gnome application. Gtk+ and the other
Gnome libraries are not really known for their heavy use of multi-threading,
and the APIs are generally not thread-safe (for a good reason). However,
internally, subsystems such as gio do use threading quite extensively.
And as it turns out there are a few hotspots that can be discovered with
mutrace:

$ LD_PRELOAD=/home/lennart/projects/mutrace/libmutrace.so gedit
mutrace: 0.1 sucessfully initialized.

gedit is now running and its mutex use is being profiled. For this example
I have now opened a file with it, typed a few letters and then quit the program
again without saving. As soon as gedit exits mutrace will print the
profiling data it gathered to stderr. The full output you can see
here.
The most interesting part is at the end of the generated output, a
breakdown of the most contended mutexes:

mutrace: 10 most contended mutexes:

 Mutex #   Locked  Changed    Cont. tot.Time[ms] avg.Time[ms] max.Time[ms]       Type
      35   368268      407      275      120,822        0,000        0,894     normal
       5   234645      100       21       86,855        0,000        0,494     normal
      26   177324       47        4       98,610        0,001        0,150     normal
      19    55758       53        2       23,931        0,000        0,092     normal
      53      106       73        1        0,769        0,007        0,160     normal
      25    15156       70        1        6,633        0,000        0,019     normal
       4      973       10        1        4,376        0,004        0,174     normal
      75       68       62        0        0,038        0,001        0,004     normal
       9     1663       52        0        1,068        0,001        0,412     normal
       3   136553       41        0       61,408        0,000        0,281     normal
     ...      ...      ...      ...          ...          ...          ...        ...

mutrace: Total runtime 9678,142 ms.

(Sorry, LC_NUMERIC was set to de_DE.UTF-8, so if you can’t make sense of
all the commas, think s/,/./g!)

For each mutex a line is printed. The ‘Locked’ column tells how often the
mutex was locked during the entire runtime of about 10s. The ‘Changed’ column
tells us how often the owning thread of the mutex changed. The ‘Cont.’ column
tells us how often the lock was already taken when we tried to take it and we
had to wait. The fifth column tells us for how long during the entire runtime
the lock was locked, the sixth tells us the average lock time, and the seventh
column tells us the longest time the lock was held. Finally, the last column
tells us what kind of mutex this is (recursive, normal or otherwise).

The most contended lock in the example above is #35. 275 times during the
runtime a thread had to wait until another thread released this mutex. All in
all, more than 120ms of the entire runtime (about 10s) were spent with this
lock taken!

In the full output we can now look up which mutex #35 actually is:

Mutex #35 (0x0x7f48c7057d28) first referenced by:
	/home/lennart/projects/mutrace/libmutrace.so(pthread_mutex_lock+0x70) [0x7f48c97dc900]
	/lib64/libglib-2.0.so.0(g_static_rw_lock_writer_lock+0x6a) [0x7f48c674a03a]
	/lib64/libgobject-2.0.so.0(g_type_init_with_debug_flags+0x4b) [0x7f48c6e38ddb]
	/usr/lib64/libgdk-x11-2.0.so.0(gdk_pre_parse_libgtk_only+0x8c) [0x7f48c853171c]
	/usr/lib64/libgtk-x11-2.0.so.0(+0x14b31f) [0x7f48c891831f]
	/lib64/libglib-2.0.so.0(g_option_context_parse+0x90) [0x7f48c67308e0]
	/usr/lib64/libgtk-x11-2.0.so.0(gtk_parse_args+0xa1) [0x7f48c8918021]
	/usr/lib64/libgtk-x11-2.0.so.0(gtk_init_check+0x9) [0x7f48c8918079]
	/usr/lib64/libgtk-x11-2.0.so.0(gtk_init+0x9) [0x7f48c89180a9]
	/usr/bin/gedit(main+0x166) [0x427fc6]
	/lib64/libc.so.6(__libc_start_main+0xfd) [0x7f48c5b42b4d]
	/usr/bin/gedit() [0x4276c9]

As it appears in this Gtk+ program the rwlock type_rw_lock
(defined in glib’s gobject/gtype.c) is a hotspot. GLib’s rwlocks are
implemented on top of mutexes, so an obvious attempt in improving this could
be to actually make them use the operating system’s rwlock primitives.

If a mutex is used often but only ever by the same thread it cannot
starve other threads. The ‘Changed’ column lists how often a specific
mutex changed the owning thread. If that number is high, the risk of
contention is also high. The ‘Cont.’ column tells you about contention
that actually took place.
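To make those columns concrete, here is a tiny pthreads sketch (hypothetical, not from the mutrace sources) whose single mutex would rack up high ‘Locked’, ‘Changed’ and ‘Cont.’ numbers when run under the libmutrace.so LD_PRELOAD shown above:

```c
#include <pthread.h>

/* Hypothetical demo (not from the mutrace sources): two threads hammer
 * a single mutex, so under mutrace this lock would show high 'Locked',
 * 'Changed' and 'Cont.' counts. */

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long counter;

static void *worker(void *arg)
{
    long iters = *(long *)arg;
    for (long i = 0; i < iters; i++) {
        pthread_mutex_lock(&m);     /* the contended lock */
        counter++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

/* Runs two contending threads; returns the final counter value. */
long run_contended(long iters)
{
    pthread_t a, b;
    counter = 0;
    pthread_create(&a, NULL, worker, &iters);
    pthread_create(&b, NULL, worker, &iters);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Because both threads take turns on the same lock, the owning thread changes constantly and waits are frequent, which is exactly the pattern the ‘Changed’ and ‘Cont.’ columns expose.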

Due to the way mutrace works we cannot profile mutexes that are
used internally in glibc, such as those used for synchronizing stdio
and suchlike.

mutrace is implemented entirely in userspace. It
uses all kinds of exotic GCC, glibc and kernel features, so you might have a
hard time compiling and running it on anything but a very recent Linux
distribution. I have tested it on Rawhide but it should work on slightly older
distributions, too.

Make sure to build your application with -rdynamic to make the
backtraces mutrace generates useful.

As of now, mutrace only profiles mutexes. Support for
rwlocks should be easy to add though. Patches welcome.

The output mutrace generates can be influenced by various
MUTRACE_xxx environment variables. See the sources for more
information.

And now, please take mutrace and profile and speed up your application!

You may find the sources in my
git repository.
