Tag Archives: games

Software Freedom Is Elementary, My Dear Watson.

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2011/03/01/watson.html

I’ve watched the game show, Jeopardy!, regularly since its Trebek-hosted
relaunch on 1984-09-10. I even remember distinctly the Final Jeopardy
question that night: “This date is the first day of the new
millennium.” At the age of 11, I got the answer wrong, falling for
the incorrect “What is 2000-01-01?”, but I recalled this memory
eleven years ago during the debates regarding when the millennium
turnover happened.

I had periods of life where I watched Jeopardy! only
rarely, but in recent years (as I’ve become more of a student of games
(in part, because of poker)), I’ve watched Jeopardy! almost
nightly over dinner with my wife. I’ve learned that I’m unlikely to
excel as a Jeopardy! player myself because (a) I read slowly
and (b) my recall of facts, while reasonably strong, is not
instantaneous. I thus haven’t tried out for the show, but I’m
nevertheless a fan of strong players.

Jeopardy! isn’t my only spectator game. Right after
college, even though I’m a worse-than-mediocre chess player, I watched
with excitement
as Deep
Blue
played and defeated Kasparov. Kasparov has disputed the
results and how much humans were actually involved, but even so, such
interference was minimal (between matches) and the demonstration still
showed computer algorithmic mastery of chess.

Of course, the core algorithms that Deep Blue used were well known and
often implemented. I learned α-β pruning in my undergraduate
AI course, and it was clear that a sufficiently fast computer, given a
few strong heuristics, could master nearly any full-information game with a
reasonable branching factor. And these days, computers typically do.
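
For readers who have never seen it, here is a minimal, self-contained sketch of α-β pruning over a made-up two-ply game tree. The tree, the scores, and the absence of any real heuristics are purely illustrative and have nothing to do with Deep Blue’s actual code:

    /* Minimal sketch of α-β pruning on a toy game tree (illustrative only). */
    #include <stdio.h>
    #include <limits.h>

    typedef struct Node {
        int value;                      /* heuristic score, used at the leaves */
        int nchildren;
        const struct Node *children;
    } Node;

    static int alphabeta(const Node *n, int alpha, int beta, int maximizing)
    {
        if (n->nchildren == 0)
            return n->value;

        int best = maximizing ? INT_MIN : INT_MAX;
        for (int i = 0; i < n->nchildren; i++) {
            int v = alphabeta(&n->children[i], alpha, beta, !maximizing);
            if (maximizing) {
                if (v > best)     best  = v;
                if (best > alpha) alpha = best;
            } else {
                if (v < best)     best  = v;
                if (best < beta)  beta  = best;
            }
            if (alpha >= beta)
                break;                  /* prune: the opponent never allows this branch */
        }
        return best;
    }

    int main(void)
    {
        /* Two-ply toy tree: MAX to move, three MIN replies under each branch. */
        const Node leaves1[] = {{3, 0, 0}, {5, 0, 0}, {6, 0, 0}};
        const Node leaves2[] = {{7, 0, 0}, {4, 0, 0}, {5, 0, 0}};
        const Node leaves3[] = {{2, 0, 0}, {8, 0, 0}, {9, 0, 0}};
        const Node mins[]    = {{0, 3, leaves1}, {0, 3, leaves2}, {0, 3, leaves3}};
        const Node root      = {0, 3, mins};

        printf("minimax value: %d\n", alphabeta(&root, INT_MIN, INT_MAX, 1));
        return 0;
    }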

I suppose I never really thought about the issues of Deep Blue being
released as Free Software. First, because I was not as involved with
Free Software then as I am now, and also, as near as anyone could tell,
Deep Blue’s software was probably not useful for anything other than
playing chess, and its primary power was in its ability to go very deep
(hence the name, I guess) in the search tree. In short, Deep Blue was
primarily a hardware, not a software, success story.

It was nevertheless impressive, and last month I saw the next
installment in this IBM story. I watched with interest
as IBM’s
Watson defeated two champion Jeopardy! players
. Ken
Jennings, for one, even welcomed our new computer overlords.

Watson beating Jeopardy! is, frankly, a lot more
innovative than Deep Blue beating chess. Most don’t know this about me,
but I came very close to focusing my career on PhD work in Natural
Language Processing; I believe fundamentally it’s the area of AI most in
need of attention and research. Watson is a shining example of success
in modern NLP, and I actually believe some of the IBM hype about
how Watson’s
technology can be applied elsewhere, such as medical information
systems
. Indeed, IBM
has announced
a deal with Columbia University Medical Center to adapt the system for
medical diagnostics
. (Perhaps Watson’s next TV appearance will be
on House.)

This all sounds great to most people, but to me, my real concern is the
freedom of the software. We’ve shown in the software freedom community
that to advance software and improve it, sharing the software is
essential. Technology locked up in a vaulted cave doesn’t allow all the
great minds to collaborate. Just as we don’t lock up libraries so that
only the gilded overlords have access, neither should the best software
technology be locked away under proprietary licenses.

Indeed, Eric
Brown
, at
his Linux
Foundation End User Linux Summit talk
, told us that Watson relied
heavily on the publicly available software freedom codebase, such as
GNU/Linux, Hadoop, and other
FLOSS
components. They clearly couldn’t do their work without building upon the
work we shared with IBM, yet IBM apparently ignores its moral obligation to
reciprocate.

So, I just point-blank asked Brown why Watson is proprietary. Of
course, I long ago learned to never ask a confrontational question from
the crowd at a technical talk without knowing what the answer is likely to
be. Brown answered in the way I expected: “We’re working with
Universities to provide a framework for their research.” I followed
up, asking when he would actually release the sources and what the
license would be. He dodged the question, and instead speculated about
what licenses IBM sometimes likes to use when it does choose to release code;
he did not indicate if Watson’s sources will ever be released. In
short, the answer from IBM is clear: Watson’s general ideas
will be shared with academics, but the source code won’t be.

This point is precisely one of the reasons I didn’t pursue a career in
academic Computer Science. Since most jobs — including
professorships at Universities — for PhDs in Computer Science
require that any code written be kept proprietary, most
Computer Science researchers have convinced themselves that code doesn’t
matter; only publishing ideas does. This belief is so pervasive that I
knew something like this would be Brown’s response to my query. (I was
so sure, in fact, that I wrote almost this entire blog post before I asked
the question.)

I’d easily agree that publishing papers is better than the technology
being only a trade secret. At least we can learn a little bit about the
work. But in all but the purely theoretical areas of Computer
Science, code is written to exemplify, test, and exercise the
ideas. Merely publishing papers and not the code is akin to a chemist
publishing final results but nothing about the methodologies or raw
data. Science, in such cases, is unverifiable and unreproducible. If
we accepted such in fields other than CS, we’d have accepted the idea
that cold
fusion was discovered in 1989
.

I don’t think I’m going to convince IBM to release Watson’s sources as
Free Software. What I do hope is that perhaps this blog post convinces
a few more people that we just shouldn’t accept that Computer Science is
advanced by researchers who give us flashy demos and code-less
research papers. I, for one, welcome our computer overlords…but only
if I can study and modify their source code.

Proprietary Software Licensing Produces No New Value In Society

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2010/07/07/producing-nothing.html

I sought out the quote below when Chris Dodd paraphrased it on Meet
The Press on 25 April 2010. (I’ve been, BTW, slowly but surely
working on this blog post since that date.) Dodd
was quoting Frank
Rich, who wrote the following, referring to the USA economic
system
(and its recent collapse):

As many have said — though not many politicians in either party
— something is fundamentally amiss in a financial culture that
thrives on “products” that create nothing and produce nothing
except new ways to make bigger bets and stack the deck in favor of the
house. “At least in an actual casino, the damage is contained to
gamblers,” wrote the financial journalist Roger Lowenstein in The
Times Magazine last month. This catastrophe cost the economy eight million
jobs.

I was drawn to this quote for a few reasons. First, as a poker player,
I’ve spent some time thinking about how “empty” the gambling
industry is. Nothing is produced; no value for humans is created; it’s
just an exchange of money for things that don’t actually exist. I’ve
been considering that issue regularly since around 2001 (when I started
playing poker seriously). I ultimately came to a conclusion not too
different from Frank Rich’s point: since there is a certain
“entertainment value”, and since the damage is contained to
those who choose to enter the casino, I’m not categorically against poker
nor gambling in general, nor do I think they are immoral. However, I
also don’t believe gambling has any particular important value in
society, either. In other words, I don’t think people have an
inalienable right to gamble, but I also don’t think there is any moral
reason to prohibit casinos.

Meanwhile, I’ve also spent some time applying this idea of creating
nothing and producing nothing to the proprietary software
industry. Proprietary licenses, in many ways, are actually not all
that different from these valueless financial transactions.
Initially, there’s no problem: someone writes software and is paid for
it; that’s the way it should be. Creation of new software is an
activity that should absolutely be funded: it creates something new
and valuable for others. However, proprietary licenses are designed
specifically to allow a single act of programming to generate new revenue
over and over again. In this aspect, proprietary licensing is akin to
selling financial derivatives: the actual valuable transaction is
buried well below the non-existent financial construction above
it.

I admit that I’m not a student of economics. In fact, I rarely think
of software in terms of economics, because, generally, I don’t want
economic decisions to drive my morality nor that of our society at
large. As such, I don’t approach this question with an academic
economic slant, but rather, from personal economic experience.
Specifically, I learned a simple concept about work when I was young:
workers in our society get paid only for the hours that they
work. To get paid, you have to do something new. You just can’t sit
around and have money magically appear in your bank account for hours
you didn’t work.

I always approached software with this philosophy. I’ve often been
paid for programming, but I’ve been paid directly for the hours I spent
programming. I never even considered it reasonable to be paid again for
programming I did in the past. How is that fair, just, or quite
frankly, even necessary? If I get a job building a house, I can’t get
paid every day someone uses that house. Indeed, even if I built the
house, I shouldn’t get a royalty paid every time the house is resold to
a new owner [0]. Why
should software work any differently? Indeed, there’s even an argument
that software, since it’s so much more trivial to copy than a
house, should be available gratis to everyone once it’s written the
first time.

I recently heard (for the first time) an old story about a well-known
Open Source company (which no longer exists, in case you’re wondering).
As the company grew larger, the company’s owners were annoyed that
the company could
only bill the clients for the hours they worked. The business
was going well, and they even had more work than they could handle
because of the unique expertise of their developers. The billable rates
covered the cost of the developers’ salaries plus a reasonable
profit margin. Yet, the company executives wanted more; they wanted
to make new money even when everyone was on vacation. In
essence, having all the new, well-paid programming work in the world
wasn’t enough; they wanted the kinds of obscene profits that can only be
made from proprietary licensing. Having learned this story, I’m pretty
glad the company ceased to exist before they could implement
their “make money while everyone’s on the beach” plan. Indeed, the
first order of business in implementing the company’s new plan was, not
surprisingly, developing some new from-scratch code not covered by GPL
that could be proprietarized. I’m glad they never had time to execute
on that plan.

I’ll just never be fully comfortable with the idea that workers should
get money for work they already did. Work is only valuable if it
produces something new that didn’t exist in the world before the work
started, or solves a problem that had yet to be solved. Proprietary
licensing and financial bets on market derivatives have something
troubling in common: they can make a profit for someone without
requiring that someone do any new work. Any time a business moves
away from actually producing something new of value for a real human
being, I’ll always question whether the business remains legitimate.

I’ve thus far ignored one key point in the quote that began this post:
“At least in an actual casino, the damage is contained to
gamblers”. Thus, for this “valueless work” idea to
apply to proprietary licensing, I had to consider (a) whether or not the
problem is sufficiently contained, and (b) whether or not software is
akin to a mere entertainment activity, as gambling is.

I’ve pointed out that I’m not opposed to the gambling industry, because
the entertainment value exists and the damage is contained to people who
want that particular entertainment. To avoid the stigma associated with
gambling, I can also make a less politically charged example such as the
local Chuck E. Cheese, a place I quite enjoyed as a child. One’s parent
or guardian goes to Chuck E. Cheese to pay for a child’s entertainment,
and there is some value in that. If someone had an issue with Chuck
E. Cheese’s operation, it’d be easy to just ignore it and not take your
children there, finding some other entertainment. So, the question is,
does proprietary software work the same way, and is it therefore not too
damaging?

I think the excuse doesn’t apply to proprietary software for two
reasons. First, the damage is not sufficiently contained, particularly
for widely used software. It is, for example, roughly impossible to get
a job that doesn’t require the employee to use some proprietary
software. Imagine if we lived in a society where you weren’t allowed to
work for a living unless you agreed to gamble a certain part of your
weekly salary on Blackjack. Of course, this situation is not fully
analogous, but the fundamental principle applies: software is ubiquitous
enough in industrialized society that it’s roughly impossible to avoid
encountering it in daily life. Therefore, the proprietary software
situation is not adequately contained, and is difficult for individuals
to avoid.

Second, software is not merely a diversion. Our society has changed
enough that people cannot work effectively in the society without at
least sometimes using software. Therefore, the
“entertainment” part of the containment theory does not
properly
apply [1],
either. If citizens are de-facto required to use something to live
productively, it must have different rules and control structures around
it than wholly optional diversions.

Thus, this line of reasoning gives me yet another reason to oppose
proprietary software: proprietary licensing is simply a valueless
transaction. It creates a burden on society and gives no benefit, other
than a financial one to those granted the monopoly over that particular
software program. Unfortunately, there nevertheless remain many who
want that level of control, because one fact cannot be denied: the
profits are larger.

For
example, Mårten
Mickos recently argued in favor of these sorts of large profits
. He
claims that to benefit massively from Open Source (i.e., to get
really
rich), business
models like “Open Core”
are necessary. Mårten’s
argument, and indeed most pro-Open-Core arguments, rely on this
following fundamental assumption: for FLOSS to be legitimate, it must
allow for the same level of profits as proprietary software. This
assumption, in my view, is faulty. It’s always true that you can make
bigger profits by ignoring morality. Factories can easily make more
money by completely ignoring environmental issues; strip mining is
always very profitable, after all. However, as a society, we’ve decided
that the environment is worth protecting, so we have rules that do limit
profit maximization because a more important goal is served.

Software freedom is another principle of this type. While
you can make a profit with community-respecting FLOSS business
models (such as service, support and freely licensed custom
modifications on contract), it’s admittedly a smaller profit than can be
made with Open Core and proprietary licensing. But that greater profit
potential doesn’t legitimize such business models, just as it doesn’t
legitimize strip mining or gambling on financial derivatives.

Update: Based on some feedback that I got, I felt it
was important to make clear that I don’t believe this argument alone can
create a unified theory that shows why software freedom should be an
inalienable right for all software users. This factor of lack of value
that proprietary licensing brings to society is just another to consider
in a more complete discussion about software freedom.

Update: Glynn
Moody
wrote
a blog
post that quoted from this post extensively and made some interesting
comments
on it. There’s some interesting discussion in the blog
comments there on his site; perhaps that’s because so many people hate that I
only do blog comments on identi.ca (which I do, BTW, because it’s the
only online forum I’m assured that I’ll actually read and respond
to.)

[0] I
realize that some argue that you can buy a house, then rent it to others,
and evict them if they fail to pay. Some might argue further that owners
of software should get this same rental power. The key difference,
though, is that the house owner can’t really make full use of the house
when it’s being rented. The owner’s right to rent it to others,
therefore, is centered around the idea that the owner loses some of their
personal ability to use the house while the renters are present. This
loss of use never happens with software.

[1] You might be wondering: “Ok, so if it’s pure entertainment software, is it
acceptable for it to be proprietary?” I have often said: if all
published and deployed software in the world were guaranteed Free
Software except for video games, I wouldn’t work on the
cause of software freedom anymore. Ultimately, I am not particularly
concerned about the control structures in our culture that exist for pure
entertainment. I suppose there’s some line to be drawn between
art/culture and pure entertainment/diversion, but considerations on
differentiating control structures on that issue are beyond the scope of
this blog post.

PulseAudio and Jack

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/when-pa-and-when-not.html

One thing became very clear to me during my trip to the Linux Audio Conference 2010
in Utrecht: even many pro audio folks are not sure what Jack does that PulseAudio doesn’t do and what
PulseAudio does that Jack doesn’t do; why they are not competing, why
you cannot replace one by the other, and why merging them (at least in
the short term) might not make immediate sense. In other words, why
millions of phones on this world run PulseAudio and not Jack, and why
a music studio running PulseAudio is crack.

To clear this up a bit and for future reference I’ll try to explain in the
following text why there is this separation between the two systems and why this isn’t
necessarily bad. This is mostly a written up version of (parts of) my slides
from LAC
, so if you attended that event you might find little new, but I hope
it is interesting nonetheless.

This is mostly written from my perspective as a hacker working on
consumer audio stuff (more specifically having written most of
PulseAudio), but I am sure most pro audio folks would agree with the
points I raise here, and have more things to add. What I explain below
is in no way comprehensive, just a list of a couple of points I think
are the most important, as they touch the very core of both
systems (and we ignore all the toppings here, i.e. sound effects, yadda, yadda).

First of all let’s clear up the background of the sound server use cases here:

Consumer Audio (i.e. PulseAudio) vs. Pro Audio (i.e. Jack):

  • Consumer: Reducing power usage is a defining requirement; most systems are battery powered (laptops, cell phones).
    Pro: Power usage is usually not an issue; power comes out of the wall.

  • Consumer: Must support latencies low enough for telephony and games, but also covers high-latency uses such as movie and music playback (2s of latency is a good choice).
    Pro: Minimal latencies are a defining requirement.

  • Consumer: The system is highly dynamic, with applications starting/stopping and hardware added and removed all the time.
    Pro: The system is usually static in its configuration during operation.

  • Consumer: The user is usually not proficient in the technologies used.[1]
    Pro: The user is usually a professional and knows audio technology and computers well.

  • Consumer: The user is not necessarily the administrator of his machine and might have limited access.
    Pro: The user usually administrates his own machines and has root privileges.

  • Consumer: Audio is just one use of the system among many, and often just a background job.
    Pro: Audio is the primary purpose of the system.

  • Consumer: Hardware tends to have limited resources and to be crappy and cheap.
    Pro: Hardware is powerful, expensive and high quality.

Of course, things are often not as black and white as this; there are uses
that fall in the middle of these two areas.

From the comparison above a few conclusions may be drawn:

A consumer sound system must support both low and high latency operation.
Since low latencies mean high CPU load and hence high power
consumption[2] (Heisenberg…), a system should always run with the
highest latency possible, but the lowest latency necessary.

Since the consumer system is highly dynamic in its use, latencies must be
adjusted dynamically too. That makes a design such as PulseAudio’s timer-based scheduling important.

A pro audio system’s primary optimization target is low latency. Low
power usage, dynamically changeable configuration (i.e. a short drop-out while you
change your pipeline is acceptable) and user-friendliness may be sacrificed for
that.

For large buffer sizes a zero-copy design suggests itself: since data
blocks are large the cache pressure can be considerably reduced by zero-copy
designs. Only for large buffers is the cost of passing pointers around
considerably smaller than the cost of passing around the data itself (or the
other way round: if your audio data has the same size as your pointers, then
passing pointers around is useless extra work).

On a resource constrained system the ideal audio pipeline does not touch
and convert the data passed along it unnecessarily. That makes it important to
support natively the sample types and interleaving modes of the audio source or
destination.

A consumer system needs to simplify the view on the hardware and hide its
complexity: hide redundant mixer elements, or merge them while making use of
the hardware capabilities, and extend them in software so that the same
functionality is provided on all hardware. A production system should not hide
or simplify the hardware functionality.

A consumer system should not drop out when a client misbehaves or the
configuration changes (OTOH if it happens in exceptional cases it is not disastrous
either). A synchronous pipeline is hence not advisable, clients need to supply
their data asynchronously.

In a pro audio system a drop-out during reconfiguration is acceptable,
during operation unacceptable.

In consumer audio we need to make compromises on resource usage,
which pro audio does not have to make. Example: a pro audio
system can issue memlock() with little limitations since the
hardware is powerful (i.e. a lot of RAM available) and audio is the
primary purpose. A consumer audio system cannot do that because that
call practically makes memory unavailable to other applications,
increasing their swap pressure. And since audio is not the primary
purpose of the system and resources are limited, we hence need to find a
different way.

Jack has been designed for low latencies, where synchronous
operation is advisable, meaning that a misbehaving client can stall
the entire pipeline. Changes of the pipeline or latencies usually
result in drop-outs in one way or the other, since the entire pipeline
is reconfigured, from the hardware to the various clients. Jack only
supports FLOAT32 samples and non-interleaved audio channels (and that
is a good thing). Jack does not employ reference-counted zero-copy
buffers. It does not try to simplify the hardware mixer in any
way.

PulseAudio OTOH can deal with varying latencies, dynamically
adjusting to the lowest latencies any of the connected clients
needs
. Client communication is fully asynchronous, a single client
cannot stall the entire pipeline. PulseAudio supports a variety of PCM
formats and channel setups. PulseAudio’s design is heavily based on
reference-counted zero-copy buffers that are passed around, even
between processes, instead of the audio data itself. PulseAudio tries
to simplify the hardware mixer as suggested above.
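
To make the reference-counting idea concrete, here is a minimal sketch of a shared audio block in C. It only illustrates the concept; PulseAudio’s actual memblock code additionally deals with shared memory so that blocks can be handed between processes without copying:

    /* Sketch of a reference-counted audio buffer: consumers share one
     * allocation and only pass pointers around instead of copying samples. */
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        unsigned refcount;
        size_t   length;
        void    *data;
    } AudioBlock;

    static AudioBlock *block_new(const void *data, size_t length)
    {
        AudioBlock *b = malloc(sizeof *b);
        b->refcount = 1;
        b->length   = length;
        b->data     = malloc(length);
        memcpy(b->data, data, length);      /* the one and only copy */
        return b;
    }

    static AudioBlock *block_ref(AudioBlock *b)
    {
        b->refcount++;                      /* another consumer now holds the block */
        return b;
    }

    static void block_unref(AudioBlock *b)
    {
        if (--b->refcount == 0) {           /* last holder frees the payload */
            free(b->data);
            free(b);
        }
    }

    int main(void)
    {
        short samples[64] = {0};
        AudioBlock *b        = block_new(samples, sizeof samples);
        AudioBlock *for_sink = block_ref(b);    /* e.g. handed to an output stream */

        block_unref(b);                         /* producer is done with it */
        block_unref(for_sink);                  /* sink is done; payload freed here */
        return 0;
    }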

Now, the two paragraphs above hopefully show how Jack is more
suitable for the pro audio use case and PulseAudio more for the
consumer audio use case. One question arises though: can we marry
the two approaches? Yes, we probably can; MacOS has a unified approach
for both uses. However, it is not clear this would be a good
idea. First of all, a system with the complexities introduced by
sample format/channel mapping conversion, as well as dynamically
changing latencies and pipelines, and asynchronous behaviour would
certainly be much less attractive to pro audio developers. In fact,
that Jack limits itself to synchronous, FLOAT32-only,
non-interleaved-only audio streams is one of the big features of its
design. Marrying the two approaches would corrupt that. A merged
solution would probably not be well received in the community.

But it goes even further than this: what would the use case for
this be? After all, most of the time, you don’t want your event
sounds, your Youtube, your VoIP and your Rhythmbox mixed into the new
record you are producing. Hence a clear separation between the two
worlds might even be handy?

Also, let’s not forget that we lack the manpower to even create
such an audio chimera.

So, where to from here? Well, I think we should put the focus on
cooperation instead of amalgamation: teach PulseAudio to go out of the
way as soon as Jack needs access to the device, and optionally make
PulseAudio a normal JACK client while both are running. That way, the
user has the option to use the PulseAudio supplied streams, but
normally does not see them in his pipeline. The first part of this has
already been implemented: Jack2 and PulseAudio do not fight for the
audio device, a friendly handover takes place. Jack takes precedence,
PulseAudio takes the back seat. The second part is still missing: you
still have to manually hook up PulseAudio to Jack if you are interested
in its streams. Once both parts are implemented, starting Jack basically has
the effect of replacing PulseAudio’s core with the Jack core, while
still providing full compatibility with PulseAudio clients.

And that I guess is all I have to say on the entire Jack and
PulseAudio story.

Oh, one more thing, while we are clearing things up: some news
sites claim that PulseAudio’s not necessarily stellar reputation in
some parts of the community comes from Ubuntu and other distributions
having integrated it too early. Well, let me stress here explicitly,
that while they might have made a mistake or two in packaging
PulseAudio and I publicly pointed that out (and probably not in a too
friendly way), I do believe that the point in time they adopted it was
right. Why? Basically, it’s a chicken and egg problem. If it is not
used in the distributions it is not tested, and there is no pressure
to get fixed what then turns out to be broken: in PulseAudio itself,
and in both the layers on top and below of it. Don’t forget that
pushing a new layer into an existing stack will break a lot of
assumptions that the neighboring layers made. Doing this must
break things. Most Free Software projects could probably use more
developers, and that is particularly true for Audio on Linux. And
given that that is how it is, pushing the feature in at that point in
time was the right thing to do. Or in other words, if the features are
right, and things do work correctly as far as the limited test base
the developers control shows, then one day you need to push into the
distributions, even if this might break setups and software that
previously have not been tested, unless you want to stay stuck in your
development indefinitely. So yes, Ubuntu, I think you did well with
adopting PulseAudio when you did.

Footnotes

[1] Side note: yes, consumers tend not to know what dB is, and expect
volume settings in “percentages”, a mostly meaningless unit in
audio. This even spills into projects like VLC or Amarok which expose
linear volume controls (which is a really bad idea).

[2] In case you are wondering why that is the case: if the latency is
low the buffers must be sized smaller. And if the buffers are sized smaller
then the CPU will have to wake up more often to fill them up for the same
playback time. This drives up the CPU load since less actual payload can be
processed for the amount of housekeeping that the CPU has to do during each
buffer iteration. Also, frequent wake-ups make it impossible for the CPU to go
to deeper sleep states. Sleep states are the primary way for modern CPUs
to save power.
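
To put rough numbers on footnote [2], here is a back-of-the-envelope calculation; the sample rate and buffer sizes are arbitrary example values:

    /* Halving the buffer size doubles how often the CPU must wake up to refill it. */
    #include <stdio.h>

    int main(void)
    {
        const double rate     = 44100.0;                   /* frames per second */
        const int    frames[] = {64, 256, 1024, 16384};    /* buffer sizes in frames */

        for (unsigned i = 0; i < sizeof frames / sizeof frames[0]; i++) {
            double latency_ms  = 1000.0 * frames[i] / rate;
            double wakeups_sec = rate / frames[i];
            printf("%6d frames -> %8.2f ms latency, %8.1f wakeups/s\n",
                   frames[i], latency_ms, wakeups_sec);
        }
        return 0;
    }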

A Guide Through The Linux Sound API Jungle

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/guide-to-sound-apis.html

At the Audio MC at the Linux Plumbers Conference one
thing became very clear: it is very difficult for programmers to
figure out which audio API to use for which purpose and which API not
to use when doing audio programming on Linux. So here’s my try to
guide you through this jungle:

What do you want to do?

I want to write a media-player-like application!
Use GStreamer! (Unless your focus is only KDE in which case Phonon might be an alternative.)

I want to add event sounds to my application!
Use libcanberra, install your sound files according to the XDG Sound Theming/Naming Specifications! (Unless your focus is only KDE in which case KNotify might be an alternative although it has a different focus.)

I want to do professional audio programming, hard-disk recording, music synthesizing, MIDI interfacing!
Use JACK and/or the full ALSA interface.

I want to do basic PCM audio playback/capturing!
Use the safe ALSA subset.

I want to add sound to my game!
Use the audio API of SDL for full-screen games, libcanberra for simple games with standard UIs such as Gtk+.

I want to write a mixer application!
Use the layer you want to support directly: if you want to support enhanced desktop software mixers, use the PulseAudio volume control APIs. If you want to support hardware mixers, use the ALSA mixer APIs.

I want to write audio software for the plumbing layer!
Use the full ALSA stack.

I want to write audio software for embedded applications!
For technical appliances usually the safe ALSA subset is a good choice, this however depends highly on your use-case.

You want to know more about the different sound APIs?

GStreamer
GStreamer is the de-facto
standard media streaming system for Linux desktops. It supports decoding and
encoding of audio and video streams. You can use it for a wide range of
purposes from simple audio file playback to elaborate network
streaming setups. GStreamer supports a wide range of CODECs and audio
backends. GStreamer is not particularly suited for basic PCM playback
or low-latency/realtime applications. GStreamer is portable and not
limited in its use to Linux. Among the supported backends are ALSA, OSS, PulseAudio. [Programming Manuals and References]
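
As a rough illustration of how high-level GStreamer playback looks, here is a minimal sketch using the playbin element; the file URI is a made-up example and error handling is reduced to the bare minimum:

    /* Minimal GStreamer playback sketch using playbin. */
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        GstElement *play = gst_element_factory_make("playbin", "play");
        g_object_set(play, "uri", "file:///tmp/example.ogg", NULL);
        gst_element_set_state(play, GST_STATE_PLAYING);

        /* Block until end-of-stream or an error is posted on the bus. */
        GstBus *bus = gst_element_get_bus(play);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                                                     GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
        if (msg)
            gst_message_unref(msg);
        gst_object_unref(bus);

        gst_element_set_state(play, GST_STATE_NULL);
        gst_object_unref(play);
        return 0;
    }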

libcanberra
libcanberra
is an abstract event sound API. It implements the XDG
Sound Theme and Naming Specifications
. libcanberra is a blessed
GNOME dependency, but itself has no dependency on GNOME/Gtk/GLib and can be
used with other desktop environments as well. In addition to an easy
interface for playing sound files, libcanberra provides caching
(which is very useful for networked thin clients) and allows passing
of various meta data to the underlying audio system which then can be
used to enhance user experience (such as positional event sounds) and
for improving accessibility. libcanberra supports multiple backends
and is portable beyond Linux. Among the supported backends are ALSA, OSS, PulseAudio, GStreamer. [API Reference]
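
Here is a minimal sketch of what an event sound played through libcanberra looks like; the event id “message-new-instant” is one of the names from the XDG sound naming spec, and error handling is omitted:

    /* Minimal libcanberra event sound sketch. */
    #include <canberra.h>

    int main(void)
    {
        ca_context *c = NULL;

        ca_context_create(&c);
        ca_context_play(c, 0,
                        CA_PROP_EVENT_ID, "message-new-instant",
                        CA_PROP_EVENT_DESCRIPTION, "New instant message received",
                        NULL);

        /* ca_context_play() is asynchronous; a real application would keep
         * the context around instead of tearing it down right away. */
        ca_context_destroy(c);
        return 0;
    }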

JACK

JACK is a sound system for
connecting professional audio production applications and hardware
output. Its focus is low latency and application interconnection. It
is not useful for normal desktop or embedded use. It is not an API
that is particularly useful if all you want to do is simple PCM
playback. JACK supports multiple backends, although ALSA is best
supported. JACK is portable beyond Linux. Among the supported backends are ALSA, OSS. [API Reference]
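
A minimal sketch of a JACK client that simply copies its input port to its output port; client and port names are arbitrary and error handling is omitted:

    /* Minimal JACK passthrough client sketch. */
    #include <string.h>
    #include <unistd.h>
    #include <jack/jack.h>

    static jack_port_t *in_port, *out_port;

    /* Called from JACK's realtime thread once per period. */
    static int process(jack_nframes_t nframes, void *arg)
    {
        jack_default_audio_sample_t *in  = jack_port_get_buffer(in_port, nframes);
        jack_default_audio_sample_t *out = jack_port_get_buffer(out_port, nframes);
        memcpy(out, in, nframes * sizeof(jack_default_audio_sample_t));
        return 0;
    }

    int main(void)
    {
        jack_client_t *client = jack_client_open("passthrough", JackNullOption, NULL);

        jack_set_process_callback(client, process, NULL);
        in_port  = jack_port_register(client, "in",  JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsInput, 0);
        out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                      JackPortIsOutput, 0);

        jack_activate(client);
        sleep(60);                  /* run for a while; a real client would do more */
        jack_client_close(client);
        return 0;
    }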

Full ALSA

ALSA is the Linux API
for doing PCM playback and recording. ALSA is very focused on
hardware devices, although other backends are supported as well (to a
limited degree, see below). ALSA as a name is used both for the Linux
audio kernel drivers and a user-space library that wraps these. ALSA — the library — is
comprehensive, and portable (to a limited degree). The full ALSA API
can appear very complex and is large. However it supports almost
everything modern sound hardware can provide. Some of the
functionality of the ALSA API is limited in its use to actual hardware
devices supported by the Linux kernel (in contrast to software sound
servers and sound drivers implemented in user-space such as those for
Bluetooth and FireWire audio — among others) and Linux specific
drivers. [API
Reference
]

Safe ALSA

Only a subset of the full ALSA API works on all backends ALSA
supports. It is highly recommended to stick to this safe subset
if you do ALSA programming to keep programs portable, future-proof and
compatible with sound servers, Bluetooth audio and FireWire audio. See
below for more details about which functions of ALSA are considered
safe. The safe ALSA API is a suitable abstraction for basic,
portable PCM playback and recording — not just for ALSA kernel driver
supported devices. Among the supported backends are ALSA kernel driver
devices, OSS, PulseAudio, JACK.
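
Here is a minimal playback sketch that stays within the safe subset: the default device string, snd_pcm_set_params() for setup, interleaved S16_LE writes, and snd_pcm_recover() for error handling; the sine tone is just filler data:

    /* Minimal "safe ALSA" playback sketch (link with -lasound -lm). */
    #include <alsa/asoundlib.h>
    #include <math.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        short buf[1024 * 2];                    /* 1024 interleaved stereo frames */
        const double tau = 6.283185307179586;

        snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0);

        /* 1 = let ALSA resample if needed; 500000 = request ~0.5 s of buffering. */
        snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           2, 44100, 1, 500000);

        for (int n = 0; n < 200; n++) {
            for (int i = 0; i < 1024; i++) {
                double t = (double)(n * 1024 + i) / 44100.0;
                short  s = (short)(3000.0 * sin(tau * 440.0 * t));
                buf[2 * i] = buf[2 * i + 1] = s;    /* same tone on both channels */
            }
            snd_pcm_sframes_t written = snd_pcm_writei(pcm, buf, 1024);
            if (written < 0)
                snd_pcm_recover(pcm, written, 0);   /* handle under-runs, suspend, etc. */
        }

        snd_pcm_drain(pcm);
        snd_pcm_close(pcm);
        return 0;
    }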

Phonon and KNotify

Phonon is a high-level
abstraction for media streaming systems such as GStreamer, but goes a
bit further than that. It supports multiple backends. KNotify is a
system for “notifications”, which goes beyond mere event
sounds. However it does not support the XDG Sound Theming/Naming
Specifications at this point, and also doesn’t support caching or
passing of event meta-data to an underlying sound system. KNotify
supports multiple backends for audio playback via Phonon. Both APIs
are KDE/Qt specific and should not be used outside of KDE/Qt
applications. [Phonon API Reference] [KNotify API Reference]

SDL

SDL is a portable API
primarily used for full-screen game development. Among other stuff it
includes a portable audio interface. Among others SDL supports OSS,
PulseAudio, ALSA as backends. [API Reference]
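
For completeness, a minimal sketch of SDL (1.2) audio output, where SDL pulls data from a callback; the callback here only produces silence, but this is where a game would do its mixing:

    /* Minimal SDL 1.2 audio callback sketch. */
    #include <string.h>
    #include <SDL/SDL.h>

    static void fill_audio(void *userdata, Uint8 *stream, int len)
    {
        memset(stream, 0, len);         /* silence; replace with real mixing */
    }

    int main(void)
    {
        SDL_AudioSpec want;

        SDL_Init(SDL_INIT_AUDIO);

        memset(&want, 0, sizeof want);
        want.freq     = 44100;
        want.format   = AUDIO_S16SYS;
        want.channels = 2;
        want.samples  = 1024;           /* callback granularity in sample frames */
        want.callback = fill_audio;

        SDL_OpenAudio(&want, NULL);
        SDL_PauseAudio(0);              /* start the callback */
        SDL_Delay(2000);                /* "play" for two seconds */

        SDL_CloseAudio();
        SDL_Quit();
        return 0;
    }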

PulseAudio

PulseAudio is a sound system
for Linux desktops and embedded environments that runs in user-space
and (usually) on top of ALSA. PulseAudio supports network
transparency, per-application volumes, spatial event sounds, allows
switching of sound streams between devices on-the-fly, policy
decisions, and many other high-level operations. PulseAudio adds a glitch-free
audio playback model to the Linux audio stack. PulseAudio is not
useful in professional audio production environments. PulseAudio is
portable beyond Linux. PulseAudio has a native API and also supports
the safe subset of ALSA, in addition to limited,
LD_PRELOAD-based OSS compatibility. Among others PulseAudio supports
OSS and ALSA as backends and provides connectivity to JACK. [API
Reference
]
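
For illustration only (as noted below, the safe ALSA subset is usually the better choice for plain playback), here is a minimal sketch of PulseAudio’s “simple” blocking API; the application and stream names are arbitrary and error handling is omitted:

    /* Minimal PulseAudio "simple API" playback sketch (link with -lpulse-simple). */
    #include <pulse/simple.h>

    int main(void)
    {
        static short silence[44100 * 2];        /* one second of stereo silence */

        pa_sample_spec spec = {
            .format   = PA_SAMPLE_S16LE,
            .rate     = 44100,
            .channels = 2
        };

        pa_simple *s = pa_simple_new(NULL,               /* default server */
                                     "example-app",      /* application name */
                                     PA_STREAM_PLAYBACK,
                                     NULL,               /* default device */
                                     "example playback", /* stream description */
                                     &spec, NULL, NULL, NULL);
        if (!s)
            return 1;

        pa_simple_write(s, silence, sizeof silence, NULL);
        pa_simple_drain(s, NULL);
        pa_simple_free(s);
        return 0;
    }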

OSS

The Open Sound System is a
low-level PCM API supported by a variety of Unixes including Linux. It
started out as the standard Linux audio system and is supported on
current Linux kernels in the API version 3 as OSS3. OSS3 is considered
obsolete and has been fully replaced by ALSA. A successor to OSS3
called OSS4 is available but plays virtually no role on Linux and is
not supported in standard kernels or by any of the relevant
distributions. The OSS API is very low-level, based around direct
kernel interfacing using ioctl()s. It is hence awkward to use and
can practically not be virtualized for usage on non-kernel audio
systems like sound servers (such as PulseAudio) or user-space sound
drivers (such as Bluetooth or FireWire audio). OSS3’s timing model
cannot properly be mapped to software sound servers at all, and is
also problematic on non-PCI hardware such as USB audio. Also, OSS does
not do sample type conversion, remapping or resampling if
necessary. This means that clients that properly want to support OSS
need to include a complete set of converters/remappers/resamplers for
the case when the hardware does not natively support the requested
sampling parameters. With modern sound cards it is very common to
support only S32LE samples at 48KHz and nothing else. If an OSS client
assumes it can always play back S16LE samples at 44.1KHz it will thus
fail. OSS3 is portable to other Unix-like systems, various differences
however apply. OSS also doesn’t support surround sound and other
functionality of modern sounds systems properly. OSS should be
considered obsolete and not be used in new applications. ALSA and
PulseAudio have limited LD_PRELOAD-based compatibility with OSS. [Programming Guide]

All sound systems and APIs listed above are supported in all
relevant current distributions. For libcanberra support the newest
development release of your distribution might be necessary.

All sound systems and APIs listed above are suitable for
development for commercial (read: closed source) applications, since
they are licensed under LGPL or more liberal licenses or no client
library is involved.

You want to know why and when you should use a specific sound API?

GStreamer

GStreamer is best used for very high-level needs: i.e. you want to
play an audio file or video stream and do not care about all the tiny
details down to the PCM or codec level.

libcanberra

libcanberra is best used when adding sound feedback to user input
in UIs. It can also be used to play simple sound files for
notification purposes.

JACK

JACK is best used in professional audio production and where interconnecting applications is required.

Full ALSA

The full ALSA interface is best used for software on the “plumbing layer” or when you want to make use of very specific hardware features, which might be needed for audio production purposes.

Safe ALSA

The safe ALSA interface is best used for software that wants to output/record basic PCM data from hardware devices or software sound systems.

Phonon and KNotify

Phonon and KNotify should only be used in KDE/Qt applications and only for high-level media playback and simple audio notifications, respectively.

SDL

SDL is best used in full-screen games.

PulseAudio

For now, the PulseAudio API should be used only for applications
that want to expose sound-server-specific functionality (such as
mixers) or when a PCM output abstraction layer is already available in
your application and it thus makes sense to add an additional backend
to it for PulseAudio to keep the stack of audio layers minimal.

OSS

OSS should not be used for new programs.

You want to know more about the safe ALSA subset?

Here’s a list of DOS and DONTS in the ALSA API if you care
that your application stays future-proof and works fine with
non-hardware backends or backends for user-space sound drivers such as
Bluetooth and FireWire audio. Some of these recommendations apply for
people using the full ALSA API as well, since some functionality
should be considered obsolete for all cases.

If your application’s code does not follow these rules, you must have
a very good reason for that. Otherwise your code should simply be considered
broken!

DONTS:

Do not use “async handlers”, e.g. via
snd_async_add_pcm_handler() and friends. Asynchronous
handlers are implemented using POSIX signals, which is a very
questionable use of them, especially from libraries and plugins. Even
when you don’t want to limit yourself to the safe ALSA subset
it is highly recommended not to use this functionality. Read
this for a longer explanation why signals for audio IO are
evil.

Do not parse the ALSA configuration file yourself or with
any of the ALSA functions such as snd_config_xxx(). If you
need to enumerate audio devices use snd_device_name_hint()
(and related functions). That
is the only API that also supports enumerating non-hardware audio
devices and audio devices with drivers implemented in userspace.

Do not parse any of the files from
/proc/asound/. Those files only include information about
kernel sound drivers — user-space plugins are not listed there. Also,
the set of kernel devices might differ from the way they are presented
in user-space. (i.e. sub-devices are mapped in different ways to
actual user-space devices such as surround51 and suchlike.)

Do not rely on stable device indexes from ALSA. Nowadays
they depend on the initialization order of the drivers during boot-up
time and are thus not stable.

Do not use the snd_card_xxx() APIs. For
enumerating use snd_device_name_hint() (and related
functions). snd_card_xxx() is obsolete. It will only list
kernel hardware devices. User-space devices such as sound servers,
Bluetooth audio are not included. snd_card_load() is
completely obsolete these days.

Do not hard-code device strings, especially not
hw:0 or plughw:0 or even dmix — these devices define no channel
mapping and are mapped to raw kernel devices. It is highly recommended
to use exclusively default as device string. If specific
channel mappings are required the correct device strings should be
front for stereo, surround40 for Surround 4.0,
surround41, surround51, and so on. Unfortunately at
this point ALSA does not define standard device names with channel
mappings for non-kernel devices. This means default may only
be used safely for mono and stereo streams. You should probably prefix
your device string with plug: to make sure ALSA transparently
reformats/remaps/resamples your PCM stream for you if the
hardware/backend does not support your sampling parameters
natively.

Do not assume that any particular sample type is supported
except the following ones: U8, S16_LE, S16_BE, S32_LE, S32_BE,
FLOAT_LE, FLOAT_BE, MU_LAW, A_LAW.

Do not use snd_pcm_avail_update() for
synchronization purposes. It should be used exclusively to query the
amount of bytes that may be written/read right now. Do not use
snd_pcm_delay() to query the fill level of your playback
buffer. It should be used exclusively for synchronisation
purposes. Make sure you fully understand the difference, and note that
the two functions return values that are not necessarily directly
connected!

Do not assume that the mixer controls always know dB information.

Do not assume that all devices support MMAP style buffer access.

Do not assume that the hardware pointer inside the (possibly mmaped) playback buffer is the actual position of the sample in the DAC. There might be an extra latency involved.

Do not try to recover with your own code from ALSA error conditions such as buffer under-runs. Use snd_pcm_recover() instead.

Do not touch buffering/period metrics unless you have
specific latency needs. Develop defensively, handling correctly the
case when the backend cannot fulfill your buffering metrics
requests. Be aware that the buffering metrics of the playback buffer
only indirectly influence the overall latency in many
cases. i.e. setting the buffer size to a fixed value might actually result in
practical latencies that are much higher.

Do not assume that snd_pcm_rewind() is available and works, nor to which degree it does.

Do not assume that the time when a PCM stream can receive
new data is strictly dependent on the sampling and buffering
parameters and the resulting average throughput. Always make sure to
supply new audio data to the device when it asks for it by signalling
“writability” on the fd. (And similarly for capturing)

Do not use the “simple” interface snd_spcm_xxx().

Do not use any of the functions marked as “obsolete”.

Do not use the timer, midi, rawmidi, hwdep subsystems.

DOS:

Use snd_device_name_hint() for enumerating audio devices (see the sketch after this list).

Use snd_mixer_xxx() instead of raw snd_ctl_xxx().

For synchronization purposes use snd_pcm_delay().

For checking buffer playback/capture fill level use snd_pcm_avail_update().

Use snd_pcm_recover() to recover from errors returned by any of the ALSA functions.

If possible use the largest buffer sizes the device supports to maximize power saving and drop-out safety. Use snd_pcm_rewind() if you need to react to user input quickly.
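
To make the enumeration recommendation above concrete, here is a minimal sketch of listing PCM devices with snd_device_name_hint(); unlike the snd_card_xxx() calls, this also shows user-space devices such as sound servers:

    /* Enumerate PCM devices the recommended way (link with -lasound). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <alsa/asoundlib.h>

    int main(void)
    {
        void **hints;

        /* -1 = all cards, "pcm" = PCM devices (as opposed to e.g. "ctl"). */
        if (snd_device_name_hint(-1, "pcm", &hints) < 0)
            return 1;

        for (void **h = hints; *h; h++) {
            char *name = snd_device_name_get_hint(*h, "NAME");
            char *desc = snd_device_name_get_hint(*h, "DESC");
            printf("%s: %s\n", name ? name : "?", desc ? desc : "");
            free(name);
            free(desc);
        }

        snd_device_name_free_hint(hints);
        return 0;
    }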

FAQ

What about ESD and NAS?

ESD and NAS are obsolete, both as API and as sound daemon. Do not develop for it any further.

ALSA isn’t portable!

That’s not true! Actually the user-space library is relatively portable, it even includes a backend for OSS sound devices. There is no real reason that would disallow using the ALSA libraries on other Unixes as well.

Portability is key to me! What can I do?

Unfortunately no truly portable (i.e. to Win32) PCM API is
available right now that I could truly recommend. The systems shown
above are more or less portable at least to Unix-like operating
systems. That does not mean however that there are suitable backends
for all of them available. If you care about portability to Win32 and
MacOS you probably have to find a solution outside of the
recommendations above, or contribute the necessary
backends/portability fixes. None of the systems (with the exception of
OSS) is truly bound to Linux or Unix-like kernels.

What about PortAudio?

I don’t think that PortAudio is a very good API for Unix-like operating systems. I cannot recommend it, but it’s your choice.

Oh, why do you hate OSS4 so much?

I don’t hate anything or anyone. I just don’t think OSS4 is a
serious option, especially not on Linux. On Linux, it is also
completely redundant due to ALSA.

You idiot, you have no clue!

You are right, I totally don’t. But that doesn’t hinder me from recommending things. Ha!

Hey I wrote/know this tiny new project which is an awesome abstraction layer for audio/media!

Sorry, that’s not sufficient. I only list software here that is known to be sufficiently relevant and sufficiently well maintained.

Final Words

Of course these recommendations are very basic and are only intended
to point you in the right direction. Each use case comes with
different requirements, so options that I did not consider here might
become viable. It’s up to you to decide how much of what I wrote here
actually applies to your application.

This summary only includes software systems that are considered
stable and universally available at the time of writing. In the
future I hope to introduce a more suitable and portable replacement
for the safe ALSA subset of functions. I plan to update this text
from time to time to keep things up-to-date.

If you feel that I forgot a use case or an important API, then
please contact me or leave a comment. However, I think the summary
above is sufficiently comprehensive and if an entry is missing I most
likely deliberately left it out.

(Also note that I am upstream for both PulseAudio and libcanberra and made some minor contributions to ALSA, GStreamer and some of the other systems listed above. Yes, I am biased.)

Oh, and please syndicate this and digg it. I’d like this guide to become well known all around the Linux community. Thank you!


Polypaudio 0.9.0 released

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/projects/polypaudio-0.9.0.html

We are proud to announce Polypaudio 0.9.0. This is a major step
ahead, since we decided to freeze the current API. From now on we will
maintain API compatibility (or at least try to). To emphasize this,
starting with this release the shared library sonames are properly
versioned. While Polypaudio 0.9.0 is not API/ABI compatible with 0.8,
it is protocol compatible.

Other notable changes beyond bug fixing, bug fixing and bug fixing:
a new Open Sound System /dev/dsp wrapper named
padsp and a module named module-volume-restore have been
added.

padsp works more or less like the ESOUND tool known as
esddsp. However, it is much cleaner in design and thus works
with many more applications than the original tool. Proper locking is
implemented, which allows it to work in multithreaded applications. In
addition to mere /dev/dsp emulation it wraps
/dev/sndstat and /dev/mixer. Proper synchronization
primitives are also available, which enables lip-sync movie playback
using padsp with mplayer. Other applications that are
known to work properly with padsp are aumix,
libao, XMMS and sox. There are some things
padsp doesn’t support (yet): most notably recording
and mmap() wrapping. Recording will be added in a later
version. mmap() support is available in esddsp but
not in padsp. I am reluctant to add support for it, because
it cannot work properly when it comes to playback latency
handling; however, latency handling is the primary reason for
using mmap() in the first place. In addition, the mmap() hack included in
esddsp works only for Quake2 and Quake3, both of which are Free
Software now. It probably makes more sense to fix those two games than
to implement a really dirty hack in padsp. Remember that you
can always use the original esddsp tool, since Polypaudio
offers full protocol compatibility with ESOUND.

module-volume-restore is a small module that stores the
volume of all playback streams and restores it when the application
that created them creates a new stream. If this module is loaded,
Polypaudio will make sure that your Gaim sounds are always played at
low volume, while your XMMS music is always played at full volume.

Besides the new release of Polypaudio itself we released a bunch of
other packages to work with the new release:

  • gst-polyp 0.9.0, a Polypaudio plugin for GStreamer 0.10. The
    plugin is quite sophisticated. In fact it is probably the only
    sink/source plugin for GStreamer that reaches the functionality of the
    ALSA plugin that is shipped with upstream. It implements the
    GstPropertyProbe and GstImplementsInterface
    interfaces, which allow gnome-volume-meter and other
    GStreamer tools to control the volume of a Polypaudio server. The sink
    element listens for GST_EVENT_TAG events, and can thus use
    ID3 tags and other meta data to name the playback stream in the
    Polypaudio server. This is useful for identifying the stream in the
    Polypaudio Volume Control. In short: Polypaudio 0.9.0 now offers
    first-class integration into GStreamer.
  • libao-polyp 0.9.0, a simple plugin for libao, which is used for
    audio playback by tools like ogg123 and Gaim, among others.
  • xmms-polyp 0.9.0, an output plugin for XMMS. As a special feature
    it uses the currently played song name for naming the audio stream
    in Polypaudio.
  • Polypaudio Manager 0.9.0, updated for Polypaudio 0.9.0
  • Polypaudio Volume Control 0.9.0, updated for Polypaudio 0.9.0
  • Polypaudio Volume Meter 0.9.0, updated for Polypaudio 0.9.0

A screenshot showing most of this in action:

Polypaudio Screenshot.

This screenshot shows: the Polypaudio Manager, the Polypaudio
Volume Control, the Polypaudio Volume Meter, the XMMS plugin, the
GStreamer plugin used by Rhythmbox and gstreamer-properties,
pacat playing some noise from /dev/urandom, and
padsp used with MPlayer. (This screenshot actually shows some
post-0.9.0 work, like the icons used by the application windows.)
