Some ideas for the aRts-replacement

Scott Wheeler wheeler at
Thu Feb 19 22:00:09 GMT 2004


On Thursday 19 February 2004 21:00, Alexander Neundorf wrote:

> I am arguing against putting everything (OSS handling, ALSA handling -> 
> hardware access, stream mixing, decoding, plugins) into one item.

That's reasonable and I agree.  But this thread started off more like "let's 
have the applications do everything themselves" -- or at least, if that wasn't 
the intention, it wasn't clear.

> I am for an easy to use API where I can say "play this file now" or "start/
> pause/stop playing this file" with some progress signals etc.
> This implementation should not depend on the implementation of the hardware 
> access, or better the way the decoded samples are transferred to the sound 
> card (over a network or not). It should be possible to choose the "backend" 
> which should always have the same interface.

Also reasonable.  But let's define some precise terms -- "backend" isn't 
meaningful.  Let's say:

"sound server" -- a hardware abstraction, i.e. Jack or ESD

"media framework" -- a library or set of libraries that may or may not 
specifically work with a "sound server" to decode and generally handle *audio 
and video* media.

> Sorry, I don't understand completely, can you please explain a bit more ?
> Is gstreamer simply a decoding software ? Does it also handle OSS/ALSA ? 

GStreamer is a plugin-based media framework.  The core handles moving data 
between the input and output plugins and provides a framework for things like 
metadata handling and of course the plugin APIs.

Input plugins would be things like Ogg Vorbis, MPEG video, MP3 or FLAC.  
Output plugins are e.g. OSS, Xv (video), ESD, aRts, Jack and so on.  Then 
there are effects plugins that can make arbitrary changes along the 
"pipeline".

It's a "framework" or a "library" or whatever you want to call it.  There's no 
server or hard coded output method.

This is something similar to what we're talking about, but I really don't 
think the scope is well understood in this discussion.  I don't think this is 
something that will be scraped together in a couple of months.  Just the core 
of GStreamer is about 75,000 lines of code -- the plugins are another 

I don't know anyone in KDE who really wants to work on such a framework or 
deal with writing all of the plugins for input and output.  Not to mention 
that we need a video solution, and we haven't even scratched the surface of 
the issues that arise with that.  Basically I don't want to see another aRts 
-- something that starts with good intentions but kind of bombs due to lack 
of contributors and a misunderstood scope at the time of design.

> if somebody wants to have the full power of some special sound server, like 
> maybe gstreamer, he might use its native API.

Sorry to be pedantic here, but I think semantics have made this discussion 
more confusing than it needs to be.  GStreamer isn't a sound server.  aRts is 
not just a (or even a normal) sound server, etc...

> Most people only want to hear music and from time to time maybe some 
> notifications. For a soundcard with two channels this is no problem, he can 
> even adjust the level for each channel individually :-)

There are a few fundamental flaws with that logic though.

First, not all soundcards support multiple opens on /dev/dsp.

Second, the number two is arbitrary (e.g. mine supports 16) -- but let's play 
with that number a little bit -- it's not difficult to imagine a
situation where a "normal" user would want more than two and would be pretty 
confused in the case where sound didn't play because two things were already 
using /dev/dsp.  Just imagine having notifications with one open, a music 
player paused and trying to watch a video.  There's three right there.  I 
don't think that's such an off the wall situation that "normal" users would 
never have that happen.

Third, while my sound card supports 16 opens, the mixer interface doesn't 
support changing the levels of all of those -- in fact there's just one "PCM" 
mixer element.

Fourth, the situation is different on non-Linux platforms -- different drivers 
with different capabilities.  Almost everything we've said has assumed that 
the users are on Linux.

Really what you're saying is that *you* don't normally need more than two, and 
*your* soundcard supports two and that *you* would understand what's going on 
if you had three opens on /dev/dsp and one of them didn't work on your 
*Linux* system.  I don't think it's fair to assume that all of those hold 
true for "normal" users.

-Scott

-- 
The world is full of magical things patiently waiting for our wits to grow 
sharper.  --Bertrand Russell

More information about the kde-multimedia mailing list