kai.vehmanen at wakkanet.fi
Wed Feb 26 22:05:04 GMT 2003
On Tue, 25 Feb 2003, Tim Janik wrote:
> and, more importantly, stefan and i wrote a paper going into the details of
> why we think a project like CSL is necessary and what we intend to achieve
> with it:
Ok, I already forwarded a few mails concerning this from lad. I'll add a
few comments of my own:
I think I understand your reasons behind CSL, and I think it (CSL) might
just be the necessary glue to unite KDE and GNOME on the multimedia front.
But what I see as a risk is that you forget the efforts and existing APIs
outside these two desktop projects. In the end, it's the applications that
count. It's certainly possible that you can port all multimedia apps that
come with GNOME and KDE to CSL, but this will never happen for the huge
set of audio apps that are listed at http://www.linuxsound.at. And these
are apps that people (not all, but many) want to use.
A second point is that for many apps, the functionality of CSL is just not
enough. The ALSA PCM API is a very large one, but for a reason. Implementing
a flexible capabilities-query API is very difficult (example: changing the
active sample rate affects the range of valid values for other parameters).
The selection of commonly used audio parameters has really grown: >2
channels, different interleaving settings for channels, 20bit, 24bit,
24-in-4bytes, 24-in-3bytes, 24-in-lower3of4bytes, 32bit, 32bit-float, etc,
etc -- these are becoming more and more common. Then you have functionality
for selecting and querying available audio devices and for setting up
virtual soundcards composed of multiple individual cards. These are all
supported by ALSA and just not available on other Unices. Adding support
for all this into CSL would be a _huge_ task.
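To illustrate why the capabilities negotiation is hard, here is a minimal, untested sketch (error handling mostly omitted, device name and parameter values are arbitrary) of how an application narrows down the ALSA hw_params space. Note that each "near" call may adjust the requested value, and every committed parameter constrains the valid ranges of the remaining ones:

```c
#include <alsa/asoundlib.h>

/* Open a playback PCM and negotiate basic parameters.
   Returns 0 on success, a negative ALSA error code otherwise. */
int open_pcm(const char *dev, unsigned int rate, unsigned int channels)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    int err;

    if ((err = snd_pcm_open(&pcm, dev, SND_PCM_STREAM_PLAYBACK, 0)) < 0)
        return err;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);     /* start from the full space */

    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    /* The driver may return a different rate than requested; once the
       rate is fixed, valid period/buffer sizes change accordingly. */
    snd_pcm_hw_params_set_rate_near(pcm, hw, &rate, 0);
    snd_pcm_hw_params_set_channels(pcm, hw, channels);

    return snd_pcm_hw_params(pcm, hw);  /* commit the configuration */
}
```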
Perhaps the most important part of the ALSA PCM API is the set of functions
for handling buffer-size, interrupt-frequency and wake-up parameters. In
other words, being able to set a buffer-size value is not enough when
writing high-performance (low-latency, high-bandwidth) audio applications.
You need more control, and this is what ALSA brings you. And it's good to
note that these are needed not only by music creation apps (or sw for
musicians, for lack of a better term), but also by desktop apps. I have
myself written a few desktop'ish audio apps that have needed the added
flexibility of ALSA.
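For the curious, this kind of control looks roughly like the following untested fragment (frame counts are arbitrary examples). It runs inside the hw_params negotiation, after rate and format are set; the buffer size bounds worst-case latency and the period size sets how often the app is woken up:

```c
#include <alsa/asoundlib.h>

/* Request a ~1024-frame buffer split into 256-frame periods; the
   driver rounds both to the nearest values it can actually do. */
int set_latency_params(snd_pcm_t *pcm, snd_pcm_hw_params_t *hw)
{
    snd_pcm_uframes_t buffer_frames = 1024;  /* total ring buffer  */
    snd_pcm_uframes_t period_frames = 256;   /* wake-up interval   */
    int err;

    err = snd_pcm_hw_params_set_buffer_size_near(pcm, hw, &buffer_frames);
    if (err < 0)
        return err;
    return snd_pcm_hw_params_set_period_size_near(pcm, hw,
                                                  &period_frames, 0);
}
```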
Now JACK, on the other hand, offers completely new types of functionality
for audio apps: audio routing between audio applications, connection
management and transport control. These are all essential for music apps,
but don't make sense in an audio i/o abstraction like CSL.
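To make the contrast concrete, here is a minimal, untested sketch of a JACK client (client and port names are made up). Unlike the read/write model above, the app exports ports and provides a callback, and routing to other clients or the soundcard is done externally by the connection manager:

```c
#include <jack/jack.h>

static jack_port_t *out_port;

/* Called by the JACK server once per period, in a realtime context. */
static int process(jack_nframes_t nframes, void *arg)
{
    float *buf = jack_port_get_buffer(out_port, nframes);
    for (jack_nframes_t i = 0; i < nframes; i++)
        buf[i] = 0.0f;  /* write silence for this example */
    return 0;
}

int main(void)
{
    jack_client_t *client = jack_client_open("demo", JackNullOption, NULL);
    if (client == NULL)
        return 1;
    jack_set_process_callback(client, process, NULL);
    out_port = jack_port_register(client, "out", JACK_DEFAULT_AUDIO_TYPE,
                                  JackPortIsOutput, 0);
    jack_activate(client);
    /* From here on, "demo:out" can be connected to any other client's
       input port, e.g. with jack_connect() or an external patchbay. */
    return 0;
}
```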
So to summarize, I really hope that you leave a possibility for these APIs
(especially ALSA and JACK) in the KDE multimedia architecture, so that it
would be possible to run different apps without the need to completely
bypass other application groups (as is the situation today).
As a more practical suggestion, I see the options as:
1) A front-end API that is part of the KDE devel API
   e) ... others?
2) Backend server that is user-selectable (you have a nice GUI
   widget for selecting which to use)
   a) aRts (current design, again uses OSS/ALSA)
   b) JACK (gstreamer already has support for it)
   c) ALSA (dmix or aserver)
   e) ... others?
All official (part of the base packages) KDE+GNOME apps would use (1), but
3rd-party apps could directly interact with (2) if they so wished. If the
required (2) backend is not running, the user can go to the configuration
page and change the audio backend.
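The user-selectable backend idea could be sketched as a simple dispatch table; everything here (names, the `audio_backend` struct, the stub functions) is hypothetical illustration, not any actual KDE or CSL API:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical backend descriptor: a name plus an open function. */
typedef struct {
    const char *name;
    int (*open_stream)(unsigned int rate, unsigned int channels);
} audio_backend;

/* Stub implementations standing in for real backend glue code. */
static int open_arts(unsigned int r, unsigned int c) { return 0; }
static int open_jack(unsigned int r, unsigned int c) { return 0; }
static int open_alsa(unsigned int r, unsigned int c) { return 0; }

static const audio_backend backends[] = {
    { "arts", open_arts },
    { "jack", open_jack },
    { "alsa", open_alsa },
};

/* Look up the backend chosen in the configuration page; NULL if the
   name is unknown. */
const audio_backend *select_backend(const char *name)
{
    for (size_t i = 0; i < sizeof backends / sizeof backends[0]; i++)
        if (strcmp(backends[i].name, name) == 0)
            return &backends[i];
    return NULL;
}
```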