Disabling aRts in knotify
Alexander Neundorf
neundorf at kde.org
Wed Feb 18 21:22:28 GMT 2004
On Wednesday 18 February 2004 21:58, Matthias Kretz wrote:
...
> Looks like your definition for a backend is only the sound output...
I thought that was what we were talking about.
> > The decoding step should also handle
> > things like resampling to the sample rate of the backend, to the number
> > of channels of the backend, etc. And after this is done, the decoded raw
> > data should be playable by any backend (otherwise I'd consider the api of
> > the backend broken).
>
> But what you're getting at here is to create a media framework. We have
> frameworks available with gstreamer, aRts, NMM and MAS. Creating our own
I don't think so.
I think we need a simple KDE api:
"play this file now"
"start/pause/stop playing this file"
And then we need the ability to play the sound via OSS, ALSA, arts, MAS or
gstreamer, roughly along the lines of the sketch below.
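Just to make it concrete, here is a rough sketch of the kind of interface I mean.
This is purely hypothetical, nothing like it exists in kdelibs today, and the
names (KDEAudioLib, AudioBackend) are made up:

#include <qstring.h>

class AudioBackend;   // OSS, ALSA, arts, MAS or gstreamer behind one interface

class KDEAudioLib
{
public:
    // "play this file now", fire and forget
    static void playFile(const QString &fileName);

    // "start/pause/stop playing this file"
    void start(const QString &fileName);
    void pause();
    void stop();

private:
    AudioBackend *m_backend;   // picked at runtime from the available backends
};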
If the backend (e.g. gstreamer) is able to do the decoding itself, we could
check for this and let it do the work (pseudocode):
void KDEAudioLib::playFile()
{
    if (backend->canDecode(OGG))          // e.g. gstreamer can, AFAIK
    {
        backend->play(encodedOggData);    // let the backend decode itself
    }
    else
    {
        // decode to raw samples matching the backend's format first
        decodeAudio(encodedOggData, rawData,
                    backend->samplerate, backend->channelCount);
        backend->playSamples(rawData);
    }
}
Am I wrong about this?
We have everything we need for this. We have the decoding software, but right now it
is integrated in/bound to arts. We can handle OSS and ALSA (in arts).
This has to be split into OSS and ALSA backends plus a decoding frontend, IMO,
roughly as sketched below.
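Again purely hypothetical (none of these classes exist, the names are invented),
but the split could look something like this:

class AudioBackend
{
public:
    virtual ~AudioBackend() {}

    // most backends only take raw PCM; gstreamer-like ones may decode themselves
    virtual bool canDecode(int format) const { return false; }

    virtual bool open(int samplerate, int channelCount) = 0;
    virtual void playSamples(const char *data, unsigned int len) = 0;
    virtual void close() = 0;
};

// one small subclass per output system, e.g.
// class OSSBackend  : public AudioBackend { ... };  // write()s to /dev/dsp
// class ALSABackend : public AudioBackend { ... };  // uses the alsa-lib PCM API
// plus a decoding frontend (the decodeAudio() step above) shared by all of them.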
(Ok, I don't know much about the kde sound system since I always stayed away
from it because of arts (it adds a lot of code/complexity/load to my machine
without any advantages for me))
Bye
Alex
--
Work: alexander.neundorf at jenoptik.com - http://www.jenoptik-los.de
Home: neundorf at kde.org - http://www.kde.org
alex at neundorf.net - http://www.neundorf.net