[FreeNX-kNX] Some thoughts on networked sound
Dag Sverre Seljebotn
dss at fredtun.no
Fri Jan 7 22:44:50 UTC 2005
> Do you (mainly Fabian) know when you find time to integrate the sound
> forwarding capability in FreeNX or do you know how much work this will be?
I'm not going to answer that question; I'm merely taking the opportunity
to say a little about the Networked Sound Problem, mainly from a
thin-client perspective based on my work with LTSP.
The problem is getting at the sound the applications produce. X, used
for getting at the graphics, is a well-defined protocol that had the
network in mind from the beginning - so making NX was mostly a matter of
improving the network support that was already there. Not so with sound.
Sound is a problem because applications use sound in different ways -
some access the kernel drivers directly, while others go through one of
the various sound servers that coordinate and mix sounds (of which there
are at least a dozen). KDE apps use artsd, which, after a long period
without changes, was recently officially abandoned and now stands
without a maintainer. Some time ago GNOME apps often used esd, though
just as often they accessed the /dev/dsp device directly. Recent GNOME
apps seem to favour GStreamer (not really a sound server, but one could
write an NX sink to get at the sound from it...). Audio applications
tend to like jackd, a sound server developed specifically for low
latency and professional features. The list goes on and on. For some
reason, people have never managed to agree on a standard sound API.
So you can make it work per application by doing things in user space,
but not for all of them, and you generally have to support each
application by hand (artsd and esd ship wrappers that intercept the
system calls made by applications accessing /dev/dsp directly, in order
to trap the sound ...however, these only work with selected
applications).
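To illustrate the wrapper trick: esddsp/artsdsp-style wrappers are built
on LD_PRELOAD, a small shared library that overrides open() so that
attempts to open /dev/dsp land somewhere else. A minimal sketch - the
connect_to_sound_server() stub here just opens /dev/null as a stand-in,
where a real wrapper would connect a socket to the sound daemon and also
intercept ioctl(), write() and friends:

```c
/* Sketch of an esddsp/artsdsp-style LD_PRELOAD wrapper. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
#include <sys/types.h>

static int (*real_open)(const char *, int, ...);

static int connect_to_sound_server(void)
{
    /* Stub: a real wrapper would connect to the sound daemon here. */
    return real_open("/dev/null", O_WRONLY);
}

int open(const char *path, int flags, ...)
{
    mode_t mode = 0;

    if (!real_open)
        real_open = (int (*)(const char *, int, ...))
                    dlsym(RTLD_NEXT, "open");

    if (flags & O_CREAT) {          /* mode is only passed with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = (mode_t)va_arg(ap, int);
        va_end(ap);
    }

    if (strcmp(path, "/dev/dsp") == 0)
        return connect_to_sound_server();   /* trap the audio device */

    return real_open(path, flags, mode);
}
```

Compiled as a shared library and loaded via LD_PRELOAD, this shadows the
libc open() in the target application - which is exactly why it only
works for applications that use open() on /dev/dsp in the expected way.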
Since there is no standard, and by the looks of it there won't be one
for a few years, the only thing that would work across the board is a
kernel driver. But then you need one solution per OS instead :-(
Whether and how NoMachine does this I don't know; it would be
interesting to find out.
My personal proposal, for Linux: write, or find if somebody has already
written one, a user-space ALSA driver. This would be an ALSA sound
driver that forwards the sound to a process run by each logged-in user.
FreeNX could then simply provide such a user-space program that forwards
the sound on to the clients. (And the big bonus here is that the
user-space programs would be relatively simple, and you could swap one
in for something else - NAS sound for LTSP and other thin-client
solutions using plain X, or even a WAV writer, or whatever.)
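The per-user forwarding program really could be that simple: at its core
it is a loop copying raw PCM from wherever the driver delivers it (a
FIFO, say) to a socket that NX tunnels to the client. A rough sketch,
with all names illustrative:

```c
/* Copy raw PCM from in_fd (e.g. a FIFO fed by a user-space sound
 * driver) to out_fd (e.g. a socket tunnelled to the NX client).
 * Returns the number of bytes forwarded, or -1 on error. */
#include <unistd.h>

static long forward_pcm(int in_fd, int out_fd)
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    while ((n = read(in_fd, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {               /* handle short writes */
            ssize_t w = write(out_fd, buf + off, n - off);
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}
```

A real forwarder would add buffering, reconnection and maybe
compression, but none of that is hard - which is the point: once the
sound is out of the kernel and in a per-user process, the rest is
ordinary user-space plumbing.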
The concept of a user-space sound driver could easily be carried over to
systems other than Linux as well, though the kernel part would be
different.
The advantage of using ALSA is that the OSS emulation (/dev/dsp) is
already written, and one could probably also take advantage of ALSA's
software mixing and so on (in NX that is absolutely necessary to get
more than one sound at a time).
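In fact, part of this can already be expressed in stock ALSA
configuration today: the "file" PCM plugin can dump a raw stream to a
file or FIFO instead of real hardware. A hypothetical ~/.asoundrc along
these lines (the FIFO path is made up):

```
# Send everything an ALSA application plays into a FIFO instead of a
# real sound card, using the stock "file" plugin.
pcm.!default {
    type file
    slave.pcm "null"            # no sound hardware needed on the server
    file "/tmp/nx-sound.fifo"   # raw PCM, read by a per-user forwarder
    format raw
}
```

OSS-only applications could then be routed through ALSA's OSS emulation
or the aoss wrapper. Note that this alone does not mix several streams -
two applications opening the plugin at once would still collide - which
is exactly why a proper user-space driver with mixing (or dmix on top)
would still be wanted.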
At least one closed-source product (MuNAS) works like this, and it works
great (I'm using it myself, as I haven't found any open-source solution
in this area).
In the hope of a productive discussion,
// Dag Sverre