[FreeNX-kNX] Some thoughts on networked sound

Dag Sverre Seljebotn dss at fredtun.no
Mon Jan 24 19:09:10 UTC 2005


Thanks for the response! Glad someone read it.

> The take of NX is to efficiently transport whatever protocol is
> the best for the job and ensure that applications become aware
> of its existence. It seems that both KDE and GNOME are going
> toward a GStreamer based solution. Did you try playing with
> GStreamer? Do you think it is a good option for the needs of a
> MM enabled network computing solution?

GStreamer as such is only an API for applications to code against so
that they don't have to reinvent the wheel regarding sound and video
pipelines. When used only for sound, it wouldn't change what I said in
my post. I could find no GStreamer network transport, and I think
they'll be reluctant to create one, as GStreamer doesn't target the
role of a sound server but that of an MM framework (see their FAQ). In
practice, the biggest difference between a 'framework' and a 'server'
might be that it could be a long while before system sounds etc. start
using GStreamer.
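To illustrate the framework-vs-server distinction, here is a minimal
sketch (plain Python, with hypothetical element names, not the real
GStreamer API) of what a pipeline-based framework looks like: the
application wires elements together, and nothing in the chain dictates
where the sink delivers the audio -- a device, a file, or a socket.

```python
# Sketch of a pipeline-style media framework (hypothetical API, not
# actual GStreamer bindings): elements are chained, each transforms a
# buffer and hands it downstream.

class Element:
    def __init__(self):
        self.downstream = None

    def link(self, nxt):
        self.downstream = nxt
        return nxt

    def push(self, buf):
        out = self.process(buf)
        if self.downstream is not None:
            self.downstream.push(out)

    def process(self, buf):
        return buf  # pass-through by default


class VolumeElement(Element):
    """Scale integer samples by a factor."""
    def __init__(self, factor):
        super().__init__()
        self.factor = factor

    def process(self, buf):
        return [int(s * self.factor) for s in buf]


class SinkElement(Element):
    """Terminal element; a real sink would write to a sound device or
    a network socket -- the framework itself does not care which."""
    def __init__(self):
        super().__init__()
        self.received = []

    def process(self, buf):
        self.received.append(buf)
        return buf


src = Element()
sink = SinkElement()
src.link(VolumeElement(0.5)).link(sink)
src.push([100, -200, 300])
print(sink.received[0])  # prints [50, -100, 150]
```

The point of the sketch is that "sound server" would be just one
possible sink here; the framework stops at the pipeline abstraction.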

In my opinion, GStreamer would be just yet another API on top of a more
generic network sound transport mechanism. It does save a trip through a
kernel driver, and if GStreamer becomes a de facto standard it may save
people from installing such a driver into their kernel, but the problem
of transporting the sound is still left to something else. It is of
course a nice escape from Linux-centrism, providing a cross-platform
API, but application support is not there yet.

The network condition problem is a problem with sound in general. There
are two approaches, I guess: generic lossy compression and more specific
sound caching. The latter is suitable for desktop sound effects. In my
opinion one would do the generic (and possibly more bandwidth-demanding)
solution first: users with too little bandwidth to play generic sound
probably don't want sound to eat up precious bandwidth, which could have
been used for X, just for some desktop sound effects. Long term one
could of course implement the esd and artsd interfaces for this purpose.
It does become a bigger job, but one could take it one step at a time.
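The caching idea for desktop sound effects could be sketched like this
(plain Python, all class and method names hypothetical, not part of any
existing NX protocol): the server sends a digest of each effect first,
and the actual sample data crosses the wire only on a cache miss, so a
repeated beep costs only the digest.

```python
import hashlib

# Sketch of sound-effect caching (hypothetical protocol): the server
# side sends a digest of each effect; the full sample data is
# transferred only when the client has not cached that sound.

class SoundCacheClient:
    def __init__(self):
        self.cache = {}  # digest -> raw sample bytes

    def has(self, digest):
        return digest in self.cache

    def store(self, digest, data):
        self.cache[digest] = data

    def play(self, digest):
        return self.cache[digest]  # hand to the local sound device


class SoundCacheServer:
    def __init__(self, client):
        self.client = client
        self.bytes_sent = 0  # crude stand-in for network traffic

    def play_effect(self, data):
        digest = hashlib.md5(data).hexdigest()
        self.bytes_sent += len(digest)      # digest is always sent
        if not self.client.has(digest):
            self.bytes_sent += len(data)    # full data only on a miss
            self.client.store(digest, data)
        return self.client.play(digest)


client = SoundCacheClient()
server = SoundCacheServer(client)
beep = b"\x00\x10" * 4000  # 8000 bytes of fake PCM samples

server.play_effect(beep)   # first play: digest + full data
after_first = server.bytes_sent
server.play_effect(beep)   # replay: digest only
print(server.bytes_sent - after_first)  # prints 32 (md5 hex digest)
```

This only pays off for sounds that repeat, which is exactly the desktop
sound effect case; one-off streams would still need the generic
compressed transport.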

NoMachine might already have a sound compression technology in place in
their protocol? Then we'd just forward sound to that piece.

When it comes to video and audio synchronization, GStreamer could offer
some additional benefits. But you would need to transfer the whole
GStreamer pipeline over the network, in addition to the X channel, and
have GStreamer interact with NX's X on the client side, or something
like that. Hardly worth doing, at least before the rest is in place. My
guess is that working on audio/video sync in a more generic way would do
the job just as well with perhaps less work.

// Dag Sverre



