make_it_cool: kdelibs/kdemm

Michael Donaghy mdonaghy at cwazy.co.uk
Sat Mar 19 10:51:45 GMT 2005


On Saturday 19 Mar 2005 00:14, Ryan Gammon wrote:
> Matthias Kretz wrote:
> >Hi Ryan,
> >
> >thanks for giving feedback. OK, first, to check that I know what you're
> > talking about, I'll try to summarize (the further you read, the more I
> > start asking questions rather than telling you what I understood)
>
> Sorry 'bout that :-)
>
> >- you're currently working on extending the Helix DNA Client so that media
> >players using it support alternative media engines
>
> Close... I'm not planning to do this at the client level, though. I'm
> creating something new that will be used in parallel with the helix
> client within the Helix Player and RealPlayer.

Sort of like how kaffeine allows xine or arts or mplayer backends to be 
embedded into it, and can embed itself into other things?
>
> >- you want to have that functionality because the Helix DNA Client media
> > code might not always be the best code to handle some specific format (or
> > it doesn't know how to handle the media format at all)
>
> Right.
>
> >- you want to achieve that by starting dbus services that register as
> > media engines and that can then be instructed by Helix to play certain
> > files (or will you pipe the whole media data over dbus, or send URLs and
> > hope the media engine can handle them?)
>
> I want to define dbus interfaces that define a baseline set of media
> engine functionality.
>
> People would create dbus services that implement such functionality
> using their media engine of choice.
>
> A player wanting to take advantage of this would look on the dbus for
> media player interfaces and generally send commands like
>
> "play this http://url.com in socket_id XEmbed socket"
>
> ... where http://url.com is something the user has entered in the player
> somehow, and where socket_id is a video area (if we're playing back
> video). If the player is audio-only, no XEmbed required.

Sounds good, but it makes me wonder: is XEmbed meant to replace KParts?
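Anyway, just to make this concrete for myself: on the wire, I guess the
player side boils down to a single method call like the one below, sketched
with the plain libdbus C API. All the org.example.* names are made up by me
for illustration, not anything from an actual spec:

/* Hypothetical sketch: a player asking a media engine service to play a
 * URL inside an XEmbed socket. Service, path, and interface names are
 * invented. Build with: gcc `pkg-config --cflags --libs dbus-1` */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn) {
        fprintf(stderr, "bus: %s\n", err.message);
        return 1;
    }

    const char *url = "http://url.com";
    dbus_uint32_t socket_id = 12345; /* XEmbed socket for video output */

    DBusMessage *msg = dbus_message_new_method_call(
        "org.example.MediaEngine",      /* service name (made up) */
        "/org/example/MediaEngine",     /* object path (made up) */
        "org.example.MediaEngine",      /* interface (made up) */
        "play");
    dbus_message_append_args(msg,
        DBUS_TYPE_STRING, &url,
        DBUS_TYPE_UINT32, &socket_id,
        DBUS_TYPE_INVALID);

    /* Block until the engine acknowledges the call */
    DBusMessage *reply =
        dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
    if (!reply)
        fprintf(stderr, "play failed: %s\n", err.message);
    else
        dbus_message_unref(reply);
    dbus_message_unref(msg);
    return 0;
}

An audio-only player would presumably just leave the socket argument out.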

> >AFAIU you already have that API (not Qt/KDE style, though) and want to
> > extend the implementation behind it to use other engines.
>
> Fundamentally, I'm working on a D-Bus API which could have Qt-style
> bindings.
>
> >For kdemm we want to support more than only one engine because the
> >functionality we ask for in our API should be available with most media
> >frameworks and because some users have "special needs". I think this is
> >especially true if you want to make KDE a good platform for pro audio
> > tools that does not get in the way like it currently does with aRts. You
> > rather want it to integrate. (but I guess I'm getting a little off topic
> > here)
>
> Not so much integrate as abstract common functionality.
>
> One could, eg, still write a pro-audio app to use something like jack
> directly.
>
Sounds good, but I wonder about overhead. AIUI you want player clients and 
engines to communicate via D-Bus, but at the moment that means translating 
from Gtk style in the helix client to D-Bus and then translating that to Gtk 
or Qt style at the engine, at least with the current engines. There'll 
probably be a third layer of translation needed to use xine or mplayer as the 
engine. If the engine is doing its own drawing we won't lose anything 
performance-wise there, but what about latency on things like pausing the 
player?
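
If anyone wants actual numbers for that, timing a no-op round trip on the
session bus is easy enough. Here's a rough sketch using the standard
org.freedesktop.DBus.Peer.Ping method (assuming the bus daemon answers it;
an engine service could expose the same interface):

/* Rough sketch: time a no-op D-Bus round trip to get a feel for the
 * per-call latency a player would pay on pause/play commands. */
#include <dbus/dbus.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn) { fprintf(stderr, "bus: %s\n", err.message); return 1; }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < 1000; i++) {
        /* Ping the bus daemon itself: a minimal request/reply pair */
        DBusMessage *msg = dbus_message_new_method_call(
            "org.freedesktop.DBus", "/org/freedesktop/DBus",
            "org.freedesktop.DBus.Peer", "Ping");
        DBusMessage *reply =
            dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
        if (reply) dbus_message_unref(reply);
        dbus_message_unref(msg);
    }
    gettimeofday(&t1, NULL);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("avg round trip: %.1f us\n", us / 1000.0);
    return 0;
}

My guess is that for coarse commands like pause it's well under anything a
user would notice, but the numbers would settle it.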

> >>I was thinking of leaving the configuring of media engines to
> >>distribution integrators & advanced users, punting on the creation of an
> >>easy admin-configurable unified media engine configuration interface for
> >>now.
> >
> >What would be the difference between your "system integrator" and my "KDE
> >admin"? Is it just another name or does he also have another "user task"?
>
> A system integrator is more likely to go around tweaking build settings
> ("--with-polyaudio" or whatever), editing config files, etc.
>
> An admin would be looking for an end-user configuration tool.
>
> >>I was thinking of leaving this unspecified in what I'm doing... If the
> >>user has a sound server installed that can do this, great.
> >
> >Yes, but kdemm is about a DE, so we want the desktop experience to be
> >"perfect" ;-) I found that changing the audio volume of some specific
> > program is a rather common task for me - it's often just too complicated
> > to do, so that I won't bother, but if I could...
> >How to achieve that without a specific sound server? If the soundserver
> > cannot do it, or if the user doesn't use a sound server at all, this can
> > be done using IPC. The kdemm prototype already has this functionality
> > even though a simple way to access all those volume controls is missing.
> > And it's rather cool. Of course this will only work for apps using kdemm.
>
> Right... I guess another way of doing this would be to create a sound
> server with a get_volume / set_volume kind of api that only affects a
> single pcm stream, and maybe a get_master_volume / set_master_volume api.
>
> I'd add support for such a sound server to all Linux multimedia engines,
> and convince Linux distributions to use my sound server.
>
> Linux distributions shipping such a sound server would find that their
> apps had independent volume controls.

Yeah, but there are already too many sound servers around and there seems to 
be a movement against them. Does alsa dmixing support having a different 
volume for different streams which are playing together? If not, could it? 
Because I think that's the only way you'll get it into all distros.
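
Failing that, per-stream volume can always be done in software before the
samples reach the device, which is presumably what such a sound server would
do internally anyway. A minimal sketch of the core operation, scaling
interleaved signed 16-bit PCM:

/* Minimal per-stream software volume: scale interleaved signed 16-bit
 * PCM by a factor in [0.0, 1.0] before handing it to the sound output.
 * This is the kind of thing a sound server (or kdemm itself) could do
 * per stream even when the mixer below it only has one master volume. */
#include <stdint.h>
#include <stddef.h>

void apply_stream_volume(int16_t *samples, size_t count, double volume)
{
    /* Fixed-point factor: 1.0 -> 65536 */
    int32_t factor = (int32_t)(volume * 65536.0);
    for (size_t i = 0; i < count; i++) {
        int32_t v = ((int32_t)samples[i] * factor) >> 16;
        /* Clamp, in case the factor ever exceeds unity gain */
        if (v > 32767) v = 32767;
        if (v < -32768) v = -32768;
        samples[i] = (int16_t)v;
    }
}

It's one multiply per sample, so the cost is negligible next to decoding.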
>
> >>Is there any way that the work I'm doing could be useful to you guys?
> >
> >I'm not sure. I guess one day kdemm would like to have a Helix DNA Client
> >backend. But this does not have anything to do with the current
> > discussion...
> >
> >I would be very interested, though, in how exactly you're planning to use
> >dbus. Whether you want to only send instructions or whether you want to
> > send media data over the wire (I guess this is not a task dbus was meant
> > for). And how much communication you think you need to fully integrate
> > the external media engine into your API.
>
> We would just send instructions over the wire, not the media data itself.
>
> Communication would be along the lines of:
> openUrl("http://www.someurl.com")
> play()
> stop()
>
> ... that kind of thing.
>
Provided it doesn't induce too much latency, this sounds very good. No more 
messing around to get arts to play my mp3s, xmms my streamed ones, xine my 
wmas and realplay my real streams: just set it up once and then use my choice 
of frontend to do all of those. That's the idea, right? Players like amarok, 
which at the moment has to code separately for arts and gstreamer backends, 
could just write one set of code and it would work with all backends, even 
some which haven't been written yet. If this gets popular it will be a big 
step forwards.
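Thinking out loud about what an engine service would look like: I imagine
each backend ends up as a small daemon that claims a name on the session bus
and dispatches those openUrl/play/stop calls into xine or gstreamer or
whatever. A rough skeleton in plain libdbus C (every org.example.* name
below is made up by me, not anything Ryan has defined):

/* Hypothetical engine-side skeleton: claim a well-known name on the
 * session bus and hand openUrl/play/stop method calls to the real
 * media engine. All org.example.* names are invented for illustration. */
#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn)
        return 1;

    /* Advertise ourselves as a media engine service */
    dbus_bus_request_name(conn, "org.example.MediaEngine.Xine",
                          DBUS_NAME_FLAG_REPLACE_EXISTING, &err);

    for (;;) {
        dbus_connection_read_write(conn, 100 /* ms */);
        DBusMessage *msg = dbus_connection_pop_message(conn);
        if (!msg)
            continue;
        if (dbus_message_get_type(msg) != DBUS_MESSAGE_TYPE_METHOD_CALL) {
            dbus_message_unref(msg);
            continue;
        }

        if (dbus_message_is_method_call(msg, "org.example.MediaEngine",
                                        "openUrl")) {
            const char *url = NULL;
            dbus_message_get_args(msg, NULL, DBUS_TYPE_STRING, &url,
                                  DBUS_TYPE_INVALID);
            printf("backend would open %s here\n", url);
        } else if (dbus_message_is_method_call(msg, "org.example.MediaEngine",
                                               "play")) {
            /* hand off to the backend's play() */
        } else if (dbus_message_is_method_call(msg, "org.example.MediaEngine",
                                               "stop")) {
            /* hand off to the backend's stop() */
        }

        /* Acknowledge the call with an empty reply */
        DBusMessage *reply = dbus_message_new_method_return(msg);
        dbus_connection_send(conn, reply, NULL);
        dbus_message_unref(reply);
        dbus_message_unref(msg);
    }
}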
Sorry if you've said this before, but how do you decide which backend to use 
for which file? Do engines "advertise" what types they can play somehow? Does 
the user have to choose?
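
If there isn't a mechanism yet, I could imagine discovery working off the bus
itself: list the registered names, pick out the ones following some
media-engine naming convention, and ask each whether it can handle a given
MIME type. A completely made-up sketch (the canPlay method and the
org.example prefix are my inventions; only ListNames is a real, standard bus
method):

/* Made-up discovery sketch: list services on the session bus, pick out
 * ones following a media-engine naming convention, and ask each whether
 * it can handle a MIME type. canPlay and the name prefix are invented. */
#include <dbus/dbus.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);
    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (!conn)
        return 1;

    /* ListNames is a standard method on the bus daemon */
    DBusMessage *msg = dbus_message_new_method_call(
        "org.freedesktop.DBus", "/org/freedesktop/DBus",
        "org.freedesktop.DBus", "ListNames");
    DBusMessage *reply =
        dbus_connection_send_with_reply_and_block(conn, msg, -1, &err);
    dbus_message_unref(msg);
    if (!reply)
        return 1;

    char **names = NULL;
    int n = 0;
    dbus_message_get_args(reply, NULL,
                          DBUS_TYPE_ARRAY, DBUS_TYPE_STRING, &names, &n,
                          DBUS_TYPE_INVALID);

    const char *mime = "audio/x-mp3";
    for (int i = 0; i < n; i++) {
        if (strncmp(names[i], "org.example.MediaEngine.", 24) != 0)
            continue; /* not a media engine service */

        DBusMessage *q = dbus_message_new_method_call(
            names[i], "/org/example/MediaEngine",
            "org.example.MediaEngine", "canPlay");
        dbus_message_append_args(q, DBUS_TYPE_STRING, &mime,
                                 DBUS_TYPE_INVALID);
        DBusMessage *r =
            dbus_connection_send_with_reply_and_block(conn, q, -1, &err);
        dbus_bool_t ok = 0;
        if (r) {
            dbus_message_get_args(r, NULL, DBUS_TYPE_BOOLEAN, &ok,
                                  DBUS_TYPE_INVALID);
            dbus_message_unref(r);
        }
        printf("%s can play %s: %s\n", names[i], mime, ok ? "yes" : "no");
        dbus_message_unref(q);
    }

    dbus_free_string_array(names);
    dbus_message_unref(reply);
    return 0;
}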


