[gst-devel] Re: Comparison: MAS, GStreamer, NMM

Thomas Vander Stichele thomas at apestaart.org
Thu Aug 26 09:55:16 BST 2004


Hi,

> sure we can add multi-media, e.g. audio, video, ..., (how about SMIL? 
>  No, just kidding ;)
> However, as stated in my first email: The audio player is meant as 
> starting point,

Ok, so we have a difference of opinion.  My point is that a "starting
point" doesn't tell you enough about what you actually want to know to
make a good evaluation.  Otherwise we might as well just compare the
calls needed to initialize each framework without doing anything else.

>  and: these features are
> simple enough to be provided for all three frameworks.

So is video.  Or, rephrased: if a multimedia framework doesn't play
video, it shouldn't even be a "contestant" in the comparison.  Video is
not something you can "add on later".


> And: if a framework is generic, then setting one capability (e.g. 
> setting the filename of the source plug-in) is handled as setting any 
> other capability.

In a theoretically perfect model, yes.  In practice every abstraction
leaks somewhere, and the point to consider is how those leaks are
handled.  There are plenty of examples: xvideo output only offering one
xv port or a maximum image size, webcams only supporting a fixed set of
framerates, v4l devices only supporting certain width/height
combinations, network connections failing, ...
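
For what it's worth, here is a rough sketch of how that difference
shows up in GStreamer, assuming the 0.8-era C API; the
v4lsrc/xvimagesink elements, the device path and the requested
size/framerate are only placeholders:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("capstest");
  GstElement *src  = gst_element_factory_make ("v4lsrc",      "cam");
  GstElement *sink = gst_element_factory_make ("xvimagesink", "out");
  gst_bin_add_many (GST_BIN (pipeline), src, sink, NULL);

  /* the "generic" part: every property is set the same way */
  g_object_set (G_OBJECT (src), "device", "/dev/video0", NULL);

  /* the "leaky" part: ask for a size/framerate the hardware may not
   * provide; the constraint surfaces as a failed caps negotiation,
   * not as a property error */
  GstCaps *caps = gst_caps_new_simple ("video/x-raw-yuv",
      "width",     G_TYPE_INT,    320,
      "height",    G_TYPE_INT,    240,
      "framerate", G_TYPE_DOUBLE, 15.0,
      NULL);
  if (!gst_element_link_filtered (src, sink, caps))
    g_warning ("device cannot provide the requested format");

  return 0;
}

Setting "device" and requesting 320x240 look equally uniform in the
API, but only the caps-based part can tell you the hardware said no.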

Anyway, like I said before, if you want a level playing field to
compare the three on, you first need to get all three frameworks to
agree on what that playing field is.  I personally don't see the merit
in writing a helloworld that just plays an mp3, because I can already
write that with libmad alone.
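
(For reference, such a helloworld is roughly the following in
GStreamer, again assuming the 0.8-era API; "song.mp3" and the osssink
output are placeholders.  It's short, which is exactly why it doesn't
prove much on its own.)

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("player");
  GstElement *src  = gst_element_factory_make ("filesrc", "source");
  GstElement *dec  = gst_element_factory_make ("mad",     "decoder");
  GstElement *sink = gst_element_factory_make ("osssink", "output");

  g_object_set (G_OBJECT (src), "location", "song.mp3", NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, dec, sink, NULL);
  gst_element_link_many (src, dec, sink, NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  while (gst_bin_iterate (GST_BIN (pipeline)))
    ;                           /* push data through until EOS */
  gst_element_set_state (pipeline, GST_STATE_NULL);

  return 0;
}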

The examples should highlight the strong and weak points in each
framework's current API and design.  There's no point in setting up
examples tailored to show the strengths of one of the three frameworks
and then seeing how the others match up.

For example, for GStreamer, helloworld 3 would show a weak point in
network transparency, and helloworld 1 would show a strong point in the
ease of setting up a decoding pipeline.  That's probably because, in
our opinion, ease of decoding matters slightly more to end users than
network transparency does.  But all that would prove is that the
examples were tailored to show something specific, not what the KDE
people actually want to know.

Anyway, it seems to me the KDE people are already figuring out for
themselves how to do an evaluation: they're writing the API they'd
like, and the backends should then implement it.  That sounds to me
like the correct test, originating from the right people.

Thomas


Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
If only you'd come back to me
If you laid at my side
wouldn't need no mojo pin
to keep me satisfied
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/


