[gst-devel] Comparison: MAS, GStreamer, NMM

Benjamin Otte in7y118 at public.uni-hamburg.de
Thu Aug 26 09:32:42 BST 2004


Lemme see how small I can get this...

On Wed, 25 Aug 2004, Marco Lohse wrote:

> (1) Allow the playback of an encoded audio file (e.g. MP3). This will
> result in similar setups: a component for reading data from a
> file connected to a component for decoding connected to a component
> for audio output. (Together, this is called "pipeline" or "flow
> graph").
> (2) Set the filename of the file to be read.
> (3) Manually request/setup this functionality, i.e. no automatic setup
> of flow graphs.
> (4) Include some error handling.
>
#include <glib.h>

/* helloworld I: build a gst-launch pipeline description and run it
 * synchronously; the call returns once playback has finished. */
void
penis_size_contest_1 (gchar *filename)
{
  gint ret = 0;
  gchar *command = g_strdup_printf ("gst-launch filesrc location=\"%s\""
      " ! mad ! osssink", filename);

  /* g_spawn_command_line_sync returns FALSE if spawning failed;
   * gst-launch exits non-zero if the pipeline reported an error. */
  if (!g_spawn_command_line_sync (command, NULL, NULL, &ret, NULL)
      || ret != 0) {
    g_print ("an error occurred.\n");
  }
  g_free (command);
}
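
One caveat: the filename gets pasted straight into the command line, so a
name containing quotes breaks the parse. A minimal variant that escapes it
first with GLib's g_shell_quote might look like this (the function name is
made up):

#include <glib.h>

/* Sketch: shell-quote the filename before building the command line, so
 * spaces and quotes in the name survive g_spawn's command line parsing.
 * Everything else is unchanged from the function above. */
void
play_file_quoted (const gchar *filename)
{
  gint ret = 0;
  gchar *quoted = g_shell_quote (filename);
  gchar *command = g_strdup_printf ("gst-launch filesrc location=%s"
      " ! mad ! osssink", quoted);

  if (!g_spawn_command_line_sync (command, NULL, NULL, &ret, NULL)
      || ret != 0) {
    g_print ("an error occurred.\n");
  }
  g_free (command);
  g_free (quoted);
}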

> In a second step, we would like to extend the helloworld program with
> following feature (helloworld II):
>
> (1) Add a listener that gets notified if the currently playing file
> has ended, i.e. this listener is to be triggered after the last byte
> was played by the audio device.
>
/* helloworld II: same as above, but since the spawn is synchronous the
 * call only returns after the last buffer has been played, so the
 * end-of-file listener can simply be invoked afterwards. */
void
penis_size_contest_2 (gchar *filename, void (*eof)())
{
  gint ret = 0;
  gchar *command = g_strdup_printf ("gst-launch filesrc location=\"%s\""
      " ! mad ! osssink", filename);

  if (!g_spawn_command_line_sync (command, NULL, NULL, &ret, NULL)
      || ret != 0) {
    g_print ("an error occurred.\n");
  }
  g_free (command);
  /* notify the listener now that playback is over */
  eof ();
}
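
In case the wiring of the listener is not obvious, here is a minimal caller;
on_eof and the argument handling are made up purely for illustration:

#include <glib.h>

/* Hypothetical end-of-file listener, just to show how the callback from
 * the function above gets hooked up. */
static void
on_eof (void)
{
  g_print ("the last byte has been played\n");
}

int
main (int argc, char *argv[])
{
  if (argc < 2) {
    g_print ("usage: %s <mp3 file>\n", argv[0]);
    return 1;
  }
  penis_size_contest_2 (argv[1], on_eof);
  return 0;
}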


> In a final step, we would like to extend the helloworld program
> (helloworld I) to allow for distributed playback (helloworld III):
>
> (1) The component for reading data from a file should be located on the
> local host. The component for decoding, and playing the audio data should
> be located on a remote host.
>
/* helloworld III: spawn the decoding/output half on the remote host via
 * ssh, then stream the file to it over TCP from the local host.  The
 * remote half is spawned first in the hope that tcpserversrc is already
 * listening by the time tcpclientsink connects. */
void
penis_size_contest_3 (gchar *filename, gchar *remote_host)
{
  gchar *command = g_strdup_printf ("gst-launch filesrc location=\"%s\""
      " ! tcpclientsink host=%s", filename, remote_host);
  gchar *command_remote = g_strdup_printf ("ssh %s gst-launch "
      "tcpserversrc ! mad ! osssink", remote_host);

  if (!g_spawn_command_line_async (command_remote, NULL) ||
      !g_spawn_command_line_async (command, NULL)) {
    g_print ("an error occurred.\n");
  }
  g_free (command);
  g_free (command_remote);
}

Note that you can do all of this differently and probably a lot better.
You could also save a few lines by using Python or Perl, since you don't
need the g_free or g_spawn boilerplate there, but I somehow felt the
original questioner wanted C code.
This was the shortest C code I could come up with. Did I win?

> We think that these three examples provide typical features needed for
> developing multimedia applications. Furthermore, these features are
> simple enough to be provided for all three frameworks. Therefore, we
> hope that the developers of MAS and GStreamer are willing to
> participate in this comparison.
>
I don't think these are typical features needed for developing multimedia
applications. Then again, I obviously have no clue, because I only develop
the framework and implement what the app developers request from me, so I
might be far off. But judging from the input I have had from desktop
application developers so far, none of them has asked for the particular
features you state.

Here are some requests I got from amarok, beep, Rhythmbox and totem
developers:
1) Given a location, determine if it contains audio and/or video that can
be played back. (If it does, these apps tend to put this file into the
playlist.)
2) Given a location, extract the metadata from the file. (This is also
done when putting the file into the playlist.)
3) Given a location that contains audio and a sound output, reproduce the
audio of that location on that sound output.
4) Given a location that contains audio and video and a rectangular area
in my application, reproduce the video on that rectangular area and
reproduce the sound on the sound output. Determine the sound output
automatically.
5) Given a file that contains audio and video, transcode that file into
a different format.
6) Given a file and metadata, set this metadata on the file, replacing /
extending the existing one.

If someone feels like it, please write examples that show how to do these
things. I could then add them to our examples and point people who ask
about them to those instead of to the Rhythmbox or amarok sources...
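
To get the ball rolling on (5), here is a rough sketch in the same
gst-launch-via-g_spawn spirit as above, audio-only to keep it short. It
assumes the mad and wavenc plugins are installed; the function name and the
choice of WAV as the target format are just examples:

#include <glib.h>

/* Rough sketch for request 5: transcode an MP3 file to WAV by handing a
 * pipeline description to gst-launch and waiting for it to finish.
 * Assumes the mad and wavenc plugins are available. */
void
transcode_mp3_to_wav (const gchar *infile, const gchar *outfile)
{
  gint ret = 0;
  gchar *command = g_strdup_printf ("gst-launch filesrc location=\"%s\""
      " ! mad ! wavenc ! filesink location=\"%s\"", infile, outfile);

  if (!g_spawn_command_line_sync (command, NULL, NULL, &ret, NULL)
      || ret != 0) {
    g_print ("an error occurred.\n");
  }
  g_free (command);
}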

Benjamin


