[Kdenlive-devel] moved here from kde-multimedia (was Re: Multimedia Frameworks)

Jason Wood jasonwood at blueyonder.co.uk
Mon Oct 28 15:32:42 UTC 2002


On Monday 28 Oct 2002 10:29 am, Rolf Dubitzky wrote:
> On Friday 25 October 2002 08:36 pm, Christian Berger wrote:
> > > Ok, that is a must. You cannot pass any video data arround.
> >
> > Well, the problem is that the cutter should be as independent of the
> > rest of the system as possible. OK, X can do something here, but I
> > don't want the cutter to become a GUI application like Adobe
> > Premiere, which makes automation almost impossible.
>
> Yes, I completely understand, since I have thought about this topic for
> quite a while. The only idea I had just recently is that both applications,
> the GUI and the cutter, could be GStreamer apps. In that case the cutter
> can be implemented as a GStreamer plugin and thus can render into a
> gst-queue. The GUI can build any gst-pipe it wants, e.g. pipe to a file or
> display in a widget. In the end you _will_ need an interface between cutter
> and GUI; this could be something self-made, or something which already
> exists.

I agree here. The null-cutter idea (which would be finished by now, but for an 
excursion to set up Linux on a new computer) currently uses a 
quickly-put-together interface, which is in no way meant to be permanent.

I'm not so sure about the GUI being a GStreamer app - mostly, I guess, because 
I don't want to tie the program too tightly to any framework until I know 
that it is the right framework.

I do not believe that there will be a significant (as in noticeable to the 
user) performance penalty for communicating through separate threads, 
but if I turn out to be wrong, I am coding so that the details of the 
interface are hidden from the cutter itself. E.g. all of the 
client/server code is in a base class which you then inherit, filling in a 
couple of functions. In the case of GStreamer, that would simply mean putting 
the GStreamer rendering code within this class.
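The base-class arrangement described above could look something like this. All names here (CutterInterface, NullCutter, renderFrame) are illustrative, not from the actual code:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: the client/server transport details live in a base
// class; a concrete cutter backend only overrides a couple of functions.
class CutterInterface {
public:
    virtual ~CutterInterface() {}

    // Public entry point used by the GUI side; in the real interface the
    // base class would marshal this request across the thread boundary.
    std::string renderFrame(int frameNumber) {
        return doRenderFrame(frameNumber);
    }

protected:
    // The only function a backend has to fill in. A GStreamer-based cutter
    // would simply put its rendering code here.
    virtual std::string doRenderFrame(int frameNumber) = 0;
};

// A trivial backend in the spirit of the null-cutter, producing no real
// image data.
class NullCutter : public CutterInterface {
protected:
    std::string doRenderFrame(int frameNumber) {
        return "frame " + std::to_string(frameNumber) + ": (null)";
    }
};
```

Swapping the backend then never touches the GUI-facing side of the interface.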

> If the KDE guys could live with GStreamer becoming part of KDE, this would
> even be desktop independent. I have already coded something like a wrapper
> which makes my complete engine something like a super plugin, i.e. a single
> gst-element with as many input pads as there are video streams to merge and
> a single output pad. There is one problem, though, with the gst-dv plugin:
> as far as I can see, it decodes every frame it passes to the pipe;
> considering the bad quality and performance of the Linux libdv codec, this
> is not acceptable. My render tree is smart enough to only decode a frame if
> it is really necessary, i.e. if the image data is not touched it will be
> written to the output file without being decoded and re-encoded.
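The lazy-decode idea described here - running the expensive DV codec only on frames whose image data is actually modified - boils down to a per-frame decision like the following sketch (all names are illustrative, not from the real engine):

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch of copy-through vs. decode/re-encode per frame.
struct Frame {
    std::vector<unsigned char> compressed;  // raw DV data from the input
    bool touched;                           // does any effect modify pixels?
};

// Renders every frame; reports how many had to go through the codec.
int renderAll(const std::vector<Frame>& frames, int& decodedCount) {
    decodedCount = 0;
    for (std::size_t i = 0; i < frames.size(); ++i) {
        if (frames[i].touched) {
            // decode -> apply effects -> re-encode (expensive path)
            ++decodedCount;
        } else {
            // Image data untouched: write the compressed frame straight
            // to the output file, skipping the codec entirely.
        }
    }
    return static_cast<int>(frames.size());
}
```

For a typical edit, where most frames are plain cuts, almost everything takes the cheap copy-through path.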

> Unfortunately I don't have a copy of Adobe Premiere, so I don't know how it
> works.

I can't comment on its rendering capabilities, as the only version I have 
used is the one which has a Matrox RT2500 wired in to give it real-time 
rendering on most effects and transitions.

However, the one real problem it has is that it is just plain dumb when it 
comes to figuring out whether it has to render something or not, so you end 
up moving clips and having to re-render them, or moving a clip, pressing 
"undo" to get back where you started, and having to re-render it again.

Anyway, enough ranting ;-)

>
> > Well, the 2-screen paradigm is just one idea; there are many programs
> > without it. The timeline images surely could be passed from one
> > application to another.
>
> Sure, and "2 screens" doesn't really mean two widgets on the desktop next
> to each other, but displaying a preview of the video and displaying a
> single frame in which I want to adjust text or other video with the mouse
> are two very different tasks, which will need different widgets.
>
> > Yes, but we can maybe define some kind of "meta-format", which contains
> > the "common denominator" but is dynamically extensible.
>
> Something like this has already been done, I think; do you know SMIL? In
> the beginning, I thought about using a subset of SMIL, but I think it is
> overkill and not very well suited to persisting my kind of render tree.

Having had a quick skim of the W3C webpage for SMIL, I don't think that it 
will be overkill in, say, two or three years' time. But whilst I am 
forward-looking, I also want to get something up and running sometime soon 
:-) In particular, I can't see any way in SMIL to define your own effects and 
transitions, so I am not sure that it would be appropriate anyway. Where 
would the EffecTV filters be defined, for instance?

But I don't think we need to go to that length anyway. For a transition, the 
GUI knows how to draw a timeline, keyframes, and integer, double, time, 
string, and font values, etc. The cutter knows how to render/preview the 
transition correctly and which settings are involved.

All we need to do is communicate the possible settings to the GUI, and 
communicate the selected settings back again.
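That two-way exchange - the cutter advertising which settings a transition has, and the GUI sending the chosen values back - could be sketched like this. The transition, its settings, and all names here are made up for illustration:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// What the cutter sends to the GUI for each setting of a transition:
// enough for the GUI to pick an appropriate widget.
struct SettingSpec {
    std::string name;          // e.g. "softness"
    std::string type;          // "integer", "double", "time", "string", ...
    std::string defaultValue;  // used when the user changes nothing
};

// Hypothetical description of a crossfade transition.
std::vector<SettingSpec> describeCrossfade() {
    std::vector<SettingSpec> specs;
    specs.push_back({"duration", "time", "1.0"});
    specs.push_back({"softness", "double", "0.5"});
    return specs;
}

// What goes back to the cutter: setting name -> selected value, with
// defaults filled in for anything the user did not touch.
std::map<std::string, std::string>
selectedSettings(const std::vector<SettingSpec>& specs,
                 const std::map<std::string, std::string>& userChoices) {
    std::map<std::string, std::string> result;
    for (std::size_t i = 0; i < specs.size(); ++i) {
        std::map<std::string, std::string>::const_iterator it =
            userChoices.find(specs[i].name);
        result[specs[i].name] =
            (it != userChoices.end()) ? it->second : specs[i].defaultValue;
    }
    return result;
}
```

The GUI never needs to understand what a setting means, only how to edit a value of its type; the cutter never needs to know how the value was edited.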

> > Oh, my engine is currently still non-existent. It'll be based on lavpipe,
> > which enables us to at least have a working cutter for a start.
>
> As I said, I would be happy to share the code if you are interested. My
> 'engine' is not ready by any means. I would say it is in a
> proof-of-principle state. I mean, cutting a movie in plain C++ is not a
> pleasure; I need a GUI.

Oh, definitely! I shall try to get the null-cutter finished in some form 
tonight or tomorrow (I said that last time) and then, assuming everything 
works as expected, we can start looking into the best way to integrate your 
code with the GUI.

Cheers,
Jason

-- 
Jason Wood
Homepage : www.uchian.pwp.blueyonder.co.uk
