[Kdenlive-devel] moved here from kde-multimedia (was Re: Multimedia Frameworks)

Rolf Dubitzky dubitzky at pktw06.phy.tu-dresden.de
Mon Oct 28 11:15:11 UTC 2002


On Saturday 26 October 2002 11:01 am, Christian Berger wrote:
> Am Samstag, 26. Oktober 2002 00:05 schrieben Sie:
> > On Friday 25 Oct 2002 10:10 pm, Christian Berger wrote:
> > > I personaly doubt moving an image around in memory is _THAT_
> > > consuming. Just think of that Sinclair guy, he moved images around on
> > > a Z80 50 times per second.
> >
> > If we considered PAL, at 720x576 at 25 fps, then we are looking at
> > copying between 20 and 40 Meg to the screen per second, depending on the
> > color depth of the screen.
>
> Well but I doubt many systems can do a preview of the cut at _THAT_ speed.
> What might be more important here is to have exactly the right image. If
> you use overlay, the overlay might lag behind a frame or two.

Well, I don't know, but any professional video editor uses the same approach 
AFAIK. The approach is: 
1) Have a single-frame preview window. Selecting a frame in the timeline will 
render/display this frame at decent (highest?) quality.
2) Have _large_ buffers on the HD and render the movie at preview quality into 
the buffers. If you want to preview a scene, it will be taken from the buffers. 

For 2), if you have a smart engine, the engine will be able to use the 
offscreen capabilities of the graphics hardware and/or use hardware to render 
at preview quality if the selected scene has not yet been prerendered. It is 
possible to have different implementations of the same effect/transition: 
e.g. a high/production quality one, which just renders everything by hand as 
well as possible, and a low/preview quality one, which tries to use hardware 
acceleration. The effect might then not look 100% perfect: fonts may not be 
antialiased, color gradients may be reduced, etc., etc...

The real problem is not 1). Doing that perfectly is not that big a deal. But 2), 
doing it almost perfectly in realtime for preview... that's the challenge.
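A rough sketch of the dual-quality idea in Python (all names here are hypothetical illustrations, not actual Kdenlive or GStreamer code): the same transition exposes a full-precision software path for production rendering and a cheap approximation standing in for a hardware-accelerated preview path.

```python
# Hypothetical sketch: one effect, two quality implementations.
# None of these names come from Kdenlive or GStreamer.

class CrossFade:
    """A transition renderable at production or preview quality."""

    def render(self, frame_a, frame_b, t, quality="production"):
        if quality == "production":
            return self._render_software(frame_a, frame_b, t)
        # Preview path: in a real engine this would try the graphics
        # hardware first; here a cheap software stand-in.
        return self._render_fast(frame_a, frame_b, t)

    def _render_software(self, frame_a, frame_b, t):
        # Full per-pixel blend: out = a*(1-t) + b*t
        return [(1.0 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

    def _render_fast(self, frame_a, frame_b, t):
        # Rough approximation: just pick the nearer source frame,
        # standing in for a reduced-quality accelerated path.
        return list(frame_b if t >= 0.5 else frame_a)


fade = CrossFade()
a, b = [0.0, 0.0, 0.0], [1.0, 1.0, 1.0]
print(fade.render(a, b, 0.25))                     # exact blend: [0.25, 0.25, 0.25]
print(fade.render(a, b, 0.25, quality="preview"))  # rough: [0.0, 0.0, 0.0]
```

The GUI would pick `quality="preview"` for timeline scrubbing and the prerender buffers, and `"production"` only for the final render.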

> > However, if we were to copy data from the Cutter to the GUI, then the
> > problems that arise are due to context-switching. We would have to
> > consider looking at the Cutter as a seperate thread rather than as a
> > seperate process.
>
> Well this wouldn't probably work without integrating the cutter a lot more
> tightly.

If the cutter is a gst-plugin, this comes for free: the GUI can wrap it in a 
gst-thread, or not. Using gst-launch, I guess the plugin could become a real 
process, too... I'm not sure, though.

> > I don't like the idea of threads for a number of reasons : Firstly, if
> > the Cutter crashes, it will take the GUI with it. Secondly, we would
> > still have to allow the Cutter to run as a seperate process so that it
> > could be run remotely or for batch processing anyway. Thirdly, it would
> > really screw up my idea of having a scheduler for multiple cutters
> > distributed on multiiple computers ;-)

OK, I don't want to get too involved with multiple computers, because 
that's my major. But if we want to test this on a 500+ PC cluster at some 
point, I can do it. Using a PC cluster to render a movie is trivial: the 
task is intrinsically parallel if you can make the cutter read an XML file and 
launch it with gst-launch. You just have to tell everybody which frames to 
render and have a serializer at the end. That's what I do all day ;-)
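The farm-out-and-serialize scheme can be sketched in a few lines of Python (purely illustrative; in the real setup each worker would be a cluster node running the cutter via gst-launch on the shared XML project file):

```python
# Hypothetical sketch of the "intrinsically parallel" render:
# hand frames out to workers, then serialize the results in order.
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_no):
    # Stand-in for "cutter reads the XML project, renders one frame".
    return f"frame-{frame_no:04d}"

def render_movie(total_frames, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() returns results in input order regardless of which
        # worker finishes first -- this is the serializer step.
        return list(pool.map(render_frame, range(total_frames)))

print(render_movie(6, workers=3)[:3])
# ['frame-0000', 'frame-0001', 'frame-0002']
```

The point is that no worker depends on another's output, so the only coordination needed is assigning frame ranges up front and reordering at the end.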

> Well first of all if we programm propperly, the cutter won't crash.

;-)

> However that's why I was suggesting the cutter to anounce it's
> posibilities. If the cutter completely runs over network it might only
> offer the frames or might even just display frames on a seperate display.

Ok, you are talking about interactive work on a cluster?

Cheers,
Rolf

***************************************************************
 Rolf Dubitzky  
 e-mail: Rolf.Dubitzky at Physik.TU-Dresden.de
 s-mail see http://hep.phy.tu-dresden.de/~dubitzky/
***************************************************************
