[Kdenlive-devel] kdenlive

Jason Wood jasonwood at blueyonder.co.uk
Wed May 1 23:09:48 UTC 2002


On Wednesday 01 May 2002 8:18 pm, Christian Berger wrote:
> Jason Wood wrote:
> > On Wednesday 01 May 2002 09:51, Christian Berger wrote:
> > >
> > > Well, for preview I would use a lower framerate as well as render all
> > > the effects at a low resolution. But I don't think we'll be able to
> > > reuse that data. At least I wouldn't put much energy into that.
> >
> > It may well depend on the cutter program and the user's hardware, though
> > - if we can generate full-quality output quickly, then we might as well
> > make the preview images full quality, in which case they can be reused.
> > The only advantage (though a major one) of low-quality preview files is
> > that they can be rendered much more quickly than high-quality ones.
>
> Well, I guess one problem is that our "preview" won't be much faster than
> the real thing. Other programs have the same problem; rendering a preview
> can take longer than doing the "real thing" under many circumstances.
> However, let's see.

Agreed

> > Sorry, I haven't made myself clear here - filehandles is not the best
> > term for what is supposed to be happening. What we have is more a
> > description of the "paths" that videos have to traverse through effects
> > in order to make this particular video.
> >
> > Take a look at the attached image. Here, I have shown how one particular
> > piece of the output video may have to be constructed. It consists of 5
> > files and 7 effects, some of which require previous effects to have been
> > computed first, and some which require multiple effects to have been
> > calculated.
> >
> > The identifiers for "inputs" into effects and "outputs" from files and
> > effects are really just that - identifiers showing how to get from one
> > effect to the next - in other words, each line gets a separate
> > identifier. The reasoning behind "closing" them after use is to
> > guarantee that a line only has one input and one output - a useful
> > simplification, as it prevents two effects from trying to access the
> > same video and getting confused when it seeks forward after each read.
>
> Well, it doesn't make much sense; no two effects can access the same video
> simultaneously. I would prohibit that. But if you close the files, you
> cannot access them again without difficult seeking and maybe even
> decompression or something. So we shouldn't do that.

It might make sense if, for example, you wanted an effect which placed the
same video on screen simultaneously multiple times at different locations
(perhaps behaving like a particle effect). You would either have to a) open
the same file multiple times, which may be your only option if the copies
need to be at different positions within the video, or b) more effectively,
pipe a single file to multiple effects (each of which would perform a
separate translate/rotate/scale transform on the video).

However, in this case, you would have a separate special effect which would
take a single input and send it to several outputs - it would handle any
problems that might occur if other effects tried to pull different numbers of
frames from it.
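
To make the idea concrete, here is a rough sketch (Python purely for
illustration - the class names and the idea of frames being plain numbers
are made up) of what such a splitter effect might do: buffer frames from its
single input so that several downstream effects can each pull at their own
pace without ever forcing the source to seek backwards.

class FrameSource:
    """Stand-in for a decoded video file: frames are just numbers here."""
    def __init__(self, length):
        self.length = length
        self.pos = 0

    def next_frame(self):
        if self.pos >= self.length:
            return None               # end of clip
        frame = self.pos
        self.pos += 1
        return frame


class Splitter:
    """One input feeding several outputs, each with its own position."""
    def __init__(self, source, n_outputs):
        self.source = source
        self.buffer = []              # frames read but still needed by someone
        self.base = 0                 # absolute index of buffer[0]
        self.positions = [0] * n_outputs

    def pull(self, output):
        pos = self.positions[output]
        # read forward until the requested frame is in the buffer
        while self.base + len(self.buffer) <= pos:
            frame = self.source.next_frame()
            if frame is None:
                return None
            self.buffer.append(frame)
        frame = self.buffer[pos - self.base]
        self.positions[output] += 1
        # discard frames that every output has already consumed
        done = min(self.positions) - self.base
        if done > 0:
            del self.buffer[:done]
            self.base += done
        return frame


src = FrameSource(5)
split = Splitter(src, 2)
print(split.pull(0), split.pull(0), split.pull(1))   # prints: 0 1 0

The point is only that the buffering and position bookkeeping live inside
the splitter, so the effects downstream never need to know that the input is
shared.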

> For effects with two videos we always have to have one "master" video and
> one slave, because the framerates can differ.
>
> So we would have something like this:
>
> open file1 a
> open file2 b
> pipe p1
> play -in a -out p1 -effect "effect1" -effectparm b -effect "effect2"
> -effectparm 34
> close a
> close b
>
> open file3 a
> open file4 b
> pipe p2
> play -in a -out p2 -effect "effect3" -effectparm b -effect "effect4"
> -effectparm 34
> close a
> close b
>
> pipe p3
> play -in p1 -out p3 -effect "effect5" -effectparm p2
> close p1
> close p2
>
> output result.avi x
>
> open file5 a
> play -in a -out x -effect "effect6" -effectparm 34 -effect "effect7"
> -effectparm p3
> close a
> close p3
>
> Pipes aren't designed for multithreading, but to temporarily store
> data.
> A line like :
> play -in a -out x -effect "effect1" -effectparm a
> should not be allowed and be rejected by the parser.
>
> >
> > In my file definition, this would simply be defined as follows:
> >
> > # First greenscreen clip + background (colorised)
> > IN 1 file1.avi
> > IN 2 file2.avi
> > EFFECT 3 chroma 1 2 (options go here)
> > EFFECT 4 colorise 3 (options go here)
> >
> > # Second greenscreen clip + background (in hell, flame effect)
> > IN 5 file3.avi
> > IN 6 file4.avi
> > EFFECT 7 chroma 5 6 (options go here)
> > EFFECT 8 flameeffect 7 (options go here)
> >
> > # wipe between the two
> > EFFECT 9 wipehorizontal 4 8 (options go here)
> >
> > # superimpose text caption
> > IN 10 file5.avi
> > EFFECT 11 superimpose 9 10 (options go here)
> >
> > # write to final output
> > OUT 11
> >
> > I'm not saying it's perfect (I haven't written any duration or
> > positioning information above), but it's quite easy to expand
> > indefinitely without getting a complicated syntax, and it is easy to
> > backtrack from OUT to the INs to figure out what needs to be done to
> > generate each new frame of output.

In comparing the two different ways of doing things here I think I've figured 
out where we are disagreeing :-)

My idea considers the cutting list to be a description of what needs to be 
done and which effects need to be applied, nothing more - optimisation of 
the cutting list (like knowing when a file can be kept open from one piece 
of output to the next to avoid unnecessary seeking) should occur within the 
cutting program itself. This means more work in the cutter program (it has 
to optimise the cutting process itself), but less work in the program which 
generates the cutting list.
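
As a rough sketch of what I mean (Python purely for illustration; the effect
options are omitted and the parsing is deliberately naive), the cutter could
parse the declarative list above and backtrack from OUT to the INs to work
out an evaluation order - how and when files are actually opened, reused or
closed is then entirely the cutter's own decision:

cutting_list = """
IN 1 file1.avi
IN 2 file2.avi
EFFECT 3 chroma 1 2
EFFECT 4 colorise 3
IN 5 file3.avi
IN 6 file4.avi
EFFECT 7 chroma 5 6
EFFECT 8 flameeffect 7
EFFECT 9 wipehorizontal 4 8
IN 10 file5.avi
EFFECT 11 superimpose 9 10
OUT 11
"""

nodes = {}    # identifier -> ("IN", filename) or ("EFFECT", name, input ids)
outputs = []

for line in cutting_list.splitlines():
    parts = line.split()
    if not parts or parts[0].startswith("#"):
        continue                      # skip blank lines and comments
    if parts[0] == "IN":
        nodes[int(parts[1])] = ("IN", parts[2])
    elif parts[0] == "EFFECT":
        nodes[int(parts[1])] = ("EFFECT", parts[2],
                                [int(p) for p in parts[3:]])
    elif parts[0] == "OUT":
        outputs.append(int(parts[1]))

def resolve(ident, order, seen):
    """Depth-first walk: inputs come before the effects that consume them."""
    if ident in seen:
        return
    seen.add(ident)
    node = nodes[ident]
    if node[0] == "EFFECT":
        for dep in node[2]:
            resolve(dep, order, seen)
    order.append(ident)

order, seen = [], set()
for out in outputs:
    resolve(out, order, seen)
print(order)    # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]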

Your idea considers the cutting list to be more "all in one" - that the 
cutting list is already optimised by the time it reaches the cutting program. 
This cuts down on the work that the cutting program has to do (it can just 
follow the cutting list blindly), but adds work to the program which 
generates the cutting list.

The problem that I see with optimising within the cutting list generator is 
that the generator will not know which cutter it is designing the list for. 
It will know a few simple parameters - enough to know whether it can use 
preview files within the cutting list, for example - but it will not know 
the specifics of the hardware that the cutting list will run on.

Is the hardware capable of holding so many files open at the correct places? 
Memory might be an issue; otherwise, why couldn't we just open every file we 
needed right at the beginning of editing and keep them open until the end? 
Perhaps it is possible to generate each separate "chunk" of output (here, a 
chunk would be the output of the diagram from the previous email, followed 
by another "chunk" with a different rendering diagram) in a different order 
that makes better use of the hardware, and then patch them together later. 
Of course, this can't be done as easily or quickly with VCR machines... but 
which part of the program should know about that - the cutter program, which 
has been designed to work with and interface with the hardware in question, 
or the cutting list generator, which is designed to be as generic across 
cutting programs as possible?

If you review the two different syntaxes we are using, the structure is very 
similar. The only question is: should pipes and files be explicitly opened 
and closed, or should this be left to the cutting program's discretion, 
since it could very well have a better idea of how to optimise them?

Another problem I see with specifically optimising between chunks is that it 
makes it more difficult to tell the cutter "preview this chunk" - we would 
either need to construct a specific cutting list just for generating the 
preview, or the cutter would need to "backpedal" to a previous chunk, 
calculate where the files it needs were opened, calculate where they need to 
seek to (since they will have been "played" a certain amount by effects 
within those previous chunks), and only then get on with actually playing 
the chunk.
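
Just to illustrate the bookkeeping involved (again Python only for
illustration, and the chunk structure here is invented): previewing chunk N
would mean scanning every earlier chunk to work out how far each still-open
file has already been "played", so the cutter knows where to seek before it
starts:

# each chunk: list of (filename, frames consumed from it in this chunk)
chunks = [
    [("file1.avi", 250), ("file2.avi", 250)],
    [("file1.avi", 100), ("file3.avi", 400)],
    [("file3.avi", 200), ("file4.avi", 200)],
]

def seek_positions(chunks, preview_index):
    """Frame offset each file must be seeked to before chunk preview_index."""
    positions = {}
    for chunk in chunks[:preview_index]:
        for filename, frames in chunk:
            positions[filename] = positions.get(filename, 0) + frames
    return positions

print(seek_positions(chunks, 2))
# {'file1.avi': 350, 'file2.avi': 250, 'file3.avi': 400}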

I'm not saying that's not possible, I just think it would be an arse to 
program :-)

-- 
Jason Wood
Persistence is a virtue