[Kdenlive-devel] Re: kdenlive

Christian Berger einStein at donau.de
Wed May 1 19:18:34 UTC 2002


Jason Wood wrote:
> 
> On Wednesday 01 May 2002 09:51, Christian Berger wrote:
> > > > > Several parameters which the user interface will check - for example,
> > > > > can the cutter stitch files together losslessly, or will a final
> > > > > render be required which takes the whole timeline in one go?
> > > >
> > > > I'm not sure what you mean there? I think it strongly depends on the
> > > > formats of the input files. In our internal format, we will be able to
> > > > do that without any problems.
> > >
> > > Imagine that the format of the video files that the cutter outputs is
> > > lossy. Whilst previewing, we do not care about the degradation of the
> > > picture quality, so if, for example, we choose an effect and preview it
> > > and then apply a second effect, we can simply apply it to the first preview.
> > >
> > > However, when we do our final render we do care about picture quality. If
> > > we know for sure that our preview files are non-lossy, then we can still
> > > use them when creating our final cutting list. Otherwise, we need to use
> > > the original files. For this reason, the program which generates the
> > > cutting list needs to know whether or not it can consider preview files
> > > for use in the final render.
> >
> > Well for preview I would use a lower framerate as well as render all the
> > effects in a low resolution. But I don't think we'll be able to reuse
> > that data. At least I wouldn't put much energy into that.
> 
> It may well depend on which cutter program and the user's hardware though - if
> we can generate full quality output quickly, then we might as well make the
> preview images full quality, in which case they can be reused. The only
> advantage (though a major one) of low quality preview files is that they can
> be rendered much quicker than high quality preview files.

Well I guess one problem is that our "preview" won't be much faster than
the real thing. Other programs have the same problem: under many
circumstances, rendering a preview can take longer than doing the "real
thing". However, let's see.

> > > As an aside, I think that the program which generates cutting lists will
> > > be separate from the program which generates the user interface. It will
> > > accept the project file created by the user interface and generate a
> > > cutting list optimised for the cutter specified. This will be better than
> > > making it part of the user interface, as it means less work porting from,
> > > for instance, KDE to Gnome.
> >
> > Of course. Just think of the possibilities: we could, for example, work
> > with OSD chips, or on tiny PDAs and the like.
> 
> :-)

Yes, there are full hardware harddisk editing systems available; those
may only have a small LCD screen, which is not suitable for a full user
interface like Premiere.
 
> > > > I think maybe we could make the effects to be parameters of the "play"
> > > > command.
> > > >
> > > > effect a blur 12   #"opens effect blur with parameter 12"
> > > > effect b mosaic 1  #"opens effect mosaic with parameter 1"
> > > >
> > > > Then we might add that to the play command:
> > > > play A B 200 a b  #or something
> > > >
> > > > Maybe we should add some "coding" for the parameters:
> > > > play -in A -out B -duration 200 -effect a -effect b
> > >
> > > I think that the problem with this is that there can effectively be an
> > > infinite number of effects occurring, using an infinite number of video
> > > files. (Ok, not infinite, but there should be no upper bound on the
> > > number of tracks).
> > >
> > > And if we think about it, effects will be applied sequentially to each
> > > other.
> >
> > Yes, the coding would simply tell what value is used for what field.
> >
> > > How about this for a file format :
> > >
> > > We have INs (files, VCRs, etc.), we have Effects, and we have the PLAY
> > > command.
> > >
> > > Play command only accepts one "stream", which it writes to the final
> > > file.
> > >
> > > E.g.
> > >
> > > #
> > > # Open file1.avi at time 0, assign it to file handle 1
> > > #
> > > IN 1 file1.avi  00:00:00.00
> > > #
> > > # Open file2.avi 10 seconds from the start, assign it to file handle 2
> > > #
> > > IN 2 file2.avi 00:00:10.00
> > > #
> > > # Apply a transition between the two. This automatically closes file
> > > # handles 1 & 2, to keep the command list linear. The result of the
> > > # effect is a new filehandle, assigned here as 3
> > > #
> > > EFFECT 3 fade 1 2  -start 0.0  -end 1.0 -duration 1000msec
> > > #
> > > # Write the result of file handle 3 to a file. If we do not specify a
> > > # duration, we assume the entire file should be written. After this
> > > # operation, file handle 3 will be closed.
> > > #
> > > OUT 3
> > >
> > > So, to generate this output which I suggested in a previous email:
> > >
> > > (I wrote)
> > >
> > > > which would play file 1 for 5 seconds, wipe to file 2 in 2 seconds, and
> > > > then continue playing file 2 for 10 seconds. At that point it would
> > > > take 2 more seconds to fade to black.
> > >
> > > We would write.
> > >
> > > # play file1.avi for 5 seconds
> > > IN 1 file1.avi 00:00:00.00
> > > OUT 1 -duration 00:00:05.00
> > >
> > > # wipe to file2.avi in 2 seconds
> > > IN 1 file1.avi 00:00:05.00
> > > IN 2 file2.avi 00:00:00.00
> > > EFFECT 3 wipe 1 2 -start 0.0 -end 1.0 -duration 00:00:02.00
> > > OUT 3
> > >
> > > # play file2.avi for 10 seconds
> > > IN 1 file2.avi 00:00:02.00
> > > OUT 1 -duration 00:00:10.00
> > >
> > > # fade to black (the first effect generates a video which is black, the
> > > # second applies the fade). Duration is given to color so that we never
> > > # accidentally try to OUT a file which never ends...
> > > IN 1 file2.avi 00:00:12.00
> > > EFFECT 2 color -r 0.0 -g 0.0 -b 0.0 -duration 00:00:02.00
> > > EFFECT 3 crossfade 1 2 -start 0.0 -end 1.0 -duration 00:00:02.00
> > > OUT 3
> > >
> > > The advantages of this format are firstly that it is clear at each point
> > > what needs to be done to generate the next bit of output.
> >
> > Well, first of all, commands should have as few "side effects" as
> > possible, so no command except a close command should close a file.
> > I would really do the effects as parameters of the play command, so we
> > can do all effects without storing stuff to disk.
> 
> Sorry, I haven't made myself clear here - "filehandles" is not the best term
> for what is supposed to be happening. What we have is more a description of
> the "paths" that videos have to traverse through effects in order to make
> this particular video.
> 
> Take a look at the image attached. Here, I have shown how one particular
> piece of the output video may have to be constructed. It consists of 5 files
> and 7 effects, some of which require previous effects to have been computed
> first, and some of which require multiple effects to have been calculated.
> 
> The identifiers for "inputs" into effects and "outputs" from files and effects
> are really just that - identifiers showing how to get from one effect to
> the next - in other words, each line gets a separate identifier. Requiring
> that they be "closed" after use guarantees that a line only has one input
> and one output - a useful simplification, as it prevents two effects from
> trying to access the same video and messing up when it seeks forwards
> after each.

Well, it doesn't make much sense; no two effects can access the same
video simultaneously. I would prohibit that. But if you close the files,
you cannot access them again without difficult seeking and maybe even
decompression. So we shouldn't do that.

For effects with two videos we always have to have one "master" video
and one "slave", because the videos may have different framerates.

So we would have something like this:

open file1 a
open file2 b
pipe p1
play -in a -out p1 -effect "effect1" -effectparm b -effect "effect2" -effectparm 34
close a
close b

open file3 a
open file4 b
pipe p2
play -in a -out p2 -effect "effect3" -effectparm b -effect "effect4" -effectparm 34
close a
close b

pipe p3
play -in p1 -out p3 -effect "effect5" -effectparm p2 
close p1
close p2

output result.avi x

open file5 a
play -in a -out x -effect "effect6" -effectparm 34 -effect "effect7" -effectparm p3
close a 
close p3
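
On the master/slave point above, a minimal sketch of how the slave could
be resampled at the master's timestamps (hypothetical Python; the frame
handling is invented for illustration):

def combine(master, slave, effect, master_fps, slave_fps):
    """Drive the effect at the master's framerate; resample the slave."""
    for i, mframe in enumerate(master):
        t = i / float(master_fps)            # timestamp of the master frame
        sframe = slave[int(t * slave_fps)]   # nearest earlier slave frame
        yield effect(mframe, sframe)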

Pipes aren't designed for multithreading, but to temporarily store
data.
A line like:
play -in a -out x -effect "effect1" -effectparm a
should not be allowed and should be rejected by the parser.
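
A minimal sketch of such a check (hypothetical Python; validate_play and
the token handling are invented for illustration):

import shlex

def validate_play(line):
    """Reject a play command whose -in handle also appears as an -effectparm."""
    tokens = shlex.split(line)          # honours the quoted effect names
    in_handle = None
    effect_parms = []
    for i, tok in enumerate(tokens[:-1]):
        if tok == "-in":
            in_handle = tokens[i + 1]
        elif tok == "-effectparm":
            effect_parms.append(tokens[i + 1])
    if in_handle is not None and in_handle in effect_parms:
        raise ValueError("handle %r used as both -in and -effectparm" % in_handle)

validate_play('play -in a -out x -effect "effect1" -effectparm a')  # raises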

> To give an example of when this might occur, imagine if we were using chroma
> on two separate clips - each consists of a background video, and a green
> screen video which will be superimposed over the top. The result of each of
> these clips gets passed through another effect - say one of them goes
> through a coloriser to make it look slightly better, whilst the other clip is
> supposed to put the presenter in the depths of hell and so applies a flame
> effect... And then we are wiping or fading from one to the other. The final
> video file/input superimposes a caption at the bottom of the screen.
> 
> In my file definition, this would be simply defined as follows :
> 
> # First greenscreen clip + background (colorised)
> IN 1 file1.avi
> IN 2 file2.avi
> EFFECT 3 chroma 1 2 (options go here)
> EFFECT 4 colorise 3 (options go here)
> 
> # Second greenscreen clip + background (in hell, flame effect)
> IN 5 file3.avi
> IN 6 file4.avi
> EFFECT 7 chroma 5 6 (options go here)
> EFFECT 8 flameeffect 7 (options go here)
> 
> # wipe between the two
> EFFECT 9 wipehorizontal 4 8 (options go here)
> 
> # superimpose text caption
> IN 10 file5.avi
> EFFECT 11 superimpose 9 10 (options go here)
> 
> # write to final output
> OUT 11
> 
> I'm not saying it's perfect (I haven't written any duration or positioning
> information above) but it's quite easy to expand indefinitely without getting
> complicated syntax, and it is easy to backtrack from OUT to INs to figure
> out what needs to be done to generate each new frame of output.

Well, here you would need to create several pipes, which would mean
quite some overhead in harddisk accesses. You don't really need them,
because you can have several -effect parameters, and the piping happens
somewhere inside, without having to write anything to disk.
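
Roughly like this - a minimal sketch, assuming effects are per-frame
functions (hypothetical Python; read_avi, blur, mosaic and write_frame
are placeholders for whatever the cutter provides):

def play(frames, effects):
    """Apply a chain of effects frame by frame, entirely in memory."""
    for frame in frames:
        for effect in effects:
            frame = effect(frame)   # the output of one effect feeds the next
        yield frame

# e.g. for the earlier example:
# for frame in play(read_avi("file1.avi"), [blur(12), mosaic(1)]):
#     write_frame(frame)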

Servus
  Casandro



