[Kdenlive-devel] kdenlive

Christian Berger einStein at donau.de
Fri May 3 20:39:28 UTC 2002


I'm sorry for not replying in full last time

Jason Wood schrieb:

> 
> > > Sorry, I haven't made myself clear here - filehandles is not the best
> > > term for what is supposed to be happening. What we have is more a
> > > description of the "paths" that videos have to traverse through effects
> > > in order to make this particular video.
> > >
> > > Take a look at the image attatched. Here, I have shown how one particular
> > > piece of the output video may have to be constructed. It consists of 5
> > > files and 7 effects, some of which require previous effects to have been
> > > computed first, and some which require multiple effects to have been
> > > calculated.
> > >
> > > The identifiers for "inputs" into effects and "outputs" from files and
> > > effects are really just that - identifiers showing how to get from one
> > > effect from the next - in other words, each line gets a seperate
> > > identifier. By reasoning that they should be "closed" after use is to
> > > guarantee that a line only has one input and one output - a useful
> > > simplification as it prevents two effects from trying to access the same
> > > video and messing up when it seeks forwards after each.
> >
> > Well It doesn't make much sense, no 2 effects can access the same video
> > simultainiously. I would prohibit that. But if you close the files, you
> > cannot access them anyhow without difficult seeking and maybe even
> > decompression or something. So we shouldn't do that.
> 
> It might make sense is, for example, if you wanted an effect which placed the
> same video on screen simultaneously multiple times at different locations
> (perhaps behaving like a particle effect). You would either have to a) open
> the same file multiple times, which may be your only option if the video that
> you want to display will be open a different positions, or a more effective
> way would be to have the ability to have a single file piped to multiple
> effects (each of which would perform a seperate translate/rotate/scale
> transform on the video).
> 
> However, in this case, you would have a seperate special effect which would
> take a single input, and send it to several outputs - it would handle any
> problems that might occur if other effects tried to pull different numbers of
> frames from it.

Well I would do it like this: I have several input modules, and several
effects can access the input modules at once. All effects are calculated
and all the inputs advance a bit (to the next frame for example)
 

> In comparing the two different ways of doing things here I think I've figured
> out where we are disagreeing :-)
> 
> My idea considers the cutting list to be a description of what needs to be
> done, and which effects need to be applied, and nothing more - optimisation
> of the cutting list (like knowing when a file can be kept open from one piece
> of output to the next to avoid unnecessary seeking) should occur within the
> cutting program itself. This means more work in the cutter program (it will
> have to optimise the cutting process itself), but less work in the program
> which generates the cutting list.
> 
> Your idea considers the cutting list to be more "all in one" - that the
> cutting list is already optimised by the time it reaches the cutting program.
> This cuts down on the work that the cutting program has to do (it can just
> follow the cutting list blindly), but adds work to the program which
> generates the cutting list.

Exactly
 
> The problem that I see with optimising within the cutting list generator is
> that the program will not know what cutter it is designing the list for - it
> will know a few simple paramters so that it knows if it can use preview files
> within the cutting list, for example, but it will not know the specifics of
> the hardware that it will be generating the cutting list for.
> 
> Is the hardware capable of holding so many files open at the correct place?
> Memory might be an issue, otherwise, why couldn't we just open every file
> that we needed right at the beginning of editing and keep them open until the
> end. Perhaps it is possible to generate each seperate "chunk" (here, a chunk
> would be the output of the diagram from the previous email, and would be
> followed by another "chunk" which would have a different rendering diagram)
> of output in a different order which makes better use of the hardware, and
> then patch them together later. Of course, this can't be done as easily or
> quickly with VCR machines... but which part of the program should know about
> it - the cutter program, which has been designed to work with and interface
> with the hardware in question, or the cutting list generator, which is
> designed to be as generic amongst cutting programs as possible?
> 
> If you review the two different syntax's we are using, the structure is very
> similar. The only question is - should pipes and files be explicitly opened
> and closed, or should this be left to the cutting program's discretion, which
> could very well have a better idea on how to optimise them.
> 
> Another problem that I see with specifically optimising between chunks is that
> it makes it more difficult to tell the cutter "preview this chunk" - we would
> either need to construct a specific cutting list for generating the preview,
> or the cutter would need to "backpedal" to a previous chunk, calculate where
> the file it needs were opened, calculate where they need to seek to (since it
> will have been "played" a certain amount by effects within those previous
> chunks) and then get on with actually playing the chunk.
> 
> I'm not saying that's not possible, I just think it would be an arse to
> program :-)

Well I have a new idea this is an example for functional programming  :)

openfile -name "input1" -filename "Input1.avi"
openfile -name "input2" -filename "Input2.mpg"
outputfile -name "output1" -filename "output.mp4"

scene -name "Some name just for comments" -duration
play #1 
#1 pip -input1 #2 -input2  "input1"
#2 fade -start 20 -stopp 80 -input1 "input2" -input2 #3
#3 mosaik -input "input1" -raster 5
sceneend

So we actually have some "functional attempt" at the "language". Ohh my,
I've gotta learn a lot of C++ in order to be able to program this :)

Servus
  Casandro
 
> --
> Jason Wood
> Persistence is a virtue
> 
> _______________________________________________________________
> 
> Have big pipes? SourceForge.net is looking for download mirrors. We supply
> the hardware. You get the recognition. Email Us: bandwidth at sourceforge.net
> _______________________________________________
> Kdenlive-devel mailing list
> Kdenlive-devel at lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/kdenlive-devel




More information about the Kdenlive mailing list