[Kdenlive-devel] 3d effects and transitions with blender

Dan Dennedy dan at dennedy.org
Mon May 18 17:40:16 UTC 2009


On Mon, May 18, 2009 at 3:02 AM, Alberto Villa <villa.alberto at gmail.com> wrote:
> hi list!
> since i like big proposals, i come here with another one :P
>
> some time ago someone wrote on the forums about a project,
> gvfx.blogspot.com, which is actually a python script to embed images and
> videos in a blender animation

I have already looked at it a couple of times. I explored integration
into MLT, and chose not to pursue it further because I could not get
past the issues with managing temp files and processes for each frame.
Granted, gvfx is not designed in that fashion; it prefers to render a
series of frames. MLT is not: it is designed to support random access.
I have to think about it some more. At the time, I was not happy with
the integration and realized the work would be fairly significant,
while (no insult intended) the effects are much more eye candy than
practical. Maybe this discussion can result in a different approach
than the one I was considering...

> i had a talk with one of the two guys behind this, and i'd like to discuss
> this feature with you all
>
> 1. how does it work?
> here's the script: http://gvfx.blogspot.com/2008/01/script-gfvx-firts-version.html
> with the upcoming version of the script (i would contribute to it, it's
> quite easy), you would have to supply one or two video files, select the frame
> range for the clip(s) (the duration of the effect/transition), and choose
> export parameters as well as the effect/transition you want
>
> 2. how to implement?
> here's what i think: we could add a "3d transition" in the transition dialog
> (and list available effects in their list - the way to detect them should be
> defined - but i'll focus on transitions now) and list (like in the luma
> transition) all available 3d transitions (we could have the big "official"
> .blend file and lots of user contributed small files)
> blender does take a long time to render, but it's about 5-10 seconds with an
> 80x60 animation, so we could show a preview (maybe with just A and B images)
> of that size (only in the dialog, not in the timeline!)
> those transitions should then be saved as kdenlive special ones; when
> rendering, we should:
> 1. warn the user about the long long time it could take

Not necessary. They already expect rendering to take a long time.

> 2. render all the timeline with melt alone (there will be rough cuts where 3d
> transitions are)

What exactly do you mean by rough cuts? Like a dummy/proxy MLT
transition, because melt needs to do something there?

> 3. render the 3d transitions (think about slowmotion clips, or effects...
> blender needs a plain video)

Kdenlive could do this by coordinating blender. Or, this MLT "proxy"
transition might be able to do it if we limit its work to sequential
processing. I am thinking it could use the pre-rendered A/B preview
animation in the "dummy" mode for random access, but then the render
run can instruct it to do the real rendering. In that case, MLT can
assume sequential processing and do the heavy lifting. In fact, the
user could ask the transition to pre-render itself for preview at the
quality level of their choosing and wait for it to finish.
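
To make that concrete, here is a minimal sketch of the two-mode logic
in Python. The ProxyTransition class, its render_mode flag, and the
output paths are all hypothetical (a real MLT transition would be a C
plugin); the blender flags used (-b, -o, -F, -f) do exist:

import subprocess

class ProxyTransition:
    """Serve pre-rendered preview frames for random access; drive a
    real blender render when a sequential render run is requested."""

    def __init__(self, preview_frames, blend_file):
        self.preview_frames = preview_frames  # low-res A/B animation, rendered ahead
        self.blend_file = blend_file          # the transition's .blend file
        self.render_mode = False              # flipped on for the final render run

    def get_frame(self, position):
        if not self.render_mode:
            # random access (scrubbing): serve the small preview frame
            return self.preview_frames[position % len(self.preview_frames)]
        # sequential render run: have blender render the real frame
        subprocess.run(["blender", "-b", self.blend_file,
                        "-o", "/tmp/tr_####", "-F", "PNG",
                        "-f", str(position)], check=True)
        with open("/tmp/tr_%04d.png" % position, "rb") as f:
            return f.read()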

> 4. join the clips (with melt, i guess)

MLT can't really do that without re-rendering and losing some
additional quality when not using lossless or uncompressed. If
kdenlive does the integration, it can ask melt to render just the
parts of the timeline that contain the inputs to the transition. Then,
it can substitute the rendered clip for the transition. This is rather a lot of
DOM manipulation to manage. If MLT does the integration, this step
would not be necessary.
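
For illustration, a rough sketch of that substitution as DOM
manipulation over the MLT XML, assuming a hypothetical "blender3d"
service name and made-up file names; splicing the new producer into
the affected track's playlist is omitted:

import xml.etree.ElementTree as ET

doc = ET.parse("project.mlt")
root = doc.getroot()

for tractor in root.iter("tractor"):
    for transition in list(tractor.findall("transition")):
        service = transition.find("property[@name='mlt_service']")
        if service is None or service.text != "blender3d":
            continue
        # drop the dummy transition...
        tractor.remove(transition)
        # ...and add a producer pointing at the clip blender rendered;
        # wiring it into the right playlist is left out of this sketch
        producer = ET.SubElement(root, "producer", id="blender_tr_0")
        resource = ET.SubElement(producer, "property", name="resource")
        resource.text = "/tmp/blender_tr_0.avi"

doc.write("project_final.mlt")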

> 3. pros and cons
> pros:
> - i can't think of any other way to implement 3d in kdenlive (maybe you can)
> - it would be easy to contribute custom transitions (i'd start making lots of
> them!). and effects, of course. you can have a look at some of them on that
> website (testing them is quite fun)
> - blender can also apply complex color filters and/or uv distortions (they're
> working on this), or... titles?

We should be careful here. Blender can obviously do a lot; it can even
edit video. I am not worried about competition, just about not making
stuff as simple as compositing rendered animations more complex than
it needs to be, and about not doing something special that requires
messy blender integration when it can more easily be done in frei0r or
MLT.

> cons:
> - it depends on blender (we already depend on other software like
> recordmydesktop or dvdauthor, but blender is big...)

optional dependencies

> - it takes a lot of time to render (they're talking about using the game
> engine to make it faster, but i guess it wouldn't work with every animation)
> - you have to render the project 2-3 times

Maybe not, as I suggested above. I am also starting to think about
something that works in a frameserver or client-server mode with a
long-running blender process.
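
A sketch of what I mean, assuming a made-up controller script and line
protocol: start blender once in the background and keep feeding it
frame requests over stdin, instead of paying the startup cost per
frame. Only the -b and -P flags are real blender options; "server.py"
would be a script we generate:

import subprocess

# one long-running blender process, controlled by a script we generate
blender = subprocess.Popen(
    ["blender", "-b", "transition.blend", "-P", "server.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def request_frame(position):
    # hypothetical protocol: one request line in, one reply line out
    blender.stdin.write(("RENDER %d\n" % position).encode())
    blender.stdin.flush()
    return blender.stdout.readline().strip()  # e.g. path of the rendered frame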

> - the worst one, i think: you're limited to blender video export capabilities.
> unless we export to png and then join every single image to match the project
> profile (they say it should be faster than rendering to video), there's only
> avi and mp4 i think (dan could shed light on this maybe)

There are a lot of options here, including HuffYUV, MJPEG, DNxHD, and
I-frame-only MPEG-2. However, from MLT's perspective, getting an RGB
image via shared memory, a pipe, or a domain socket would be better
than dealing with temporary or intermediate files.
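
For example, raw RGB frames can be framed as binary PPM over a pipe
with almost no code; this sketch assumes both ends agree on one frame
per exchange and ignores PPM comment lines:

def write_ppm(stream, width, height, rgb):
    # binary PPM: magic, dimensions, max value, then raw RGB triples
    stream.write(b"P6\n%d %d\n255\n" % (width, height))
    stream.write(rgb)
    stream.flush()

def read_ppm(stream):
    assert stream.readline().strip() == b"P6"
    width, height = map(int, stream.readline().split())
    stream.readline()  # skip the max value line
    return width, height, stream.read(width * height * 3)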

> well, that's it, to begin
> if you are interested, i can write a patch to test it (for the list's eyes only)

I suggest that you explore the options more. Think about approaches
that do not take gvfx as is, but that learn from it. Things like:

- generating the python script that gets handed to the blender child process.
- giving that script a series of PPM images via a stdin pipe
- blender -h shows FRAMESERVER as an option for the render format.
What is that about?
- IIRC, blender has a Google SoC project to make its engine available
as a library. Should we wait for that and be one of its first big
customers?
- how about converting blender into a server through its python
support, communicating over domain sockets from MLT? MLT could send a
couple of PPM images in the body of some request and get a single
rendered image as a result (a rough sketch follows below). Qt could be
used in the MLT module to make coding the I/O easier. Or, if you want
to use some Python client-server framework, one could even create an
MLT module that contains a python interpreter and uses the python swig
bindings for MLT.
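
On that last point, a rough sketch of such a server, as it might run
inside blender via -P: it listens on a domain socket, reads a frame
position and two PPM images per request (same helpers as the PPM
sketch above), and answers with one rendered image. The socket path,
the protocol, and the render() stub are all assumptions:

import os
import socket

SOCK = "/tmp/mlt_blender.sock"

def write_ppm(stream, width, height, rgb):
    stream.write(b"P6\n%d %d\n255\n" % (width, height))
    stream.write(rgb)

def read_ppm(stream):
    assert stream.readline().strip() == b"P6"
    width, height = map(int, stream.readline().split())
    stream.readline()  # skip the max value line
    return width, height, stream.read(width * height * 3)

def render(a, b, position):
    # stub: inside blender this would map the two images onto the
    # animated scene and render the frame at 'position'
    return a

if os.path.exists(SOCK):
    os.unlink(SOCK)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK)
server.listen(1)
while True:
    conn, _ = server.accept()
    f = conn.makefile("rwb")
    position = int(f.readline().decode())
    width, height, a = read_ppm(f)
    _, _, b = read_ppm(f)
    write_ppm(f, width, height, render(a, b, position))
    f.flush()
    conn.close()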

-- 
+-DRD-+
