Filters "dialog" in 2.0

Boudewijn Rempt boud at valdyas.org
Fri Jun 22 08:22:49 CEST 2007


On Wednesday 20 June 2007, Moritz Moeller wrote:

> I know. But the pushing is done with a tool, which can create a
> brush-stroke (a bezier) that warps the space in which everything is
> rendered (think of writing the position of every pixel into it; a
> warp brush then modifies this position data).
> This means rendering brush strokes is done as implicitly as possible.
>
> I.e. instead of:
>
> 1. Iterate along spline
> 2. Place brush image at resp. position
>
> one would implement it like so:
>
> 1. Iterate over image
> 2. Get position of resp. pixel.
> 3. Warp position applying current warp field (filled in by any brushes
> that do warping)
> 4. Find closest position on spline that is orthogonal to this point
> 5. Use point to look up any varying properties of the spline along its
>     length at this position.
> 6. Render pixel of current brush using this information (possibly
> putting an image of the brush into an offscreen brush cache and looking
> that image up)
>
> This is a lot harder to implement (particularly step 4 can get tricky,
> since there might be multiple points returned), but it is very powerful.
> Since all you do is look up functions based on positional data, warping
> etc. all becomes a breeze.

Ew, yes -- that looks pretty hard. I think that'll have to wait until we've 
learned more, maybe even until Krita 3.0. It would be pretty hard to 
reconcile with Krita's design. (I mean, kimageshop started out as a Gimp 
clone only using ImageMagick at the core, while keeping one eye on Photoshop. 
Lots of internals are geared towards that kind of application, not towards 
dynamically recomputed strokes.) 
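
To make sure I understand the loop, here is how I read steps 1-6 as code. 
This is a rough sketch only -- every type and name in it is made up for 
illustration and has nothing to do with our actual API, and the spline is 
stubbed out as a straight segment:

#include <algorithm>
#include <cmath>
#include <vector>

struct Point { float x, y; };
struct Color { float r, g, b, a; };

// Step 3: a warp field holding a position offset per pixel, filled in by
// whatever brushes do warping. Assumes p lies inside the field.
struct WarpField {
    int w, h;
    std::vector<Point> offset;
    Point apply(Point p) const {
        const Point &o = offset[int(p.y) * w + int(p.x)];
        return { p.x + o.x, p.y + o.y };
    }
};

// Steps 4-5: the stroke. A straight segment stands in for the bezier;
// a real spline can return several closest points, all of which would
// need handling.
struct Stroke {
    Point a, b;
    float r0, r1; // radius at start and end -- one example varying property

    float closestT(Point p) const {
        float dx = b.x - a.x, dy = b.y - a.y;
        float t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / (dx * dx + dy * dy);
        return std::min(1.0f, std::max(0.0f, t));
    }
    Point pointAt(float t) const {
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
    }
    float radiusAt(float t) const { return r0 + t * (r1 - r0); }
};

// Step 6, much simplified: flat color instead of a cached brush image.
Color renderPixel(Point p, const WarpField &warp, const Stroke &stroke)
{
    Point q = warp.apply(p);                       // step 3
    float t = stroke.closestT(q);                  // step 4
    Point s = stroke.pointAt(t);
    float d = std::hypot(q.x - s.x, q.y - s.y);
    if (d > stroke.radiusAt(t))                    // step 5
        return { 0, 0, 0, 0 };                     // outside the stroke
    return { 0, 0, 0, 1 };                         // inside: paint it
}

void renderImage(std::vector<Color> &img, int w, int h,
                 const WarpField &warp, const Stroke &stroke)
{
    for (int y = 0; y < h; ++y)                    // steps 1-2
        for (int x = 0; x < w; ++x)
            img[y * w + x] =
                renderPixel({ x + 0.5f, y + 0.5f }, warp, stroke);
}

Even in this toy form you can see where the cost goes: every pixel pays for 
a closest-point query, where the explicit version only touches the pixels 
along the spline.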

> If you want to deposit something -- that's merely having a 'paper' layer
> and special brush stroke properties that brushes can put into such a
> layer. The layer then uses them somehow (i.e. diffuses them, if it's
> watercolor etc.).
> Since that paper layer would have certain properties, one can alter them
> at any time and have the layer update (i.e. how much water it absorbs
> and whatnot).

Yes, that sounds very cool, but it's too advanced for what we can achieve 
right now. We'll have the dynamic filter and transformation masks and 
adjustment layers. Dynamic layer effects (like drop shadow) probably aren't 
going to make it -- which means that dynamically recomputing the effect of 
brush strokes using natural media simulation and physics filters is even 
further away.
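
For the record, the paper layer idea as I understand it would reduce to 
something like the sketch below: brushes deposit properties (water, pigment) 
into per-pixel channels, and the layer diffuses them on update, with paper 
properties like absorbency tweakable at any time. All names are invented 
for illustration:

#include <vector>

// Toy paper layer: brushes deposit water and pigment; on update the
// layer diffuses the water, and the paper absorbs a fraction of it.
struct PaperLayer {
    int w, h;
    float absorbency = 0.1f;            // tweakable paper property
    std::vector<float> water, pigment;  // per-pixel deposits

    PaperLayer(int w, int h) : w(w), h(h), water(w * h), pigment(w * h) {}

    void deposit(int x, int y, float wAmount, float pAmount) {
        water[y * w + x]   += wAmount;
        pigment[y * w + x] += pAmount;
    }

    // One diffusion step: each pixel averages with its 4 neighbours,
    // then loses the absorbed fraction to the paper.
    void update() {
        std::vector<float> next(water.size());
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                float c = water[y * w + x];
                float sum = 0; int n = 0;
                if (x > 0)     { sum += water[y * w + x - 1]; ++n; }
                if (x < w - 1) { sum += water[y * w + x + 1]; ++n; }
                if (y > 0)     { sum += water[(y - 1) * w + x]; ++n; }
                if (y < h - 1) { sum += water[(y + 1) * w + x]; ++n; }
                float diffused = 0.5f * c + 0.5f * (n ? sum / n : c);
                next[y * w + x] = diffused * (1.0f - absorbency);
            }
        }
        water.swap(next);
    }
};

Changing absorbency and calling update() again would re-flow the deposits, 
which is exactly the "alter them at any time and have the layer update" 
behaviour you describe.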

The problem with natural media is that it's very computationally intensive, 
even for a small area -- the area most simulations work on, even if they use 
the GPU to full effect, is just 500x500 pixels.

> See my proposal above. The problem with the system I suggest is of
> course any kind of convolving filters that need neighbouring
> information. But since you can feed any position into the function chain
> and get the result, if a filter needs neighbouring data, you just need
> to cache pixel samples of neighbouring pixels and provide them to the
> filter. Lastly you need to write filters so that they antialias
> themselves using the filter area of the pixel.

Hm... For that to work, we would need to redesign a couple of classes. And it 
would be dependent on Bart's future work on making the tile backend more 
efficient, I think.
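
To spell out what I think you mean: a convolving filter over such an 
implicit interface would pull its neighbourhood through a sample cache, 
roughly like this (a sketch with invented names, not our tile backend):

#include <unordered_map>

struct Color { float r, g, b, a; };

// The implicit interface: feed in a position, get back a color.
using ImplicitImage = Color (*)(int x, int y);

// A naive sample cache keyed by pixel position, so neighbouring lookups
// don't re-evaluate the whole function chain each time.
struct SampleCache {
    ImplicitImage source;
    std::unordered_map<long long, Color> cache;

    Color at(int x, int y) {
        long long key = (static_cast<long long>(x) << 32)
                      | static_cast<unsigned int>(y);
        auto it = cache.find(key);
        if (it != cache.end())
            return it->second;
        Color c = source(x, y);
        cache.emplace(key, c);
        return c;
    }
};

// A 3x3 box blur that gets its neighbouring data from the cache.
Color boxBlur(SampleCache &img, int x, int y)
{
    Color sum = { 0, 0, 0, 0 };
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            Color c = img.at(x + dx, y + dy);
            sum.r += c.r; sum.g += c.g; sum.b += c.b; sum.a += c.a;
        }
    return { sum.r / 9, sum.g / 9, sum.b / 9, sum.a / 9 };
}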

Am I right in thinking that scaling in these cases doesn't use interpolation 
or sampling, but that there's a blur filter or something like that as a final 
step to make the image appear smooth? 

> We do this stuff when writing RenderMan shaders all the time... people
> have done crazy stuff. There's e.g. a DSO that allows one to render
> Illustrator files as textures at arbitrary resolutions. It uses a tile &
> sample cache (see
> http://renderman.ru/i.php?p=Projects/VtextureEng&v=rdg). The interesting
> bit is that this is using explicit rendering but makes it accessible
> through an implicit interface (feed in a position, return a color). The
> caching of rendered results into an explicit image buffer happens behind
> the scenes.
>
> And since Krita itself would render the image, the access of such a
> buffer would rarely be random. Coherence can be maximized in such a
> situation.
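
Interesting -- so the pattern is explicit tile rendering hidden behind an 
implicit lookup. That would boil down to something like this (again a 
sketch with invented names; assumes non-negative coordinates):

#include <map>
#include <utility>
#include <vector>

struct Color { float r, g, b, a; };
const int TILE = 64;

// Stand-in for the real renderer, which would rasterize the vector art
// into the tile; here it just fills it with a solid color.
static void renderTile(int tx, int ty, std::vector<Color> &pixels)
{
    Color c = { (tx + ty) % 2 ? 0.0f : 1.0f, 0, 0, 1 };
    for (Color &p : pixels)
        p = c;
}

// Implicit interface over explicit rendering: feed in a position, return
// a color; tiles are rendered on demand and kept in a cache. Coherent
// access (scanning the image in order) keeps hitting the same tile.
struct TileCache {
    std::map<std::pair<int, int>, std::vector<Color>> tiles;

    Color at(int x, int y) {
        int tx = x / TILE, ty = y / TILE;
        std::vector<Color> &tile = tiles[{ tx, ty }];
        if (tile.empty()) {
            tile.resize(TILE * TILE);
            renderTile(tx, ty, tile);  // explicit rendering, behind the scenes
        }
        return tile[(y % TILE) * TILE + (x % TILE)];
    }
};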



-- 
Boudewijn Rempt 
http://www.valdyas.org/fading/index.cgi