Filters "dialog" in 2.0

Moritz Moeller mnm at dneg.com
Wed Jun 20 22:09:53 CEST 2007


Boudewijn,

>> And the other thing is that many people never experienced the very
>> pleasure of working on a totally non-destructive imaging app...
> 
> Hm, hm... I certainly never have. But there is one thing bothering me about 
> this, and that is the need to keep the paint strokes separate. The one 
> thing most artists complain about in imaging apps is that they cannot "push 
> gunk around". We're now working on making it possible to deposit a simulation 
> of paint (or other gunk) on the canvas and then push it around with tools 
> like brushes, pencils and palette knives. I am not aware of any way to 
> simulate this kind of mixing and pushing while keeping the brush strokes 
> distinct.

I know. But the pushing is done with a tool, which creates a 
brush stroke (a bezier) that warps the space in which everything is 
rendered (think of writing the position of every pixel into a field; a 
warp brush then modifies this position data).
This means rendering brush strokes is done as implicitly as possible.

I.e. instead of:

1. Iterate along the spline
2. Place a brush image at the respective position

one would implement it like so:

1. Iterate over the image
2. Get the position of the respective pixel.
3. Warp that position by applying the current warp field (filled in by 
any brushes that do warping)
4. Find the closest position on the spline that is orthogonal to this point
5. Use that point to look up any varying properties of the spline along 
its length at this position.
6. Render the pixel of the current brush using this information 
(possibly putting an image of the brush into an offscreen brush cache 
and looking that image up)

This is a lot harder to implement (particularly step 4 can get tricky, 
since there might be multiple points returned), but it is very powerful: 
since all you do is look up functions based on positional data, warping 
etc. becomes a breeze.

RenderMan shaders work that way.
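
Very roughly, in C++ (all names and types below are made up for 
illustration, none of this is Krita API, and the stroke is reduced to a 
straight segment so the closest-point query stays trivial):

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2  { float x, y; };
struct Color { float r, g, b, a; };

// Displacement accumulated by warp brushes; identity when nothing warps.
struct WarpField {
    Vec2 displacementAt(Vec2) const { return {0.f, 0.f}; }
};

// A brush stroke with properties that vary along its length.  A real
// stroke would be a bezier; a segment keeps the sketch short.
struct Stroke {
    Vec2  a, b;              // endpoints
    float radiusA, radiusB;  // varying radius along the stroke
    Color color;

    // Step 4: parameter of the closest point on the stroke to p.
    float closestParam(Vec2 p) const {
        Vec2 ab { b.x - a.x, b.y - a.y };
        float len2 = ab.x * ab.x + ab.y * ab.y;
        float t = len2 > 0.f
            ? ((p.x - a.x) * ab.x + (p.y - a.y) * ab.y) / len2
            : 0.f;
        return std::clamp(t, 0.f, 1.f);
    }
    Vec2 pointAt(float t) const {
        return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y) };
    }
    // Step 5: look up a varying property at parameter t.
    float radiusAt(float t) const { return radiusA + t * (radiusB - radiusA); }
};

// Steps 2-6 for a single pixel: warp the position, then evaluate strokes.
Color shadePixel(Vec2 pixel, const WarpField& warp,
                 const std::vector<Stroke>& strokes)
{
    Vec2 d = warp.displacementAt(pixel);
    Vec2 p { pixel.x + d.x, pixel.y + d.y };        // warped position

    Color out { 0.f, 0.f, 0.f, 0.f };
    for (const Stroke& s : strokes) {
        float t    = s.closestParam(p);
        Vec2  q    = s.pointAt(t);
        float dist = std::hypot(p.x - q.x, p.y - q.y);
        float r    = s.radiusAt(t);
        if (dist < r) {
            float w = 1.f - dist / r;               // simple soft falloff
            out.r += s.color.r * w;  out.g += s.color.g * w;
            out.b += s.color.b * w;  out.a += s.color.a * w;
        }
    }
    return out;
}

The nice bit is that shadePixel() only ever consumes a position, so 
anything that manipulates positions (warp brushes, liquify-style tools) 
composes for free.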

If you want to deposit something -- that's merely having a 'paper' layer 
and special brush-stroke properties that brushes can put into such a 
layer. The layer then uses them somehow (e.g. diffuses them if it's 
watercolor, etc.).
Since that paper layer would have certain properties, one can alter them 
at any time and have the layer update (e.g. how much water it absorbs 
and whatnot).
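
A rough sketch of what I mean (again made-up names, nothing 
Krita-specific; the diffusion is just a naive four-neighbour spread):

#include <utility>
#include <vector>

// Quantities a brush can put into the paper layer.
struct Deposit { float water = 0.f, pigment = 0.f; };

class PaperLayer {
public:
    PaperLayer(int w, int h, float absorbency)
        : w_(w), h_(h), absorbency_(absorbency), cells_(w * h) {}

    // Brushes only ever deposit; they never touch neighbouring pixels.
    void deposit(int x, int y, float water, float pigment) {
        Deposit& c = cells_[y * w_ + x];
        c.water   += water;
        c.pigment += pigment;
    }

    // Paper properties stay editable because the deposits are kept,
    // not baked into an image; change this and re-run diffuseStep().
    void setAbsorbency(float a) { absorbency_ = a; }

    // Naive diffusion: water (carrying pigment) spreads to the four
    // neighbours, scaled by how little the paper absorbs.
    void diffuseStep() {
        std::vector<Deposit> next = cells_;
        const int   dx[4] = { 1, -1, 0, 0 };
        const int   dy[4] = { 0, 0, 1, -1 };
        const float spread = 0.25f * (1.f - absorbency_);
        for (int y = 1; y < h_ - 1; ++y)
            for (int x = 1; x < w_ - 1; ++x) {
                const Deposit& c = cells_[y * w_ + x];
                for (int k = 0; k < 4; ++k) {
                    Deposit& n = next[(y + dy[k]) * w_ + (x + dx[k])];
                    n.water   += c.water   * spread;
                    n.pigment += c.pigment * spread;
                }
                next[y * w_ + x].water   -= c.water   * spread * 4.f;
                next[y * w_ + x].pigment -= c.pigment * spread * 4.f;
            }
        cells_ = std::move(next);
    }

private:
    int w_, h_;
    float absorbency_;
    std::vector<Deposit> cells_;
};

Because the deposits are stored separately from the paper properties, 
calling setAbsorbency() later and re-running diffuseStep() is enough to 
have the layer update.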

>> Simply by only ever rendering what was on screen. If you think about it
>> -- at the time, the high-res 19" Sony screens used on the SGI workstations
>> this app ran on maxed out at 1280x960. Mostly it was 1024x768. Now subtract
>> the screen space taken by the palettes.
>> So all you ever have to render (and keep in memory), in the worst case,
>> is this data (maybe 800x600x24 bits).
> 
> Yes -- that should help. One problem here I've often wondered about (and I 
> don't think I ever got a straight answer from Gegl's pipping about it) is 
> what you do with filters that affect an area of pixels on zoomed images. Say, 
> the simple oilpaint simulation filter that my kids love so much. It's not 
> only random, but it is an area effect filter. So, if you're working at 25% 
> zoom, you need to apply the filter at 100% zoom and then scale down again for 
> display. Which would hurt render speed a lot.

See my proposal above. The problem with the system I suggest is of 
course any kind of convolution filter that needs neighbouring 
information. But since you can feed any position into the function chain 
and get the result, if a filter needs neighbouring data, you just need 
to cache pixel samples of the neighbouring pixels and provide them to 
the filter. Lastly, you need to write filters so that they antialias 
themselves using the filter area of the pixel.
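
An area filter would then be written against a sampling function instead 
of a buffer, e.g. (illustrative names only, not an existing Krita 
interface):

#include <functional>

struct Color { float r = 0.f, g = 0.f, b = 0.f, a = 0.f; };

// The implicit interface: feed in a position, get a colour back.
using Sampler = std::function<Color(int, int)>;

// A 3x3 box blur needs neighbouring data, but it only ever asks the
// sampler for it -- whoever provides the sampler decides what gets
// cached and when.
Color boxBlur3x3(const Sampler& sample, int x, int y) {
    Color out;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            Color c = sample(x + dx, y + dy);
            out.r += c.r / 9.f;  out.g += c.g / 9.f;
            out.b += c.b / 9.f;  out.a += c.a / 9.f;
        }
    return out;
}

The filter itself stays oblivious to where the samples come from; the 
caching strategy lives entirely behind the Sampler.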

We do this stuff all the time when writing RenderMan shaders... people 
have done crazy stuff. There's e.g. a DSO that allows one to render 
Illustrator files as textures at arbitrary resolutions. It uses a tile & 
sample cache (see 
http://renderman.ru/i.php?p=Projects/VtextureEng&v=rdg). The interesting 
bit is that it uses explicit rendering but makes it accessible 
through an implicit interface (feed in a position, return a color). The 
caching of the rendering into an explicit image buffer happens behind 
the scenes.
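
Something along these lines (hypothetical names, not the actual 
VtextureEng code):

#include <functional>
#include <unordered_map>
#include <utility>
#include <vector>

struct Color { float r = 0.f, g = 0.f, b = 0.f, a = 0.f; };

// Explicit rendering behind an implicit interface: tiles are rendered
// on demand into offscreen buffers, but callers only ever ask for a
// position and get a colour back.  Assumes non-negative coordinates.
class TileCache {
public:
    static constexpr int kTile = 64;   // tile edge in pixels

    explicit TileCache(
        std::function<void(int, int, std::vector<Color>&)> renderTile)
        : renderTile_(std::move(renderTile)) {}

    Color at(int x, int y) {
        const int tx = x / kTile, ty = y / kTile;
        const long long key =
            (static_cast<long long>(ty) << 32) | static_cast<unsigned>(tx);
        auto it = tiles_.find(key);
        if (it == tiles_.end()) {
            // The explicit part: render the whole tile once, keep it.
            std::vector<Color> pixels(kTile * kTile);
            renderTile_(tx, ty, pixels);
            it = tiles_.emplace(key, std::move(pixels)).first;
        }
        return it->second[(y % kTile) * kTile + (x % kTile)];
    }

private:
    std::function<void(int, int, std::vector<Color>&)> renderTile_;
    std::unordered_map<long long, std::vector<Color>> tiles_;
};

A Sampler as in the filter sketch above could simply be backed by 
TileCache::at().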

And since Krita itself would render the image, access to such a 
buffer would rarely be random. Coherence can be maximized in such a 
situation.


Cheers,

Moritz

