Krita performance.

Boudewijn Rempt boud at valdyas.org
Wed May 22 19:36:07 UTC 2013


On Wed, 22 May 2013, Dmitry Kazakov wrote:

> 
> I am fairly sure that Photoshop actually does two or three things here:
> 
> * keep the full data compressed
> * mipmapping
> 
> 
> Some cache, yes. I'm not sure about the mipmapping part, though.

It makes a lot of sense, though -- because you always render from the 
nearest mipmap level.
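To make that concrete: with a mip chain where level 0 is full resolution 
and every level halves width and height, picking the level to render from 
is just a log2 of the inverse zoom. A minimal sketch in plain C++ -- 
mipLevelForZoom is a made-up name for illustration, not anything in Krita:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Pick the mip level closest (in log space) to the current zoom factor.
// Level 0 is full resolution; level n is scaled by 1/2^n per axis.
int mipLevelForZoom(double zoom, int maxLevel)
{
    if (zoom >= 1.0) return 0;                 // never upscale from a mip
    int level = static_cast<int>(std::lround(std::log2(1.0 / zoom)));
    return std::min(level, maxLevel);
}

int main()
{
    for (double zoom : {1.0, 0.5, 0.3, 0.25, 0.1})
        std::printf("zoom %.2f -> mip level %d\n", zoom, mipLevelForZoom(zoom, 4));
}

At 30% zoom this picks the 25% level, which then only needs a slight 
rescale -- that is what I mean by always rendering the nearest level.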

>
>       * do a kind of reverse composition first, like this:
>
>       1) for every screen pixel, determine the area of the image that is responsible for it.
>       2) go down the layer stack (not up, like we do), and check at which point that pixel becomes 100% opaque, possibly by simply using
>       nearest-neighbour sampling on every pixel in the area responsible for the display pixel, on the nearest mipmap. This is easy because their
>       stack is fully linear: they don't have groups like we do, and no clone layers either (as far as I can tell).
>       3) from that point, go up again
> 
> 
> Well, our scheduler doesn't merge a layer if its extent doesn't intersect the update area. I also did some experiments with merging only the region() of
> the paint device. It gives about 20% better performance on a full refresh. I'm not sure it gives much for usual painting, though.

It would help a lot for the case where a layer is filled with a gradient 
or a color, or where a filter layer/mask is added -- which happens a lot, too.
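To sketch what that top-down pass could look like for us -- plain C++, 
flat RGBA8 buffers, and Layer/isOpaqueOver() are illustrative assumptions, 
not KisLayer or KisPainter API:

#include <cstdint>
#include <vector>

struct Layer {
    int width = 0, height = 0;
    std::vector<uint8_t> rgba;                     // width * height * 4

    // True when every pixel of this layer inside the rect has alpha 255,
    // e.g. a filled background, a flat colour fill, or an opaque gradient.
    bool isOpaqueOver(int x0, int y0, int w, int h) const {
        for (int y = y0; y < y0 + h; ++y)
            for (int x = x0; x < x0 + w; ++x)
                if (rgba[(y * width + x) * 4 + 3] != 255) return false;
        return true;
    }
};

// Simplified non-premultiplied "over"; exact when the backdrop is opaque,
// which it is once we start from a fully opaque layer.
static void overPixel(uint8_t* dst, const uint8_t* src) {
    const int a = src[3];
    for (int c = 0; c < 3; ++c)
        dst[c] = static_cast<uint8_t>((src[c] * a + dst[c] * (255 - a)) / 255);
    dst[3] = static_cast<uint8_t>(a + dst[3] * (255 - a) / 255);
}

// Scan from the top down for the deepest layer that fully covers the update
// rect, then composite upward from there: everything below it is invisible.
void compositeRect(const std::vector<Layer>& stack,    // bottom -> top
                   Layer& dst, int x0, int y0, int w, int h)
{
    size_t start = 0;
    for (size_t i = stack.size(); i-- > 0;)
        if (stack[i].isOpaqueOver(x0, y0, w, h)) { start = i; break; }

    for (size_t i = start; i < stack.size(); ++i)
        for (int y = y0; y < y0 + h; ++y)
            for (int x = x0; x < x0 + w; ++x)
                overPixel(&dst.rgba[(y * dst.width + x) * 4],
                          &stack[i].rgba[(y * stack[i].width + x) * 4]);
}

The extent check the scheduler already does would slot into the upward 
loop: skip any layer whose extent doesn't intersect the rect at all.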

> I didn't push these
> experiments, because changing KisPainter this way would have needed at least a week of work.

Yeah, but that's not what I mean. Say we look at an image at a zoom of 25% 
and we see only a portion of that image. Then, for that portion only, the 
image pixels that map to a single pixel on screen (a 4x4 block at 25%) are 
interpolated, or actually, sampled using nearest neighbour, and that pixel 
is composed.
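Something like this, I mean -- a rough sketch, again plain C++ with 
made-up viewport parameters rather than anything from the canvas code:

#include <cstdint>
#include <vector>

// Render only the visible portion of the image at display size: each screen
// pixel maps back to one image pixel (nearest neighbour) instead of the
// whole 4x4 footprint being averaged.
void renderVisible(const std::vector<uint32_t>& image, int imgW, int imgH,
                   std::vector<uint32_t>& screen, int scrW, int scrH,
                   int visibleX, int visibleY, double zoom)
{
    for (int sy = 0; sy < scrH; ++sy) {
        for (int sx = 0; sx < scrW; ++sx) {
            const int ix = visibleX + static_cast<int>(sx / zoom);
            const int iy = visibleY + static_cast<int>(sy / zoom);
            if (ix < imgW && iy < imgH)
                screen[sy * scrW + sx] = image[iy * imgW + ix];
        }
    }
}

Sampling from the nearest mipmap instead of the full-resolution image is 
the same loop with a smaller image and a zoom closer to 1.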

> 
> Optimizing overlapping fully opaque areas... well, it might be a good idea. But I'm not sure how much help it would be for us, because most of our
> layers are either line-art or some (usually semi-transparent) coloring. We need to calculate how many overlapping fully opaque areas we have on a
> set of real-world images.

Yes... The current issue that Simon presents is even simpler, because there 
is only one big background layer. So the composition doesn't even come 
into it. But the mipmapping does. We need to do quite a bit of profiling here 
-- I've asked Boemann to help me a bit, too.

There are frequent use cases when painting, though, where a complex 
layer stack has a lot of pixels that completely cover the
pixels of the layers underneath.
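Dmitry's measurement would be easy to script, too. A sketch, reusing the 
flat RGBA8 Layer struct from the snippet above (again not real Krita API): 
for each pixel, check whether any layer above the bottom is fully opaque 
there, i.e. how much of the work below it could be skipped:

#include <vector>

// Assumes the Layer struct from the earlier sketch; all layers are taken
// to share the canvas size.
double hiddenFraction(const std::vector<Layer>& stack)   // bottom -> top
{
    if (stack.size() < 2) return 0.0;
    const int w = stack.front().width, h = stack.front().height;
    long hidden = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (size_t i = 1; i < stack.size(); ++i)    // layers above the bottom
                if (stack[i].rgba[(y * w + x) * 4 + 3] == 255) {
                    ++hidden;     // everything below layer i is covered here
                    break;
                }
    return static_cast<double>(hidden) / (static_cast<double>(w) * h);
}

Running that over a set of real-world .kra files would tell us whether the 
line-art-plus-transparent-colouring case really dominates.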

>  
>       Then, I am guessing, they start compositing the full image in the background and, when that is done, scale it and show it.
> 
> 
> Yep, that is true.
>  
>
>       Nuke works this way, too, and does it by scanlines. For every pixel in the scanline, it determines the area of pixels at image resolution
>       that contributes to it, and calculates back and forth.
> 
> 
> Is it like how OpenGL calculates pixel values in the frame buffer?

That is my impression.
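As far as I understand it, it's a pull model: the viewer asks an op for 
one row, and each op first asks its input for whatever region it needs. 
A toy sketch of that shape -- the Node/Row types here are illustrative, 
not the actual NDK classes:

#include <cstdio>
#include <vector>

struct Row { int y, x0, x1; std::vector<float> pixels; };

struct Node {
    Node* input = nullptr;
    virtual ~Node() = default;
    // Produce pixels [x0, x1) of output row y, pulling the region this op
    // needs from its input first (here: the same row, for simplicity).
    virtual Row engine(int y, int x0, int x1) = 0;
};

struct Source : Node {
    Row engine(int y, int x0, int x1) override {
        return Row{y, x0, x1,
                   std::vector<float>(static_cast<size_t>(x1 - x0), 0.25f)};
    }
};

struct Invert : Node {
    Row engine(int y, int x0, int x1) override {
        Row r = input->engine(y, x0, x1);      // pull the needed input region
        for (float& v : r.pixels) v = 1.0f - v;
        return r;
    }
};

int main()
{
    Source src;
    Invert inv;
    inv.input = &src;
    Row r = inv.engine(0, 0, 4);               // request one scanline region
    for (float v : r.pixels) std::printf("%.2f ", v);
    std::printf("\n");                         // prints 0.75 four times
}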

>       It's illuminating to read the nuke plugin coding docs.
> 
> 
> Could you give a link to that?
>

http://docs.thefoundry.co.uk/nuke/70/ndkdevguide/


> --
> Dmitry Kazakov
>

