Smoothscaling down speed improvement of 2x, 3x, and better.

Mosfet dan.duley at
Sat Jun 14 07:19:23 BST 2003

I think this is a neat algorithm so I figured I would try submitting another 
patch and seeing how it goes ;-) This only applies to reducing images - 
scaling them down - not scaling them up.

I do image smoothscaling a lot and was looking at ways to improve its 
performance. I first looked at ImageMagick and a few other free software 
smoothscaling solutions that use normal X11, (not OpenGL, SDL, etc..). Out of 
the algorithms out there that actually smoothscale and don't just sample 
by skipping pixels, Qt's and the NetPBM algorithm on which it is based 
actually performed the best. I also looked at some MMX algorithms out there 
on Google but they do not seem to perform better than normal ones.

Nonetheless, the Qt/NetPBM method is pretty inefficient. Lots of long int 
arrays, floating point calculations, etc... I figured I could improve upon 
it. I should be able to get decent results by just breaking the image up into 
blocks and averaging the values in the blocks: simple integer arithmetic. So 
I coded up a test implementation.

It first determines a block size that can be used for sampling. Unless the 
source image size is evenly divisible by the reduced image size there will be 
a remainder, so we have to take that remainder and spread it out evenly across 
the image by incrementing the block size every now and then. This is done at 
the beginning and is the only place floating point calculation is used. Then 
we simply iterate across the image and average the pixel values in each 
sampling block.

Results are good, comparable with Qt. The time reduction is significant. Here 
are some of the timings:

300x300 32bpp image reduced to 55x100 - Qt:11ms, mine: 3ms
816x610 32bpp image reduced to 210x112 - Qt: 31ms, mine: 18ms
4234x2927 32bpp image reduced to 500x200 - Qt: 548ms, mine: 282ms

So as you can see it's usually ~2x or even ~3x faster. Even better, while the 
result will always be 32bpp it can handle 8bpp PseudoClass source images. Qt 
has to convert them to 32bpp. So in this case it's even better:

1504x1000 8bpp image reduced to 300x200 - Qt: 141ms, mine: 33ms

Now that's a helluva lot better >:)

In order to see the results of the different algorithms I put up an example 
page. It's available at:

It will take a while to load... But it shows that the results are quite 
acceptable for most use. I'm actually rather surprised and pleased at how well 
the results of simple averaging turned out. Theoretically it should not 
handle as well if a line or other edge falls on a block boundary, but it seems 
to hold up fine in practice.

The algorithm works best when scaling down in both directions. Otherwise it's 
similar in performance to Qt and currently just calls the Qt version, because 
Qt has a special case if the width or height is the same in the source and dest 
image. The more you reduce, the better the performance is in comparison to Qt. 
If you're only scaling by a few pixels they perform about the same - if you're 
scaling by a couple dozen this version is significantly better. I have not been 
able to find a situation where it performs worse. 

Since my last patch didn't get much attention, instead of submitting a patch 
this time I just attached the method's .cpp file. Check it out, check out the 
webpage, and if you're interested I could stick it in kimageeffect and submit 
it as a patch, (although you can pretty much just cut and paste it in there if 
one of you guys wants to do it). And no, Dirk, it's not based on ImageMagick or 
anything else ;-)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: reduceimage.cpp
Type: text/x-c++src
Size: 5350 bytes
Desc: not available
URL: <>

More information about the kde-core-devel mailing list