Lossless image compression
dimula73 at gmail.com
Tue Feb 7 14:43:19 UTC 2012
On Tue, Feb 7, 2012 at 5:01 PM, Boudewijn Rempt <boud at valdyas.org> wrote:
> On Tue, 7 Feb 2012, Dmitry Kazakov wrote:
>> For the actual compression of the tile data we use the LZF algorithm. I did
>> some research on the compression times of LZO and LZF, and it turned out
>> that LZF is faster: compressing the image with LZF is about twice as fast
>> while keeping almost the same compression rate.
>> One more thing I found then is that it is faster to "linearize" the colors
>> before compression (RGBARGBA -> RRGGBBAA) and then compress than to
>> compress the image directly. It boosts both the speed and the compression
>> rate. The time of the "linearization" itself is negligible in comparison
>> with the boost it gives, so now we do it this way.
> Well, actually when Sven wanted to know why loading and saving files is so
> slow, it turns out that the linearization is the biggest bottleneck we
> currently have! But that might depend a lot on cache sizes and so on. In
> general, we want to avoid missing the cache when compressing a tile.
Probably, I'm a bit outdated.
I've just run the test on my new machine.
The results are the following:
Compression LZF w/o linearization: 247 memcpy's. Ratio: 0.938
Compression LZF with linearization: 213 memcpy's. Ratio: 0.769
Decompression LZF w/o linearization: 78 memcpy's.
Decompression LZF with linearization: 112 memcpy's.
It means that for compression the linearization still makes things faster, but
for decompression it makes things worse on a newer CPU. Here are the
callgrind graphs for compression and decompression:
 - http://dimula73.narod.ru/callgrind.out.9540_compression_two_pass
 - http://dimula73.narod.ru/callgrind.out.9513_decompression_two_pass