Tiles data format

Cyrille Berger cberger at cberger.net
Wed Jun 16 13:03:23 CEST 2010


On Wednesday 16 June 2010, Dmitry Kazakov wrote:
> > > What do you think?
> > 
> > I don't see the point of optimalBufferSize or estimateCompressedSize. In
> > our case the buffer should be the size of the tile, and the data is saved
> > compressed only if the compression result is smaller than the size of the
> > tile. I don't see the point in making it more complicated.
> 
> Do these engines make compression in-place?
I hope not. You definitely don't want that when saving (maybe when swapping, 
because then you don't care about the tile data any more), as it would require 
us to copy the data prior to compressing.

> If not, we need to create a buffer for this. This buffer should be smaller
> than the tile size (now, I'm in doubt whether there is a guarantee from the
> LZ* algorithms for the maximum size of compressed data; I should take a look
> at [2] again).
No, there is not: they will simply fail if the output buffer is exhausted. That 
is why we can just set it to the size of a tile.
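Something like this is what I have in mind - just a sketch, where tryCompress() 
stands in for whatever LZ* backend we end up using (it is not an existing 
function) and is assumed to fail when the output buffer is exhausted:

    #include <QtGlobal>   // quint8, qint32
    #include <cstring>    // std::memcpy

    // Placeholder for the real backend: returns false if the output buffer of
    // dstSize bytes is exhausted before the input is fully compressed,
    // otherwise stores the compressed size in *compressedSize.
    bool tryCompress(const quint8 *src, qint32 srcSize,
                     quint8 *dst, qint32 dstSize, qint32 *compressedSize);

    // 'buffer' is exactly tileSize bytes. Returns the number of bytes stored
    // there and reports through *compressed whether they are compressed.
    qint32 storeTileData(const quint8 *tileData, qint32 tileSize,
                         quint8 *buffer, bool *compressed)
    {
        qint32 compressedSize = 0;
        if (tryCompress(tileData, tileSize, buffer, tileSize, &compressedSize)
            && compressedSize < tileSize) {
            *compressed = true;
            return compressedSize;
        }
        // Compression failed (buffer exhausted) or did not shrink the data:
        // keep the raw tile.
        std::memcpy(buffer, tileData, tileSize);
        *compressed = false;
        return tileSize;
    }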

> I'm going to have slabs of compressed tiles, which will be transferred to
> the disk at once, so I need to be able to check whether a slab can accept
> one more tile.
> 
> So can we say for sure, before actually compressing, that a buffer of some
> particular size will fit the compressed tile?
No, we can't. Only analysing the data can tell; all we can say is that we don't 
want the output buffer to be bigger than a certain size.

For my problem, I don't care. For you, a solution might be to compress the 
data, check whether there is enough space in your current chunk, and if there 
is not, save the chunk and put the current tile in the next chunk.
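Roughly like this (just a sketch; writeChunkToDisk() is a placeholder for your 
swapper code, not an existing Krita function):

    #include <QtGlobal>
    #include <QByteArray>

    struct Chunk {
        QByteArray data;    // compressed tiles packed so far
        qint32 capacity;    // fixed size of a chunk on disk
    };

    void writeChunkToDisk(const Chunk &chunk);   // placeholder

    // Append one already-compressed tile; if it would overflow the current
    // chunk, flush the chunk first and start a new one.
    void appendTile(Chunk &chunk, const QByteArray &compressedTile)
    {
        if (chunk.data.size() + compressedTile.size() > chunk.capacity) {
            writeChunkToDisk(chunk);
            chunk.data.clear();
        }
        chunk.data.append(compressedTile);
    }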

This is a little digression from the real subject, but how are you going to 
write to the disk? Using files, or mmap (like in the previous tile engine)?


> [2] -
> http://www.cs.duke.edu/courses/spring03/cps296.5/papers/ziv_lempel_1977_universal_algorithm.pdf
> 
> > Also I don't know how the swapper will work, but for loading a file it is
> > more convenient to have decompressTile(KisTileSP, quint8* buffer); (but
> > then I would pass a data pointer to have an interface that can be easily
> > used outside of the context :) )
> 
> Well, KisTile takes col, row as arguments in the constructor, and, more
> than that, reports its creation to the memento manager. But the problem is
> that we don't know the new tile's col, row before calling the decompressor
> (as it encapsulates the header) [3].
I would leave the header outside. The reason is that you won't gain much 
compression by including it, and keeping it outside will allow lazy 
loading/decompression of tiles.
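Concretely, I would write the small per-tile header (col, row, size of the 
compressed payload) uncompressed in front of the payload, so that a reader can 
walk the headers and postpone decompressing the payloads until the tiles are 
actually needed. A sketch of what I mean (the field names and the use of 
QDataStream are just an illustration, not the final format):

    #include <QtGlobal>
    #include <QDataStream>

    // Written as-is, never compressed.
    struct TileHeader {
        qint32 col;
        qint32 row;
        qint32 compressedSize;   // size of the payload that follows
    };

    void writeTile(QDataStream &out, const TileHeader &h, const quint8 *payload)
    {
        out << h.col << h.row << h.compressedSize;
        out.writeRawData(reinterpret_cast<const char*>(payload), h.compressedSize);
    }

    // Lazy loading: read only the header and skip over the payload; the tile
    // can be decompressed later, when it is first accessed.
    void skipTile(QDataStream &in, TileHeader &h)
    {
        in >> h.col >> h.row >> h.compressedSize;
        in.skipRawData(h.compressedSize);
    }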

> So there are two possibilities:
> Either we pass all the needed arguments for the construction of a tile:
> KisTileSP decompressTile(quint8* buffer, KisMementoManager* manager);
> 
> or we just pass KisTileHashTable as a factory of tiles that will create
> tiles with getTileLazy() - I think this is the best choice:
> quint32 decompressTile(quint8* buffer, KisTileHashTable* table);
> 
> Btw, we need a return value to show how many bytes were read from the
> stream.
We kind of know the size of the output tile :)

This is important, however, for the compress function.
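To spell out the asymmetry (the signatures below are only an illustration, not 
the actual interface): decompression writes into a buffer whose size we already 
know - the tile size - while compression has to report how many bytes it 
produced.

    #include <QtGlobal>

    // Decompression: the output size is fixed (one tile), nothing to report.
    void decompressTile(const quint8 *src, qint32 compressedSize,
                        quint8 *tileData, qint32 tileSize);

    // Compression: the caller needs to know how many bytes were written into
    // 'dst' in order to store them on disk or in a chunk.
    qint32 compressTile(const quint8 *tileData, qint32 tileSize,
                        quint8 *dst, qint32 dstCapacity);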

> 
> 
> [3] - btw, we can have a different KisTileCompressor for every version of
> kra ;)
Yeah :) But no, it is kind of nice to be able to open a 2.(x+1) file in 2.x :)

I am considering having some sort of backward compatibility: either a "Krita 
2.2 file format" option in the save dialog, or making the "fully" compressed 
kraz use the old format when saving in 2.3. This would allow people to read 
files created in 2.3 with 2.2.

-- 
Cyrille Berger

