CMYK craziness...

Saem Ghani saem-ghani at shaw.ca
Sat Nov 20 18:42:50 CET 2004


On Saturday 20 November 2004 04:33, Boudewijn Rempt wrote:
> > I've been giving the entire thing a lot of thought.  AFAICS, this is how
> > it breaks down.  There is a display format (RGB), an intermediate format
> > (our own, RGB, HLS...) and storage/user formats (RGB, RGBA, CMYK, et al).
> > The user ones are basically out of our control, and so is the display.
> > The only thing left for us to figure out is the intermediate.  That being
> > the case, we can either make it completely different (one that serves our
> > needs and that alone) or match it with either the display or the
> > storage/user format.  An elegant design might give us our own memory-based
> > representation, which would simplify a lot of things; for instance,
> > writing plugins and the like is much easier when everything is written
> > against our internal format.  Mind you, this thinking process has been
> > completely removed from the realities of Krita.
>
> That's the way Photoshop works: everything is internally stored as 8- or
> 16-bit sRGB. It's not a suitable strategy for Krita because I want to have
> more fun with colour models -- there are ways of working with pixels that
> don't easily map onto any colour model but need information about other
> physical properties like wetness, thickness or pigment composition.

Though I wonder whether this could be done with layers?  A simple wetness
layer could be used in conjunction with each pixel layer -- something like
the sketch below.
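
Purely as a sketch of what I mean (made-up names and types, not anything in
Krita today), assuming one byte per wetness value:

  #include <cstdint>
  #include <vector>

  // Hypothetical: a colour layer paired with a per-pixel wetness layer of
  // the same dimensions, so brush code can read the wetness value next to
  // the colour data for the same (x, y).
  struct ColourLayer {
      int width, height;
      std::vector<std::uint8_t> rgba;      // 4 bytes per pixel
  };

  struct WetnessLayer {
      int width, height;
      std::vector<std::uint8_t> wetness;   // 1 byte per pixel, 0 = dry
  };

  struct PaintLayer {
      ColourLayer colour;
      WetnessLayer wetness;                // kept in lockstep with 'colour'
  };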

> My first goal with Krita is to have the basis up and running: a working
> application with all the conveniences (layers, tools, cut & paste,
> interface stuff) done. Then I want to build on that and go more in the
> direction of an application like Corel Painter (which in its IX version is
> really good, if crashy), because that's what interests me. For that, we need
> more flexibility than you'd need if you were working on a Photoshop clone.

All right, back to thinking.

> > As for using bytes and having them reinterpreted, that doesn't make
> > sense to me, at least.  Why not simply define a pixel as a collection of
> > channels for an internal representation, work in there, and only do
> > translations to other formats on an as-needed basis?
> >
> > Then again, maybe I'm missing something vital.
>
> The basic point is that the colour model doesn't store the pixels; it
> interprets them. There is one component that stores the data for all colour
> models -- the paint device with the underlying tiles. This is,
> conceptually, a raster of pixels, where a pixel is defined by however many
> bytes the colour model needs to represent one.

I knew that, intuitively at least, but it just didn't make much sense to me
to work that way until now.
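
Just to check my understanding, the split would look roughly like this
(class and method names are made up for illustration, not the real Krita
ones, and the flat vector stands in for the tiles):

  #include <cstddef>
  #include <cstdint>
  #include <vector>

  // The colour model/strategy only interprets bytes...
  class ColourStrategy {
  public:
      virtual ~ColourStrategy() {}
      virtual int pixelSize() const = 0;   // bytes per pixel (4 for RGBA, ...)
      virtual void toDisplayRGB(const std::uint8_t *src,
                                std::uint8_t *dstRGB) const = 0;
  };

  // ...while the paint device owns the raw bytes and knows nothing about
  // what they mean.
  class PaintDevice {
  public:
      PaintDevice(int w, int h, const ColourStrategy *cs)
          : m_w(w), m_cs(cs),
            m_data(std::size_t(w) * h * cs->pixelSize()) {}

      std::uint8_t *pixelAt(int x, int y) {
          return &m_data[(std::size_t(y) * m_w + x) * m_cs->pixelSize()];
      }

  private:
      int m_w;
      const ColourStrategy *m_cs;
      std::vector<std::uint8_t> m_data;
  };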

> > Now, if we don't go with one internal data representation, then we make a
> > pixel interface and have cmykPixel, rgbPixel... inherit from it, if you
> > get what I'm saying.
>
> Not completely, I think. In many use cases, filter code doesn't need to
> know which channel is what, just how many channels there are, so it can
> iterate through the channels of a pixel. For that, you don't need a pixel
> class, just information the colour strategy can provide.

Yeah, this would only really work if you went with a unified representation
and then added a small abstraction for plug-in stuff.  So let's scrap that
idea.
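
For my own clarity, I take it a filter could then be as dumb as this (a
sketch; the channel count would come from the colour strategy, and the
names here are assumed, not the real API):

  #include <cstddef>
  #include <cstdint>

  // Inverts every channel of every pixel.  The same code works for RGBA,
  // CMYK or anything else, because it only needs the channel count.  (In
  // practice the strategy would probably also tell you which channel is
  // alpha, so you could skip it.)
  void invertPixels(std::uint8_t *pixels, std::size_t nPixels,
                    int channelsPerPixel)
  {
      for (std::size_t i = 0; i < nPixels; ++i) {
          std::uint8_t *px = pixels + i * std::size_t(channelsPerPixel);
          for (int c = 0; c < channelsPerPixel; ++c)
              px[c] = 255 - px[c];
      }
  }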

> > One internal data representation affords us a fair number of avenues for
> > performance optimizations, this being one of them.
>
> There is one internal data representation, hidden by the paint device class
> (which needs a cleanup and overhaul -- it still exposes the internal data
> structure, the tile manager, to the outside world, and worse, that's
> necessary in some cases). The actual data is interpreted by the colour
> models. This model looks a little like Java2D's architecture, but is a
> little less complicated and overdesigned.

Taking everything in, I think the simplest solution would be to have a
struct with an array of "channels" (they won't necessarily be colour
channels, they could be wetness channels), the number of channels, and a
value storing the size of each element in bytes -- we could standardize this
across pixel structs at, say, 8, 16, 24 or 32 bits per channel.  The only
issue here is that one needs a uniform data representation for everything,
though one could employ clever tricks to encode many properties into, say, a
byte using bitflags.  This would replace QUANTUM* outright, and QUANTUM
access just becomes channel foo = myPixel.channels[x];.  What data size we
standardize on is going to be key.  A rough sketch is below.
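
Something like this, with the member names and the 8-bit choice purely for
illustration:

  #include <cstdint>

  typedef std::uint8_t channel;   // whatever size we standardize on

  struct Pixel {
      channel *channels;          // colour channels, wetness channels, ...
      int      nChannels;         // number of entries in 'channels'
      int      channelSize;       // sizeof one channel, in bytes
  };

  // QUANTUM-style access then becomes roughly:
  //     channel foo = myPixel.channels[x];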

