Patrick Julien freak at
Mon Mar 8 13:44:58 CET 2004

On March 8, 2004 07:24 am, Boudewijn Rempt wrote:
> On Monday 08 March 2004 13:07, Patrick Julien wrote:
> > True, but this is not how I want to use them.  Here, I just wanted a
> > small info structure to be defined and initialized once.  Then, you
> > simply iterate over the array of structures to fill in the gaps.
> Well, it was why I started designing this bit of code :-). And even if this
> is not the answer, these points (spreading functionality that belongs to an
> image type all over the place, connecting code to the ui, extensibility)
> are things that should be solved -- so why not by having a rich ImageType
> class.

I was not saying that these aren't problems, or that the solution is without 
merit.  I am just being a pain by forcing you to look at different 
alternatives.

Let's just compare a bit.  With a static, global, constant array of 
structures:

We can have one time initialization, done by the compiler no less.
We get flyweights for free.  
A single, constant, read-only point of access.

Does it prevent you from extending it?  No.
Does it prevent you from returning an item from a KisImage?  No, you can 
return a pointer to one element.

Does it prevent iteration?  No.

Does it prevent having different depths for color channels?  Yes: defining 
this linearly in the structure would be next to impossible.
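For illustration, a minimal sketch of what such a static, global, constant 
array could look like -- all names here are hypothetical, not Krita's actual 
identifiers:

```cpp
// Hypothetical sketch; the real Krita enum and fields may differ.
enum ImageType { IT_RGBA, IT_GRAYA, IT_CMYK, IT_COUNT };

struct ImageTypeInfo {
    ImageType type;    // the enumeration anchor / key
    const char *name;  // user-visible name
    int channels;      // number of color channels
};

// One-time initialization, done by the compiler; a single, constant,
// read-only point of access, and flyweights for free.
static const ImageTypeInfo imageTypes[IT_COUNT] = {
    { IT_RGBA,  "RGB/Alpha",       4 },
    { IT_GRAYA, "Grayscale/Alpha", 2 },
    { IT_CMYK,  "CMYK",            4 }
};

// Something like KisImage could return one element simply as a
// pointer into the constant array; iteration is a plain loop.
const ImageTypeInfo *lookupImageType(ImageType t)
{
    for (int i = 0; i < IT_COUNT; ++i)
        if (imageTypes[i].type == t)
            return &imageTypes[i];
    return 0;
}
```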

Either way, you still need the enumeration to provide an anchor for your 
widgets.

I.e., exposing KisImageType to a widget written for Krita is still a bad thing.  
Exposing constants used in Krita for the implementation of widgets is also a 
bad thing; however, passing the enumerations/constants as keys (as simple 
integer parameters) is good.  These values are then mapped back to their 
objects when needed.
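As a sketch of that last point (names are made up, and a real widget would of 
course talk through Qt signals and slots): the widget traffics only in plain 
integer keys, and the application side maps a key back to its object:

```cpp
// Hypothetical sketch: the widget side only ever sees plain integers,
// never KisImageType or any other Krita implementation type.
enum ImageTypeKey { KEY_RGBA = 0, KEY_GRAYA = 1, KEY_COUNT };

struct ImageTypeObject {
    const char *name;
    int depth;
};

// Application-side table; widgets have no knowledge of this type.
static const ImageTypeObject typeObjects[KEY_COUNT] = {
    { "RGB/Alpha",       8 },
    { "Grayscale/Alpha", 8 }
};

// Called with the integer key a widget handed back (e.g. the index
// from a combo-box activation); the key is mapped to its object
// here, on the application side, never inside the widget.
const ImageTypeObject &objectForKey(int key)
{
    return typeObjects[key];
}
```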

> > I don't think it should no, the image type depth should be determined by
> > the image.
> Not the image as such, but rather the paint device -- i.e., the layer.
> Having the restriction that all layers should be the same image type is not
> really necessary, and will make some things more difficult later on. And
> then, not all colour strategies will be able to work with all bit depths
> either.

Later on?  Or right now: if we allow this, how does it work when composing 
the different layers on screen to get a coherent image?  Think of all the 
difficulty and complexity you are adding to a program that is still a baby.

Additionally, how would you support other developers joining the list who want 
to write filters, etc.?  Do you really want to write code where, for each 
channel, you need to determine all kinds of context information?  Do you 
really want to write code where, for every single step you take, you need to 
look both right and left to see whether you got all this context info right?

I think it's only reasonable that we get a decent environment for single 
color space model images first, is it not?

However, the idea of placing the color type inside a paint device is not 
without merit, though... it could be used to assert that a paint device's 
color space model matches the image's when inserting it into an image.
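A minimal sketch of that assertion (hypothetical names, not the real 
KisImage/KisPaintDevice API):

```cpp
#include <cassert>

// Hypothetical names, not Krita's actual classes.
enum ColorModel { CM_RGBA, CM_GRAYA };

struct PaintDevice {
    ColorModel model;  // the color type stored in the paint device
};

struct Image {
    ColorModel model;

    // Assert that the device's color space model matches the image's.
    // Later on, this assertion can simply be removed to allow
    // mixed-model layers.
    void addLayer(const PaintDevice &dev)
    {
        assert(dev.model == model);
        // ... actually attach the layer here ...
    }
};
```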

Then, later on, this assertion can be removed when you want to do that wacky 
stuff of yours :)
