Design issues (opengl, hdr, channels, pigment(?))
Cyrille Berger
cberger at cberger.net
Sat Jan 27 17:47:49 CET 2007
> - add a getTexture(), writeTexture() to KisPaintDevice (and the tile
> backend). There is no reason why glsl shaders need rgb data. You can put
> anything in a texture and work on it as if it were just another
> multi-dimensioned array. The glsl operations should come from the
> colorspaces. Cyrille: you need to remind me of how we were going to do
> glsl, mmx and software operations in pigment.
Write an implementation of KoColorTransformation (or of the relevant
transformation, such as KoConvolutionTransformation for convolution). Then we
need to create a mechanism to determine which transformation to select.
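To make that concrete, here is a minimal sketch of what such a transformation
and a selection mechanism could look like for an RGBA8 colorspace. Apart from
the KoColorTransformation name, everything here (the transform() signature,
the class and function names, the haveMMX flag) is an assumption for
illustration, not pigment's actual code.

#include <QtGlobal>

// Interface in the spirit of pigment's KoColorTransformation
// (signature assumed for the sketch).
class KoColorTransformation {
public:
    virtual ~KoColorTransformation() {}
    virtual void transform(const quint8 *src, quint8 *dst, qint32 nPixels) const = 0;
};

// Plain C++ brightness shift for an RGBA8 colorspace.
class BrightnessTransformationU8 : public KoColorTransformation {
public:
    explicit BrightnessTransformationU8(int shift) : m_shift(shift) {}
    void transform(const quint8 *src, quint8 *dst, qint32 nPixels) const {
        for (qint32 i = 0; i < nPixels; ++i) {
            for (int c = 0; c < 3; ++c)                      // R, G, B
                dst[4 * i + c] = qBound(0, src[4 * i + c] + m_shift, 255);
            dst[4 * i + 3] = src[4 * i + 3];                 // alpha untouched
        }
    }
private:
    int m_shift;
};

// The selection mechanism: the colorspace (or a factory it owns) inspects
// the CPU or the available GL context and returns the best implementation.
KoColorTransformation *createBrightnessTransformation(int shift, bool haveMMX)
{
    if (haveMMX) {
        // return new BrightnessTransformationMMX(shift);   // hypothetical
    }
    return new BrightnessTransformationU8(shift);
}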
> I think we should go the getTexture() and writeTexture() route.
What is this route?
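My guess at what the route would mean in code, for the record: KisPaintDevice
(or the tile backend) hands its pixel data to OpenGL as a texture, a GLSL
shader works on it as an opaque multi-dimensional array, and the result is
read back. None of this API exists today; getTexture()/writeTexture() are the
names from the quoted proposal, and tiles, texture formats and error handling
are ignored.

#include <GL/gl.h>
#include <vector>

// Upload a device's pixels into a texture (what getTexture() could do).
GLuint uploadTexture(const unsigned char *pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // The shader does not have to interpret this as RGB; it is just data.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

// Read the processed texture back into the device (what writeTexture() could do).
void readBackTexture(GLuint tex, std::vector<unsigned char> &pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);
}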
> * OpenEXR multi-layer files
>
> I'm not sure whether this is just an import/export filter thing or whether
> we need to review our projection composition or layer stack design. But
> it's an interesting area our most-favoured user (Basse) is very interested
> in.
From the screenshot one of the Blender guys gave us, it looks like it needs
changes to the layer stack design, but I am not sure it won't require
something more complicated than a stack.
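For reference, in the OpenEXR library a "multi-layer" file is a single image
whose channels carry dot-separated layer prefixes ("diffuse.R", "specular.R",
...), so an import filter has to group channels by prefix and build one layer
per group. A rough sketch, assuming the standard OpenEXR C++ API and skipping
error handling:

#include <ImfInputFile.h>
#include <ImfChannelList.h>
#include <ImfHeader.h>
#include <iostream>
#include <set>
#include <string>

int main()
{
    Imf::InputFile file("multilayer.exr");
    const Imf::ChannelList &channels = file.header().channels();

    // Collect the distinct layer prefixes.
    std::set<std::string> layers;
    for (Imf::ChannelList::ConstIterator i = channels.begin();
         i != channels.end(); ++i) {
        std::string name = i.name();                     // e.g. "diffuse.R"
        std::string::size_type dot = name.rfind('.');
        layers.insert(dot == std::string::npos
                          ? std::string("")              // channels of the base layer
                          : name.substr(0, dot));
    }
    for (std::set<std::string>::const_iterator it = layers.begin();
         it != layers.end(); ++it)
        std::cout << "layer: '" << *it << "'" << std::endl;
    return 0;
}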
> * Channels
>
> We discussed this briefly yesterday. Basically, the question is: is there a
> need to view in-band pixel data as separate bands? Or a set of separate
> bands? Or is the math toolbox solution sufficient?
I am afraid that you are confusing two different problems:
- applying a filter on only one channel
- signal-based processing for filters (which is what KisMathToolbox is for);
it only works for colorspaces with continuous data (so not for an indexed
colorspace), and it doesn't work for colorspaces that have an "angle" channel;
one would need to provide a dedicated KisMathToolbox implementation for those
(see the sketch after this list).
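The sketch mentioned above: a tiny, Krita-independent illustration of why the
plain arithmetic a generic math toolbox applies goes wrong on an angle channel
such as hue, and what a channel-aware implementation has to do instead.

#include <cmath>
#include <cstdio>

// Naive average: treats the channel as ordinary continuous data.
double naiveAverage(double a, double b) { return (a + b) / 2.0; }

// Angle-aware average: convert to vectors, average, convert back.
double angularAverage(double aDeg, double bDeg)
{
    const double pi = 3.14159265358979323846;
    const double rad = pi / 180.0;
    double x = std::cos(aDeg * rad) + std::cos(bDeg * rad);
    double y = std::sin(aDeg * rad) + std::sin(bDeg * rad);
    double deg = std::atan2(y, x) / rad;
    return deg < 0.0 ? deg + 360.0 : deg;
}

int main()
{
    // Two reddish hues on either side of the 0/360 wrap-around point.
    std::printf("naive:   %.1f\n", naiveAverage(350.0, 20.0));   // 185.0: cyan!
    std::printf("angular: %.1f\n", angularAverage(350.0, 20.0)); // about 5.0: red
    return 0;
}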
> The implementation in trunk I started for this (but didn't finish) was to
> add a const QBitArray & channelFlags parameter to KisFilter::process() and
> KoColorSpace::bitBlt() that has a 1/0 flag for all channels in the pixels
> of the current colorspace and force the implementation to check that before
> working with the given channels.
>
> That's a bit fragile. If we can come up with either a KisChannel class that
> returns iterators that point to the chosen channel instead of the beginning
> of the pixel and have a pixelSize the size of the chosen channel, then we
> don't need to do much work on the composite ops or the filters: they should
> work out of the box. to/fromQImage and to/fromTexture would return a
> grayscale QImage or a texture with only the right pixel.
>
> But there's a performance hit: I cannot imagine how we could work on a subset
> of the channels except by iterating over all the selected channels
> individually and calling the filter or composite op for all the selected
> channels individually.
See below (at the end of my comment on pigment); I think the QBitArray
solution is the way to go. After all, we didn't even choose such a solution to
enforce KisSelection...
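For clarity, this is roughly how the QBitArray check would look inside a
filter or composite op, hard-coded here for an RGBA8 device; the function
itself is made up for the sketch, only QBitArray is real Qt API.

#include <QBitArray>
#include <QtGlobal>

// Invert only the channels whose flag is set.
void invertChannels(const quint8 *src, quint8 *dst, qint32 nPixels,
                    const QBitArray &channelFlags)
{
    const int channelCount = 4;                               // RGBA8
    for (qint32 i = 0; i < nPixels; ++i) {
        for (int c = 0; c < channelCount; ++c) {
            quint8 v = src[i * channelCount + c];
            // Convention (assumed): an empty array means "all channels",
            // so callers do not have to build a full mask every time.
            bool enabled = channelFlags.isEmpty() || channelFlags.testBit(c);
            dst[i * channelCount + c] = enabled ? quint8(255 - v) : v;
        }
    }
}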
> * Pigment
>
> As I said, I've lost track of pigment a little, what's done, what needs to
> be done and what the design is... I wouldn't mind a little summary :-).
Not much has changed, you know ;) It's just that instead of calling a virtual
function of the colorspace, you call a virtual function of a transformation
object that you created with a call to a function of the colorspace. This
allows us to dynamically create operations and choose the best one for the
CPU or the task at hand (I am thinking this might be a way to keep a fast
implementation for the previous point; I need to think a little bit more
about it).
Now we need to decide whether we need, or want, those operations to be
extensible.
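If we do want them to be extensible, one option is to stop hard-coding the
list of operations in the colorspace and let plugins register transformation
factories in a registry, roughly like this (everything except the
KoColorSpace and KoColorTransformation names is made up for the sketch):

#include <map>
#include <string>

class KoColorSpace;                                // as in pigment
class KoColorTransformation;                       // as in pigment

// A factory creates the best transformation for a given colorspace.
class KoTransformationFactory {
public:
    virtual ~KoTransformationFactory() {}
    virtual KoColorTransformation *create(const KoColorSpace *cs) const = 0;
};

// Plugins would call add() at load time; users look the operation up by id.
class KoTransformationRegistry {
public:
    void add(const std::string &id, KoTransformationFactory *factory) {
        m_factories[id] = factory;
    }
    KoTransformationFactory *get(const std::string &id) const {
        std::map<std::string, KoTransformationFactory *>::const_iterator it =
            m_factories.find(id);
        return it == m_factories.end() ? 0 : it->second;
    }
private:
    std::map<std::string, KoTransformationFactory *> m_factories;
};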
--
Cyrille Berger