Design issues (opengl, hdr, channels, pigment(?))
Boudewijn Rempt
boud at valdyas.org
Thu Feb 1 20:46:51 CET 2007
On Thursday 01 February 2007, Cyrille Berger wrote:
> Which means that we would need two plugin implementations, one for
> OpenGL and one for C++/MMX?
Unfortunately, yes. I still think there are quite a few places where using
OpenGL makes sense, but given the inaccuracy of OpenGL calculations, those
will either be display-only or at the user's option.
I'm envisioning something like this:
* For convolution: the convolution painter asks the colorspace whether it
has a shader for convolution and whether the user allows OpenGL to do the
convolving (an option in the preferences dialog). If both hold, convolution is
done through OpenGL; if either prerequisite is not fulfilled, convolution is
done on the CPU.
This means we need an extra method on KoColorSpace, something like:
QByteArray getShader(const QString & shaderType);
or
GLuint getShader(enumShader shader); (for precompiled shaders)
Convolution then proceeds to get the textures from the paint device in a
to-be-specified manner and convolves them using the given shader. To get
a texture from a paint device, we can either use readBytes (which can still be
optimized a lot!) or write code inside image/tiles that creates a texture for
a given rect and returns it. The big thing, apparently, is to create textures
sparingly and write to them often, because updating an existing texture is
much cheaper than creating a new one. See our OpenGL context class.
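The prerequisite check described above could look roughly like this. Everything here (ShaderType, ColorSpaceSketch, useOpenGLConvolution) is a hypothetical sketch, not existing KoColorSpace API:

```cpp
#include <map>
#include <string>

// Sketch only: these names are invented for illustration. The point is
// the dispatch rule: use the GPU for convolution only when the
// colorspace ships a shader AND the user has enabled OpenGL.
enum ShaderType { ConvolutionShader, CompositeShader };

class ColorSpaceSketch {
public:
    // An empty string means "no shader for this operation": the caller
    // must then fall back to the CPU implementation.
    std::string shaderSource(ShaderType type) const {
        std::map<ShaderType, std::string>::const_iterator it = m_shaders.find(type);
        return it == m_shaders.end() ? std::string() : it->second;
    }
    void registerShader(ShaderType type, const std::string &source) {
        m_shaders[type] = source;
    }
private:
    std::map<ShaderType, std::string> m_shaders;
};

// The two prerequisites from the text: a shader exists for this
// colorspace, and the user allowed OpenGL in the preferences dialog.
bool useOpenGLConvolution(const ColorSpaceSketch &cs, bool userAllowsOpenGL) {
    return userAllowsOpenGL && !cs.shaderSource(ConvolutionShader).empty();
}
```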
* For optional filters. There's stuff you can do in OpenGL that's simply not
feasible on the CPU, like fluid simulation, and there are bound to be others.
These use the same to-be-specified API to get textures from the paint device
and write them back.
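That to-be-specified texture API would want to follow the "create textures sparingly, write to them often" rule mentioned above. A hypothetical sketch of the bookkeeping, where allocateTexture stands in for glGenTextures + glTexImage2D (expensive) and the update branch stands in for glTexSubImage2D (cheap):

```cpp
#include <map>

// Hypothetical sketch, not real Krita code: a cache that hands out one
// texture per tile. Storage is allocated only the first time a tile is
// seen (the glTexImage2D path); later requests reuse the texture and
// would merely refresh its pixels (the glTexSubImage2D path).
class TextureCacheSketch {
public:
    TextureCacheSketch() : m_nextId(1), m_allocations(0), m_updates(0) {}

    unsigned textureForTile(int tileIndex) {
        std::map<int, unsigned>::iterator it = m_textures.find(tileIndex);
        if (it == m_textures.end()) {
            unsigned id = m_nextId++;
            ++m_allocations;   // expensive: new texture storage
            m_textures[tileIndex] = id;
            return id;
        }
        ++m_updates;           // cheap: rewrite existing texture
        return it->second;
    }

    int allocations() const { return m_allocations; }
    int updates() const { return m_updates; }

private:
    std::map<int, unsigned> m_textures;
    unsigned m_nextId;
    int m_allocations;
    int m_updates;
};
```

A real version would hang off our OpenGL context class and invalidate tiles as the paint device changes.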
* For display. If the user is satisfied with a lightning-fast but
inaccurate-as-hell display, we can simply put every layer into the texture
map, perhaps as L*a*b floats or even converted to RGBA8, and use shaders or
OpenGL's own blending functions to blend them. Whether this would work
depends a lot on the graphics card, available memory, image size and so on,
but it could be fun to code.
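For the blending-function route: Normal compositing of premultiplied layers is exactly what fixed-function blending with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) computes per fragment; anything fancier would need a shader. Plain C++ standing in for what the GPU would do:

```cpp
// Per-fragment math of OpenGL's fixed-function "over" blend for
// premultiplied layers, i.e. glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA).
struct Rgba {
    float r, g, b, a;   // premultiplied: r, g, b already scaled by a
};

Rgba overPremultiplied(const Rgba &src, const Rgba &dst) {
    float k = 1.0f - src.a;
    Rgba out;
    out.r = src.r + dst.r * k;
    out.g = src.g + dst.g * k;
    out.b = src.b + dst.b * k;
    out.a = src.a + dst.a * k;
    return out;
}
```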
* For composition. We could have an alternative path in KisPainter::bitBlt
that, when enabled and a shader is available for the given colorspace, blends
the specified area using OpenGL. Whether that will give us a speed advantage
remains to be seen, but I would like to give it a try.
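Structurally that alternative path is just a guarded branch with a mandatory CPU fallback; a minimal sketch, with blitWithShader and blitOnCpu as invented placeholder names (not real KisPainter methods):

```cpp
// Hypothetical sketch of the alternative bitBlt path: take the OpenGL
// route only when it is both enabled and possible, otherwise fall
// through to the existing CPU compositing code.
enum BlitPath { GpuPath, CpuPath };

BlitPath chooseBlitPath(bool openglEnabled, bool hasCompositeShader) {
    if (openglEnabled && hasCompositeShader) {
        // blitWithShader(rect): bind the layer textures, draw a quad
        return GpuPath;
    }
    // blitOnCpu(rect): current KisPainter code path
    return CpuPath;
}
```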
* For visualisation. Canvas height and shadow, wetness and more: these will be
much easier, faster and prettier to do with OpenGL.
There may be a couple of other possibilities that would be interesting, and I
may be overlooking something: I'm only now really diving into the field of
using the GPU for more than just first-person shoot-em-ups :-). But I've
already got as far as having my OpenGL environment set up without showing a
new window! Tonight or tomorrow I might be converting a chunk of image to a
texture and -- who knows -- processing it.
--
Boudewijn Rempt
http://www.valdyas.org/fading/index.cgi