noise generators (forked from: Krita user community?)
mw_triad at users.sourceforge.net
Tue Feb 26 23:55:53 CET 2008
Boudewijn Rempt wrote:
> On Tuesday 26 February 2008, Matthew Woehlke wrote:
>> Where did colormapping come into the plugin? The idea was that the
>> channel-generation is totally decoupled from the color mapping. So, my
>> plasma generator (as an example) spits out, say, an FP64 per pixel, with
>> no knowledge or concern for how that gets translated into color data.
> Well, if there is only one way of going from the generated noise to actual
> colors, then a plugin isn't necessary.
Ok, perhaps I misunderstood. I don't mind if there can be plugins for
the color mapping, as long as a color mapping plugin need not be in any
way related to the generator plugin. I'd read your previous message as
saying the generator would do the mapping as well.
> I just expected that there would be
> many ways to actually use the generated noise :-). It's also easy to
> decouple, in a couple of ways:
> * if the generated data is in a particular colorspace, the color conversion
> system can handle the mapping; it's possible to have different colorspaces
> with different conversion characteristics
Sure, that could work. Note that I assumed the generated stuff would be
in a "colorspace" where each pixel is one or two 64-bit FP value(s).
Then the filtering happens. Then the channel(s) are mapped to color [and
alpha]. I don't think anything more than 'reduction to the
representational space of the document' is required for alpha mapping,
since the filter stage should handle any desired effects.
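The "reduction to the representational space of the document" for alpha could be as little as a clamp plus quantization. A minimal standalone sketch, assuming an 8-bit target channel (`reduceAlpha` is a hypothetical name, not Krita API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Reduce an FP64 alpha value to the document's representational space:
// clamp out-of-range (HDR) values, then quantize to 8 bits.
std::uint8_t reduceAlpha(double v) {
    v = std::clamp(v, 0.0, 1.0);                       // clamp to [0, 1]
    return static_cast<std::uint8_t>(std::lround(v * 255.0));
}
```

Any desired alpha shaping would already have happened in the filter stage, so nothing fancier seems needed here.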
> * or the mapping can be in separate plugins
For the color mapping, I'm ok with built-in-or-plugin.
> * or it can be hard-coded in the node type (which I doubt to be the best option)
>>> * add a node type that contains that plugin, a source paint device in the
>>> 2 channel color model and a projection in a regular color model
>> I don't know enough krita guts, but that sounds about right
> We'll teach you :-). Then you can help implementing it!
>> I'm envisioning one "layer" (node?) looking like this:
>> <generator> (plugin)
>> <filter> (maybe plugin but probably core; optional)
>> <colormap> (core)
>> Note that 'colormap' is basically the gradient map we talked about
>> earlier, of course operating on the higher-bit-depth information from
>> the previous stage. This would only be applied to the first channel.
>> The filter (one per channel, so one or two) would be somewhat like the
>> 'curves' adjustment tool.
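The "colormap" stage above (a gradient map applied to the first channel) could be sketched roughly as follows; `Stop`, `gradientMap` and the stop layout are illustrative assumptions, not Krita's actual gradient API:

```cpp
#include <algorithm>
#include <array>
#include <cstddef>
#include <vector>

// A gradient stop: a position in [0, 1] plus an RGBA color.
struct Stop { double pos; std::array<double, 4> rgba; };

// Look up a scalar channel value in a sorted list of stops and
// linearly interpolate between the two surrounding colors.
std::array<double, 4> gradientMap(double v, const std::vector<Stop>& stops) {
    v = std::clamp(v, 0.0, 1.0);
    if (v <= stops.front().pos) return stops.front().rgba;
    for (std::size_t i = 1; i < stops.size(); ++i) {
        if (v <= stops[i].pos) {
            const Stop& a = stops[i - 1];
            const Stop& b = stops[i];
            const double t = (v - a.pos) / (b.pos - a.pos);
            std::array<double, 4> out;
            for (int c = 0; c < 4; ++c)
                out[c] = a.rgba[c] + t * (b.rgba[c] - a.rgba[c]);
            return out;
        }
    }
    return stops.back().rgba;
}
```

Because the input is a single high-bit-depth channel, the same lookup works regardless of which generator or filter produced the data.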
> If we create a colorspace for the output of the generator and implement the
> various colorspace operations, the filter can be the curves adjustment
> filter -- or any other filter. You could filter the output of the generator
> dynamically using a blur, an oilpaint or the raindrops filter -- or any new
> filter we create that is properly cs-independent. Filters can already work on
> any subset of channels in the paint device, and it is already possible to
> have more than one filter mask on a layer.
Whee, now *that* would really be crazy. But I like it :-).
So... am I understanding that (in krita 2.0 at least) you can have
layers in different color spaces within the same document? In that case,
having the generators output in either 32-bit clamped or 64-bit HDRI
grayscale is the way to go. Then we just need a downsampling gradient
map "filter" that can convert into a non-grayscale colorspace. (In fact,
making gradient map implicitly convert the input to a grayscale color
space might make the back-end cleaner?) And since we need a gradient map
anyway :-)... (This also takes all of the "how to do the mapping" stuff
out of the equation, since it's merely leveraging existing colorspace
conversions.)
If that all can be stitched together, then it sounds like a plan to me;
the UI is only slightly more "heavy" than if the filtering/mapping were
more tightly coupled to the noise layer, and it's far more flexible.
> Actually, if we use the already-existing filter masks, you can have different
> filter settings for different areas or even filter some areas of your node
Right. If noise is output in a high-res grayscale colorspace, then you
can do practically anything before color mapping. Forget filtering
twice; you can stack different filters, throw in regular paint layers...
all sorts of good stuff. I like it :-).
> The colormapping would probably best be implemented in the colorspace
> conversion system, that way we can go to any colorspace from the original
> data, and generate, for instance, lab pixels instead of rgba.
I'll still want a gradient map most of the time. Does that count as a
colorspace conversion? (Certainly it takes input in one channel and
spits out multiple channels.)
Hmm... this is making me want to add an LMS color-reduction filter to
the TO-DO (or do we already have that ;-)?).
> So, we'd need
> * a new colorspace class
> * a generator plugin framework: the generator should have a method that
> suggests the preferred colorspace for the plugin and a generate method
> * a node type similar to the adjustment layer, but with a generator instead of
> a filter.
Check. Oh, and we should have that anyway, so we can add solid-color and
gradient fills a la PS ;-). I've used those quite a bit in my own work.
(I hope it will be possible to add an alpha channel independently?)
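The generator framework in that list might look roughly like the following sketch. Every name here is illustrative (not Krita 2.0 API): the generator suggests a preferred colorspace and fills a scalar buffer, while the owning node handles filter masks, conversion and compositing separately. A trivial solid-fill plugin shows the kind of thing a fill layer could reuse:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical generator plugin interface (names are illustrative).
class KisGeneratorSketch {
public:
    virtual ~KisGeneratorSketch() = default;
    // Suggests the colorspace the output is most naturally expressed in,
    // e.g. a one-channel 64-bit float grayscale space.
    virtual std::string preferredColorSpace() const = 0;
    // Fills one scalar per pixel, row-major; no color knowledge at all.
    virtual void generate(std::vector<double>& out,
                          std::size_t w, std::size_t h) const = 0;
};

// Trivial example plugin: a solid-value fill.
class SolidFill : public KisGeneratorSketch {
public:
    explicit SolidFill(double v) : m_v(v) {}
    std::string preferredColorSpace() const override { return "GRAYF64"; }
    void generate(std::vector<double>& out,
                  std::size_t w, std::size_t h) const override {
        out.assign(w * h, m_v);
    }
private:
    double m_v;
};
```

The point of the split is that nothing in this interface knows about color mapping, so the same generator works under any colormap or filter stack.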
Maybe Flake could be leveraged for this somehow? I keep thinking text
layers could also use this, and since I assume text layers already use
Flake, perhaps the same approach would fit here.
> The node would update the data if the parameters were changed; on projection
> the filter masks associated with the node would be applied and the result
> would be converted to the projection colorspace using the new colorspace and
> then composited using any of the available composite ops.
>>> Come to think of it, it would be nice if we could use such generators as
>>> input for masks, too.
>> Hmm... yes, that is harder... or is it? I'd mentioned the second channel
>> being alpha, I don't suppose we have a DestinationOver composition mode?
> Not yet -- but it shouldn't be too hard to add.
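For reference, a DestinationOver composite op is just Porter-Duff dst-over: the source only shows through where the destination is transparent. A standalone sketch with premultiplied-alpha FP64 pixels (names are mine, not Krita's composite-op API):

```cpp
#include <array>

// Premultiplied r, g, b, a.
using Pixel = std::array<double, 4>;

// Porter-Duff destination-over: out = dst + src * (1 - dst_alpha),
// applied uniformly to all four premultiplied channels.
Pixel destinationOver(const Pixel& src, const Pixel& dst) {
    const double k = 1.0 - dst[3];  // how much of src shows through
    Pixel out;
    for (int c = 0; c < 4; ++c)
        out[c] = dst[c] + src[c] * k;
    return out;
}
```

Where the destination is fully opaque the source vanishes entirely, which is what makes it useful for filling in a generated alpha channel underneath existing data.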
Ok, I'll let you think about how best to do that. I think it might be
best to decouple masks (the alpha channel) from the color data, so that
a non-filter node can get its color from a noise generator, a regular
paint layer, etc., and so can its mask, with the two being independent.
Which means, I think, that we still have two kinds of nodes (filters and
"regular"), and generators are a type of "regular". Put more generally,
we have nodes that take the projection beneath them as input, and nodes
that don't take input from the projection stack, with generators and
regular layers both in the second category.
>> Btw, is this getting on a wiki anywhere, or is there a good place I
>> should dump it?
> http://wiki.koffice.org is also our krita scratchpad.
Thanks. I'll try to summarize my thoughts on this (combined with my
understanding of your thoughts - please feel free to comment/correct as
needed) when I get a chance. Do we talk about the layer DAG somewhere?
ENOCOFFEE: operator suffering from lack of sleep and/or early-morning-itis