noise generators (forked from: Krita user community?)
boud at valdyas.org
Thu Feb 28 08:21:49 CET 2008
On Tuesday 26 February 2008, Matthew Woehlke wrote:
> So... am I understanding that (in krita 2.0 at least) you can have
> layers in different color spaces within the same document?
Oh yes, even 1.5 could already handle that. All layers are converted on
composition to the colorspace of the group layer they are in.
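That conversion-on-composition step can be sketched roughly like this (a toy model, not Krita's actual API; `Layer`, `convert`, and `composite_group` are hypothetical names, and averaging stands in for a real composite op):

```python
# Toy model of per-layer colorspaces: every child is converted to the
# group layer's colorspace before compositing. All names hypothetical.

def rgb_to_gray(px):
    r, g, b = px
    return (0.2126 * r + 0.7152 * g + 0.0722 * b,)   # simple luminance

def gray_to_rgb(px):
    (v,) = px
    return (v, v, v)

CONVERSIONS = {("RGB", "GRAY"): rgb_to_gray, ("GRAY", "RGB"): gray_to_rgb}

class Layer:
    def __init__(self, colorspace, pixels):
        self.colorspace = colorspace
        self.pixels = pixels        # list of per-pixel channel tuples

def convert(layer, target_cs):
    if layer.colorspace == target_cs:
        return layer.pixels
    conv = CONVERSIONS[(layer.colorspace, target_cs)]
    return [conv(px) for px in layer.pixels]

def composite_group(layers, group_cs):
    # convert each child to the group's space; averaging stands in for
    # a real composite op
    converted = [convert(l, group_cs) for l in layers]
    return [tuple(sum(ch) / len(pixels) for ch in zip(*pixels))
            for pixels in zip(*converted)]
```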
> In that case,
> then having the generators output in either 32-bit clamped (or 64-bit
> HDRI) grayscale is the way to go. Then we just need a downsampling
> gradient map "filter" that can convert into a non-grayscale colorspace.
> (In fact, making gradient map implicitly convert the input to a
> grayscale color space might make the back-end cleaner?) And since we
> need a gradient map anyway :-)... (This also takes all of the "how to do
> the mapping" stuff out of the equation, since it's merely leveraging
> existing framework.)
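The grayscale-to-color gradient map being proposed here could look something like this (a minimal sketch with hypothetical names; real gradients would also carry alpha and interpolation modes):

```python
# Sketch of a gradient map: each grayscale value in [0, 1] is looked up
# in a list of color stops and linearly interpolated, turning one
# channel into RGB. Hypothetical, not Krita code.

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def gradient_map(gray_pixels, stops):
    # stops: sorted list of (position, rgb) pairs covering [0, 1]
    out = []
    for v in gray_pixels:
        for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
            if p0 <= v <= p1:
                t = 0.0 if p1 == p0 else (v - p0) / (p1 - p0)
                out.append(lerp(c0, c1, t))
                break
    return out
```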
> If that all can be stitched together, then it sounds like a plan to me;
> the UI is only slightly more "heavy" than if the filtering/mapping was
> more tightly coupled to the noise layer, and it's far more flexible.
> > Actually, if we use the already-existing filter masks, you can have
> > different filter settings for different areas or even filter some areas
> > of your node twice,
> Right. If noise is output in a high-rez grayscale colorspace, then you
> can do practically anything before color mapping. Forget filtering
> twice, you can stack different filters, throw in regular paint layers...
> all sorts of good stuff. I like it :-).
> > The colormapping would probably best be implemented in the colorspace
> > conversion system, that way we can go to any colorspace from the original
> > data, and generate, for instance, lab pixels instead of rgba.
> I'll still want a gradient map most of the time. Does that count as a
> colorspace conversion? (Certainly it takes input in one channel and
> spits out multiple channels.)
> Hmm... this is making me want to add an LMS color-reduction filter to
> the TO-DO (or do we already have that ;-)?).
> > So, we'd need
> > * a new colorspace class
> > * a generator plugin framework: the generator should have a method that
> > suggests the preferred colorspace for the plugin and a generate method
> > * a node type similar to the adjustment layer, but with a generator
> > instead of a filter.
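The generator plugin framework sketched in those bullets might take a shape like this (hypothetical class and method names, not a real Krita interface):

```python
# Sketch of a generator plugin: it suggests a preferred colorspace and
# has a generate method. NoiseGenerator emits high-precision grayscale,
# per the discussion above. All names hypothetical.

import random
from abc import ABC, abstractmethod

class Generator(ABC):
    @abstractmethod
    def preferred_colorspace(self):
        ...

    @abstractmethod
    def generate(self, width, height, params):
        ...

class NoiseGenerator(Generator):
    def preferred_colorspace(self):
        return "GRAYF32"            # 32-bit float grayscale

    def generate(self, width, height, params):
        # seeded so the same parameters reproduce the same noise
        rng = random.Random(params.get("seed", 0))
        return [rng.random() for _ in range(width * height)]
```

A generator layer would own one of these plus its parameters, and regenerate its paint device whenever the parameters change.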
> Check. Oh, and we should have that anyway, so we can add solid-color and
> gradient fills à la PS ;-). I've used those quite a bit in my own work.
> (I hope it will be possible to add an alpha channel independently?)
> Maybe Flake could be leveraged for this somehow? I keep thinking text
> layers could also use this; since I assume text layers use Flake, perhaps
> this could too.
> > The node would update the data if the parameters were changed; on
> > projection the filter masks associated with the node would be applied
> > and the result would be converted to the projection colorspace using the
> > new colorspace and then composited using any of the available composite
> > ops.
> Right :-).
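One possible reading of that projection pipeline, as a sketch (all functions hypothetical; real conversions and composite ops work on whole paint devices, not Python lists):

```python
# Sketch of the projection pass: apply the node's masks, convert the
# result to the projection colorspace, then composite with the chosen
# composite op. Hypothetical names, not Krita code.

def project_node(source, masks, to_projection_cs, composite_op, below):
    data = source
    for mask in masks:                    # e.g. filter masks
        data = [mask(px) for px in data]
    data = [to_projection_cs(px) for px in data]
    return [composite_op(top, bot) for top, bot in zip(data, below)]
```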
> >>> Come to think of it, it would be nice if we could use such generators
> >>> as input for masks, too.
> >> Hmm... yes, that is harder... or is it? I'd mentioned the second channel
> >> being alpha, I don't suppose we have a DestinationOver composition mode?
> > Not yet -- but it shouldn't be too hard to add.
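For reference, DestinationOver in Porter-Duff terms is just Over with the operands swapped: the destination shows through and the source only fills where the destination is transparent. A sketch with premultiplied RGBA (not Krita's composite-op code):

```python
# Porter-Duff destination-over, premultiplied alpha:
#   out = dst + src * (1 - dst_alpha), applied to every channel.

def destination_over(src, dst):
    # src, dst: (r, g, b, a) tuples, premultiplied
    inv = 1.0 - dst[3]
    return tuple(d + s * inv for s, d in zip(src, dst))
```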
> Ok, I'll let you think how best to do that. I think it might be best to
> decouple masks (alpha channel) from the color data, so a non-filter node
> can have color from a noise generator, regular paint layer, etc, and so
> can the mask, and the two are independent.
It's going to be nearly impossible to take the alpha channel out-of-band in
Krita. It is possible to add a transparency mask, though.
> Which means, I think, that we still have two kinds of nodes (filters and
Well, not quite-quite. In krita, there are two types of nodes: layers and
masks. Layers are nodes that are composited together in a stack, and masks
are nodes that affect the layer they are associated with. We've got the
layers: group, adjustment, paint, copy (and, potentially, generated)
masks: transparency, filter, transformation, selection
So, a group layer iterates through all layers and composites them. If there is
an adjustment layer, the composited projection up to the point where the adj.
layer is placed will be filtered, and then composition continues.
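That walk over the layer stack can be sketched like this (a toy loop with hypothetical node tuples; a real paint layer would composite over the projection rather than replace it):

```python
# Sketch of group compositing: children are walked bottom-up; an
# adjustment layer filters the projection accumulated so far, then
# composition continues on the filtered result. Hypothetical names.

def composite(group, blank):
    projection = blank
    for kind, payload in group:
        if kind == "adjustment":
            # payload is a per-pixel filter function
            projection = [payload(px) for px in projection]
        else:
            # "paint": payload is pixel data; replacing the projection
            # is a crude stand-in for a real composite op
            projection = payload
    return projection
```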
A layer with masks will take its source paint device (either a paint device, a
projection of child layers, a copy of (part of) another layer, or a generated
paint device) and apply the masks to it: a transparency mask alters the alpha
channel of the layer projection, a filter mask filters it, and a transformation
mask applies a transformation. (A selection mask is different in that it
limits the actions of the user to just the selected pixels.)
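A minimal sketch of applying a mask stack to a layer's source (hypothetical representation: pixels as (value, alpha) pairs, masks as tagged functions):

```python
# Sketch of mask application: a transparency mask scales the alpha
# channel, a filter mask runs a filter over the color data. Not Krita
# code; names and shapes are hypothetical.

def apply_masks(pixels, masks):
    # pixels: list of (value, alpha); masks: list of (kind, fn) pairs
    for kind, fn in masks:
        if kind == "transparency":
            # fn maps pixel index to an opacity factor
            pixels = [(v, a * fn(i)) for i, (v, a) in enumerate(pixels)]
        elif kind == "filter":
            # fn maps a color value to a filtered value
            pixels = [(fn(v), a) for v, a in pixels]
    return pixels
```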
Whether we'll allow mask nodes to be nested remains to be decided; I think it
could be useful, but it could also be too hard for users to work with.
> and generators are a type of "regular". More
> specifically/generally, we have nodes that take the projection beneath
> them as input, and nodes that don't take input from the projection
> stack, with generators and regular layers being both in the second category.
Sort of: see above.
> Thanks. I'll try to summarize my thoughts on this (combined with my
> understanding of your thoughts - please feel free to comment/correct as
> needed) when I get a chance. Do we talk about the layer DAG somewhere?
The closest is this:
I'm not sure we've got a DAG as it is usually intended: it is a directed
acyclic graph, though, and if I made a mistake when implementing copy nodes,
it might even turn out to be cyclic. But it is also stack based, and the big
thing missing, I have come to realize in this discussion, is the input
settings for nodes.
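The copy-node worry is the classic one: a copy node references another node, and a bad reference chain recurses forever. The usual guard, sketched with a hypothetical graph representation:

```python
# Sketch of cycle detection for copy nodes: track the nodes on the
# current resolution path and refuse to re-enter one. Hypothetical
# structure, not Krita's implementation.

def resolve(node, graph, visiting=None):
    # graph: {name: ("paint", data)} or {name: ("copy", other_name)}
    visiting = visiting or set()
    if node in visiting:
        raise ValueError(f"cycle through {node!r}")
    kind, payload = graph[node]
    if kind == "copy":
        return resolve(payload, graph, visiting | {node})
    return payload
```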