Krita Commitment to Empowering Human Artists (CEHA)

Emmet O'Neill emmetoneill.pdx at gmail.com
Tue Feb 20 21:57:03 GMT 2024


Hi Thorsten, sorry for the late reply.

I avoid looking at Krita stuff during the weekends if possible, and since we
had a long discussion about this yesterday morning, I was a bit spent.

> Drawing used to be my hobby (still is to a lesser extent). Now it’s
> part of the toolbox that I earn money with. Because of this, I am
> somewhat torn, but can’t afford ideology over pragmatism.
>

Understandable.

That said, ignoring what is ethical, sustainable and healthy for society is
a dark path that we also can't afford to go down, in my opinion.
There are a lot of bad ways (via exploitation of technology and/or people)
that society could *optimize for money in the short term* by choosing not
to care about anything else.
In other words, *the path of least resistance to the highest financial
return isn't always the best path*.

Without getting into anything too heavy, we know that from history, so we
might as well at least try to learn something from it.

> But I do see a parallel between a human learning from the work of others
> and ML training.


The reason you see a parallel between *human learning* and *machine
learning* is marketing.

Human beings clearly don't function like an "AI" does:
we have agency to seek out our own desires,
every waking moment our senses are bombarded with unique experiences that
shape us,
we have a limited capacity to produce,
we have our own creative x-factor,
and maybe most importantly of all, *we aren't property*.

I'd argue that the companies that currently dominate the "AI" business want
to conflate these things, specifically because doing so evokes science
fiction, drives hype, and muddies the water around copyright.
A camera or microphone could be marketed as technology that sees and hears
just like a human does (and there would be some truth to that), but that
doesn't dismiss the ethical questions surrounding how those tools might be
used.

Anyway, if we *are* going to start personifying "AI", then doesn't logic
suggest that the output should be the intellectual property of the AI
itself, and not of the person prompting it or the company that "owns" it?
This all seems like a case of wanting to have your cake and eat it too when
it comes to "AI": either the "AI" is the artist or it isn't.

So where do we come in? If you don't own the meat, and you don't own the
meat grinder, then how can you claim to own the sausage?

> It is not the same as
> flat out copyright infringement (unless a work is actually reproduced),
> though it could be judged a different kind of infringement.


I think that remains to be seen, and I'm not really in a position to argue
it anyway.
I personally do not see AI training as "fair use", but I'll let the highly
paid lawyers and judges figure that one out.

> Now if the forces in favor of banning this kind of use succeed, the
> likely outcome is that there will be nothing like Stable Diffusion any
> more, free, available to everyone. There will still be commercial
> generators from the few big corporations that can afford large content
> libraries. It could well make it impossible to work on a professional
> level with Libre Software tools (for all but a selected few with the
> most discerning clients).


We aren't in a position to ban anything, only to moderate our own actions
by what we and the community think is right.

I simply don't want Krita to become yet another tool that maybe helps one
artist by hurting thousands (or even millions, or really some unknowable
number).

Based on the conversation that we had yesterday, my impression is that
Krita might end up using Stable Diffusion's model (or something like that,
it's not very clear to me).
I have a couple of open questions about that:

   1. If Krita were to use another company's pre-trained model (I assume an
   opaque black box, trained on data that is totally unknown to us) to
   implement various features, what would happen if that binary blob of a
   model were eventually monetized in some way?
   2. If we wouldn't be comfortable scraping millions of copyrighted images
   from all over the internet ourselves, then why should we be comfortable
   using someone else's model that was trained that way?

Finally, the implication here that it is only possible to "work on a
professional level" by using exploitative "AI" tools hints at the
magnitude of the ethical problems around this technology: it exists purely
as a shortcut to money, not for you but for the keepers of the dataset.
The idea seems to be to create more content for less money: paying human
artists less in the future by technologically exploiting the artwork of
the past.

Never mind the fundamental idea of supply and demand, which says that "In
any market transaction between a seller and a buyer, the price of the good
or service is determined by supply and demand in a market
<https://www.imf.org/en/Publications/fandd/issues/Series/Back-to-Basics/Supply-and-Demand>".

Let's imagine that generative "AI" can produce an *infinite supply* of
high-quality artwork and "content". Do you believe that *demand can scale
infinitely* too?
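
To put a toy model behind that question (my own back-of-the-envelope sketch,
not something from this thread, with made-up symbols a, b and c): take a
standard linear demand curve and a supply of generated images that is
effectively unlimited at a marginal cost near zero.

\[
Q_d(P) = a - bP, \qquad Q_s(P) \to \infty \ \text{at}\ P \approx c \approx 0
\quad\Longrightarrow\quad P^{*} \to c \to 0 \ \text{and}\ P^{*} Q^{*} \to 0 .
\]

Demand tops out at a pieces even when the price hits zero, so flooding the
market doesn't create new buyers; it mostly drives the clearing price, and
with it the total revenue left over for artists, toward zero.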

Even putting the *artistic merits* of "AI" art aside, I don't see any way
in which this path leads to anything resembling economic sustainability for
artists--even though it's their own damn artwork that has made any of this
technology possible.

> Regarding the CEHA, on the side of to be expected actual outcomes, I
> don’t see anything empowering here. Only that Krita will be a less
> capable tool.


If the goal is to merely produce an image fitting a description as quickly
as possible, consequences be damned, then Krita is already "less capable"
than generative AI.

If the goal is to help humans develop skills and create unique and
expressive works of art, then I think we should continue to find every
ethical and sustainable way of doing that.

I feel as if there is *a lot of hand-wringing from proponents of "AI"
around the simple question of "ok, cool, but can it be done ethically with
consent or permissive licensing?", which maybe implies that it cannot*...

Anyway, sorry if I seem a bit jaded, but I've had a lot of permutations of
this discussion over the last year.

On Fri, Feb 16, 2024 at 1:07 AM Thorsten Wilms <t_w_ at freenet.de> wrote:

> On Thu, 15 Feb 2024 20:36:45 -0800
>
> > https://invent.kde.org/graphics/krita/-/merge_requests/2071
>
> Hi!
>
> To add a different perspective, if I may:
>
> Drawing used to be my hobby (still is to a lesser extent). Now it’s
> part of the toolbox that I earn money with. Because of this, I am
> somewhat torn, but can’t afford ideology over pragmatism.
>
> I am not happy about the devaluation (both senses) of art due to the
> arrival of ML image generators. But I do see a parallel between a human
> learning from the work of others and ML training. It is not the same as
> flat out copyright infringement (unless a work is actually reproduced),
> though it could be judged a different kind of infringement.
>
> Now if the forces in favor of banning this kind of use succeed, the
> likely outcome is that there will be nothing like Stable Diffusion any
> more, free, available to everyone. There will still be commercial
> generators from the few big corporations that can afford large content
> libraries. It could well make it impossible to work on a professional
> level with Libre Software tools (for all but a selected few with the
> most discerning clients).
>
> Regarding the CEHA, on the side of to be expected actual outcomes, I
> don’t see anything empowering here. Only that Krita will be a less
> capable tool.
>
>
> --
> Thorsten Wilms <t_w_ at freenet.de>
>