Disallow or discourage use of "AI" tools (Christoph Cullmann)

Martin Steigerwald martin at lichtvoll.de
Fri May 16 12:57:12 BST 2025


Hi.

Justin Zobel - 16.05.25, 02:22:54 CEST:
> The way I see it, AI has three main problems, legal, ethical and
> environmental.

I just want to throw in an impression I had when reading through posts of 
this discussion – more as a bystander:

It seemed to go back and forth between pro and contra in a kind of loop: 
either no AI at all or pro AI. Maybe not completely, but IMHO there was a 
tendency like that.

Also, I think there are additional aspects that have not been mentioned. 
From my point of view, AI is first of all a kind of tool. It can be used, 
has been used and currently is used for both good and bad.

Some KDE software already uses AI. Kate has been mentioned. Digikam comes 
to mind as well: it offers to download an AI model on first start, for 
face recognition, and I think there is also something broader for image 
tagging. Also, I believe the activities and opened-files history system in 
Plasma uses some simple form of AI to provide relevant results. And there 
may be more, maybe in Kdenlive, maybe in other apps.

For me an important point of differentiation is: who is behind an AI 
approach? For example, several employees left OpenAI over what I 
understand to be ethical concerns about the direction management in this 
company is or was taking. I can give some sources, but I think they are 
easy enough to find. I also only mention OpenAI as an example. I am pretty 
sure there are similar concerns regarding other large companies behind AI 
offerings, like Meta, Microsoft, Google or X AI, and concerns about bias 
within AI models.

But there is also software like Alpaca¹, a GTK-based application that can 
run AI models locally with Ollama. Then there is LocalAI², which, as far 
as I know, can for example be integrated into Nextcloud. To some extent 
local AI can run on powerful laptops, which at least save power when the 
AI model is not in use. Of course running an AI model locally can and 
usually will consume a lot of resources as well. But maybe smaller and 
more resource-efficient models can give better results, at least in some 
scenarios. Local experiments at least gave me the impression that larger 
models may also contain more crap data.
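
For illustration, and as far as I understand it, a locally running Ollama 
instance exposes a simple HTTP API on its default port 11434 that tools 
like Alpaca build on. A rough Python sketch of asking such a local model a 
question could look like the following; the model name "llama3" and the 
prompt are just examples and nothing KDE-specific:

    # Sketch: query a local Ollama instance over its HTTP API.
    # Assumes Ollama runs on its default port 11434 and that an
    # example model ("llama3") has already been pulled.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "llama3",   # example model name, use whatever you pulled
        "prompt": "What are KDE Plasma activities?",
        "stream": False,     # one JSON reply instead of a token stream
    }).encode("utf-8")

    request = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(request) as response:
        print(json.loads(response.read())["response"])

Nothing in that sketch leaves the machine, which is exactly the property I 
would like to see preserved in any off-by-default integration.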

And Nextcloud offers an ethical AI rating³. Apparently they have already 
invested quite some time in making up their minds about AI. I quite like 
their approach: off by default, so the user or administrator has to invest 
some work to enable it and set it up.

Their rating is based on whether the software is open source, whether the 
trained model is freely available for self-hosting and whether the 
training data is available and free to use. Of course, having the training 
data available and free to use does not automatically empower anyone who 
lacks the immense resources needed to actually use that training data to 
train a new model. But at least some scrutiny can be applied to the 
training data.

But all of these mentioned uses are on the user's side. The original 
question of this thread, if I remember correctly, was whether to allow AI 
contributions to the bug tracker. I would be cautious regarding this as 
well: for license reasons (models I tried do not even give any references 
for where the data behind an answer comes from), for accuracy reasons (AI 
models do hallucinate, as I have experienced in local experiments[4]), and 
for other reasons already outlined in this thread.

I mainly wanted to point out some aspects, point to previous work on 
evaluating AI benefits and risks, and ask some questions.

From a developer point of view there might be reasons to find some kind of 
initial consensus on contributions by AI quite soon. I have read that 
developers have already wasted quite some time on some of those extremely 
lengthy bug report posts.

From a broader point of view that includes users, I'd rather see KDE 
people come up with some off-by-default integration of locally running AI 
models than have more and more users hand some of their data to those 
large companies I mentioned (or others I did not think of). This would of 
course need more time and discussion. It is a delicate topic.

There is also the concern of a growing digital divide between those who 
understand and are in control of AI and those who just use it. Also, 
people who stop using their own brains for some things will likely see a 
decline in their mental capability regarding those things. At least that 
is my understanding so far of what I have read about brain research.

So my hope would be that the KDE community can play a role in educating 
about AI and in offering ways to use some AI in an ethical fashion. Maybe 
at some point there will even be models that include original references 
in their training data, so the model can give footnote-style references 
for where the information in its answers came from. I have not seen that 
so far, but I also have not had the chance to try out every model there 
is.

Or put in other words: I would rather trust you, or us, to find a good 
approach to AI than many, maybe all, of those large companies which push 
it like crazy at the moment.

I hope this is somewhat useful as a more differentiated perspective.

My main question would be: what would be a way to help empower people with 
at least some ethical (!) AI?

So maybe it would be good to split this discussion into what is needed in 
the shorter term and what could be a broader vision for KDE's relation to 
all this AI stuff, with strong ethical guidelines?

I close with a thank you to you all. I have often seen KDE people taking 
ethical issues to heart, and in a sense environmental issues are ethical 
issues as well. And it is important. Thank you!


[1] https://jeffser.com/alpaca/

[2] https://localai.io/

[3] AI in Nextcloud: what, why and how

https://nextcloud.com/blog/ai-in-nextcloud-what-why-and-how/

[4] Good ones can admit it, others insist on their mistake or go 
completely astray.

Best,
-- 
Martin



