On Mon, May 19, 2025, at 11:45, Ilya Bizyaev wrote:

> If you really care about the licensing aspect, focus on it instead of diverting this thread into other topics with statements like this one.

I strongly agree with this statement. "AI" as a term is a nothing-burger; it needs to be broken down further before we can have sensible conversations about it.

I encourage everyone in this conversation to avoid saying "AI" when we can talk about actual technical terms such as "LLMs", "LLM coding assistants", "image/video/speech generation models", "image/speech/face/character recognition models", "machine learning", and such.

Likewise, we have concerns about licensing, quality of contributions, wasting contributors' time, excessive use of computing power, and more. "AI" is at best a proxy for these, and at worst a wedge issue that can chase away valued contributors like Jin while not doing KDE much good otherwise.

There are real differences in technology, provenance, context of use, supervision of use, severity of concerns, scale, impact, and whatnot. These make a real difference in whether we should deem some form of "AI" acceptable in a given area.

I'm a big fan of the Al Capone theory of sexual harassment:
https://hypatia.ca/2017/07/18/the-al-capone-theory-of-sexual-harassment/

If "AI" does something that we hate, let's first see if we can ban the thing that we hate, rather than painting the whole field of technology with a single broad stroke.

---

On the licensing front, for example, I very much like the clarity of the Linux kernel's contribution requirements. The kernel requires a contributor to know and assert that the contributed code is licensable as Free Software:

> It is imperative that all code contributed to the kernel be legitimately
> free software. For that reason, code from anonymous (or pseudonymous)
> contributors will not be accepted. All contributors are required to “sign
> off” on their code, stating that the code can be distributed with the
> kernel under the GPL. Code which has not been licensed as free software by
> its owner, or which risks creating copyright-related problems for the
> kernel (such as code which derives from reverse-engineering efforts lacking
> proper safeguards) cannot be contributed.

If a contributor cannot assert that they own the copyright to the code they submit, then they can't contribute it. I feel like this kind of rule should do fine to keep out the products of vibe coding and such, while still allowing for "AI" workflows that fall below the copyright "triviality" threshold and that the contributor is willing to put their good name on the line for.

We can still use "AI" in examples to demonstrate which uses are believed to comply with or violate our particular rules.

- Jakob
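
P.S. For anyone unfamiliar with how the kernel's requirement is asserted in practice (just a sketch of the mechanics, not a proposal for specific KDE tooling): the sign-off is a trailer that "git commit -s" (or --signoff) appends to the commit message, along the lines of:

    Signed-off-by: Jane Developer <jane@example.com>

The name and address above are placeholders. The trailer is the contributor's statement, under the Developer Certificate of Origin, that they have the right to submit the code under the project's license; the weight of the rule comes from that statement, not from the tooling.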