<div dir="ltr"><div dir="ltr">On Sat, Feb 14, 2026 at 2:36 AM Harald Sitter <<a href="mailto:sitter@kde.org">sitter@kde.org</a>> wrote:</div><div class="gmail_quote gmail_quote_container"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Fri, Feb 13, 2026 at 12:37 PM Ben Cooksley <<a href="mailto:bcooksley@kde.org" target="_blank">bcooksley@kde.org</a>> wrote:<br>
> Resource utilisation-wise, I've not looked into whether there has been a significant bump in the number of jobs, but over the past year some additional CD support has been added, so that indicates some extra load there.<br>
<br>
Going off on a tangent: if the resources aren't sufficient for the<br>
development of our flagship products (ruqola is not one, nor is<br>
messagelib, but also I don't know what either does without looking them<br>
up, so maybe they are crucial to something 🤷‍♂️) then we need to put<br>
more resources up.<br></blockquote><div><br></div><div>Which I already said I planned to do....</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
That said, I think the problem isn't one of resources so much as it is<br>
one of scale. From the repeated complaints about CI performance it<br>
should be apparent that intermittently there are not enough resources.<br>
That doesn't mean we need more resources in general, it means we need<br>
the CI to be able to adapt and fan out into the cloud when there<br>
is too much load for the persistent nodes.<br></blockquote><div><br></div><div>Unfortunately, to perform well, CI nodes really need the half a terabyte of cached resources that the physical nodes carry - otherwise you end up spending time downloading multiple gigabytes of data for each job that starts.</div><div>If you have a job that uses Appium, you need KWin. KWin for a Linux build is a 1GB artifact, ignoring everything it depends on.</div><div><br></div><div>So moving into the cloud doesn't really work, unfortunately, as you'd spend more time downloading (which would strain the Gitlab master server) than actually building.</div><div><br></div><div>Generally we only tend to get backlogged whenever:</div><div>- PIM does a version bump (which they need to stop doing, and instead let the Gear release process manage)</div><div>- Gear/Plasma/Frameworks do a release</div><div>- Nodes fall over and stop processing builds</div><div><br></div><div>Replacing the existing flaky nodes with newer ones of similar per-node processing capability (a couple of percent slower), but greater in number, will likely alleviate the bulk of our issues, as more build slots should reduce contention.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
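</blockquote><div><br></div><div>To put rough numbers on the download-versus-cache tradeoff: the figures below are illustrative assumptions on my part (artifact sizes, link speed, disk throughput), not measurements of our actual runners, but the shape of the result is the point:</div><div><br></div>

```python
# Back-of-envelope comparison of a cacheless cloud runner versus a
# persistent node with a warm local cache. All numbers are assumptions
# for illustration, not measurements.

transfer_gb = 5.0   # e.g. a ~1GB KWin artifact plus its dependencies
link_gbit = 1.0     # assumed cloud runner network bandwidth (Gbit/s)
disk_gb_s = 0.5     # assumed local SSD cache read throughput (GB/s)

cloud_download_s = transfer_gb * 8 / link_gbit  # 5 GB over a 1 Gbit/s link
cache_read_s = transfer_gb / disk_gb_s          # same data from local disk

print(f"cold cloud runner: ~{cloud_download_s:.0f}s downloading per job")
print(f"warm local cache:  ~{cache_read_s:.0f}s reading per job")
# ...and every one of those downloads also hits the Gitlab master server,
# multiplied by however many jobs start concurrently.
```

<div><br></div><div>Even with these generous assumptions the cloud runner spends several times longer fetching data than a cached node does reading it, before any building happens.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">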
<br>
HS<br></blockquote><div><br></div><div>Cheers,</div><div>Ben</div></div></div>