CI Requirements - Lessons Not Learnt?

Ben Cooksley bcooksley at
Thu Jan 5 10:20:58 UTC 2017

On Thu, Jan 5, 2017 at 10:28 PM, Martin Gräßlin
<privat at> wrote:
> Am 2017-01-05 09:44, schrieb Ben Cooksley:
>> Hi all,
>> It seems that my previous vocal complaints about system level /
>> serious impact dependency bumps on the CI system have gone completely
>> unnoticed by (some) members of our Community.
>> This was demonstrated earlier this week when components of Plasma
>> bumped their version requirements for XKBCommon and Appstream-Qt -
>> without even a thought about notifying Sysadmin or checking which
>> version the CI had, until their builds broke.
>> Neither of these is easy to fix at this stage, as the system base is
>> now too old to receive updates such as these. Base upgrades require a
>> full rebuild of everything on the CI system and usually involve
>> significant additional churn; it's a process that can be done only
>> roughly twice a year, depending on dependency bump demands.
>> Does anyone have any suggestions as to how we may avoid this in the
>> future?
> I have a few questions here:
> 1) Where is this requirement to check with sysadmins codified? So far I was
> only aware of dependency freeze.

It's been codified since the PIM Qt 5.6 / WebEngine debacle, where
Sysadmin had to rush delivery of Qt 5.6 to the CI system because the
whole of PIM broke when they started using QtWebEngine. That was
around March/April 2016, though my mail archives can't seem to find
the exact thread.

> 2) How can we easily check what has? Looking at cmake output
> is not a sufficient way as it gives me wrong information

If CMake is outputting wrong information, then your CMakeLists.txt
can't make the appropriate decisions as to whether the available
version is suitable, so I'd say you've got bigger problems here that
need to be addressed first.
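As an illustration (the minimum version "0.7.0" here is a placeholder, not the actual Plasma requirement), a CMakeLists.txt can state the minimum it needs so that configuration fails early with a clear message when the CI image is too old, rather than failing mid-build:

```cmake
# Sketch only: "0.7.0" is a hypothetical minimum version.
# pkg_check_modules() aborts configuration with an explicit error if
# the installed xkbcommon is older than the stated version.
find_package(PkgConfig REQUIRED)
pkg_check_modules(XKBCOMMON REQUIRED IMPORTED_TARGET "xkbcommon>=0.7.0")

add_executable(demo main.cpp)
target_link_libraries(demo PRIVATE PkgConfig::XKBCOMMON)
```

Note that `IMPORTED_TARGET` requires CMake 3.6 or newer; without it, the `XKBCOMMON_LIBRARIES` and `XKBCOMMON_INCLUDE_DIRS` variables can be used instead.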

In any case, you can see the Docker log of the container being
generated at

> 3) What should we do when does not have the requirement?

You have to file a Sysadmin ticket, also tagging the project
'' at the same time.

> It should be rather obvious that we don't introduce new dependencies because
> we like to. There is a very important software reason to it.
> That's the case for the xkbcommon dependency increase. Should I have left the
> code broken as it was, expecting half a year of bug reports till
> has the base upgraded to Ubuntu 16.04?

That's what #ifdef is for...

> If I have to degrade the quality of the product to serve the CI, then I and
> all users have a problem. And this is currently the only alternative. The
> quality of our product is highly at risk as our changes are no longer
> compile tested. This is a huge problem for the release of Plasma 5.9. On the
> other hand I cannot revert the dependency change, as that would break tests
> or reintroduce the broken code. So we are caught between a rock and a hard
> place.
> When I increased the dependency I had the dependency freeze of Plasma 5.9 in
> mind. That's the one target from the release process that I currently have to
> hit. I also had to consider a social aspect here. I asked the xkbcommon devs
> to do the release. I would have felt ashamed if we had asked for the release
> and then not used it. From a social point of view it was very important to me
> to ship with the dependency in the next release after the xkbcommon release.
> If we have to wait an arbitrary time till has upgraded the
> base, maybe the choice of the base is not sufficient. E.g. I asked openSUSE
> about this dependency weeks ago. Actually, a few days after the xkbcommon
> release it was already shipped in Tumbleweed. Similarly for Mesa 13, which
> I'm also eagerly waiting to fetch.

Mesa 13 is news to me.

Base upgrades are a major, major piece of effort. Even ignoring changes
to packaging made by the distros, everything on the CI has to be fully
rebuilt due to broken binary compatibility (GLIBC usually changes).

Even if compatibility were kept, as soon as you get new builds using new
features while old build artifacts are still using old ones, things
start to break (cue a wave to Qt's plugin loader and Akonadi breaking on
even patch-level version bumps to Qt). This problem is exacerbated by us
often ending up using PPAs and other third-party repositories to provide
certain version-bumped dependencies - which are of course packaged
differently, leading not only to potential naming differences but also
to different sets of compiler flags (ABI compatibility says hi again).

In terms of the rebuild - that's everything from Qt, up through
Frameworks, then all of the libraries that aren't in Frameworks but
everyone uses, then finally into Applications/Frameworks/Extragear -
easily a solid 24 hours of building and test runs. During that time the
CI is completely unavailable, and we usually spend a good 2-3 days
afterwards mopping up various breakages in tests, etc.

That time frame also dates back to an era where the Dependency Tower
of Jenga was much shorter as we just had kdelibs / kdepimlibs /
kde-runtime to contend with. Things are much more fragile now, and
hence need much more handholding. Especially in the land of PIM (the
most wobbly part of the tower), which both Plasma and other parts of
Applications depend on. That's before I even look into the
various dependency chain orders elsewhere...

Ideally we'd isolate things along Product (think
Frameworks/Plasma/Submodules of Applications) boundaries to minimize
the Jenga tower effect, but that isn't possible, in large part due to
the manner in which some software is developed (libraries in Extragear
or even Playground, requiring latest master builds of Frameworks, and
so forth). Had it been in effect, this isolation would have stopped the
fallout from the CMake version bumps in Frameworks from spreading beyond
Frameworks (another example of the Jenga tower at work).

> Cheers
> Martin


More information about the release-team mailing list