Manner in which kde-gtk-config development is conducted
Ben Cooksley
bcooksley at kde.org
Sat Mar 21 21:27:34 GMT 2020
On Sun, Mar 22, 2020 at 3:27 AM Johan Ouwerkerk <jm.ouwerkerk at gmail.com> wrote:
>
> On Sat, Mar 21, 2020 at 1:32 AM Ben Cooksley <bcooksley at kde.org> wrote:
> >
> > Comments welcome. Please note that simply fixing the dependency
> > breakage in this case is not enough to resolve this - there are
> > underlying issues which need to be addressed here.
> >
> > Regards,
> > Ben Cooksley
> > KDE Sysadmin
>
> I cannot comment as to whether or not this is a pattern of behaviour
> or just a few isolated instances. From a technical perspective I feel
> there are two (additional) underlying issues worth addressing here:
>
> 1. This could be prevented for the most part by having CI run before
> the fact rather than after, i.e. prior to merging code.
This will happen once we are on Gitlab.
> 2. Different projects have different CI needs, and it would help if a
> project could safely manage their CI environment "on their own" as
> much as possible. The current system requires a lot of daunting
> (possibly otherwise unnecessary) complexity purely to manage the fact
> that a builder image will be used not just for one project but
> perhaps for the whole of KDE.
Fortunately the global builder image doesn't cause too many issues,
and it isn't the problem here. Our current setup actually has the
flexibility to use different images - we're just constrained in
setting this up by Jenkins, as a fair amount of tooling is needed to
provision jobs.
A good proportion of the current "complexity" is due to our use of
Jenkins. The remainder is due to needing to provide the most recent
version of a given KDE project as a dependency, along with the
infrastructure for tests to execute successfully (which is quite a bit
more than just 'make test').
The need to use the most recent version of a KDE project means either:
a) You have to have all of those projects use the same image as well; or
b) You have a special job to build all of those projects that don't
use the same image (which is what the 'Dependency Build' jobs do); or
c) You incorporate the project within the base system itself
Option (c) is not possible on Windows or FreeBSD as those platforms do
not support dynamic provisioning of images - meaning that all KDE
projects have to share the same machines and therefore must be
provided via our mechanisms (options a and b above).
In the case of Plasma, baking, say, Frameworks into the image would
mean a full image rebuild every time you needed something new in a
Framework. Please note that kwindowsystem and plasma-framework are
both Frameworks. Suffice it to say, that won't work - you would be
rebuilding that image multiple times a week at a minimum.
The only way to manage the build environment for something like Plasma
is to do it at the Product level, taking all of its member projects
into account. This is what we do currently - while the actual Docker
image may happen to be shared with Frameworks and Applications, that
isn't required and could be separated within our existing
architecture.
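To give a rough feel for what option (b) amounts to, here's a sketch
expressed in gitlab-ci style purely for illustration - the job names,
image name and install prefix are made up, and this is not our actual
Jenkins job definition:

  dependency-build:
    image: kde-shared-base          # hypothetical shared base image
    script:
      # build a dependency from its latest branch and keep the install tree
      - git clone --depth 1 https://invent.kde.org/frameworks/kwindowsystem.git
      - cmake -S kwindowsystem -B dep-build -DCMAKE_INSTALL_PREFIX=$PWD/prefix
      - cmake --build dep-build --target install
    artifacts:
      paths:
        - prefix/

  project-build:
    image: kde-shared-base          # same hypothetical base image
    needs: [dependency-build]
    script:
      # build the project itself against the freshly built dependency
      - cmake -S . -B build -DCMAKE_PREFIX_PATH=$PWD/prefix -DCMAKE_BUILD_TYPE=Debug
      - cmake --build build

The point being that the dependency gets built once by a dedicated job
and reused, rather than being baked into the image itself.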
>
> As for running the CI beforehand: by this I mean that it is relatively
> easy to 'break the build' one way or another and for this not to be
> caught during review, especially if a change is complex and has been
> in development for a while with many revisions. Certainly this is a
> particularly severe case of breaking the build, but the same should
> also apply to e.g. tests that start to fail.
>
> In my day job I find that it is absolutely routine for commits to be
> pushed to feature branches for review which don't even pass the CI
> sanity check. This is not because people are lazy, but because of the
> perennial pitfall of "works on my machine" (local config vs. remote),
> and sometimes because people want to get early review on code
> structure for example.
>
> I think it would help massively for CI to be run on feature branches
> as a prerequisite for merging. This should be
> automatic and not require any special care on the part of reviewers,
> and the result of the CI should be immediately visible as part of the
> review tooling (workflow UI/UX). After that it is mostly a process of
> unlearning to commit to master directly. This should be fairly easy to
> implement for invent.kde.org, in the sense that Gitlab offers such CI
> as a feature out of the box and it is fully integrated with the MR
> workflow.
>
That is correct, and the plans we've discussed for CI on Gitlab
(invent.kde.org) do include running CI on feature branches.
At this time we are on Jenkins, where supporting feature branches in
this manner is extremely non-trivial and therefore not something we
can offer.
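To give an idea of what pre-merge CI looks like there: Gitlab runs
whatever a project declares in its .gitlab-ci.yml, so a minimal job
along the following lines (the job name and build steps are purely
illustrative, not a final recipe) would run for every merge request
before anything lands, as well as for pushes to master:

  build:
    rules:
      - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      - if: '$CI_COMMIT_BRANCH == "master"'
    script:
      # configure, build and run the test suite
      - cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug
      - cmake --build build
      - cd build && ctest --output-on-failure

The result then shows up directly in the merge request UI, which is
exactly the visibility you're asking for.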
> Regarding your point about changing dependencies and the need for
> communication to manage the CI environment, to the extent that this
> breaks the build this could be simplified if it were easier to manage
> the build environment yourself (from a project perspective). This
> brings me to my second point: I think it would be desirable for projects
> to be able to cater for their CI needs on their own as much as
> (safely) possible.
>
> I understand why you would want to avoid a proliferation of CI
> setups, but at the same time having a single image that contains
> everything + kitchen sink also leads to problems. In particular it
> becomes hard to ensure the image and versions of tooling or
> pre-installed libraries are compatible with every project, to keep
> this up to date and to ensure that this all remains consistent with
> the CI aims of a particular project.
>
Please see above for my explanation of this.
> For instance a KF5 framework might want to validate that they can
> build and pass tests against *both* an ancient version of Qt (for
> stability promises) and the most recent Qt release as well (and
> possibly even Qt from master to get a heads up of forwards
We already do this. See the SUSEQt5.12 and SUSEQt5.13 sets of builds at
https://build.kde.org/job/Frameworks/ (we really should have a Qt 5.14
image instead of a 5.13 one, but nobody has asked for one yet so I've
not looked into it).
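Once we're on Gitlab the equivalent would simply be two jobs differing
only in the image they use, roughly like this (the image names below
are made up, not our actual ones; the hidden .build-template job is
reused via a YAML anchor):

  .build-template: &build
    script:
      - cmake -S . -B build -DCMAKE_BUILD_TYPE=Debug
      - cmake --build build
      - cd build && ctest --output-on-failure

  build-qt512:
    <<: *build
    image: kdeorg/ci-suse-qt512     # hypothetical image with Qt 5.12

  build-qt514:
    <<: *build
    image: kdeorg/ci-suse-qt514     # hypothetical image with Qt 5.14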
> compatibility issues). A Plasma Mobile app might have rather different
> CI needs: it matters that both 64 and 32 bit builds are produced, both
> as Android apps and flatpaks. A project might want to validate that it
> builds against both the master versions of KF5 frameworks and those
> typically found on a distro. This sort of per-project
> complexity is hard to deal with in one big builder image, but
> something we ideally would be able to pull off for all our projects as
> a matter of routine.
That isn't CI, that is CD - it results in an artifact for use on
developer, tester or end user systems.
Because these in most cases need a completely different setup, they
are out of scope for the CI system and are the responsibility of the
Binary Factory, which has specialist tooling to handle Android and
Flatpak builds in ways more suited to how those are usually done
(including providing infrastructure to sign the results and distribute
them).
You wouldn't want to use the binaries the CI system builds on a system
for anything more than very quick testing, as they're built in full
debug mode so that tests run with asserts enabled, helping us catch as
many issues as we can.
With regard to a project wanting to do builds against the latest
Frameworks as well as the older Frameworks releases found on a
distribution, I had already considered that scenario and have a plan
to handle it.
>
> I think we already run into the complexity problems pretty hard with
> the current CI setup in that sense: taking binary factory as an
> example, it is quite obvious that the JSON blob encoding dependency
> information for Android apps is something that came out of a need for
> "yet another" layer of abstraction to manage CI needs of many
> different projects, and that this is hard to grasp and comprehend
> fully, let alone maintain: there is an ever-growing list of ad-hoc helper
> scripts needed to keep the JSON blob manageable, and even so it is not
> very easy to follow if a project has to pull in multiple dependencies.
The infrastructure for Android builds has grown organically, and I
suspect we are at a point where we do need to take a step back and ask
ourselves if there is a better way of doing this that is more
maintainable. The dependencies bit in particular definitely needs
work.
>
> For comparison I would ask: how many people understand how all of the
> following actually work:
> - The various instructions in Docker files
> - The related resources copied into the images, in particular the shell scripts
> - The Python scripts that do much of the heavy lifting
> - The Jenkins groovy DSL scripts that actually make this work on Jenkins
> - The repository metadata, and dependency information files
> - The various data files that encode crucial config data/build steps
> that drive the various layers of scripts at run time
> - CMake of the project including any settings that might be passed
> through e.g. environment variables ...
> - How this all interacts with cross compiling
> - What, exactly, the nested build environments become when you take
> into account the habit of executing commands by eval'ing them inside
> a shell
>
> I'm sure there are a fair few heroes among us who can keep this
> ticking, but just the simple act of typing it out struck me
> powerfully: *this is hard*. This is just the stuff I am aware of now,
> and the next time I need to dig into things I will probably have
> forgotten more than half of that.
>
CI is definitely not easy. Especially not when you are dealing with an
organisation the size of KDE, where projects depend on the latest
development version of other projects.
Please note however that a portion of the above items applies only to
Android builds - the main CI system does not copy resources into the
image, use data files to encode crucial build steps, or set up nested
build environments. The Jenkins Groovy will of course be eliminated
once we move to Gitlab.
> Again Gitlab (and therefore invent.kde.org) has a feature that allows
> for this (the `image` directive in .gitlab-ci.yml files). I think it
> would be very nice if projects could work with custom CI images. To
> guard against malicious or compromised images, I suppose these would
> have to come from a trusted registry, and that the instructions/code
> for building these should be managed as part of a KDE project
> (including review) as well.
You mean like the ones at
https://invent.kde.org/sysadmin/ci-tooling/-/tree/master/system-images
?
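If we go down the road of per-project images, pointing a project at
one of those would be a one-liner in its .gitlab-ci.yml, something
like this (the registry path here is illustrative only, not a location
that exists today):

  build:
    # hypothetical registry path for an image built from sysadmin/ci-tooling
    image: registry.invent.kde.org/sysadmin/ci-tooling/suse-qt512:latest
    script:
      - cmake -S . -B build && cmake --build build

The trust question would then indeed come down to who can push to that
registry and how those image definitions are reviewed and maintained.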
>
> Regards,
>
> - Johan
Cheers,
Ben