Another proposal for modernization of our infrastructure
bcooksley at kde.org
Wed Jan 28 09:08:54 GMT 2015
On Tue, Jan 27, 2015 at 11:08 PM, Jan Kundrát <jkt at kde.org> wrote:
> as promised, here is a proposal on how our infrastructure can be improved,
> with emphasis on service integration. There are screenshots inside.
> Feedback is very welcome.
A few comments.
1) Most applications integrate extremely poorly with LDAP: they copy
the user's details once on first login and never sync them again
afterwards (this is what both Chiliproject and Reviewboard do). How
does Gerrit perform here?
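To make the failure mode concrete: the fix applications generally need is to refresh the cached profile on *every* login rather than only the first. A minimal sketch of that idea, in which the function name, attribute keys, and the shape of the LDAP entry are all hypothetical:

```python
# Sketch: refresh a locally cached user record from the LDAP entry on every
# login, instead of copying the attributes once at first login and letting
# them drift. The attribute names here are illustrative only.

def sync_user_from_ldap(local_user: dict, ldap_entry: dict) -> dict:
    """Overwrite locally cached profile fields with the current LDAP values."""
    synced = dict(local_user)
    for field in ("displayName", "mail"):
        if field in ldap_entry:
            synced[field] = ldap_entry[field]
    return synced

# Called on every login, not just the first:
cached = {"uid": "jdoe", "displayName": "Old Name", "mail": "old@example.org"}
fresh = {"displayName": "Jane Doe", "mail": "jdoe@example.org"}
print(sync_user_from_ldap(cached, fresh))
```

The question for Gerrit is whether it behaves like this, or like Chiliproject and Reviewboard.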
2) Will this trivial scratch-repository script offer its own user
interface, or will developers be required to pass arguments to it
through some other means? The code you've presented thus far makes it
appear that some other means will be required.
3) We've used cGit in the past, and found it suffered from performance
problems with our level of scale. Note that just because a solution
scales with the size of repositories does not necessarily mean it
scales with the number of repositories, which is what bites cGit. In
light of this, what do you propose?
4) Has Gerrit's replication been demonstrated to handle 2000 Git
repositories consuming 30 GB of disk space? How is metadata such as
repository descriptions (used by repository browsers) replicated?
5) If Gerrit or its hosting system were to develop problems, how would
the replication system handle this? From what I can see it is highly
automated, and bugs or glitches within Gerrit could rapidly be
inflicted upon the anongit nodes with no apparent safeguards (which
our present custom system has). Bear in mind that failure scenarios
can always occur in the most unexpected ways, and the integrity of our
repositories is of paramount importance.
6) Notifications: does it support running the various checks our
hooks currently perform, for license validity and the like? When these
rules are tripped, the author is emailed back about their own commits.
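For anyone unfamiliar with what those hooks do, here is a rough sketch of the license-validity style of check and the message that would be mailed back to the author. The marker patterns are illustrative, not our actual rules:

```python
# Sketch of a hook-style check: verify that a file carries a recognisable
# license header, and return a message suitable for mailing back to the
# commit author when the check trips. Marker strings are illustrative only.
import re

LICENSE_MARKERS = re.compile(r"GPL|LGPL|BSD|MIT", re.IGNORECASE)

def check_license(path: str, contents: str) -> list[str]:
    """Return a list of problems found (empty means the check passed)."""
    # Only inspect the top of the file, where license headers live.
    header = "\n".join(contents.splitlines()[:20])
    if not LICENSE_MARKERS.search(header):
        return [f"{path}: no recognisable license header"]
    return []

print(check_license("foo.cpp", "// SPDX-License-Identifier: GPL-2.0\nint main() {}"))
print(check_license("bar.cpp", "int main() {}"))
```

Whatever replaces the hooks would need an equivalent, plus the email-the-author behaviour.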
7) Storing information such as tree metadata location within
individual Git repositories is a recipe for delivering a system that
will eventually fail to scale, and will waste resources. Given the
time it takes to fork out to Git, plus the disk access necessary for
it to retrieve the information in question, I suspect your generation
script will take several load-intensive minutes to complete even if it
only covers mainline repositories. That is comparable to
Chiliproject's current generation performance.
The original generation of our Git hooks invoked Git several times per
commit, which meant that processing 1000 commits easily took 10
minutes. I rebuilt them to invoke Git only a handful of times per
push - which is what we have now.
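The batching principle behind that rebuild can be sketched as follows: ask Git for everything about a push in a single invocation, rather than forking once (or more) per commit. The `git log` command shown is standard; the dict layout is just for illustration:

```python
# Sketch: gather all commit details for a push (old-sha..new-sha, as a
# receive hook sees them) with ONE `git log` fork, instead of one per commit.
import subprocess

# sha, author, email, subject - NUL-separated so subjects can contain anything
LOG_FORMAT = "%H%x00%an%x00%ae%x00%s"

def parse_log(output: str) -> list[dict]:
    """Turn the NUL-delimited `git log` output into one dict per commit."""
    commits = []
    for line in output.splitlines():
        sha, author, email, subject = line.split("\x00")
        commits.append({"sha": sha, "author": author,
                        "email": email, "subject": subject})
    return commits

def commits_in_push(old: str, new: str) -> list[dict]:
    """One `git log` call covering the whole push, not one fork per commit."""
    out = subprocess.run(
        ["git", "log", f"--format={LOG_FORMAT}", f"{old}..{new}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_log(out)
```

Any replacement system that goes back to per-commit forking will rediscover the 10-minute problem.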
8) Shifting information such as branch assignments in the same manner
will necessitate that someone have access to a copy of the Git
repository to determine the branch to use. This is something the CI
system cannot ensure, as it needs to determine this information for
dependencies, and a given node may not have a workspace for the
repository in question. It also makes it difficult to update rules
which are common among a set of repositories such as those for
Frameworks and Plasma (Workspace). I've no idea if it would cause
problems for kdesrc-build, but that is also a possibility.
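The contrast with a central rules file is worth spelling out: a single shared mapping lets the CI system resolve a branch for any repository, including dependencies it has no clone of, and lets a rule common to a whole set of repositories be updated in one place. A minimal sketch, where the rule format and the specific patterns are hypothetical:

```python
# Sketch: centrally stored branch-assignment rules. One glob rule can cover
# an entire set of repositories (e.g. all of Frameworks), and resolving a
# branch needs no copy of the repository itself. Patterns are illustrative.
import fnmatch

BRANCH_RULES = [
    ("frameworks/*", "master"),
    ("plasma/*", "Plasma/5.2"),
    ("*", "master"),  # fallback for everything else
]

def branch_for(repo: str) -> str:
    """First matching pattern wins."""
    for pattern, branch in BRANCH_RULES:
        if fnmatch.fnmatch(repo, pattern):
            return branch
    return "master"

print(branch_for("plasma/plasma-workspace"))
```

Moving this into each Git repository inverts all of those properties.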
9) You've essentially said you are going to eliminate our existing
hooks. Does Gerrit support:
a) line-ending checks, with exceptions for certain file types;
b) convenient deactivation of these checks where necessary;
c) checks on the author name to ensure it bears a resemblance to a
real name; and
d) prohibiting anyone from pushing certain types of email address,
such as *@localhost.localdomain?
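To make (a) and (d) concrete, here is roughly what those two checks amount to. The exemption list and patterns are illustrative, not the actual hook configuration:

```python
# Sketch of two of the existing hook checks: reject CRLF line endings
# (with per-file-type exemptions) and refuse obviously bogus author email
# addresses. Exemption globs and the banned pattern are illustrative only.
import fnmatch
import re

CRLF_EXEMPT = ["*.bat", "*.sln"]  # file types allowed to use CRLF
BANNED_EMAILS = re.compile(r"@localhost\.localdomain$")

def check_line_endings(path: str, data: bytes) -> bool:
    """Reject CRLF line endings unless the file type is exempt."""
    if any(fnmatch.fnmatch(path, pat) for pat in CRLF_EXEMPT):
        return True
    return b"\r\n" not in data

def check_author_email(email: str) -> bool:
    """Refuse addresses such as *@localhost.localdomain."""
    return not BANNED_EMAILS.search(email)

print(check_line_endings("script.bat", b"echo off\r\n"))  # exempt file type
print(check_author_email("dev@localhost.localdomain"))    # banned address
```

The question is whether Gerrit can express all four, including turning a check off for a given repository.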
10) You were aware of the DSL work Scarlett is doing and the fact this
is Jenkins specific (as it generates Jenkins configuration). How can
this work remain relevant?
Additionally, Scarlett's work will introduce declarative configuration
for jobs to KDE.
11) We actually do use some of Jenkins' advanced features, and it
offers quite a lot more than just a visual view of the last failure.
As a quick overview:
a) Tracked history for tests (you can determine whether a single test
is flaky and view a graph of its pass/fail history).
b) Log parsing to track the history of compiler warnings and other
matters of significance (fully configurable via regexes).
c) Integrated cppcheck and code coverage reports, actively used by
some projects within KDE.
d) Intelligent dashboards which allow you to get an overview of a
number of jobs easily.
Bear in mind that these dashboards can be set up by anyone and can
filter on a number of conditions. They can also provide RSS feeds and
update automatically when a build completes.
How would Zuul offer any of this? And how custom would it all have to
be? Custom == maintenance cost.
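For point (b), the kind of thing the log parser does is easy to sketch: a configurable regex run over the build log, yielding counts that can be compared between builds to plot warning history. The regex below is an example, not the actual Jenkins configuration:

```python
# Sketch of regex-based build-log parsing: count compiler warnings so the
# total can be tracked from build to build. The pattern matches the common
# GCC/Clang "file:line:col: warning: message" shape; illustrative only.
import re

WARNING_RE = re.compile(
    r"^(?P<file>\S+):(?P<line>\d+):\d+: warning: (?P<msg>.+)$",
    re.MULTILINE,
)

def count_warnings(log: str) -> int:
    return len(WARNING_RE.findall(log))

log = """\
main.cpp:10:5: warning: unused variable 'x' [-Wunused-variable]
util.cpp:3:1: warning: no previous declaration
"""
print(count_warnings(log))
```

Replicating this, the test history, and the dashboards outside Jenkins is where the custom maintenance cost comes in.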
Addendum: the variations etc. offered by the Zuul instance which
already exists in the Gerrit clone are made possible by the hardware
resources Jan has made available to that system. Jenkins is fully
capable of offering such builds as well with the appropriate setup,
some of which is already in use - see the Multi Configuration jobs
such as those used by Trojita and Plasma Framework.
I'm afraid you've lost me with the third-party integration - please
clarify what you're intending here.
12) The tone in which the event stream feature is mentioned makes it
sound as though sysadmin actively prevents people from receiving the
information they need. We have never prevented people from receiving
notifications they've requested - you yourself have one that triggers
builds on the OBS for Trojita.
13) You've used the terminology "we" throughout your document. Who are
the other author(s)?
> With kind regards,
> Trojitá, a fast Qt IMAP e-mail client -- http://trojita.flaska.net/