Keeping CI green

Alex Merry alex.merry at kde.org
Tue Jun 3 18:25:56 UTC 2014


On 03/06/14 17:24, Kevin Ottens wrote:
> I see room for improvement in what gets evaluated when (like the ability to 
> run a patch in Jenkins as part of the review process); I'm just stuck on the 
> term "enforcing" there, not sure what you have in mind.

Neither am I, to be honest. It just feels like we're ignoring the situation.

>> I don't think Jenkins currently sends emails directly to people who appear
>> to have broken something - can we make it do so? Do we want to?
> 
> AFAIK we can't do that reliably. Now, what we could do is always CC the 
> maintainer on breakages, expecting said maintainer to react.
> 
> I'm not fully sold on that idea, as it's likely to create a situation where 
> the maintainer hunts down and points the finger at the person who pushed the 
> commit that broke something, while ideally it should be treated as a team thing.

Well, ideally the commit was reviewed, so that at least spreads things
out a bit... But, yes, it should be "we need to fix this", not "you need
to fix this".

> The problem is that we're too good at ignoring breakages... it's a cultural 
> thing, and it needs changing. I haven't found the right tool for that yet; if 
> someone has an idea I'm all ears.

Yeah, I agree. Changing the culture is doable. It would involve us (by
which I loosely mean the core frameworks developers) checking the Jenkins
status before and after pushing, and encouraging others to do the same.
That encouragement could include asking people to hold off on shipping RRs
until the build is fixed, and (gently) poking people who commit to a
project that isn't green.
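
Checking doesn't have to mean clicking around the web UI, either: Jenkins
exposes a JSON API, so a small pre-push script could do it. A minimal,
untested sketch (the job name below is a made-up placeholder; I'm only
assuming the standard /api/json endpoint on build.kde.org):

#!/usr/bin/env python3
# Untested sketch: ask Jenkins' standard JSON API for the result of a
# job's last build before pushing.
import json
import sys
from urllib.request import urlopen

JENKINS = "https://build.kde.org"
JOB = "kcoreaddons_master_qt5"  # hypothetical job name, adjust as needed

def last_build_result(job):
    url = "{}/job/{}/lastBuild/api/json".format(JENKINS, job)
    with urlopen(url) as response:
        data = json.load(response)
    # 'result' is null while a build is still running, otherwise
    # "SUCCESS", "UNSTABLE", "FAILURE" or "ABORTED"
    return data.get("result")

if __name__ == "__main__":
    result = last_build_result(JOB)
    if result != "SUCCESS":
        print("Last Jenkins build of {} was {}".format(JOB, result))
        sys.exit(1)

Something like that could even be wired into a git pre-push hook, though
the gentle social poking probably matters more than the tooling.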

It also means making an effort not to just ignore those emails Jenkins
sends to the list.

>> Do we want to have a "fix it or have it reverted" policy? How would that
>> interact with failures that can only be reproduced on Jenkins (which does
>> have an unusual setup)?
> 
> That's something we could do, but that means we should do something about 
> flaky tests. A flaky test is a mostly useless test, so either it should be 
> fixed or it should be removed...

Definitely. I know Qt's Gerrit setup tries to compensate for flaky
tests by re-running them once on failure, but that's hardly ideal.
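
For concreteness, that approach boils down to something like this (a
rough sketch of the general idea, not Qt's actual implementation; the
ctest invocation in the comment is just an example):

#!/usr/bin/env python3
# Untested sketch: run a test command, retrying once before declaring
# it failed -- the "second chance" approach for flaky tests.
import subprocess
import sys

MAX_ATTEMPTS = 2  # the first run plus a single retry

def run_with_retry(cmd):
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if subprocess.call(cmd) == 0:
            return 0
        print("Attempt {}/{} of '{}' failed".format(
            attempt, MAX_ATTEMPTS, " ".join(cmd)))
    return 1

if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: retry-test.py <command...>")
    # example: retry-test.py ctest -R sometest
    sys.exit(run_with_retry(sys.argv[1:]))

The obvious downside is that it papers over genuinely racy tests rather
than surfacing them, which is why fixing or removing the test is the
better answer.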

Alex

