Getting to 100 % succeeding tests (for 2.9), or simply dropping them all?

Friedrich W. H. Kossebau kossebau at kde.org
Thu Feb 5 08:31:15 GMT 2015


On Thursday, 5 February 2015, 11:12:37, Dmitry Kazakov wrote:
> Hi, Friedrich!
> 
> My notes about the tests in Krita.

Thanks for the quick and detailed reply, Dmitry :)

> 1) Quite a lot of tests in Krita are based on comparing to reference
> QImage. These tests are really useful for catching regressions and
> debugging whole subsystems. But they have a few drawbacks:
> 
> 1.1) Reference .png files take a lot of space in the repository (current
> solution: https://answers.launchpad.net/krita-ru/+faq/2670)

We should talk to the sysadmins about those then. Perhaps this could be solved 
by creating a separate package with those files, which would then be installed 
on the CI as another build dependency for Calligra.
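
Just to sketch the CMake side of that idea (everything named here is made up, 
nothing existing): the tests could locate such an installed package at 
configure time and get the path handed in, roughly like

    # hypothetical sketch: find the separately packaged reference images
    find_path(KRITA_REFERENCE_IMAGES_DIR
        NAMES krita-reference-images
        PATHS /usr/share $ENV{KRITA_REFERENCE_IMAGES_DIR})
    if(KRITA_REFERENCE_IMAGES_DIR)
        add_definitions(-DKRITA_REFERENCE_IMAGES_DIR="${KRITA_REFERENCE_IMAGES_DIR}/krita-reference-images")
    endif()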

> 1.2) The rendered images depend on too many things like libraries
> installed, their version and CPU model (e.g. Vc, FFTW3, CPU capabilities
> found). It means that the test may run fine on developer's PC, but fail on
> jenkins.

I see. Hm, too bad the KDE CI uses different build systems (and surely 
different hardware); at least the build deps for the software there should be 
defined, and thus reliable.

> 2) Consequence of 1): from time to time we should manually check what
> exactly is wrong with the test. Is it a pixel drift or a real problem.

Do you happen to have a list of the tests where such external deps influence 
the result? Or could the list be created by checking for 
checkQImageExternal()?

> 3) I am firmly against disabling failing unittests from the build system.
> We had quite a few cases when the tests were simply forgotten and rotten
> after being disabled from the build, since we have no system for controlling
> it. Spamming the (already overloaded) Bugzilla is not a solution either.

Which means CI should at least confirm the tests are building (but not run 
them as tests, to avoid useless results). Okay.

> 4) Is it possible to add some tagging to unittests? Like in cmake:
> 
> kde4_add_unit_test(KisDummiesFacadeTest
>     TESTNAME krita-ui-KisDummiesFacadeTest
>     TESTSSET integration # <---------------------------- special tag
>     ${kis_dummies_facade_test_SRCS})
> 
> kde4_add_unit_test(KisZoomAndPanTest
>     TESTNAME krita-ui-KisZoomAndPanTest
>     TESTSSET extended # <---------------------------- special tag
>     ${kis_zoom_and_pan_test_SRCS})
> 
> So that we could have several sets of tests:
> 
> make test
> make test integration
> make test extended

Will investigate; that's nothing I know about yet.
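
One thing that might already cover this is CTest's LABELS test property plus 
its label-based selection. A rough sketch on top of the existing macro 
(untested, and assuming kde4_add_unit_test registers the test under the given 
TESTNAME; the label names are just examples):

    kde4_add_unit_test(KisZoomAndPanTest
        TESTNAME krita-ui-KisZoomAndPanTest
        ${kis_zoom_and_pan_test_SRCS})
    # tag the registered test with the name of its set
    set_tests_properties(krita-ui-KisZoomAndPanTest
        PROPERTIES LABELS "extended")

Runs could then be filtered by label:

    ctest -L extended     # run only the tests labelled "extended"
    ctest -LE extended    # run everything except them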

> There is one important point: *all* the test sets should be compiled when
> KDE_BUILD_TESTS==on

Where compiled!=run? Makes sense, I agree. The whole idea of CI is to cover as 
much as possible, so we do not need to do things manually :)

> 5) It would also be nice to be able to choose different subtests of one
> executable to be in different subsets. Though I am not sure whether it is
> doable.

Could you give an example of what you mean here? "executable" is not clear, 
nor is "to be in different subsets".

> As a conclusion:
> 
> If we had the tests tagging system implemented before the release, we could
> just tag the failing ones with 'fix-later' or something.
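
Right. And if the label idea sketched above works out, the CI could then simply 
skip that set for the release, along the lines of

    ctest -LE fix-later    # run everything not tagged as fix-later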

Cheers
Friedrich


