Getting to 100% succeeding tests (for 2.9), or simply dropping them all?

Friedrich W. H. Kossebau kossebau at kde.org
Thu Feb 5 22:00:22 GMT 2015


On Thursday, 5 February 2015, 12:47:22, Dmitry Kazakov wrote:
> > > There is one important point: *all* the test sets should be compiled
> > > when KDE_BUILD_TESTS==on
> > 
> > Where compiled != run? Makes sense, I agree. The whole idea of CI is to
> > cover as much as possible, so we do not need to do things manually :)
> 
> Yes. Build all, but be able to choose what to run. Ideally I should be able
> to run two subsets of tests: 1) for integration testing; 2) more
> complicated for yearly semi-manual regression-testing.

s/yearly semi-manual regression-testing/pre-release regression testing/ :)

> > > 5) It would also be nice to be able to choose different subtests of one
> > > executable to be in different subsets. Though I am not sure whether it
> > > is doable.
> > 
> > Could you give an example of what you mean here? "executable" is not
> > clear, as well as "to be in different subsets"?
> 
> See e.g. KisNodeTest.
> 
> It has two parts. The first consists of "integration" tests, which can be
> run very fast:
> 
>     void testCreation();
>     void testOrdering();
>     void testSetDirty();
>     void testChildNodes();
>     void testDirtyRegion();
> 
> The second consists of stress tests which are brute-force checking thread
> safety. They might take up to several minutes to execute and preferably
> should be run in semi-manual mode yearly or something like that:
> 
>     void propertiesStressTest();
>     void graphStressTest();

Hm... at build time, an idea would be to just use a C++ preprocessor 
macro to control which tests are activated as slots:

#ifdef SOME_CMAKE_INJECTED_PARAM
#define INTEGRATIONTESTS private slots
#else
#define INTEGRATIONTESTS private
#endif
// analogous switch for the stress tests, with its own CMake-injected parameter
#ifdef SOME_OTHER_CMAKE_INJECTED_PARAM
#define STRESSTESTS private slots
#else
#define STRESSTESTS private
#endif

INTEGRATIONTESTS:
    void testCreation();
    void testOrdering();
STRESSTESTS:
    void propertiesStressTest();
    void graphStressTest();

So the tests would always be built, but only the requested slots activated, as 
chosen at CMake configuration time. That does not help with selecting either of 
these test sets at runtime though; it only allows adding tests to the whole 
atomic unittest executable.

Another option would be to wrap all test methods with forwarding methods or 
initial guards, which would simply QSKIP tests not matching the current 
testset. Disadvantages: some coding overhead, and the test result listing would 
include the skipped tests, which might not be wanted.
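
A minimal sketch of such a guard, assuming the testset is picked via an 
environment variable (ACTIVE_TESTSET and GUARD_TESTSET are made-up names just 
to illustrate the idea; Qt 4 style QSKIP):

#include <QtTest>

// skip the surrounding test function unless the given set was selected
#define GUARD_TESTSET(set) \
    if (qgetenv("ACTIVE_TESTSET") != set) { \
        QSKIP("not in the currently selected testset", SkipSingle); \
    }

void KisNodeTest::propertiesStressTest()
{
    GUARD_TESTSET("stress");
    // ... actual stress test body ...
}

The runner would then set ACTIVE_TESTSET=stress (or similar) in the 
environment before invoking the executable.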

So separating the subsets into different test "executables" (hm, we need some 
proper agreed terms :) ) so far seems the only solution to me. Which might not 
be such a problem per se, as some code has to be written/moved anyway. 
Setup/teardown logic duplication would be a drawback, though; see the sketch 
below.
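
One way to keep the shared setup/teardown logic in one place could be a common 
base class; a rough sketch (class and helper names are made up, not actual 
Krita code):

#include <QtTest>

// shared fixture code, compiled into both test executables
class KisNodeTestBase : public QObject
{
    Q_OBJECT
protected:
    void createTestImage();   // hypothetical shared setup helper
    void cleanupTestImage();  // hypothetical shared teardown helper
};

// fast integration tests, first executable
class KisNodeTest : public KisNodeTestBase
{
    Q_OBJECT
private slots:
    void testCreation();
    void testOrdering();
};

// slow stress tests, second executable
class KisNodeStressTest : public KisNodeTestBase
{
    Q_OBJECT
private slots:
    void propertiesStressTest();
    void graphStressTest();
};

Each subclass would then get its own QTEST_MAIN() in its own .cpp file, so the 
two resulting executables could be scheduled independently by CTest.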
Anyone with another trick in mind?

> > > 2) Consequence of 1): from time to time we should manually check what
> > > exactly is wrong with the test. Is it a pixel drift or a real problem.
> > 
> > Do you happen to have a list of the tests where such external deps
> > influence the result? Or could the list be created from checking for
> > checkQImageExternal()?
> 
> Everything that uses TestUtil::checkQImageImpl(). It includes almost all
> tests written for the last couple of years.

Cheers
Friedrich


