2015-02-05 23:00 GMT+01:00 Friedrich W. H. Kossebau <kossebau@kde.org>:
> On Thursday, 5 February 2015, at 12:47:22, Dmitry Kazakov wrote:
<span class="">> > > There is one important point: *all* the test sets should be compiled<br>
> > > when<br>
> > > KDE_BUILD_TESTS==on<br>
> ><br>
> > Where compiled!=run? Makes sense, I agree. The whole idea of CI is to<br>
> > cover as<br>
> > much as possible, so we do not need to do things manually :)<br>
><br>
> Yes. Build all, but be able to choose what to run. Ideally I should be able<br>
> to run two subsets of tests: 1) for integration testing; 2) more<br>
> complicated for yearly semi-manual regression-testing.<br>
<br>
</span>s/yearly semi-manual regression-testing/pre-release regression testing/ :)<br>
<span class=""><br>
> > > > 5) It would also be nice to be able to choose different subtests of
> > > > one executable to be in different subsets. Though I am not sure
> > > > whether it is doable.
> > >
> > > Could you give an example of what you mean here? "executable" is not
> > > clear, as well as "to be in different subsets"?
> >
> > See e.g. KisNodeTest.
> >
> > It has two parts. The first consists of "integration" tests, which can
> > be run very fast:
> >
> > void testCreation();
> > void testOrdering();
> > void testSetDirty();
> > void testChildNodes();
> > void testDirtyRegion();
> >
> > The second consists of stress tests which brute-force check thread
> > safety. They might take up to several minutes to execute and preferably
> > should be run in semi-manual mode yearly or something like that:
> >
> > void propertiesStressTest();
> > void graphStressTest();
>
> Hm... at build time an idea would have been to just use a C++ preprocessor
> macro to control which test methods are active slots:
>
> #ifdef SOME_CMAKE_INJECTED_PARAM
> #define INTEGRATIONTESTS public slots
> #else
> #define INTEGRATIONTESTS public
> #endif
>
> #ifdef SOME_OTHER_CMAKE_INJECTED_PARAM
> #define STRESSTESTS public slots
> #else
> #define STRESSTESTS public
> #endif
>
> INTEGRATIONTESTS:
> void testCreation();
> void testOrdering();
> STRESSTESTS:
> void propertiesStressTest();
> void graphStressTest();
>
> So the tests would always be built, but only the requested slots would be
> activated at cmake configuration time.
> But that does not help with picking either set of these tests at runtime;
> it only allows adding tests to the whole atomic unittest executable.
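
For the cmake side, I suppose the injected define could simply come from an
option, roughly like this (untested; the option name is made up, and this
assumes moc gets to see the same defines as the compiler):

    # Hypothetical wiring for the preprocessor trick quoted above
    option(BUILD_INTEGRATION_TESTS "Activate the integration test slots" ON)
    if(BUILD_INTEGRATION_TESTS)
      add_definitions(-DSOME_CMAKE_INJECTED_PARAM)
    endif()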

> Another option would be to wrap all test methods with forwarding methods
> or initial guards, which would simply QSKIP tests not matching the current
> testset. Disadvantages: some coding overhead, and the test result listing
> would include the skipped tests, which might not be wanted.
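
For the guard variant I imagine something like this at the top of each test
slot (untested sketch; the TESTSET environment variable is made up, and with
Qt5's QTestLib QSKIP takes only the message, without the mode argument):

    void KisNodeTest::propertiesStressTest()
    {
        // run only when the "stress" testset was requested
        if (qgetenv("TESTSET").toLower() != "stress")
            QSKIP("not in the requested testset", SkipSingle);

        // ... the actual brute-force thread-safety checking ...
    }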

> So separating the subsets into different test "executables" (hm, we need
> some properly agreed terms :) ) so far seems the only solution to me.
> Which might not be such a problem per se, some code has to be
> written/moved anyway. Setup/teardown logic duplication would be a
> downside, though.
> Anyone with another clever idea?

No idea about the finer granularity :(

But, re. running only certain sets of tests, perhaps we could label tests
using

    set_tests_properties(SomeTest SomeOtherTest PROPERTIES LABELS "stress")
    set_tests_properties(FooTest BarTest PROPERTIES LABELS "integration")

and then pass -L / -LE to ctest using e.g.

    make test ARGS="-LE integration"

The above example would exclude integration tests from the run.
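
Spelled out for the KisNodeTest split discussed above, the CMakeLists.txt
side could look roughly like this (untested; the two target names are made
up):

    # hypothetically the fast slots and the stress slots end up in two
    # separate test executables
    add_test(NAME KisNodeTest COMMAND kis_node_test)
    add_test(NAME KisNodeStressTest COMMAND kis_node_stress_test)

    set_tests_properties(KisNodeTest PROPERTIES LABELS "integration")
    set_tests_properties(KisNodeStressTest PROPERTIES LABELS "stress")

Calling ctest directly also works, e.g. "ctest -L stress" would run only
the stress subset.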
Regards,
Elvis

> > > > 2) Consequence of 1): from time to time we should manually check
> > > > what exactly is wrong with the test. Is it a pixel drift or a real
> > > > problem?
> > >
> > > Do you happen to have a list of the tests where such external deps
> > > influence the result? Or could the list be created from checking for
> > > checkQImageExternal()?
> >
> > Everything that uses TestUtil::checkQImageImpl(). It includes almost all
> > tests written for the last couple of years.
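
For reference, I guess the "pixel drift" tolerance boils down to something
like this (made-up helper, not the actual TestUtil code):

    #include <QImage>
    #include <QColor>

    // treat images as equal if no channel differs by more than 'fuzziness',
    // so slight drift from external deps passes while real regressions fail
    bool fuzzyCompareQImage(const QImage &a, const QImage &b, int fuzziness)
    {
        if (a.size() != b.size())
            return false;

        for (int y = 0; y < a.height(); ++y) {
            for (int x = 0; x < a.width(); ++x) {
                const QRgb pa = a.pixel(x, y);
                const QRgb pb = b.pixel(x, y);
                if (qAbs(qRed(pa) - qRed(pb)) > fuzziness ||
                    qAbs(qGreen(pa) - qGreen(pb)) > fuzziness ||
                    qAbs(qBlue(pa) - qBlue(pb)) > fuzziness)
                    return false;
            }
        }
        return true;
    }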

> Cheers
> Friedrich