Fwd: Automated Unit tests for the Future

Andrew Manson g.real.ate at gmail.com
Thu Aug 6 16:12:48 BST 2009

Hello Everyone, 

I am forwarding this email as suggested by "CyrilleB". I really think that 
this discussion should continue on kde-buildsystem, so please direct your 
replies to that mailing list.


----------  Forwarded Message  ----------

Subject: Automated Unit tests for the Future
Date: Thursday 06 August 2009
From: Andrew Manson <g.real.ate at gmail.com>
To: Kde-buildsystem at kde.org

Hi everyone, 

I'm a Marble developer and have just started to think seriously about unit 
tests and test-driven development, possibly a bit late in the game, but better 
late than never! I was surprised to see that we already had some unit tests in 
Marble, but what was worse was that the core developers (including myself) 
were "surprised to see" that more than 50% of the tests failed. This is why 
I've come to you lot, hopefully to spark some discussion and get some things 
done ;) 

The shocking thing about the above is that the core developers were completely 
unaware that the unit tests were failing, which I find a bit odd when there are 
some pretty good dashboard programs out there that can display and notify 
about the results of unit tests. I would really like to see one of these 
programs being used in a big way in KDE sooner rather than later, so that's why 
I spent most of yesterday chatting with someone who knows much more about 
this stuff than I do. Sorry in advance if this turns out to be a very long email.

Over the course of yesterday's discussions we identified two major "styles" 
of unit test dashboard system: 1) a distributed model, using something like 
CTest and CDash to display the results, or 2) a centralised model, where the 
build system is "part" of the *dedicated* unit test running system and the 
dashboard, like Buildbot and CruiseControl. Each has its merits, and 
hopefully we can discuss which one would be "more right" for KDE. 

For the centralised model we have a few organisational problems, mainly that 
we would need a (dedicated) server to be provided so that we could run the 
build and display the results. This may not be so difficult, because I know that 
"Dirk" has a system like this set up already, where the build results are 
displayed here: http://ktown.kde.org/~dirk/dashboard/ but this currently only 
displays the actual result of the build and does not include unit tests. 
Using this model to incorporate unit tests shouldn't be too hard, but it 
might cause an organisational nightmare for the sysadmins (who have a hard 
enough time already). On the other side of that, if we used a Buildbot system 
and some of the cool new buzzwords in computer science, like "distributed 
virtualised cloud computing", we could make a really cool system that would be 
able to check the build of KDE on Linux, Windows and Mac. This would be pretty 
cool but, like I said, an organisational nightmare. 

The other possibility is that we could use the distributed unit test reporting 
model that is currently implemented by the CTest and CDash system. This is 
favourable for a number of reasons: 
1) we are currently using CMake, so adding CTest support is *easy*! 
2) we don't need a centralised build system, because any time the unit tests 
are run on any computer the results are submitted to the CDash dashboard; 
3) to set up the CDash system we would only need to add a few CMake variables 
to our CMakeLists.txt files, and we would be submitting results to a database 
in no time; 
4) from what I hear, the kind people at http://www.cdash.org/CDash/index.php 
have already offered to host the CDash installation, so our sysadmins would be 
able to take it easy. 
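
Point 3 above can be sketched concretely. Assuming a stock CMake/CTest setup 
(the project name, test file name and dashboard server below are placeholders, 
not Marble's real configuration), it would look something like this:

```cmake
# --- Top-level CMakeLists.txt (additions) ------------------------------
# include(CTest) calls enable_testing() and defines the Experimental,
# Nightly and Continuous targets that drive dashboard submissions.
include(CTest)

# A hypothetical unit test registered with CTest.
add_executable(TestGeoData TestGeoData.cpp)
add_test(TestGeoData TestGeoData)

# --- CTestConfig.cmake (new file next to the top-level CMakeLists.txt) --
# Tells CTest where to submit results; the drop site is a placeholder.
set(CTEST_PROJECT_NAME "Marble")
set(CTEST_NIGHTLY_START_TIME "00:00:00 UTC")
set(CTEST_DROP_METHOD "http")
set(CTEST_DROP_SITE "my.cdash.org")
set(CTEST_DROP_LOCATION "/submit.php?project=Marble")
set(CTEST_DROP_SITE_CDASH TRUE)
```

After that, `make Experimental` (or `ctest -D Experimental`) configures, 
builds, runs the tests and submits the results to the dashboard.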
There are a lot of good points for using the CDash system, but there is one 
pretty big problem with it that may render our test results somewhat useless. 
The fact that we are now starting to use the QtTest system makes things very 
easy for us, but it means that each QtTest executable will be regarded as a 
single test by the CTest system. This is conceptually wrong, because each 
QtTest executable contains many sub-tests that can fail or pass independently 
of the executable as a whole. Currently the CTest system only creates a test 
result at the test-executable level, which means that the results may not 
give any direct information as to why a test failed. Some may say that this 
is only a small detail and that results at the executable level are "good 
enough", but if we are building a KDE-wide build system we should at least 
try to get it as close to perfect as we possibly can, or at least move in 
that direction. This problem could possibly be fixed with a patch to the 
CTest system, but that would require some effort by someone far smarter than 
myself ;) 
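
One possible workaround, short of patching CTest itself: a QtTest executable 
runs only a single test function when that function's name is passed as a 
command-line argument, so each sub-test can be registered as its own CTest 
entry. A sketch, with an invented executable and invented function names:

```cmake
# Hypothetical QtTest executable; the function names below would have to
# match the private slots declared in the test class.
add_executable(TestGeoData TestGeoData.cpp)

# Register one CTest test per QtTest function instead of one per binary.
# "TestGeoData parseFile" runs only the parseFile() slot, so CDash gets a
# separate pass/fail result for every sub-test.
foreach(subtest parseFile testBoundingBox testCoordinates)
  add_test(TestGeoData_${subtest} TestGeoData ${subtest})
endforeach(subtest)
```

The list of function names has to be maintained by hand (or generated by 
parsing the source), which is exactly the sort of bookkeeping a proper CTest 
patch would make unnecessary.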

Sorry again for the really long email, but I think that this really needs to 
start getting discussed. I'm CCing this email to kde-devel and kde-quality so 
that we can get as many people into the discussion as possible. I personally 
believe that this discussion should be on the kde-buildsystem mailing list, 
so let's try to keep the discussions there (feel free to correct me on this one). 

Happy coding! 

Andrew Manson
Working for Marble Desktop Globe http://edu.kde.org/marble/
Implementing an OSM Annotation layer http://tinyurl.com/OSMMarble
Blog http://realate.blogspot.com

