Automated Unit tests for the Future
Benjamin Meyer
ben at meyerhome.net
Thu Aug 6 21:24:41 BST 2009
To go with this discussion, I thought I would toss this out there;
maybe you guys will find it useful in the future as apps switch to git
one by one.
In Arora's source (http://github.com/Arora/arora/tree/master) there is
a git_hooks directory (it is up to the user to point .git/hooks at
this directory). One of the hooks is an autotest hook. On my local
machine, if I make a commit that changes lineedit.cpp, the autotest
commit hook will compile and run arora/autotests/lineedit/ and make
sure there are no failures before allowing the commit to go through.
http://github.com/Arora/arora/blob/b181a799ac1671a4c853b505f746bc1bf3299770/git_hooks/pre-commit_autotest
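The idea is roughly the following (a simplified sketch, not the actual
hook from the URL above; the autotest layout and test binary names are
assumptions):

    #!/bin/sh
    # Simplified sketch of a pre-commit autotest hook; the real one is
    # at the URL above. For each staged source file, build and run the
    # matching autotest, and abort the commit on any failure.
    for file in $(git diff --cached --name-only); do
        base=$(basename "$file" .cpp)
        testdir="autotests/$base"
        if [ -d "$testdir" ]; then
            ( cd "$testdir" && qmake && make ) || exit 1
            "$testdir/tst_$base" || {
                echo "Autotest $testdir failed; aborting commit." >&2
                exit 1
            }
        fi
    done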
It will only run the autotests for the files I have changed, not every
autotest. Even with all of the commit hooks that Arora has, I do not
normally notice the delay of running them. Because this does not check
every autotest on every commit, failures can sneak in there, but this
simple check has stopped the vast majority of test failures from being
accidentally introduced. Not only that, but because I (the committer)
introduced the regression right then with that commit, I can typically
fix the failure much faster than if I noticed it months later.
This hook, combined with the compile hook, has resulted in Arora
nearly always building (minus two times where there were include
errors and it only built on Linux and Windows, but not Mac) and
usually all of the tests passing.
I do think that setting up a dashboard etc. is useful and am not
saying I am against it, just that spending ten minutes making a
standard kde_hooks directory that developers can use is also a good
idea.
To help with writing autotests, I wrote a tool, QAutotestGenerator,
that can create stub test files for you:
http://benjamin-meyer.blogspot.com/2007/11/auto-test-stub-generator.html
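The generated stub looks roughly like this (my approximation of the
output, not verbatim from the tool; class and slot names are
illustrative):

    // Approximate shape of a QAutotestGenerator stub: one private
    // slot per public function of the class under test. The generator
    // fills in the names; you fill in the checks.
    #include <QtTest/QtTest>

    class tst_LineEdit : public QObject
    {
        Q_OBJECT
    private slots:
        void setText();
        void clear();
    };

    void tst_LineEdit::setText()
    {
        QSKIP("Not implemented yet", SkipAll);
    }

    void tst_LineEdit::clear()
    {
        QSKIP("Not implemented yet", SkipAll);
    }

    QTEST_MAIN(tst_LineEdit)
    #include "tst_lineedit.moc"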
-Benjamin Meyer
On Aug 6, 2009, at 11:12 AM, Andrew Manson wrote:
> Hello Everyone,
>
> I am forwarding this email as suggested by "CyrilleB". I really
> think that this discussion should continue on kde-buildsystem, so
> please direct your replies to that mailing list.
>
> -Andrew
>
>
> ---------- Forwarded Message ----------
>
> Subject: Automated Unit tests for the Future
> Date: Thursday 06 August 2009
> From: Andrew Manson <g.real.ate at gmail.com>
> To: Kde-buildsystem at kde.org
>
> Hi everyone,
>
> I'm a Marble developer and have just started to think seriously
> about unit tests and test-driven development, possibly a bit late in
> the game, but better late than never! I was surprised to see that we
> already had some unit tests in Marble, but what was worse was that
> the core developers (including myself) were "surprised to see" that
> more than 50% of the tests failed. This is why I've come to you lot,
> hopefully to spark some discussion and get some things done ;)
>
> The shocking thing about ^^^ is that the core developers were
> completely unaware that the unit tests were failing, which I find a
> bit odd when there are some pretty good dashboard programs out there
> that can display and notify about the results of unit tests. I would
> really like to see one of these programs being used in a big way in
> KDE sooner rather than later, so that's why I spent most of
> yesterday chatting with someone who knows much more about this stuff
> than I do. Sorry in advance if this turns out to be a very long
> email.
>
> Over the course of yesterday's discussions we identified two major
> "styles" of unit test dashboard system: 1) a distributed model using
> something like CTest and CDash to display the results, or 2) a
> centralised model where the build system is "part" of the
> *dedicated* unit test running system and the dashboard, like
> Buildbot and CruiseControl. Each has its merits, and hopefully we
> can discuss which one would be "more right" for KDE.
>
> For the centralised model we have a few organisational problems,
> mainly that we would need a (dedicated) server to be provided so
> that we could run the build and display the results. This may not be
> so difficult, because I know that "Dirk" has a system like this set
> up already, where the build results are displayed here:
> http://ktown.kde.org/~dirk/dashboard/ but this currently only
> displays the actual result of the build and does not include unit
> tests. Using this model to incorporate unit tests shouldn't be too
> hard, but it might cause an organisational nightmare for the
> sysadmins (who have a hard enough time already). On the other side
> of that, if we used a buildbot system and some of the cool new
> buzzwords in computer science like "distributed virtualised cloud
> computing", we could make a really cool system that would be able to
> check the build of KDE on Linux, Windows and Mac. This would be
> pretty cool but, like I said, an organisational nightmare.
>
> The other possibility is that we could use the distributed unit test
> reporting model that is currently embodied by the CTest and CDash
> system. This is favorable for a number of reasons:
> 1) we are currently using CMake, so adding CTest support is *easy*!
> 2) we don't have to have a centralised build system, because any
> time the unit tests are run on any computer the results are
> submitted to the CDash dashboard
> 3) to set up the CDash system we would only need to add a few CMake
> variables to our CMakeLists.txt files, and we would be submitting
> results to a database in no time (see the sketch after this list)
> 4) from what I hear, the kind people at
> http://www.cdash.org/CDash/index.php have already offered to host
> the CDash installation, so our sysadmins would be able to take it
> easy.
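> To make point 3 concrete, a minimal sketch might look something like
> this (the project name and drop-site values are placeholders, not
> real KDE settings):
>
>     # Top-level CMakeLists.txt: enable CTest so that dashboard
>     # targets like "make Experimental" exist.
>     include(CTest)
>
>     # Register a QtTest executable as one CTest test.
>     add_test(TestLineEdit tst_lineedit)
>
>     # CTestConfig.cmake: tell CTest where to submit the results.
>     set(CTEST_PROJECT_NAME "Marble")
>     set(CTEST_DROP_METHOD "http")
>     set(CTEST_DROP_SITE "my.cdash.org")
>     set(CTEST_DROP_LOCATION "/submit.php?project=Marble")
>     set(CTEST_DROP_SITE_CDASH TRUE)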
> There are a lot of good points for using the CDash system, but there
> is one pretty big problem with it that may render our test results
> somewhat useless. The fact that we are now starting to use the
> QtTest system makes things very easy for us, but it means that each
> QtTest executable will be regarded as a single test by the CTest
> system. This is conceptually wrong, because each QtTest executable
> contains many sub-tests that can fail or pass independently of one
> another. Currently the CTest system only creates a test result at
> the test-executable level, which means that the results may not give
> any direct information as to why the test failed. Some may say that
> this is only a small detail and that results at the executable level
> are "good enough", but if we are building a KDE-wide build system we
> should at least try to get it as close to perfect as we possibly
> can, or at least move in that direction. This problem could possibly
> be fixed with a patch to the CTest system, but that would require
> some effort by someone far smarter than myself ;)
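> To see the granularity problem concretely, here is a minimal QtTest
> class (illustrative names, not taken from Marble): each private slot
> is an independent sub-test, but CTest sees only the one executable
> and records a single pass/fail for all of them.
>
>     #include <QtTest/QtTest>
>
>     class TestPlacemark : public QObject
>     {
>         Q_OBJECT
>     private slots:
>         void coordinates();  // sub-test 1: may pass
>         void name();         // sub-test 2: may fail independently
>     };
>
>     void TestPlacemark::coordinates()
>     {
>         QVERIFY(1 + 1 == 2);
>     }
>
>     void TestPlacemark::name()
>     {
>         // A deliberate failure: CTest would report the whole
>         // executable as failed without saying which slot it was.
>         QCOMPARE(QString("foo"), QString("bar"));
>     }
>
>     // One binary, one add_test() entry, one CTest result.
>     QTEST_MAIN(TestPlacemark)
>     #include "testplacemark.moc"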
>
> Sorry again for the really long email, but I think that this really
> needs to start getting discussed. I'm CCing this email to kde-devel
> and kde-quality so that we can get as many people into the
> discussion as possible. I personally believe that this discussion
> should be on the kde-buildsystem mailing list, so let's try to keep
> the discussions there (feel free to correct me on this one).
>
> Happy coding!
> -Andrew
>
> --
> Andrew Manson
> Working for Marble Desktop Globe http://edu.kde.org/marble/
> Implementing an OSM Annotation layer http://tinyurl.com/OSMMarble
> Blog http://realate.blogspot.com
>
> -------------------------------------------------------