The state of the unit tests

Esben Mose Hansen kde at mosehansen.dk
Sun May 30 09:31:45 UTC 2010


Hi,

I ran "make test" in kdevplatform+kdevelop in master today. Not exactly a 
"clean, every test passed" experience :)


56% tests passed, 8 tests failed out of 18

Total Test time (real) = 528.22 sec

The following tests FAILED:
          1 - sublime-areaoperationtest (Failed)
          8 - sublime-toolviewtoolbartest (Failed)
         10 - shell-sessioncontrollertest (Failed)
         11 - embeddedfreetreetest (Failed)
         13 - dvcsTest (Failed)
         14 - vcsBlackBoxTest (Failed)
         15 - reloadtest (Failed)
         16 - kdevcvs-test (Failed)

Of those, 14 (vcsBlackBoxTest) hung, and 15 (reloadtest) required user 
interaction.

For kdevelop:

71% tests passed, 6 tests failed out of 21

Total Test time (real) = 106.02 sec

The following tests FAILED:
         11 - cmakecompliance (Failed)
         15 - cmakeduchaintest (Failed)
         16 - cmakeprojectvisitortest (Failed)
         18 - cmakeloadprojecttest (Failed)
         20 - gdbtest (Failed)
         21 - qtprinters (Failed)


In my humble opinion, failing unit tests, and especially completely broken 
unit tests, are worse than no unit tests at all, since they discourage 
further testing. Furthermore, when a test fails you have to check whether it 
is also failing on master to gauge whether you have just broken something.

Can we improve this situation? Some suggestions:

1. Disable the currently failing tests, and/or mark them as expected to 
fail.
2. Some sort of automatic identification of which commit breaks previously 
passing tests.
3. Some effort to make tests that depend on externalities (I'm guessing at 
least the vcs and gdb tests depend heavily on these) check for those 
externalities, and skip or report an expected failure when they are missing.
4. Some effort to actually fix the tests.
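For suggestions 1 and 3, CTest already has the needed knobs. A minimal 
sketch at the CMakeLists.txt level (the test names are taken from the 
failing lists above; the actual registration in our build files will of 
course look somewhat different):

```cmake
# Suggestion 3: only register the gdb-dependent test when gdb is
# actually installed, so a missing externality does not show up as a
# test failure on machines that lack it.
find_program(GDB_EXECUTABLE gdb)
if(GDB_EXECUTABLE)
  add_test(NAME gdbtest COMMAND gdbtest)
else()
  message(STATUS "gdb not found -- gdbtest will not be run")
endif()

# Suggestion 1: mark a known-broken test as expected to fail, so a
# fully "green" run is achievable again and new breakage stands out.
set_tests_properties(cmakeduchaintest PROPERTIES WILL_FAIL TRUE)
```

WILL_FAIL inverts the pass/fail result, so the test stays visible in the 
run and starts failing again (loudly) once someone fixes it and forgets to 
drop the property.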

Does anyone else think some of these would be good ideas?

-- 
Kind regards, Esben
