Testing and testability in KDE PIM

Daniel Vrátil dvratil at kde.org
Thu Nov 1 11:36:12 GMT 2018


Hi Denis,

On Wednesday, 31 October 2018 14:07:39 CET Denis Kurz wrote:
> Today's the perfect date to bring up a scary topic: Testing!

whooooOOOOOoooOOOOOooOOOOOOOOoo :)

> 
> Dan commented this on a recent Diff:
> > And you are right, we should have more tests to cover this
> 
> I thought that, too, just some days ago, when I tried to get my head around
> Akonadi's FetchHelper. I felt that changing anything there, even apparent
> style changes, might break things. After all, I don't have a very deep
> understanding of the context this class is used in and methods are quite
> long: fetchItems is only the tip of the iceberg, with its 340 lines.

FetchHelper is complicated, but it's not /that/ bad. Lack of documentation is 
likely part of the problem too.

> I assumed that tests might help, and was confirmed by introductory part of
> the book [1] I then started to read. Let's assume for the rest of this mail
> that that book keeps me motivated to actually do something about test
> coverage in KDE PIM.
> 
> Here are a few questions that came up so far:
> 
> * How are chances to get patches through review that merely improve
> testability, without introducing actual tests? Note that improving
> testability can mean more code without adding user-facing features, and it
> might add a layer of indirection or two where it hasn't been needed so far.
> I consider testability as a feature, though.
> 
> Rationale: I wouldn't want to put time into writing tests against an
> interface that is never coming because a patch I'm depending on is not
> accepted. I assume chances are rather high, as D15727 seems to get
> accepted, too, and I haven't seen a Diff so far that was rejected. Just
> want to be extra sure.

There's no reason not to accept patches that only shuffle code around without 
introducing new functionality, as long as it leads to something: better 
readability, cleaner and easier-to-understand code, better testability, ....

So the chances of your changes getting accepted are basically 100%.

> I will try to refactor as little as possible as long as there are no tests
> to lead the way, as preached e.g. by Fowler [2] and Feathers [3]. When the
> time comes, the above question also applies to refactoring, too: How are
> chances to land mere refactoring changes, like extracting a small method
> from a larger one just to give it a name; giving classes, variables etc.
> meaningful names; port to newer C++ features like moving declarations to
> "const auto", nullptr etc.

Same as above. Making the code cleaner and easier to read is an essential part 
of development, so why not.

Regarding "auto", I'm not a huge fan of just going blindly in and replacing 
everything with "auto"  - makes sense to do so when it improves readability 
(like replacing the long iterator type names with "auto"), but just changing 
QString to auto everywhere doesn't make sense, and doesn't necessarily 
contribute to a better code.
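
To illustrate what I mean (just a made-up snippet, not taken from any actual 
KDE PIM code):

#include <QHash>
#include <QString>

void example(const QHash<QString, QString> &settings)
{
    // Here "auto" is a win: the full const_iterator type name is long
    // and adds no information.
    for (auto it = settings.constBegin(), end = settings.constEnd(); it != end; ++it) {
        // ...
    }

    // Here spelling out the type is clearer - "auto" would only hide
    // that we are dealing with a plain QString.
    const QString name = settings.value(QStringLiteral("name"));
}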

Also, we are unfortunately stuck with C++14, so the cool stuff from C++17 that 
would make a ton of sense for improving code quality is unavailable to us. 
Some of the limitations also come from Qt (like QScopedPointer not being 
movable, which annoys the hell out of me).
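
To show what I mean by the QScopedPointer limitation (a minimal sketch, "Job" 
is just a placeholder type):

#include <QScopedPointer>
#include <memory>

struct Job { /* ... */ };

std::unique_ptr<Job> makeJob()
{
    return std::make_unique<Job>();    // fine: unique_ptr is movable
}

// The QScopedPointer equivalent does not compile, because QScopedPointer
// is neither copyable nor movable:
//
// QScopedPointer<Job> makeJobQt()
// {
//     QScopedPointer<Job> job(new Job);
//     return job;                     // error: no move constructor
// }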

> * There seem to be several studies that show that, typically, "90% of the
> bugs are found in 20% of the code". This quote from Roy is followed by the
> advice to ask the team to point to those 20%. So: Can you identify any
> parts of the code that might benefit from testing and refactoring more than
> others?

Well, if we knew what's broken, we'd have fixed it already :-)) 

I don't know if there's a general area that's more broken than the rest; the 
majority of the bugs we are facing right now come from the asynchronous nature 
of the overall solution, which cannot be tested with unit tests, only with 
integration tests.

> * The other way around: Since I'm still an absolute beginner in testing, can
> you point to components that might not be too hard to refactor, but would
> still benefit greatly? A refactoring and testing junior job, if you will.

I think a good start would be the Handlers inside the Akonadi Server - 
handlers receive commands from clients, process them and generate responses 
and notifications. Some of them are already tested, but not all, and maybe not 
all cases are covered. They are also fairly isolated, so not much refactoring 
is needed in that regard.
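
To give an idea of the shape such a test could take, here is a purely 
self-contained sketch - FakeConnection, ListCommand and ListHandler are 
made-up stand-ins, the real classes in the server look different:

#include <QObject>
#include <QTest>
#include <QVector>
#include <QByteArray>

struct ListCommand { qint64 collectionId = -1; };

// Records responses instead of writing them to a real client socket.
class FakeConnection
{
public:
    void sendResponse(const QByteArray &response) { m_responses.push_back(response); }
    QVector<QByteArray> sentResponses() const { return m_responses; }
private:
    QVector<QByteArray> m_responses;
};

// Stand-in for a server-side handler: parse the command, emit a response.
class ListHandler
{
public:
    bool parseAndProcess(const ListCommand &cmd, FakeConnection *conn)
    {
        if (cmd.collectionId < 0) {
            return false;
        }
        conn->sendResponse("LIST " + QByteArray::number(cmd.collectionId));
        return true;
    }
};

class HandlerTest : public QObject
{
    Q_OBJECT
private Q_SLOTS:
    void testListResponse()
    {
        FakeConnection connection;
        ListCommand cmd;
        cmd.collectionId = 42;

        ListHandler handler;
        QVERIFY(handler.parseAndProcess(cmd, &connection));
        QCOMPARE(connection.sentResponses().size(), 1);
    }
};

QTEST_GUILESS_MAIN(HandlerTest)
#include "handlertest.moc"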

Another place which is basically untested (because it is hard to do so), yet 
critical, is the stuff inside the agentbase directory - AgentBase and 
ResourceBase are the base classes for implementing Resources: they schedule 
changes, call the implementation and put data coming from the implementation 
into the Akonadi Server. Of all the stuff in the agentbase directory, only 
ResourceScheduler is really tested. But testing those parts is very tricky 
because there are a lot of places where calls to Akonadi happen, and we 
currently don't have any means to mock the entire Akonadi (short of running 
an isolated Akonadi server instance like we do in the integration tests).

This ties in with another long-term plan of introducing an "Akonadi 
Interface": an interface where you call iface->fetchItem(...) and get back 
some "future" (or pass in a callback) to be notified when the item has 
arrived. The entire code would then be ported to this interface, so that in 
tests it can be mocked by a fake implementation. This is basically what 
Zanshin does, and it works very well for them, as they can mock the entire 
Akonadi implementation for tests.
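
Roughly something like this (a sketch only - the names and signatures are 
made up for illustration, nothing like this exists in Akonadi yet):

#include <QtGlobal>
#include <QByteArray>
#include <QHash>
#include <functional>

struct Item { qint64 id = -1; QByteArray payload; };

class AkonadiInterface
{
public:
    virtual ~AkonadiInterface() = default;
    // Asynchronous fetch: the callback is invoked once the item is available.
    virtual void fetchItem(qint64 id, std::function<void(const Item &)> callback) = 0;
};

// The production implementation would talk to the real Akonadi server;
// for tests, a fake implementation just serves items from memory:
class FakeAkonadi : public AkonadiInterface
{
public:
    void addItem(const Item &item) { m_items.insert(item.id, item); }

    void fetchItem(qint64 id, std::function<void(const Item &)> callback) override
    {
        // Call back immediately; a fancier fake could use a queued
        // invocation to mimic the asynchronous behaviour.
        callback(m_items.value(id));
    }

private:
    QHash<qint64, Item> m_items;
};

Code built on top of such an interface only ever talks to AkonadiInterface, so 
a test can inject FakeAkonadi and never needs a running server.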

> 
> * I figured so far that in KDE PIM projects, "autotests" are meant to be
> unit tests (testing a unit of work in isolation; no configuration or
> interaction needed), while "tests" are more like "integration tests". Is
> that correct?

Weeell, yes and no: "autotests" are tests that can be executed without manual 
interaction (e.g. on the CI), but they contain both unit tests (testing small, 
isolated components) as well as integration tests (in Akonadi's case that 
means starting an actual Akonadi Server and some agents; the tests are then 
basically Akonadi clients that make changes and observe that all the affected 
components did their job).

The "tests" folder contains utilities useful for manual testing, but they 
could also qualify as debugging tools - something you write for yourself when 
developing a new feature and you want to quickly see how it works without 
having to patch e.g. KMail.


> Right now, I'm especially interested in Akonadi, but components in
> KOrganizer or some of the libraries it uses would be interesting, too.

Certainly: our test coverage is sub-par nearly everywhere :(

Cheers,
Daniel

> 
> Happy Halloween,
> Denis
> 
> [1] Roy Osherove: "The Art of Unit Testing"
> [2] Martin Fowler: "Refactoring: Improving the Design of Existing Code"
> [3] Michael Feathers: "Working Effectively with Legacy Code"


-- 
Daniel Vrátil
www.dvratil.cz | dvratil at kde.org
IRC: dvratil on Freenode (#kde, #kontact, #akonadi, #fedora-kde)

GPG Key: 0x4D69557AECB13683
Fingerprint: 0ABD FA55 A4E6 BEA9 9A83 EA97 4D69 557A ECB1 3683

