killing runners
Aaron J. Seigo
aseigo at kde.org
Mon Apr 28 04:23:33 CEST 2008
On Sunday 27 April 2008, Jordi Polo wrote:
> 2008/4/28 Aaron J. Seigo <aseigo at kde.org>:
> > On Sunday 27 April 2008, Jordi Polo wrote:
> > > Also a badly written runner can be killed in a critical section,
> > > leading to a held lock.
> >
> > so it's potentially dangerous and makes it more difficult to write
> > runners.
> >
> > what's the upside to this again?
>
> If we end up with a SearchContext that detach()es when deleting matches,
> for instance, we have the problem that when a new query comes in, we
> potentially still have old SearchContexts around. The global context is
> reset, and since the reference count is greater than 1, that will create
> a copy ...
>
> If we don't use detach, then since "term" must be provided to add a
> match, old matches will not be added to the new context. So the only
> problem I see is that, as we launch lots of runners for each typed
> letter, non-finishing runners can accumulate, fill the running slots and
> stall krunner.
that wasn't my question. you've successfully explained the problem, but not
how your proposed solution will work in the real world (e.g. have you
actually tried it?) nor how the downsides of potential deadlocks and making
it more difficult to write runners are offset by whatever improvements this
might bring.
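
to make the deadlock downside concrete, here's a minimal sketch of why
terminating a thread mid-critical-section is dangerous. the names are
hypothetical, this is not actual krunner code:

#include <QMutex>
#include <QThread>

QMutex g_contextMutex; // hypothetical lock shared by all runners

class BadRunner : public QThread
{
protected:
    void run()
    {
        g_contextMutex.lock();
        // ... long-running match generation ...
        // if terminate() lands here, the unlock() below never runs
        g_contextMutex.unlock();
    }
};

// on the next query:
//   runner->terminate();    // thread dies while holding the mutex
//   g_contextMutex.lock();  // every later runner blocks forever

QThread::terminate() gives run() no chance to clean up, so the lock stays
held for the life of the process.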
i very much understand the "making copies of SearchContext::Private will
happen whenever a new term is introduced" concept; what isn't provided here
is:
a) how expensive each of those copies is
b) how often they happen in real world practice
c) how much improvement such an "interrupt the running threads" system
would actually bring in this particular case
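
for reference, the copy-on-write pattern in question looks roughly like
this with Qt's implicit sharing. a hedged sketch only, the real
SearchContext::Private is of course different:

#include <QSharedData>
#include <QSharedDataPointer>
#include <QString>
#include <QStringList>

class SearchContextPrivate : public QSharedData
{
public:
    QString term;
    QStringList matches; // every queued match gets deep-copied on detach
};

class SearchContext
{
public:
    SearchContext() : d(new SearchContextPrivate) {}

    void setTerm(const QString &term)
    {
        // the non-const operator-> detaches here: if another thread still
        // holds a reference (ref count > 1), d and all its matches are
        // deep-copied before being modified
        d->term = term;
        d->matches.clear();
    }

private:
    QSharedDataPointer<SearchContextPrivate> d;
};

the cost in (a) is dominated by that deep copy of the matches list, which
is why (b), how often detaches actually happen per keystroke, matters so
much.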
what concerns me is that we have runners that evidently take far in excess
of 100ms to run even on fast modern processors. this would imply that they
are either woefully inefficient, disk bound, or held up by some other slow
mechanism (d-bus?).
what we need is profiling to find where the problems are, and then
experimenting to see whether scheduling improvements, preempting active but
obsolete runners, etc. actually improve things.
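
even something this crude would turn the hand-waving into numbers. a
sketch, assuming the 4.x-era AbstractRunner::match(SearchContext*)
signature; treat the exact header and method names as approximate:

#include <QDebug>
#include <QTime>
#include <plasma/abstractrunner.h>
#include <plasma/searchcontext.h>

// wrap one runner's match pass in a timer so we get real per-runner
// timings instead of guesses
void timedMatch(Plasma::AbstractRunner *runner,
                Plasma::SearchContext *context)
{
    QTime timer;
    timer.start();
    runner->match(context);
    qDebug() << runner->objectName() << "took" << timer.elapsed() << "ms";
}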
> > or rather: how much time are we spending waiting for useless runners
> > blocking proper matches?
>
> The problem is that we cannot know.
not absolutely, but statistically we can get a good feel for things.
if we take our current group of runners, how are we doing? without
measuring that, it's all just hand-waving and guesswork. we might fix the
"unnecessary runs" problem after much effort only to discover that we still
have the same problems because they actually come from somewhere else.
> We cannot know how many runners will
> be waiting in a d-bus transaction with a program or in a network
> operation. We cannot prevent them, as they may be what the user really
> wanted anyway.
if they are deadlocked in a d-bus transaction, we won't be able to stop
them anyway, right?
> I don't know if there is a way to provide a maximum reply time to dbus (I
there is, but that would be per-runner, as it is per-d-bus call (see the
sketch below)
> insist on dbus because the xesam runner seems to hang waiting on that
> when I stress krunner; that will need more testing though)
that runner, or the service it relies on, needs to be fixed then. this is why
we disabled the search runner in 4.0, btw: strigi-over-dbus is very slow.
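
for completeness, the per-d-bus-call timeout i mentioned above would look
something like this. a sketch only; the service, path, interface and
method names are invented for illustration:

#include <QtDBus>
#include <QString>

QDBusMessage queryWithTimeout(const QString &term)
{
    QDBusMessage query = QDBusMessage::createMethodCall(
        "org.example.SearchService", "/search",
        "org.example.Search", "query");
    query << term;

    // block for at most 100 ms; on timeout an error reply comes back
    // instead of the calling thread hanging indefinitely
    return QDBusConnection::sessionBus().call(query, QDBus::Block, 100);
}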
so ... assuming that we can sometimes interrupt threads, at best we still
need ways to schedule runners that behave poorly. this is probably only
possible by watching their performance at runtime and reacting accordingly.
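
as a strawman for "watching their performance at runtime", this is the
kind of bookkeeping i mean. entirely hypothetical: the names, the moving
average and the 100ms threshold are all invented:

#include <QHash>
#include <QString>

// keep a cheap moving average of each runner's match time and
// deprioritize the chronically slow ones
class RunnerScheduler
{
public:
    void recordRun(const QString &runnerId, int elapsedMs)
    {
        int &avg = m_averageMs[runnerId]; // default-constructed to 0
        avg = (avg == 0) ? elapsedMs : (avg * 3 + elapsedMs) / 4;
    }

    bool shouldRunFirst(const QString &runnerId) const
    {
        // fast runners go in the first batch; slow ones wait for a
        // free slot
        return m_averageMs.value(runnerId, 0) < 100;
    }

private:
    QHash<QString, int> m_averageMs;
};

recordRun() does the measuring; shouldRunFirst() is where whatever
scheduling policy we settle on would hook in.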
--
Aaron J. Seigo
humru othro a kohnu se
GPG Fingerprint: 8B8B 2209 0C6F 7C47 B1EA EE75 D6B7 2EB1 A7F1 DB43
KDE core developer sponsored by Trolltech