KRunner config dialog

Ryan P. Bitanga ryan.bitanga at gmail.com
Wed May 7 12:54:54 CEST 2008


2008/5/7 Jordi Polo <mumismo at gmail.com>:
>
> >
> >
> > >
> > > > Just a little question: what did we gain by sharing the SearchContext?
> > > > The point of performMatch and local SearchContexts was to reduce the
> > > > locks on the SearchContext. Currently, we still lock every time we add
> > > > a match, and DataPolicy is useless.
> > >
> > > We also locked previously when calling addMatchesTo(). The main benefit
> > > is that local SearchContexts share the data instead of copying it, so
> > > we have effectively _one_ shared list instead of a global list where
> > > multiple local lists copied their data.
> > >
> >
> > My point was that this was the original setup before I added performMatch
> > to reduce the locks, as Aaron suggested last year. Back then, we added
> > matches directly to a global SearchContext.
>
> I don't see much of a problem with the locks. Adding a match to the context
> is fast enough, and that's the only operation that can create contention.
> But I guess you and Aaron had good reasons to modify it back then, and those
> reasons may still hold true... What were they?
>
I don't actually remember. Haha. But I think the goal was just to
reduce contention. You could try reading the archives from around
November or December last year. We never really profiled the benefit
of using local contexts, so I think it would be great if someone (like
you :) ) could give a detailed comparison; we might be able to get rid
of performMatch or find another way to improve KRunner even more.
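
To make the comparison concrete, the two setups look roughly like this
(just a sketch; the class and member names here are illustrative, not
the actual Plasma API):

    #include <QList>
    #include <QMutex>
    #include <QMutexLocker>
    #include <QString>

    struct Match { QString text; };

    struct SharedContext {
        QMutex mutex;
        QList<Match> matches;

        // Shared setup: every runner locks once per match added,
        // so the mutex is the contention point.
        void addMatch(const Match &m) {
            QMutexLocker locker(&mutex);
            matches.append(m);
        }

        // Local setup: each runner fills a private list lock-free
        // and merges it in one go, so we lock once per runner
        // instead of once per match.
        void addMatches(const QList<Match> &local) {
            QMutexLocker locker(&mutex);
            matches += local;
        }
    };

Profiling would tell us whether the per-match locking actually costs
enough in practice to justify keeping the second pattern.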

And I never got to comment on this, but the reason we don't have
abort() code in AbstractRunner is that for some slow runners 99% of
the time will be spent waiting for the reply to a single function call
(like the old search runner). Putting if (aborting()) return; snippets
in the code only increased complexity for runner authors while offering
minimal benefit (saving maybe 0.5% of execution time).
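
By way of illustration, the pattern would look something like this
(a sketch with made-up names: queryRemoteService, Reply, Result stand
in for whatever the runner actually calls):

    #include <QList>
    #include <QString>

    // Stubs standing in for a real remote service; only the shape
    // matters here.
    struct Result { QString text; };
    struct Reply { QList<Result> results; };
    Reply queryRemoteService(const QString &term); // blocks for seconds

    bool aborting(); // as discussed for AbstractRunner

    void slowMatch(const QString &term)
    {
        // ~99% of the runtime is spent blocked inside this one call,
        // where no aborting() check can reach:
        const Reply reply = queryRemoteService(term);

        // Checks here only skip the cheap post-processing, so they
        // save a fraction of a percent while cluttering the code:
        foreach (const Result &r, reply.results) {
            if (aborting())
                return;
            // ... convert r into a match ...
        }
    }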

The best way to handle a slow runner is to delay its execution by
keeping it on the ThreadWeaver queue. That's controlled by the queue
policy, where I gave a default limit of 2 threads for slow runners.
I'll change that limit to be based on core count / 2 + 1. :)
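
Something along these lines (a sketch, assuming ThreadWeaver's stock
cap-based ResourceRestrictionPolicy; the helper name and the exact
header paths are mine, not necessarily what's in the tree):

    #include <QThread>
    #include <threadweaver/Job.h>
    #include <threadweaver/ResourceRestrictionPolicy.h>

    // Cap concurrent slow-runner jobs at (cores / 2) + 1 instead of
    // the hardcoded 2.
    static ThreadWeaver::ResourceRestrictionPolicy *slowRunnerPolicy()
    {
        static ThreadWeaver::ResourceRestrictionPolicy policy(
            QThread::idealThreadCount() / 2 + 1);
        return &policy;
    }

    // Then, when creating a job for a runner flagged as slow:
    //     job->assignQueuePolicy(slowRunnerPolicy());

On a single-core machine that still allows one slow runner at a time,
and it scales up on multi-core boxes.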

Cheers,
Ryan
