Fwd: Re: Automatically Generated Keyboard Accelerators (replacing kaccelgen.h)

Mark Summerfield mark at qtrac.eu
Wed Jun 11 08:16:24 BST 2008

On 2008-06-11, Michael Pyne wrote:
> On Tuesday 10 June 2008, Mark Summerfield wrote:
> > I certainly know that my algorithm outperforms the existing one in all
> > the tests I've done (and uses an algorithm that is intrinsically
> > better), so I'm inclined to hope you'll use the one I've done, at least
> > for now?
> Mark,
> Most of the kdelibs guys understand what O(n) notation entails, including
> the fact that there are constants involved.
> Please understand though, just how astronomically bad O(n!) can be for code
> which always executes.
> O(120!)    -> C * 6.689 e +198
> O(120 ^ 3) -> C * 1.728 e +006
> That's 192 orders of magnitude slower.  It would never finish, and the
> application would never start up.
> *Now*, you've tested this code yourself, so perhaps you either got the
> algorithmic complexity wrong (i.e. it's really not that bad) or what you
> mentioned about m typically being much less than n holds true.  But if
> we're going to be putting this into kdelibs we owe it to ourselves to
> figure out what the worst plausible case is, and what the effect would be.
> I guess the question is, what is "n" in this discussion?  The number of
> actions?  If so you should probably either refine the algorithm or at the
> very least fallback to the O(n) version with a threshold value of n (say 50
> or so, or whatever your testing shows to be possible).
> Thanks for the patch, I haven't had time to test it yet but I'm looking
> forward to better default accelerators in KDE!


"n" is the number of letters in the alphabet (of characters that are
legal accelerators) or the number of unique letters from all the strings
concerned (that are in the alphabet) whichever is the lower. So if the
alphabet is 0-9A-Z the biggest n possible is 36 no matter how many
strings there are.
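
Roughly, computing that bound looks like this (a sketch only, with
made-up names, not the actual patch):

```cpp
// Sketch: n is the number of legal accelerator characters that actually
// occur in the strings, so it can never exceed the alphabet size
// (36 for 0-9A-Z), however many strings there are.
#include <cctype>
#include <set>
#include <string>
#include <vector>

int effectiveN(const std::vector<std::string>& strings,
               const std::string& alphabet)
{
    std::set<char> seen;
    for (const std::string& s : strings) {
        for (char c : s) {
            const char u =
                static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
            if (alphabet.find(u) != std::string::npos)
                seen.insert(u);
        }
    }
    return static_cast<int>(seen.size());
}
```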

Sure, my algorithm is completely impractical---but only in theory :-)

In practice I deal with the n! issue in two ways: first, by ensuring
m < n in most cases (i.e., only considering the "best" characters from
each string and, if that fails, adding just one more at a time from the
other characters); and second, by limiting how much work the algorithm
is allowed to do (the depth variable). If the algorithm hits the depth
limit it stops and uses the best solution it has found up to that
point. By setting depth to 6000 I've found on my 2400MHz machine that
the runtime is always < 20 msec, which is plenty of time in the context
of a popup menu---but at the price of not always giving optimal results
if the depth limit was hit.
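
In outline the search looks something like this (my names, a sketch of
the idea only, not the real code): every candidate character tried costs
one unit of the depth budget, and when the budget runs out the best
assignment seen so far wins.

```cpp
// Sketch of a depth-limited backtracking search: each string has a list
// of candidate accelerator characters, best first; trying a candidate
// costs one unit of the "depth" budget; when the budget is exhausted we
// keep the best (largest) partial assignment found so far.
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

struct AccelSearch {
    std::vector<std::vector<char>> candidates;  // per string, best first
    int budget;                                 // the "depth" limit
    std::vector<char> best;                     // '\0' = unaccelerated
    int bestCount = -1;

    AccelSearch(std::vector<std::vector<char>> c, int depth)
        : candidates(std::move(c)), budget(depth),
          best(candidates.size(), '\0') {}

    void solve() {
        std::vector<char> current(candidates.size(), '\0');
        std::set<char> used;
        run(0, current, used, 0);
    }

    void run(std::size_t i, std::vector<char>& current,
             std::set<char>& used, int count)
    {
        if (count > bestCount) {   // remember the best solution so far
            bestCount = count;
            best = current;
        }
        if (i == candidates.size() || budget <= 0)
            return;
        for (char c : candidates[i]) {
            if (--budget <= 0)     // depth limit hit: stop searching
                return;
            if (used.count(c))
                continue;
            used.insert(c);
            current[i] = c;
            run(i + 1, current, used, count + 1);
            current[i] = '\0';
            used.erase(c);
        }
        run(i + 1, current, used, count);  // leave string i unaccelerated
    }
};
```

For example, with candidates for "File", "Find", "Edit" and a generous
budget, all three strings get distinct accelerators.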

Using my algorithm there is no need to fall back to the old one. In
tests (e.g., of all the long OpenOffice Word Processor menus plus many
others), even if I reduce the depth to as low as 16 my algorithm still
always equals or betters the current one, typically accelerating all,
or all but one or two, of the items, compared with the current one's
mostly accelerating just over half.

What I cannot answer is whether the O(n^3) algorithm could be made
faster than mine in practice, because that depends on the constant
costs. Basically the cost of any algorithm is K(n) + T(n), where K is
some constant setup cost and T is the runtime cost whose growth the
O(n) notation describes. Generally with time complexity we ignore K(n)
because it is completely swamped by T(n), but this case may be
different because n is so small, and big-O analysis is usually applied
when n is big.
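
To make that concrete with purely illustrative numbers (m = 6 is an
assumed value, not a measurement): for the full 0-9A-Z alphabet,
n^3 = 36^3 = 46656 steps, whereas restricting the search to m = 6
"best" candidates gives m! = 720---fewer steps despite the worse
asymptotic class.

```cpp
// Illustrative arithmetic only: compares m! against n^3 for the small
// values in play here, where the "worse" factorial can be cheaper.
#include <cstdint>

std::uint64_t factorial(int m)
{
    std::uint64_t f = 1;
    for (int i = 2; i <= m; ++i)
        f *= static_cast<std::uint64_t>(i);
    return f;
}

std::uint64_t cube(std::uint64_t n) { return n * n * n; }
```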

Clearly my T(n!) is much worse than T(n^3)---although in most cases it
is T(m!) with m < n, versus T(n^3). But what of K? For my algorithm I
do one pass over the alphabet and three passes over the strings to set
things up, which even for a 36-character alphabet and 40 strings is
effectively no cost at all. But I don't know the K for the O(n^3)
algorithm, and my friend seems to think that its cost could be quite
high. However, the only way to know for sure is to try an
implementation...

Mark Summerfield, Qtrac Ltd., www.qtrac.eu

More information about the kde-core-devel mailing list