Desktop memory usage
John Berthels
jjberthels at gmail.com
Mon Sep 11 11:22:46 CEST 2006
On 10/09/06, Lubos Lunak <l.lunak at suse.cz> wrote:
> I finally found some time to put together something that I had measured
> quite a while ago, and I think the results are interesting. But before I
> post this somewhere publicly I'd appreciate it if somebody could at least
> quickly check it. I'd hate to see this trashed by people just because of
> some silly mistake that I've managed to overlook. It still needs some
> final polishing, but otherwise I consider it complete for now, so if you
> see anything wrong, missing or unclear in it, please tell me.
Hi.
This looks like a really interesting piece of work.
Whilst you do say that "All basic tests that follow are measured
against this number unless explicitly stated otherwise", it might be
clearer to point out explicitly that this is the 'diff' column in the
later results.
The only possible systematic issue I can think of is that, as you say,
these measurements are of a system that isn't under memory pressure.
So the memory usage will include discardable pages (e.g. one-time init
code) rather than reflect the working set size for the desktop. That's
the number that matters most for "does this feel slow on a 128M box",
I would say.
This is hard to measure, but I guess one way might be to:
a - bring up the desktop
b - apply heavy memory pressure to swap pretty much everything out (a
crude way of doing this is sketched below)
c - remove the memory pressure and "use the desktop naturally" to swap
the needed bits back in
'c' is of course the difficult bit. However, the numbers you've got
are still useful in their own right and of interest, I would say.
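
For step 'b', something as dumb as the sketch below might do: map a chunk
of anonymous memory roughly the size of physical RAM and touch every page,
so the desktop's pages get pushed out to swap. The 1 GiB figure is only an
assumption for illustration and would need adjusting to the test box; this
is a rough sketch rather than a tuned tool.

    import mmap

    # Create memory pressure by touching one byte on every page of a large
    # anonymous mapping.  PRESSURE_BYTES is an assumed figure; set it to
    # roughly the physical RAM of the machine under test.
    PRESSURE_BYTES = 1024 * 1024 * 1024   # 1 GiB, adjust as needed
    PAGE = mmap.PAGESIZE

    buf = mmap.mmap(-1, PRESSURE_BYTES)   # anonymous, private mapping
    for off in range(0, PRESSURE_BYTES, PAGE):
        buf[off] = 1                      # touch the page so it becomes resident
    input("Pressure applied; press Enter to release it...")
    buf.close()
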
You would also need an as-yet-unwritten version of exmap which uses
much less memory, since exmap's own footprint would otherwise skew the
numbers under memory pressure. This version will fully separate the
collection/analysis phases, since it's really the analysis/display part
which requires a lot of memory.
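
Just to illustrate the shape of that split (this is not exmap's actual
interface, only an assumed stand-in reading /proc/<pid>/smaps), the
collection pass could be nothing more than a dump of the raw per-process
data to disk, with all the expensive summing and display done later from
the snapshot:

    import glob, os, time

    # Minimal, low-footprint collection pass: copy every readable
    # /proc/<pid>/smaps file into a timestamped directory.  The
    # analysis/display stage can then work from the snapshot offline,
    # without adding memory pressure while the measurements are taken.
    outdir = time.strftime("smaps-snapshot-%Y%m%d-%H%M%S")
    os.makedirs(outdir)
    for path in glob.glob("/proc/[0-9]*/smaps"):
        pid = path.split("/")[2]
        try:
            with open(path) as src:
                data = src.read()
        except OSError:                   # process exited or not readable
            continue
        with open(os.path.join(outdir, pid), "w") as dst:
            dst.write(data)
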
The most interesting point for me is that in the desktop environment,
the old Unix/Linux axiom that "processes are cheap" is clearly false.
The question is whether effort is better spent on adding complexity to
reduce the number of processes (accepting that 'processes are expensive')
or on infrastructure work to reduce the per-process cost.
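
To put a rough number on "per-process cost", one crude measure is the sum
of Private_Clean and Private_Dirty from /proc/<pid>/smaps: the memory that
would be freed if that single process went away. Shared pages are left out
of this sketch on purpose; apportioning their cost fairly between processes
is exactly the problem exmap is there to solve.

    import sys

    def private_kb(pid):
        # Sum the unshared pages of one process, in kB, as reported by
        # /proc/<pid>/smaps.  Shared mappings are deliberately ignored.
        total = 0
        with open("/proc/%s/smaps" % pid) as f:
            for line in f:
                if line.startswith(("Private_Clean:", "Private_Dirty:")):
                    total += int(line.split()[1])   # field is already in kB
        return total

    for pid in sys.argv[1:]:
        print(pid, private_kb(pid), "kB private")

Run it as e.g. "python private_kb.py $(pidof konqueror)" (the script name
is just an example) to compare a few of the desktop processes.
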
regards,
jb