Desktop memory usage
Lubos Lunak
l.lunak at suse.cz
Mon Sep 11 12:38:50 CEST 2006
On Monday 11 September 2006 11:22, John Berthels wrote:
> On 10/09/06, Lubos Lunak <l.lunak at suse.cz> wrote:
> > I finally found some time to put together something that I had measured
> > already quite a while ago. And I think the results are interesting. But
> > before I post this somewhere publicly I'd appreciate it if somebody could
> > at least quickly check it. I'd hate to see this trashed by people just
> > because of some silly mistake that I've managed to overlook. It still
> > needs some final polishing, but otherwise I consider it complete for now,
> > so if you see anything wrong, missing or unclear in it, please tell me.
>
> Hi.
>
> This looks like a really interesting piece of work.
>
> Whilst you do say that "All basic tests that follow are measured
> against this number unless explicitly stated otherwise." it might be
> clearer to explicitly point out that this is the 'diff' column in the
> later results.
Yes.
>
> The only possible systematic issue I can think of is that, as you say,
> these measurements are of a system which isn't under memory pressure.
> So the memory usage will include discardable pages (e.g. one-time init
> code) rather than reflect the working set size for the desktop. That's
> the number which matters most for "does this feel slow on a 128M box",
> I would say.
>
> This is hard to measure, but I guess one way might be to:
> a - bring up the desktop
> b - cause great memory pressure to swap pretty much everything out
> c - remove the memory pressure and "use the desktop naturally" to swap
> in the needed bits
No benchmark is perfect, and it already took a lot of time to measure even
this way. Moreover, I'd expect the savings from this to be fairly small -
e.g. most of the initialization code should be needed anyway whenever a new
app is started.
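If somebody really wants to try step (b), something as crude as this would
do - just a sketch, and MEM_MB is only a placeholder that has to be set
above the machine's physical RAM:

/* Crude memory hog for step (b): allocate and touch more anonymous
 * memory than the box has RAM, so the desktop's pages get pushed out
 * to swap. MEM_MB is a placeholder - set it above physical RAM. */
#include <stdlib.h>
#include <string.h>

#define MEM_MB 512   /* adjust to exceed physical RAM */

int main(void) {
    size_t chunk = 1024 * 1024;
    for (int i = 0; i < MEM_MB; i++) {
        char *p = malloc(chunk);
        if (!p)
            break;
        memset(p, 1, chunk);  /* touch every page so it's really committed */
    }
    return 0;   /* exiting releases the pressure, i.e. step (c) can begin */
}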
>
> 'c' is of course the difficult bit. However, the numbers you've got
> are still useful in their own right and of interest, I would say.
>
> You would also need an as yet unwritten version of exmap which uses
> much less memory, since that will skew the numbers under memory
pressure. This version will fully separate the collection/analysis
> phases, since it's really the analysis/display part which requires a
> lot of memory.
I tried to avoid hitting swap, but some of the results near the end show
slightly higher noise, although still within the expected (im)precision, so
it's possible swapping had a small impact. But in the worst case this
actually only helped the worst offenders :).
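For the collection side, even something this small would do as a start - a
sketch that only sums the Private_Dirty lines of /proc/PID/smaps and prints
a single number, so the analysis/display part never runs on the measured
system. Note that exmap proper uses its kernel module for exact shared-page
accounting, so take this only as a cheap approximation:

/* Sketch of a collection-only tool: sum Private_Dirty from
 * /proc/PID/smaps and print one number. Analysis/display can run
 * elsewhere, so the measured system is barely disturbed. */
#include <stdio.h>

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    char path[64], line[256];
    snprintf(path, sizeof(path), "/proc/%s/smaps", argv[1]);
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }
    long total_kb = 0, kb;
    while (fgets(line, sizeof(line), f))
        if (sscanf(line, "Private_Dirty: %ld kB", &kb) == 1)
            total_kb += kb;
    fclose(f);
    printf("pid %s: %ld kB private dirty\n", argv[1], total_kb);
    return 0;
}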
> The most interesting point for me is that in the desktop environment,
> the old Unix/Linux axiom that "processes are cheap" is clearly false.
> The question is whether effort is better spent in adding complexity to
> reduce the number of processes (accepting 'processes are expensive')
> or in trying infrastructure work to attempt to reduce the per-process
> cost.
Probably both, because there are limits to both.
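On the infrastructure side, kdeinit already shows one direction: pay the
expensive library setup once and fork() every app from that process, so
pages that are never written to stay shared via copy-on-write. A simplified
sketch of the idea (not kdeinit's actual code, and the app names are just
placeholders):

/* Hypothetical kdeinit-style launcher: initialize shared state once,
 * then fork() each "app" so untouched pages stay shared (COW),
 * cutting the private per-process cost. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

static void expensive_init(void) {
    /* stands in for loading and relocating the big shared libraries */
}

static int run_app(const char *name) {
    printf("%s running as pid %d\n", name, (int)getpid());
    return 0;
}

int main(void) {
    expensive_init();   /* paid once, inherited by every child */
    const char *apps[] = { "konsole", "kmail" };  /* placeholder names */
    for (int i = 0; i < 2; i++) {
        pid_t pid = fork();
        if (pid == 0)
            exit(run_app(apps[i]));  /* child reuses the initialized pages */
    }
    while (wait(NULL) > 0)
        ;   /* reap children */
    return 0;
}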
--
Lubos Lunak
KDE developer
--------------------------------------------------------------
SUSE LINUX, s.r.o. e-mail: l.lunak at suse.cz , l.lunak at kde.org
Lihovarska 1060/12 tel: +420 284 028 972
190 00 Prague 9 fax: +420 284 028 951
Czech Republic http://www.suse.cz