Parallel Startup for KDE.

C.O.Backstrom chp802 at bangor.ac.uk
Wed Oct 1 20:40:01 CEST 2003


Roger Larsson wrote:
> On Tuesday 30 September 2003 13.58, C.O.Backstrom wrote:
> 
>>Waldo Bastian wrote:
>>
>>>-----BEGIN PGP SIGNED MESSAGE-----
>>>Hash: SHA1
>>>
>>>On Tuesday 30 September 2003 13:36, Stefan Heimers wrote:
>>>
>>>>On Tue, 30 Sep 2003, Lubos Lunak wrote:
>>>>
>>>>>Perhaps on multi-CPU systems, but in general no - the same amount of
>>>>>work needs to be done, and it doesn't matter much in how many threads.
>>>>>But there's also a difference between KDE startup and system startup.
>>>>>During system startup, many of the started services wait for hardware to
>>>>>initialize, connect to remote systems, and similar things that make them
>>>>>wait while doing nothing. Starting more services in parallel helps
>>>>>there. But with KDE the CPU should be fully used all the time.
>>>>
>>>>What about loading files? KDE loads a huge number of separate config
>>>>files, each needing disk seek time or slow NFS-links. It's quite possible
>>>>that even a single CPU is idle from time to time.
>>>
>>>It would be nice to have a high-resolution graph (e.g. a 20 ms sample
>>>interval) of the state of the various processes during startup to get an
>>>idea what is happening.
>>
>>I'll have a play around after work today. I haven't got my laptop here. I
>>also want to check that the startup sequence really uses 100% CPU. Can't
>>say I believe it does. Is it possible to use top to dump the data? Or any
>>other suggestions?
>>
> 
> 
> You will find that it does not.
> Not even starting konqueror will use 100% all the time.

Very true, I'm nowhere near saturation.
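
To answer my own question about dumping the data: top's batch mode should do
it. A rough sketch (the log path and sample count are just examples):

top -b -d 1 -n 120 > /tmp/kde-startup-top.log &

Kick that off just before logging in and it writes one snapshot per second
for two minutes, which ought to cover the whole login.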

> 
> Why?
> 
> Because startup needs to read lots of files. Using more aggressive read ahead 
> will help but not when the head hits the wall (memory used in applications
> +cached+buffered approaches the amount of available memory).
> And read ahead can not read ahead on a file not accessed yet... Parallel 
> start can both help and hurt:
> + Another process can run while waiting
> - Different processes are likely to read from different files - causing more
>  disk seeks (that HURTS)
> 

Aah, you're right there. Three comments:

1) The question is whether a database structure wouldn't be better for this. 
It would take major work, though :(

2) Many small files? That sounds like one should benchmark the startup 
against some different filesystems:

http://aurora.zemris.fer.hr/filesystems/small.html

That shows a general fs benchmark, and it looks as if, for example, XFS is 
about 20% faster than ReiserFS for small-file reads. Having a laptop is not 
a good thing here; the drives in those are very slow...

3) Wouldn't copying all config files to a mounted shmfs/tmpfs in RAM during 
runlevel init (when there's plenty of CPU time going to waste and not much 
disk activity) make things considerably faster? Or some other kind of 
preload? I take it that the config files we're talking about are the ones 
under /usr/share/apps & /usr/share/config? This also eliminates competition 
for disk resources. Sure, it's wasting RAM, but it's not used for anything 
else during startup, and one can just hose the tmpfs afterwards to reclaim 
it. I have to check this. Basically, after all filesystems are mounted, 
something like:

mkdir -p /kdetmp0 /kdetmp1
mount -t tmpfs tmpfs /kdetmp0
mount -t tmpfs tmpfs /kdetmp1
cp -a /usr/share/apps/.   /kdetmp0/
cp -a /usr/share/config/. /kdetmp1/
# bind-mount the RAM copies over the original paths
mount --bind /kdetmp0 /usr/share/apps
mount --bind /kdetmp1 /usr/share/config
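
(Bind mounts rather than an umount/remount dance, because unmounting a tmpfs 
throws its contents away and a fresh tmpfs mounted over /usr/share/apps would 
start out empty.)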

Or whatever is fastest. Perhaps a small /usr/share partition which can be 
mirrored directly with dd?
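
The dd variant could look roughly like this, assuming /usr/share sits on its 
own small partition (the device names are made up) and the kernel is booted 
with a big enough ramdisk_size=:

# sequential copy of the whole partition into a ramdisk - no seeking
dd if=/dev/hda7 of=/dev/ram0 bs=1M
umount /usr/share
mount /dev/ram0 /usr/share

The ramdisk has to be at least as big as the partition, so this only pays 
off if /usr/share is kept small.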

> I once tried to analyze what files konqueror opened (libs, config files, ...)
> using strace. My plan was to schedule the reading of all those before starting
> konqueror - it did not work out since my memory could not hold all that data
> at the same time (my executables and libraries contain full debug info - I would 
> have to be careful to only read executable/data sections)
> 
> It would be interesting to know how much memory is needed to completely
> avoid all reclamation of file cache (256MB as I have is not enough)
> 
> Looking at vmstat fields:
> * swpd should remain unused.
> * free should not stabilize.
> * cached would grow and grow until the startup is done.
> 
> If you have such a computer, a logout + login cycle should use all CPU
> (it should not need to wait for any disk IO, though maybe it will have to wait 
> for some network?). How fast is this compared to the initial login?
> 
> Another thing to test on a computer with enough memory, is to boot with less 
> memory enabled: 
> 	linux mem=256M
> and even
> 	linux mem=128M
> Then this will be with the same CPU, disk, etc.
> 
> As a final test read ahead could be increased before logging in:
> 	echo "file_readahead:200" > /proc/ide/hd?/settings
> Note: the interpretation of this value can be different depending on kernel.
> (there have been bugs in this area...)
> 
> The results should be quite interesting.

Hmm, I've got 512+256 MB. I'll see if I can do anything about this. Not today 
though, I'm out on the beers :)
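
When I do get to it, something like this in the background before logging in 
should capture the vmstat fields you mention (a one-second interval is just a 
guess at what's useful):

vmstat 1 > /tmp/login-vmstat.log &

Then I can look at the swpd/free/cached columns afterwards.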

Cheers,

/Chris

> 
> /RogerL
> 


-- 
C.O.Backstrom         chp802 at bangor.ac.uk


