Interesting about file operation optimization (Re: Minutes from 10/1 LSE Call)

Roger Larsson roger.larsson at skelleftea.mail.telia.com
Thu Oct 2 04:21:39 CEST 2003


On Thursday 02 October 2003 02.23, Jeff Garzik wrote:
> Larry McVoy wrote:
> > On Wed, Oct 01, 2003 at 04:29:16PM -0700, Andrew Morton wrote:
> >>If you have a loop like:
> >>
> >>	char *buf;
> >>
> >>	for (lots) {
> >>		read(fd, buf, size);
> >>	}
> >>
> >>the optimum value of `size' is small: as little as 8k.  Once `size' gets
> >>close to half the size of the L1 cache you end up pushing the memory at
> >>`buf' out of CPU cache all the time.
> >
> > I've seen this too, not that Andrew needs me to back him up, but in many
> > cases even 4k is big enough.  Linux has a very thin system call layer so
> > it is OK, good even, to use reasonable buffer sizes.
>
> Slight tangent, FWIW...   Back when I was working on my "race-free
> userland" project, I noticed that the fastest cp(1) implementation was
> GNU's:  read/write from a single, statically allocated, page-aligned 4K
> buffer.  I experimented with various buffer sizes, mmap-based copies,
> and even with sendfile(2) where both arguments were files.
> read(2)/write(2) of a single 4K buffer was always the fastest.
>
> 	Jeff

This might matter in KIO Slaves, for example...
But in any case: using less memory wins!
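
In code, the pattern Jeff describes boils down to something like the
sketch below (my own illustration, not GNU cp's actual source; copy_fd
is just a made-up helper and error handling is minimal):

	#include <unistd.h>

	/* One small, statically allocated buffer.  4K matches the page
	 * size on most machines and stays well inside the L1 cache.
	 * (GNU cp also page-aligns its buffer; this sketch skips that.) */
	static char buf[4096];

	int copy_fd(int src, int dst)
	{
		ssize_t n;

		while ((n = read(src, buf, sizeof(buf))) > 0) {
			char *p = buf;

			/* write() may be short; loop until the whole
			 * chunk has been written out. */
			while (n > 0) {
				ssize_t w = write(dst, p, (size_t)n);
				if (w < 0)
					return -1;
				p += w;
				n -= w;
			}
		}
		return n < 0 ? -1 : 0;
	}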

Over time, computers have become slower and slower at memory operations
relative to computation.
Example: not that many years ago, adding a table to speed up part of a
calculation was the right thing to do; not any more... (Replacing a really
complex calculation with a small table can still be a win, but don't count
on it...)
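
For instance, population count is the classic case (my own sketch, not
something from the thread): the byte-wide lookup table used to be the
obvious win, but the purely arithmetic version touches no memory at all
and is often faster on current CPUs:

	#include <stdint.h>

	static uint8_t popcount8[256];

	/* Fill the byte-wide lookup table once at startup. */
	static void init_table(void)
	{
		int i;
		for (i = 1; i < 256; i++)
			popcount8[i] = (uint8_t)((i & 1) + popcount8[i >> 1]);
	}

	/* Table version: four dependent memory loads per word. */
	static unsigned popcount32_table(uint32_t x)
	{
		return popcount8[x & 0xff] + popcount8[(x >> 8) & 0xff]
		     + popcount8[(x >> 16) & 0xff] + popcount8[x >> 24];
	}

	/* Computed version: register-only arithmetic, no memory
	 * traffic, despite doing "more work". */
	static unsigned popcount32_calc(uint32_t x)
	{
		x = x - ((x >> 1) & 0x55555555);
		x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
		x = (x + (x >> 4)) & 0x0f0f0f0f;
		return (x * 0x01010101) >> 24;
	}

Whether the table version wins depends entirely on whether the table
stays cached under the real workload, so measure before choosing.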

/RogerL

-- 
Roger Larsson
Skellefteå
Sweden


