KIO experimental work report, RFC

Ingo Klöcker kloecker at
Mon Nov 24 20:25:06 GMT 2008

On Friday 21 November 2008, Andreas Hartmetz wrote:
> 2008/11/21 Andreas Hartmetz <ahartmetz at>:
> > Hi all,
> >
> > long time no post from me, eh...
> > During, and as a continuation of, my SoC project I did some work in
> > KIO that didn't make it for 4.2 due to the vagaries of networking
> > and the need for a large amount of testing and bugfixing that I
> > didn't really put the necessary time into. The changes I'm going
> > to describe not only expose but also create bugs, in the sense that
> > things that were maybe not nice before are flat out broken with
> > them. Most importantly, KIO users can see stalls or errors (no idea
> > where the errors come from ATM) if the number of ioslaves is
> > effectively limited.
> >
> > - HTTP pipelining for Konqueror, using a class called
> > PipelineScheduler that is a KIO::SimpleJob and manages a list of
> > jobs to be pipelined. After much debugging of the required
> > functionality in the HTTP ioslave and in the PipelineScheduler
> > class itself... it turns out that some servers are broken as hell.
> > The worst example drops TCP packets without comment if there is
> > more than one HTTP request header in a packet. Probably a crap
> > load balancer.
> >  Also (and this is somewhat surprising) a speed advantage only
> > seems to exist on high-latency connections, so at least it's useful
> > for mobile devices. And yes, this is also true when using several
> > pipelined connections to a server. I'm saying this because I'd like
> > to get one of the free N810s, see :) More seriously, I don't know
> > whether pipelining is just not as useful today or whether my
> > implementation is suboptimal. There are not that many tunables,
> > though. I've asked the Mozilla guys and they said they have dropped
> > pipelining because, paraphrasing here, it fills their bugtracker.
> > Oh well...
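[Editor's note: a minimal sketch of what HTTP/1.1 pipelining puts on the wire. This is an illustration, not KIO's actual PipelineScheduler code; the function name is made up. Several requests are written back to back before any response is read, so a single TCP segment can carry more than one request header, which is exactly the case the broken server above chokes on.]

```python
def build_pipelined_requests(host: str, paths: list[str]) -> bytes:
    """Concatenate one GET request per path into a single buffer.

    With pipelining, this whole buffer can go out in one write();
    the server must answer the requests in the same order
    (RFC 2616, section 8.1.2.2).
    """
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: keep-alive\r\n"
            "\r\n"
        )
    return "".join(requests).encode("ascii")

# Three request headers that may well land in one TCP packet:
buf = build_pipelined_requests("example.org", ["/a.css", "/b.js", "/c.png"])
```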
> >
> > - optional hard limits on the number of jobs in KIO. Currently a
> > job can be:
> >  - scheduled: it won't cause more ioslaves to be created than the
> > per-application, per-protocol limit allows. It will be scheduled
> > only when there are no unscheduled jobs waiting.
> >  - unscheduled: the job *will get a slave* (if necessary a new one)
> > and run at the next opportunity. This is the default behavior if a
> > job is not explicitly scheduled using
> > KIO::Scheduler::scheduleJob().
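[Editor's note: the two job modes above can be sketched as a toy dispatcher. The class and limit names here are invented for illustration and are not KIO's real implementation: unscheduled jobs always get a slave, spawning a new one if needed, while scheduled jobs respect the limit and wait behind it.]

```python
class ToyScheduler:
    """Toy model of KIO's scheduled vs. unscheduled job handling."""

    def __init__(self, max_scheduled_slaves: int = 2):
        self.max_scheduled_slaves = max_scheduled_slaves
        self.slaves = 0            # slaves busy with scheduled jobs
        self.scheduled_queue = []  # scheduled jobs waiting for a slave
        self.running = []          # jobs currently being served

    def add_job(self, name: str, scheduled: bool):
        if not scheduled:
            # Unscheduled: *will get a slave*, creating one if necessary.
            self.running.append(name)
        elif self.slaves < self.max_scheduled_slaves:
            self.slaves += 1
            self.running.append(name)
        else:
            # Limit reached: wait until a slave frees up.
            self.scheduled_queue.append(name)

s = ToyScheduler(max_scheduled_slaves=1)
s.add_job("index.html", scheduled=True)   # runs: a slave is available
s.add_job("style.css", scheduled=True)    # queued: limit reached
s.add_job("urgent.js", scheduled=False)   # runs anyway: unscheduled
```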
> >
> >  What's good is that there are two priorities. KHTML uses this to
> > schedule e.g. important stylesheets for immediate transfer. What's
> > bad is that forking lots of ioslaves is not free and can cause
> > slight hangs. Above roughly ten slaves, adding more does not
> > usually seem to improve network performance either.
> >
> >  To fix the unlimited number of slaves problem (in participating
> > apps, which means Konqueror ATM) I created
> > KIO::Scheduler::prioritySchedule() for high-priority scheduling
> > with limited slave creation. I've also made the number of slaves
> > per protocol *and* per host tunable, overriding the .protocol
> > files. Note that per-host limits are completely new.
> >  My modified khtml::Loader uses prioritySchedule() and I've also
> > created a simple KControl module for the two tunables, plus an
> > "enable pipelining" checkbox.
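[Editor's note: the combined per-protocol *and* per-host check can be sketched like this. The function and parameter names are invented; the real tunables live in KIO's scheduler and override the limits from the .protocol files.]

```python
def may_spawn_slave(protocol_counts, host_counts, protocol, host,
                    max_per_protocol=4, max_per_host=2):
    """Allow a new ioslave only if *both* limits still have headroom."""
    return (protocol_counts.get(protocol, 0) < max_per_protocol
            and host_counts.get(host, 0) < max_per_host)

# Three http slaves overall; two already talk to kde.org:
proto = {"http": 3}
hosts = {"kde.org": 2, "example.org": 1}

ok_a = may_spawn_slave(proto, hosts, "http", "example.org")
ok_b = may_spawn_slave(proto, hosts, "http", "kde.org")
```

A slave for example.org is still allowed, while the per-host limit blocks a third slave for kde.org even though the protocol-wide limit would permit it.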
> >
> > Note that real per-user, per-remote-host connection limits are
> > recommended by the HTTP spec, but almost no one implements them, so
> > they are not really necessary, and they would add even more
> > potential for coding bugs and other problems. That would mostly be
> > an interesting programming exercise.
> >
> > None of this stuff works reliably enough for wide release.
> > Especially pipelining is barely usable due to the many ways servers
> > can screw it up... FWIW, works great while our friendly
> > competition is far too dependent on flickr's services :)
> > I'm hopeful about connection limits but I'd definitely like to hear
> > more opinions, ideas and general input (see subject line!).
> > A sensible course of action could be to merge connection limits
> > into trunk after 4.2 branching and be ready to revert them if too
> > many problems abound or no benefit is seen. Pipelining can be
> > merged as soon as it works well enough to make sense for somebody -
> > it's disabled by default. If an Opera developer wants to tell me
> > their secret of pipelining problem avoidance, just drop me a mail
> > :P [One part I've already noticed: Opera 9.5 does not use
> > pipelining if connections per host are not limited to less than
> > ~3.]
> >
> > Patches will come in the next 24 hours, when I'm at home and feel
> > like doing the necessary legwork.
> >
> > Cheers,
> > Andreas
> Sorry, the multiple mails were KMail's fault. It got "stuck" at 96%
> several times so I aborted and restarted sending a couple of times.
> It is sufficient to reply to the first mail only :)

Actually, the multiple messages were Google's fault as can easily be 
seen by a quick look at the Received headers of the four messages with 
identical dates. The first two Received headers are identical:

Received: by with SMTP id h2mr650ebh.123.1227228851782; Thu,
 20 Nov 2008 16:54:11 -0800 (PST)
Received: from ?
 (	[])
 by with ESMTPS id 10sm24219eyd.23.2008.
 (version=TLSv1/SSLv3 cipher=RC4-MD5); Thu, 20 Nov 2008 16:52:36 -0800 
Date: Fri, 21 Nov 2008 01:52:24 +0100

And then we have the following four headers:

Received: by with SMTP id b11so424108nfh.3	for
 <kde-core-devel at>; Thu, 20 Nov 2008 16:54:17 -0800 (PST)

Received: by with SMTP id i2so674225mue.3	for
 <kde-core-devel at>; Thu, 20 Nov 2008 16:55:33 -0800 (PST)

Received: by with SMTP id i2so676153mue.3	for
 <kde-core-devel at>; Thu, 20 Nov 2008 16:57:48 -0800 (PST)

Received: by with SMTP id i2so676992mue.3	for
 <kde-core-devel at>; Thu, 20 Nov 2008 16:59:02 -0800 (PST)

I bet that the sending getting stuck at 96% also was Google's fault, 
but, of course, that's pure speculation. This is not the first time in 
the last few months that I have received the same message several 
times, and almost always Google's mail servers were responsible for 
the duplicates.
I'd appreciate it if you wouldn't blame KMail unless you have hard 
evidence that it's really KMail's fault (as I have presented hard 
evidence that it is the fault of Google's mail servers). Please 
remember that this mailing list is read by lots of people and some 
people might take your message more seriously than it (hopefully) was 
meant.
