[PATCH] KIO::SlaveBase and the event loop

Roland Harnau truthandprogress at googlemail.com
Thu Jul 17 20:26:33 BST 2008

2008/7/16, Thiago Macieira <thiago at kde.org>:

> Also note that all ioslaves are forks from kdeinit. All the main libraries
> are already loaded and there's no D-Bus connection -- it's only a plain,
> simple Unix socket that connects to the application.

But if an application "owns" n slaves, there are also n socket
connections, and all of that data has to be processed by the
application's main thread. This could become a problem if n is large.
Also, the creation of slaves in KIO::Slave::createSlave involves some
D-Bus talk unless KDE_FORK_SLAVES is set. In principle, forking from
kdeinit should be faster than using QProcess. In this regard, the
result of calling KIO::Slave::createSlave 20 times in a row is quite
interesting: if klauncher/kdeinit is used, it takes ~2.6 seconds on my
eeePC, whereas with KDE_FORK_SLAVES set it takes only ~1.4 seconds.

>>> If we used threads, a badly behaving ioslave could crash Konqueror,
>>> KWrite, Kile, KWord, anything that uses KIO. Right now, it crashes on
>>> its own and its name is pointed out.
> The difference to Plasmoids here is that a badly behaving plasmoid would
> crash plasma. A badly behaving ioslave plugin thread would crash any and
> all KDE applications.

If slaves were per-application (e.g. by changing/subclassing
KIO::Slave for certain protocols), a misbehaving slave would only take
down the single application using it, so I don't see any difference.

> By the way, in KDE 3 kicker used a proxy process for some of its applets.

As Plasma applets are essentially QGraphicsItems, this no longer seems
possible. After all, Plasma is pretty stable now, and I don't see even
a remote justification for accepting all the disadvantages of an
out-of-process approach.


More information about the kde-core-devel mailing list