KDE is not an OS platform... (And neither is Gnome)

David Faure faure at kde.org
Tue Nov 3 00:57:54 GMT 2009


On Tuesday 03 November 2009, Kevin Krammer wrote:
> On Monday, 2009-11-02, Oswald Buddenhagen wrote:
> > On Mon, Nov 02, 2009 at 01:36:33AM +0100, Martin Sandsmark wrote:
> > > On Sunday, 1 November 2009 21:13:11, nf2 wrote:
> > > > Read my "solution" number 2 above.
> > > >
> > > >> [...] aligning the behavior of protocol-handlers in KIO and
> > > >> GVFS [...]
> > >
> > > Well, what's missing? It shouldn't be so hard: the format of URLs is
> > > well-defined by RFC 1738, and all the protocols I can think of have
> > > standardized names (maybe in RFCs too? I'm too lazy to check, but this
> > > is really a minor issue to fix...).
> >
> > if you manage to represent an object file in an ar archive in a tar file
> > in a gzip file in an encrypted rar file on an ftp server as an url which
> > every vfs understands then you have won.
> 
> Hmm.
> Since KIO operates on URLs as well, wouldn't that have the same limitation?
> What would e.g. Konqueror display in its location edit for such a case?

KDE3's KURL had support for that, using sub-urls. So the above would have
looked like ftp://host/file.rar#rar:/foo.gzip#gzip:/#tar:/foo.a#ar:/foo.o,
assuming all these kioslaves had existed (they didn't) and that that amount
of nesting actually worked (I don't think it did).
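
For illustration, here's a rough standalone sketch of how such a sub-url
string could be split into its nested (protocol, path) segments. This is
not the actual KURL code, just plain C++ showing the general shape, assuming
each nesting level is separated by '#' and starts with "proto:/":

// Illustrative sketch only -- not the real KDE3 KURL API.
// Splits a KDE3-style sub-url into its nested (protocol, path) segments.
#include <iostream>
#include <string>
#include <vector>

struct SubUrlSegment {
    std::string protocol; // e.g. "ftp", "rar", "gzip", "tar", "ar"
    std::string path;     // path inside that level (the host stays in the first one, for simplicity)
};

std::vector<SubUrlSegment> splitSubUrl(const std::string &url)
{
    std::vector<SubUrlSegment> segments;
    std::size_t start = 0;
    while (start < url.size()) {
        const std::size_t next = url.find('#', start);
        const std::string part = (next == std::string::npos)
            ? url.substr(start) : url.substr(start, next - start);
        const std::size_t colon = part.find(':');
        if (colon != std::string::npos) {
            SubUrlSegment seg;
            seg.protocol = part.substr(0, colon);
            // skip the slashes after "proto:" to get the path inside that level
            const std::size_t pathPos = part.find_first_not_of('/', colon + 1);
            seg.path = (pathPos == std::string::npos) ? "/" : "/" + part.substr(pathPos);
            segments.push_back(seg);
        }
        if (next == std::string::npos)
            break;
        start = next + 1;
    }
    return segments;
}

int main()
{
    const std::string url =
        "ftp://host/file.rar#rar:/foo.gzip#gzip:/#tar:/foo.a#ar:/foo.o";
    for (const SubUrlSegment &seg : splitSubUrl(url))
        std::cout << seg.protocol << " -> " << seg.path << '\n';
}

Each segment would then map to one kioslave in the chain.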

In KDE4 I dropped it, mostly because the lack of caching in the ioslaves
and the lack of reliable slave-reuse by the kio scheduler meant that the same
file was downloaded over and over from the ftp site while you were browsing
through its contents. Oh, and the data was copied between 6 kioslaves in a
case like the above. Really horrible for performance.
(So I also dropped sub-urls altogether, since even making just the url parsing
work with QUrl was some effort.)

So if someone feels like doing this again (users subscribed to bug 73821 will
certainly love you), they would have to fix all of the above: sub-url parsing;
caching in the remote slaves like ftp/smb/fish/ssh -- or better, caching of
file contents inside KIO somehow, when those files are used as archives;
and finally doing the archive inspection without kioslaves, for instance using
strigi streams instead. Those are in-process, which is certainly much nicer
than kioslaves when so many of them have to work together. Basically, we're
missing a way to chain kioslave output into strigi streams. Seems doable, though.
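
To make the chaining idea concrete, here is a toy sketch with made-up classes
(not the actual Strigi stream API and not real KIO classes): each layer pulls
data from the layer below it, all inside one process, so only the bottom layer
would ever touch the network, and it could cache what it downloads.

// Toy sketch of in-process stream chaining, in the spirit of strigi streams.
// None of these are real Strigi or KIO classes; they only illustrate the idea
// that each layer pulls from the layer below it, so the remote file would be
// fetched (and cached) exactly once by the bottom layer.
#include <algorithm>
#include <cstddef>
#include <cstring>
#include <iostream>
#include <string>

// Minimal pull-based stream interface (stand-in for a Strigi-like stream).
class ByteStream {
public:
    virtual ~ByteStream() = default;
    // Reads up to maxBytes into buffer; returns bytes read, 0 on end of stream.
    virtual std::size_t read(char *buffer, std::size_t maxBytes) = 0;
};

// Bottom layer: stands in for cached kioslave output; here, just a memory buffer.
class MemoryStream : public ByteStream {
public:
    explicit MemoryStream(std::string data) : m_data(std::move(data)) {}
    std::size_t read(char *buffer, std::size_t maxBytes) override {
        const std::size_t n = std::min(maxBytes, m_data.size() - m_pos);
        std::memcpy(buffer, m_data.data() + m_pos, n);
        m_pos += n;
        return n;
    }
private:
    std::string m_data;
    std::size_t m_pos = 0;
};

// Middle layer: a trivial "decoder" that just forwards data from its source.
// A real layer would gunzip, or expose the entries of a tar/ar/rar archive.
class PassThroughStream : public ByteStream {
public:
    explicit PassThroughStream(ByteStream &source) : m_source(source) {}
    std::size_t read(char *buffer, std::size_t maxBytes) override {
        return m_source.read(buffer, maxBytes);
    }
private:
    ByteStream &m_source;
};

int main() {
    MemoryStream remote("pretend this came from ftp://host/file.tar.gz");
    PassThroughStream gunzip(remote);   // imagine: gzip decoder
    PassThroughStream tar(gunzip);      // imagine: tar entry extractor

    char buf[16];
    std::size_t n;
    while ((n = tar.read(buf, sizeof(buf))) > 0)
        std::cout.write(buf, static_cast<std::streamsize>(n));
    std::cout << '\n';
}

In a real implementation the pass-through layers would be an actual gzip
decoder and tar/ar/rar entry readers, and the bottom layer would wrap the
cached output of the remote kioslave.
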
And now, to come back to the thread topic: think of how much more complex it 
would be to do the same on top of GIO (nothing against GIO itself, I mean: on 
top of something we have no control over)... Mind-boggling.

-- 
David Faure, faure at kde.org, http://www.davidfaure.fr
Sponsored by Nokia to work on KDE, incl. Konqueror (http://www.konqueror.org).



