KIO directory listing - CPU slows down SSD
Mark Gaiser
markg85 at gmail.com
Wed May 28 19:12:43 UTC 2014
On Wed, May 28, 2014 at 7:33 PM, Aaron J. Seigo <aseigo at kde.org> wrote:
> On Wednesday, May 28, 2014 16:23:31 Mark Gaiser wrote:
>> On Wed, May 28, 2014 at 9:59 AM, David Faure <faure at kde.org> wrote:
>> > On Wednesday 14 May 2014 12:50:18 Aaron J. Seigo wrote:
>> >> * one entry at a time, serialized
>> >> * no changing data once put there
>> >> * no getting the data back out
>> >>
>> >> the class would then just batch up data into its QByteArray and then
>> >> shunt it across the wire when needed. no middle men, no 100s of 1000s
>> >> of objects and QLists and << operators. just an API something like:
>> >> startEntry
>> >> addField
>> >> endEntry
>> >
>> > I like this idea very much. Mark: see, it's quite close to what we were
>> > discussing by email recently. And I hadn't read this yet :)
>> >
>> > Note how this is not specific to the ioslave using QT_STATBUFF :)
>>
>> To be honest, I was trying to look at the most-used case (which is
>> kio_file) and optimize that for raw speed.
>> But I kinda like this approach as well since it would work on all
>> slaves :) I will try to implement that instead.
>>
>> What I really like about this approach is that it could eventually
>> phase out UDSEntry from the slave side, meaning that UDSEntry would be
>> fully client-side (where this stream would be parsed).
>
> that would be good for performance indeed; optimizing UDSEntry would still
> have value, of course, since the client side would continue to use it. so
> nobody's efforts there would be wasted
>
>> >> internally it could either batch up the fields and then stream them out
>> >> when endEntry is called, or (and this is what i would do personally) on
>> >> startEntry it would set up the header but with bad data (e.g. field count
>> >> of 0) and then start appending the fields as they come in and increment a
>> >> field count variable. when endEntry is called, update the header with the
>> >> right field count.
>> >
>> > Ah but batching requires a vector again. Or you meant writing into a
>> > bytearray and coming back to update the count - that would rely on
>> > QDataStream internals I'm afraid (how to know the offset....).
>
> no internals needed. see attached example.
>
>> > But the alternative is to replace the field count with an "end of fields"
>> > special field.
>>
>> Huh? "batching requires a vector again".. I don't understand that. It
>
> consider the current code:
>
> void SlaveBase::listEntries(const UDSEntryList &list)
> {
>     KIO_DATA << (quint32)list.count();
>
>     foreach (const UDSEntry &entry, list) {
>         stream << entry;
>     }
>
>     send(MSG_LIST_ENTRIES, data);
> }
>
>
> so David means that because you can't seek in the QDataStream, you'd have
> to batch up entries in a temporary data structure so that you could write
> the header first and THEN write all the entries.
>
> thankfully, the base assumption is incorrect :)
You've written that with the assumption of backwards compatibility.
It's a nice idea, but why should we even try to remain backwards
compatible? It's all internal to SlaveBase, so we can change whatever
we want if that suits a more optimized design.

Consider this patch (lives for a month):
http://p.sc2.nl/pgfto3npy

I don't see any harm in introducing a second path for slaves to send
their entries to the client. The new path in the patch above completely
eliminates the need for UDSEntry objects from SlaveBase's point of
view. It's then up to the slaves themselves to make use of it. With
some work the current listEntries(const UDSEntryList &list) could even
be adjusted to the streaming-based mechanism (i.e. inject a
"startEntry" into the stream right before it does "stream << entry"),
which would make all slaves use the new path.
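Roughly, the slave-side class I have in mind looks like this. Just a
sketch: the class name, marker values and exact API are made up here,
not what's in the patch above:

#include <QByteArray>
#include <QDataStream>
#include <QIODevice>
#include <QString>

// hypothetical markers; the real protocol would pick proper values
static const quint8 MarkerEntry = 0xEE; // an entry begins here
static const quint8 MarkerField = 0x55; // a single field follows

class EntryStream
{
public:
    EntryStream() : m_stream(&m_data, QIODevice::WriteOnly) {}

    // no field count needed up front: the reader simply consumes
    // fields until it sees the next MarkerEntry
    void startEntry() { m_stream << MarkerEntry; }

    // the UDS field number already encodes the payload type
    // (string vs. number), so the reader knows what to expect
    void addField(quint32 uds, const QString &value)
    { m_stream << MarkerField << uds << value; }

    void addField(quint32 uds, qlonglong value)
    { m_stream << MarkerField << uds << value; }

    const QByteArray &data() const { return m_data; }

private:
    QByteArray m_data;    // declared before m_stream so it exists first
    QDataStream m_stream;
};

No UDSEntry, no QList, no intermediate objects on the slave side; just
appending bytes.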
>
>> SomeDataStreamClass stream;
>> int i = 0;
>> while (QT_READDIR(...)) {
>>     if (++i == 200) { // or whatever the threshold is to send a batch of entries
>
> ... and before this happens it needs to set the # of entries it is sending.
> something like a finalizeEntries() method. the class could keep state,
> incrementing a variable every time startEntry() is called
>
> also, rather than count based i'd probably make it based on the size of the
> bytearray. smaller entries would mean more entries sent in a batch, larger
> entries might mean less time waiting on the client side for additional items
> (though that is likely to be nullified by the extra overhead of multiple
> read/writes to the socket and subsequent deserialization trips). in any case,
> one can then optimize the size of the transfer
>
>>         listEntries(stream);
>>         // reset stream and i.
>>     }
>>     startEntry()
>>     addField(...)
>>     addField(...)
>>     ...
>>     endEntry()
>
> endEntry would need to do something similar: it would need to seek back into
> the bytearray where the fields started and update the # of fields to retain
> compatibility with the current code which is:
>
> void UDSEntryPrivate::save(QDataStream &s, const UDSEntry &a)
> {
>     const FieldHash &e = a.d->fields;
>
>     s << e.size();
>
>
>> }
Thinking about this again, I won't even need an "endEntry" call as long
as each new entry starts with a "startEntry" marker. I can use that to
detect the start of a new entry: once I see it, I just keep fetching
fields from the data until I find the next "startEntry" symbol. That
also removes the need to know how many fields are present in a given
entry, and conveniently removes the need to seek back in the
datastream :)
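The client-side parsing loop then becomes trivial. A sketch, reusing
the hypothetical markers from above (TypeString stands in for KIO's
UDS_STRING type bit):

#include <QByteArray>
#include <QDataStream>
#include <QString>

static const quint8 MarkerEntry = 0xEE;       // must match the slave side
static const quint8 MarkerField = 0x55;
static const quint32 TypeString = 0x01000000; // stands in for UDS_STRING

void readBatch(const QByteArray &batch)
{
    QDataStream stream(batch); // read-only stream over the received bytes
    quint8 marker;
    quint32 uds;
    while (!stream.atEnd()) {
        stream >> marker;
        if (marker == MarkerEntry) {
            // begin filling a fresh, purely client-side UDSEntry here
            continue;
        }
        // MarkerField: the UDS number tells us which payload type follows
        stream >> uds;
        if (uds & TypeString) {
            QString value;
            stream >> value;
            // entry.insert(uds, value);
        } else {
            qlonglong value;
            stream >> value;
            // entry.insert(uds, value);
        }
    }
}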
Something else that comes to mind here is the current batching. Right
now each batch is 200 entries max. But since we know the actual byte
size (which we do when filling a stream), we might as well change the
batching to not exceed some number of KB instead. That might be more
efficient. We would obviously keep the timer: send all entries
collected so far once 300ms have passed.
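The flush condition would then look something like this (the 64 KB
threshold is purely for illustration):

#include <QByteArray>
#include <QElapsedTimer>

static const int MaxBatchBytes = 64 * 1024; // illustrative threshold

bool shouldFlush(const QByteArray &batch, const QElapsedTimer &sinceLastSend)
{
    // flush when the batch is big enough, or when the existing
    // 300ms timer says the client has waited long enough
    return batch.size() >= MaxBatchBytes || sinceLastSend.hasExpired(300);
}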
>
> the point is that before listEntries is called, the # of items will need to be
> placed at the head of the list so that when deserializing on the client side
> the # of entries is known.
>
> in any case, with QDataStream and a QByteArray (as currently used in the code)
> you CAN seek backwards into the stream and overwrite some of the bytes and
> then even seek back to the end. see the attached example to prove this for
> yourself :)
>
> of course, in this case you don't even need to seek back to the end. you just
> need to seek back to the point where the count ought to be in the stream and
> then serialize a sensible value.
>
> this makes it entirely backwards compatible which means minimal changes
> elsewhere in the code which should mean less chance of introducing new bugs.
> keeping the wire protocol stable should be a goal unless there is real reason
> to do otherwise.
>
> of course, since this is about performance ... what is the cost of that
> seeking?
>
> populating a QByteArray via a QDataStream with 104857600 (1024*1024*100) ints
> takes close to 19 seconds on my laptop and results in a QByteArray which is
> 400MB large. (i had to use something that large to be able to reliably measure
> results with a naive QTime::elapsed method.) seeking back to the midpoint of
> that monster, changing the value, then seeking to the end and adding a new
> value takes ... less than a millisecond.
>
> to model what this code would actually need to do (updating the number of
> entries at the end of serialization as well as the field count for each entry),
> i added a "seek back, serialize an int, seek forward" step (to emulate updating
> a field count) for every N ints serialized to the bytearray. N = 6 (or
> 17,476,267 field count updates) adds ~4s; N = 60 and N = 100 both add something
> that is within the noise range offered by using QTime::elapsed() to measure
> such things.
>
> soooo.. i think performance-wise this should be just fine, seeing as one would
> probably be sending buffers on the order of some 100s of KB rather than 100s of
> MB ;)
I can always increase the benchmark from 500,000 files to even more if
needed ^_-
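For anyone reading along without the attachment, the seek-back trick
boils down to something like this (my own minimal version, not Aaron's
exact code):

#include <QByteArray>
#include <QDataStream>
#include <QIODevice>

int main()
{
    QByteArray data;
    QDataStream stream(&data, QIODevice::WriteOnly);

    const qint64 countPos = stream.device()->pos(); // where the count lives
    stream << (quint32)0;                           // placeholder count

    quint32 count = 0;
    for (int i = 0; i < 1000; ++i) {                // stand-ins for real entries
        stream << (qint32)i;
        ++count;
    }

    const qint64 end = stream.device()->pos();
    stream.device()->seek(countPos); // jump back to the header...
    stream << count;                 // ...patch in the real count...
    stream.device()->seek(end);      // ...and return to the end

    return 0;
}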
>
> --
> Aaron J. Seigo