[KPhotoAlbum] More thumbnail investigations

Robert Krawitz rlk at alum.mit.edu
Sat May 19 15:31:15 BST 2018


On Sat, 19 May 2018 15:53:04 +0200, Johannes Zarl-Zierl wrote:
> Hi Robert,
>
> Do you know how the improvements interact with high latency media
> (i.e. network mounts)?

No, and I have no convenient way to test it.  I suppose I could set up
an NFS server somewhere to try it.

I think it will in large part depend on the performance
characteristics of the network medium and the operational behavior of
the filesystem.  Network filesystems aren't always especially high
latency per se.  A local 1Gb or 10Gb network can have latency in the
sub-1ms region and will have ample bandwidth, but in addition to
having two I/O paths to consider (client-server and server-disk),
there's also the problem of maintaining consistency.  Local filesystems
don't have a problem maintaining consistency between the storage
medium and the in-memory state, because they know that nothing
else is touching the data.  In the case of network filesystems, the
client may or may not have any assurance that the state of the
filesystem hasn't changed behind its back.  This is particularly
problematic in the case of stateless filesystems.

My changes will likely reduce the amount of bulk data transfer (since
the normal state of affairs is that the image files won't change
behind the user's back), but there will still be a lot more network
traffic than with a local disk, because the client has to keep
checking with the server whether its cached data is still valid.
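
To make that concrete: the actual revalidation happens in the NFS
client, but at the application level a "cheap" validity check is just
comparing a cached size and mtime against a fresh stat -- and each
such check is still a metadata round-trip to the server.  A purely
illustrative sketch (not KPhotoAlbum's actual cache logic):

#include <QDateTime>
#include <QFileInfo>
#include <QString>

// Illustrative only: can we trust a cached MD5/thumbnail without
// re-reading the file?  On a network mount each of these attribute
// lookups is still a round-trip (stat/GETATTR), even if nothing changed.
bool cachedDataStillValid(const QString &fileName,
                          qint64 cachedSize,
                          const QDateTime &cachedMTime)
{
    const QFileInfo info(fileName);
    return info.exists()
        && info.size() == cachedSize
        && info.lastModified() == cachedMTime;
}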

What I haven't done -- and what would really help network filesystem
users -- is have the scout thread slurp the files in and then use
those in-memory buffers for all subsequent operations (calculating
MD5 checksums, reading EXIF data, and building thumbnails).  That
would be a much more complex proposition.
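
Roughly, the idea is to read each file exactly once and feed every
consumer from the same buffer.  A minimal sketch in plain Qt -- the
function name, thumbnail size, and error handling here are
illustrative, not the actual KPhotoAlbum code:

#include <QByteArray>
#include <QCryptographicHash>
#include <QFile>
#include <QImage>
#include <QString>

// Read the file once; everything downstream works from the buffer.
bool processImageOnce(const QString &fileName)
{
    QFile file(fileName);
    if (!file.open(QIODevice::ReadOnly))
        return false;

    // The only bulk transfer -- on a network mount, the only big read.
    const QByteArray data = file.readAll();

    // MD5 from the buffer, no second pass over the file.
    const QByteArray md5 =
        QCryptographicHash::hash(data, QCryptographicHash::Md5).toHex();

    // Thumbnail from the same buffer; EXIF extraction could be fed
    // from it in the same way.
    const QImage image = QImage::fromData(data);
    const QImage thumbnail = image.scaled(256, 256, Qt::KeepAspectRatio,
                                          Qt::SmoothTransformation);

    return !md5.isEmpty() && !thumbnail.isNull();
}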

The bottom line is that this will have to be tested by more users.

> On 19 May 2018 at 05:24:41 CEST, Robert Krawitz <rlk at alum.mit.edu> wrote:
>>On Fri, 18 May 2018 19:11:14 -0400 (EDT), Robert Krawitz wrote:
>>> On Fri, 18 May 2018 23:45:31 +0200, Johannes Zarl-Zierl wrote:
>>>> On Monday, 14 May 2018 at 14:41:43 CEST, Robert Krawitz wrote:
>>>>> The best solution would be to generate thumbnails upon image load
>>>>> for images up to a certain size.  That would combine nicely with
>>>>> the MD5 code, which can also profit from having the entire file
>>>>> (since the underlying crypto code in Qt only does 16K I/O ops).
>>>>> We could always postpone the thumbnail generation for really big
>>>>> files (and files that need load methods other than JPEG or
>>>>> thumbnail extraction from RAW) to the end.
>>>>
>>>> My gut feeling is that we do read image files too many times (at
>>>> least 3 times for exif, thumbnails, md5) and without optimizing for
>>>> cache-friendliness.
>>>
>>> We do indeed.  We may read them four times; I'm not certain.  I've
>>> actually added a fourth one -- a scout thread (actually, I'm finding
>>> that two scouts work best, but we can tune it) that slurps the data
>>> into RAM, so the other reads are satisfied by buffering (and I've
>>> put in a protocol so the scout thread doesn't get too far ahead).
>>> Combining thumbnail building with everything else helps too.
>>
>>So I did some more performance measurement, and found that one scout
>>thread actually works best.  I also tuned the I/O sizes for both the
>>scout thread and the MD5 checksumming.
>>
>>There turned out to be one more subtle (but quite significant)
>>performance issue in the new image loading code; it was trying to
>>compute MD5 checksums on all "modified" filenames, which can be
>>expensive if you have a lot of suffix substitutions; the overhead was
>>on the order of 25-33% on both SSD and hard disk.  With that (and
>>proper I/O tuning), I'm now getting the kind of I/O performance I
>>expect on the
>>hard disk (95 MB/sec or so with 100-110 IO/sec when reading 10 MB
>>image files).  The hard disk maxes out around 115 MB/sec, but that
>>needs sustained streaming I/O.  I'm getting about 350 MB/sec off the
>>SSD, but that appears to be partly CPU limited on my system; if I turn
>>off thumbnail building I get about 400-420 MB/sec (the peak is about
>>550 MB/sec, and I've gotten close to that with sufficient threading).
>>With an NVMe SSD I'd probably get a little better performance but not
>>enough to matter.  Being able to load 10800 images in 4'30" is quite
>>satisfactory (it's about 16'20" on hard disk).
>>
>>I'm pretty confident now (and I'm going to be preparing to push this
>>code this weekend) that the image loading is pretty close to what
>>we're going to get on a hard disk system; it would need some pretty
>>fancy footwork to do better on an SSD.
>>
>>>>> This work may not be entirely trivial, but it could have a pretty
>>>>> big payoff when loading files.
>>>>
>>>> I've shied away from tackling this issue because of the complexity
>>>> of the code it touches.
>>>
>>> It's pretty complex code, to be sure, but this is the very first
>>> thing people see (how fast does it read my photos, and how fast can
>>> I skim through the thumbnails?), and if you have a lot of images,
>>> it's very important from a workflow perspective.
>>
>>I'm going to try the thumbnail rebuild thing overnight; I'm curious
>>whether some of my other changes are having a significant impact
>>there.
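
As a footnote to the 16K point quoted above: even without keeping the
whole file in memory, the hashing loop can be fed much larger reads.
A rough sketch -- the 1 MB chunk size here is illustrative, not the
value I actually settled on:

#include <QByteArray>
#include <QCryptographicHash>
#include <QFile>
#include <QString>

// Hash a file with an explicit, tunable read size instead of letting
// the hashing code drive the I/O in small chunks.
QByteArray md5WithLargeReads(const QString &fileName,
                             qint64 chunkSize = 1024 * 1024)
{
    QFile file(fileName);
    if (!file.open(QIODevice::ReadOnly))
        return QByteArray();

    QCryptographicHash hash(QCryptographicHash::Md5);
    while (!file.atEnd())
        hash.addData(file.read(chunkSize));

    return hash.result().toHex();
}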


-- 
Robert Krawitz                                     <rlk at alum.mit.edu>

***  MIT Engineers   A Proud Tradition   http://mitathletics.com  ***
Member of the League for Programming Freedom  --  http://ProgFree.org
Project lead for Gutenprint   --    http://gimp-print.sourceforge.net

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton


