[Kde-pim] SizeThreshold=32768 results (was: Re: A cache is not a config, a config is not a cache)

Martin Steigerwald martin at lichtvoll.de
Fri Jan 23 14:35:22 GMT 2015


On Friday, 23 January 2015, 10:13:51 Martin Steigerwald wrote:
> On Wednesday, 21 January 2015 11:46:07 CEST, Martin Steigerwald wrote:
> > Hi!
> > 
> > Considering:
> > 
> > [Akonadi] [Bug 338402] File system cache is inneficient : too many
> > file per directory
> > 
> > Bug 332013 - NFS with NetApp FAS: please split payload files in
> > file_db_data into several directories to avoid reaching maxdirsize
> > limit on Ontap / WAFL filesystem
> > 
> > Bug 341884 - dozens of duplicate mails in
> > ~/.local/share/akonadi/file_db_data
> > 
> > and maybe others, I wonder about caching:
> > 
> > 
> > Surely caching 7 GB of an IMAP account that, according to Outlook Web
> > Access, holds 38.4 GiB (I doubt it is really that much; I don't know
> > how Exchange accounts for space, and it certainly didn't use that much
> > space on Zimbra), without the user having requested offline access,
> > seems over the top to me. Especially when that is done in 500,000+
> > files in a single file_db_data directory on the local disk.
> > 
> > But now I told KMail to download all messages for offline use (the
> > former disconnected IMAP functionality), because I thought: if it
> > already caches 7 GiB of my IMAP account anyway, I won't worry about
> > the few additional GiB that full caching may add (I still don't think
> > that 38.4 GiB is just the mails; maybe it includes space used for
> > full-text indexing). This had the interesting effect that I can now
> > actually use KMail with Exchange at least a bit better. It is still
> > not good when Exchange drops IMAP connections or delays answers to
> > requests; Akonadi still cannot cope well with that, up to the point
> > where KMail does *nothing* anymore until I restart KMail and/or
> > Akonadi (sometimes it seems to need both).
> > 
> > 
> > So I think there are two needs for caching:
> > 
> > 1) Fast IMAP server (Dovecot!), fast network: cache far fewer mails
> > than Akonadi currently keeps in file_db_data. Maybe even skip caching
> > all the metadata; but well, if it is fast and done once, I won't mind.
> > 
> > 2) Crappy IMAP server (Exchange), slow network, or slow I/O on the
> > server: cache everything for offline usage.
> > 
> > What do you think?
> > 
> > 
> > Trojitá has a similar setting, distinguishing between fast, flat-rate
> > and expensive networks.
> > 
> > I think some way to adjust the behavior for a balance between
> > situations 1 and 2 makes sense. Icedove has this as well: you can
> > specify how many days of mail and the maximum size of a message to
> > download.
> 
> And I am sending this from Trojitá, as I managed to break my Akonadi
> setup completely by trying out a suggestion that may help with this
> situation.
> 
> *Beware: the following is partly a rant.*
> 
> And a *plea*. A plea for *simplicity*, for *robustness*, and for a clean
> separation of config and *cache*. Never ever lose a single *bit* of
> config on a cache corruption or loss. A cache is not solely under your
> control; it may be lost or corrupted for any number of reasons. Make it
> more failsafe.
> 
> I am not the only one who has wiped the Akonadi configuration and
> database more than once, and with reason, i.e. after arriving at a point
> where *nothing* else seemed to work. Never ever use database IDs in
> configuration files. Just don't.
> 
> I hope that after Akonadi and Baloo have re-indexed my maildir from
> scratch I will be able to use KMail again. I know I need to check or
> recreate all filter rules.
> 
> Forwarding via copy & paste as Trojitá does not seem to be able to
> forward inline:
> 
> From: Martin Steigerwald 
> List-Post: debian-kde at lists.debian.org
> To: debian-kde at lists.debian.org 
> Subject: Re: Possible akonadi problem?
> Date: Friday, 23 January 2015 09:54:10 CEST
> 
> On Thursday, 22 January 2015 22:11:37 CEST, Martin Steigerwald wrote:
> > On Friday, 23 January 2015, 07:17:13 Dmitry Smirnov wrote:
> >> Hi Brad,
> >> 
> >> On Fri, 2 Jan 2015 11:28:53 Brad Alexander wrote: ...
> > 
> > Thank you very much, I will try this. And see whether it helps with
> > these upstream bugs:
> > 
> > [Akonadi] [Bug 338402] File system cache is inneficient : too many
> > file per directory
> > 
> > Bug 332013 - NFS with NetApp FAS: please split payload files in
> > file_db_data into several directories to avoid reaching maxdirsize
> > limit on Ontap / WAFL filesystem
> > 
> > Bug 341884 - dozens of duplicate mails in
> > ~/.local/share/akonadi/file_db_data
> > 
> > 
> > I bet it may help with the first two, but the third one might be a
> > different bug.
> > 
> > 
> > Right now, locally, I only have between 4600 files (right after the
> > last akonadictl fsck) and 4900 files now in my private setup, which is
> > still mostly POP3 with just a 30-day-limited IMAP account for the
> > Fairphone and as a backup for when I accidentally mess something up
> > locally. It really seems Akonadi has been snappier since the fsck. I
> > used to have lots more files in there.
> > 
> > I also bumped innodb_buffer_pool_size but didn't see that much of a
> > change; what helped most with MySQL load was using the Akonadi git
> > 1.13 branch with its database performance improvements.
> > 
> > I now implemented the threshold size change you suggested and did
> > another fsck and vacuum.
[This means adding SizeThreshold=32768 to akonadiserverrc and running
> akonadictl fsck. I also ran akonadictl vacuum; the vacuum had completed
> by the time I was writing this, and mysqld was no longer rebuilding any
> .ibd files.]

With the SizeThreshold=32768 change I get a nice improvement for my work 
IMAP account on the laptop (the one I set to download all mails offline).
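For reference, the change itself is tiny. A minimal sketch, assuming the
server config lives at ~/.config/akonadi/akonadiserverrc and the key
belongs in the [%General] section (path and section name may differ
between Akonadi versions, so check your setup before applying):

```ini
# ~/.config/akonadi/akonadiserverrc (assumed path; sketch only)
[%General]
# Payloads up to 32 KiB are stored in the parttable inside the database;
# only larger payloads remain as separate files in file_db_data.
SizeThreshold=32768
```

After saving, akonadictl fsck migrates the existing small payload files
into the database, and akonadictl vacuum compacts the tables afterwards.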

Before:

ms@merkaba:~/.local/share/akonadi> du -sch db_data/akonadi/* | sort -rh | head -10
2,8G    total
2,6G    db_data/akonadi/parttable.ibd
245M    db_data/akonadi/pimitemtable.ibd
13M     db_data/akonadi/pimitemflagrelation.ibd
248K    db_data/akonadi/collectionattributetable.ibd
200K    db_data/akonadi/collectiontable.ibd
136K    db_data/akonadi/tagtable.ibd
120K    db_data/akonadi/tagtypetable.ibd
120K    db_data/akonadi/tagremoteidresourcerelationtable.ibd
120K    db_data/akonadi/tagattributetable.ibd

ms@merkaba:~/.local/share/akonadi> find file_db_data | wc -l
524917


ms@merkaba:~/.local/share/akonadi#130> /usr/bin/time -v du -sch file_db_data
7,0G    file_db_data
7,0G    total
        Command being timed: "du -sch file_db_data"
        User time (seconds): 2.14
        System time (seconds): 95.93
        Percent of CPU this job got: 29%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 5:35.47
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 33444
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 8079
        Voluntary context switches: 667562
        Involuntary context switches: 60715
        Swaps: 0
        File system inputs: 31509216
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

After the change and akonadictl fsck:

ms@merkaba:~/.local/share/akonadi> find file_db_data | wc -l ; du -sch db_data/akonadi/* | sort -rh | head -10
27
7,5G    total
7,3G    db_data/akonadi/parttable.ibd
245M    db_data/akonadi/pimitemtable.ibd
13M     db_data/akonadi/pimitemflagrelation.ibd
248K    db_data/akonadi/collectionattributetable.ibd
200K    db_data/akonadi/collectiontable.ibd
136K    db_data/akonadi/tagtable.ibd
120K    db_data/akonadi/tagtypetable.ibd
120K    db_data/akonadi/tagremoteidresourcerelationtable.ibd
120K    db_data/akonadi/tagattributetable.ibd

Yep, that's 27 files instead of >500,000 (just a week after the last 
fsck, which had reduced the count from 650,000+ to about 500,000 files).
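A quick way to sanity-check the new behavior: with SizeThreshold=32768,
only payloads larger than 32 KiB should remain as files in file_db_data
after the fsck. A hedged sketch using a scratch directory (the .part file
names here are made up for illustration, not real Akonadi payloads):

```shell
# Simulate the threshold check on a scratch directory rather than the
# real file_db_data; the file names are hypothetical.
dir=$(mktemp -d)
head -c 1024  /dev/zero > "$dir/small.part"   # 1 KiB: below the threshold
head -c 65536 /dev/zero > "$dir/large.part"   # 64 KiB: above the threshold
# Count files strictly larger than 32 KiB -- these would stay on disk:
find "$dir" -type f -size +32k | wc -l        # prints 1
rm -rf "$dir"
```

Running the same find against the real ~/.local/share/akonadi/file_db_data
should roughly match the handful of large payload files reported above.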

After a nice vacuuming I even get:

ms@merkaba:~/.local/share/akonadi> find file_db_data | wc -l ; du -sch db_data/akonadi/* | sort -rh | head -10
27
6,5G    total
6,2G    db_data/akonadi/parttable.ibd
245M    db_data/akonadi/pimitemtable.ibd
13M     db_data/akonadi/pimitemflagrelation.ibd
248K    db_data/akonadi/collectionattributetable.ibd
200K    db_data/akonadi/collectiontable.ibd
136K    db_data/akonadi/tagtable.ibd
120K    db_data/akonadi/tagtypetable.ibd
120K    db_data/akonadi/tagremoteidresourcerelationtable.ibd
120K    db_data/akonadi/tagattributetable.ibd

merkaba:/home/ms/.local/share/akonadi> du -sh file_db_data 
6,5M    file_db_data


I definitely prefer this over the original situation.

Original: 2.8 GiB database + 7 GiB file_db_data.

Now: 6.5 GiB database + 6.5 MiB file_db_data, and more than 524,000 fewer 
files to consider for rsync and our enterprise backup software.

Let's see whether it brings a performance improvement as well, but for now 
I like this.

Thanks,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7
_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/


