Review of database aspect of Akonadi, Akonadi concepts and a master plan
Martin Steigerwald
martin at lichtvoll.de
Sun Mar 18 12:48:22 GMT 2018
Dear PIMsters,
Martin Steigerwald - 04.02.18, 21:53:
> In the kdepim-users thread "kmail sigh", with a report on unpredictable
> behavior of Akonadi, Pablo offered to review the database handling of
> Akonadi. I thought I would take the chance to do something productive about
> the repeated user reports of performance issues and erroneous behavior and
> asked Daniel and Sandro whether they would like to have such a
> review.
I would like to give an update on the progress we have made on this. First off,
we have all been quite busy; nonetheless, I have some news for you:
> Pablo is currently working on the review of the MySQL / MariaDB backend in
> the Phabricator tasks
>
> akonadi > MySQL ERD Review
> https://phabricator.kde.org/T7846
>
> akonadi > MySQL configuration settings
> https://phabricator.kde.org/T7874
Pablo, I lost track of the current work on that; would you be willing to give
us a short summary? I know you and Dan have been quite active on this :)
> But this also led to a longer communication off list, about a VM with
> KDEPIM for testing purposes and about the information Pablo would need for
> an effective review. During it, Daniel provided one gem of insight after
> another about Akonadi's concepts and about how to effectively improve it.
> After a short time of questions and answers it became obvious that this
> information would benefit others and that it makes sense to continue this
> in public. […]
[…]
> So look forward to mails from Dan about the concepts and inner workings of
> Akonadi, how the change recorder works, and about three, well, if I counted
> correctly, four major improvements of Akonadi:
Dan posted his text about the concepts and the different components and items
in Akonadi on the community wiki:
https://community.kde.org/KDE_PIM/Akonadi/Architecture
It is not yet fully complete; some bits are missing in "How the whole
thing works together". Dan or I will happily update you when this is complete
as well.
Below I follow up with some information on the major tasks that would
considerably improve the overall KDEPIM on Akonadi experience, which I put
together from various mails – I added links to Phabricator tasks as far as I
am aware of them.
> 1. rewriting the entire indexing infrastructure
Dan has already worked on this quite a lot; let me quote him on the current
state:
> One part of the fix is rewriting the entire indexing infrastructure, which
> I've been slowly working on for the past half a year or so until I got
> stuck on some issues and kinda lost interest. I should probably try to pick
> up the work again at some point. The new indexing mechanism will reduce the
> IO considerably by indexing the data in the Resource when it's sending the
> data to the server (via ItemCreateJob); the server then only writes the
> indexed data into the Xapian database - no more indexing agent that triggers
> agent-server-resource-server-agent roundtrips for each Item with over two
> dozen SQL queries for each one.
Dan tracks the progress of this one in Phabricator already:
Make Indexing Great Again
https://phabricator.kde.org/T7014
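To make the quoted description a bit more tangible, here is a minimal sketch of
where this would hook in, assuming the KF5 Akonadi client API: a resource
already hands new data to the server with an ItemCreateJob, and under the new
plan the resource would index the payload at exactly this point instead of the
indexing agent fetching the item back afterwards. The helper function and its
names are made up for illustration.

    #include <AkonadiCore/Collection>
    #include <AkonadiCore/Item>
    #include <AkonadiCore/ItemCreateJob>
    #include <QByteArray>
    #include <QString>

    // Hypothetical helper as a resource might use it today when handing a
    // new mail over to the Akonadi server. Under the plan quoted above, the
    // resource would index the payload right here and ship the index data
    // along with the job, so no indexing agent has to fetch the item again.
    void createMailItem(const Akonadi::Collection &collection,
                        const QByteArray &rfc822Data)
    {
        Akonadi::Item item;
        item.setMimeType(QStringLiteral("message/rfc822"));
        item.setPayloadFromData(rfc822Data);
        // The job deletes itself once it has finished.
        new Akonadi::ItemCreateJob(item, collection);
    }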
> 2. notification payloads, so that change notifications already have the
> changed data inside.
Dan describes this task as follows:
> Second part of the fix is notification payloads - sending the entire Items
> as part of change notifications so that every client that gets notified
> about an Item change does not need to query Akonadi for the actual Item
> but gets it right away. If you have 5 clients receiving the notifications,
> that's 5 ItemFetchJobs and at least 15 SQL queries on the major tables (3
> per client). We usually have most of the data already in memory when doing
> the change on the server, so just sending the data and the payload to the
> clients as part of the notification would be a massive speed-up and would
> ease up on SQL.
I did not find a Phabricator task for this one yet. I will ask Dan about it
and look into creating one if it's not there yet.
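To illustrate where those per-client ItemFetchJobs come from, here is a minimal
sketch of a client watching mail changes, again assuming the KF5 Akonadi client
API; the function name is made up. The Monitor's fetch scope asks for the full
payload, so today every change notification makes the Monitor fetch the item
from the server behind the scenes; that is the roundtrip which notification
payloads would remove.

    #include <AkonadiCore/Item>
    #include <AkonadiCore/ItemFetchScope>
    #include <AkonadiCore/Monitor>
    #include <QByteArray>
    #include <QObject>
    #include <QSet>

    // Minimal sketch of what a notified client does today.
    void watchMail(QObject *parent)
    {
        auto monitor = new Akonadi::Monitor(parent);
        monitor->setMimeTypeMonitored(QStringLiteral("message/rfc822"));
        // Asking for the full payload means the Monitor runs an
        // ItemFetchJob against the server for every change notification.
        monitor->itemFetchScope().fetchFullPayload(true);

        QObject::connect(monitor, &Akonadi::Monitor::itemChanged, parent,
                         [](const Akonadi::Item &item, const QSet<QByteArray> &) {
            // The item carries its payload here, but only because it was
            // fetched from the server again, once per notified client.
            if (item.hasPayload()) {
                // ... update the view, cache, etc. ...
            }
        });
    }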
> 3. foreign payloads, for example telling KMail to directly access the mail
> in the maildir.
This one requires the implementation of the first two changes in order to work
reliably and not to lose any mails – thanks to the wonders of an asynchronous
multi-process architecture. As far as I understand, this change mostly helps
with Maildir. I am not yet sure whether it could also be used to access mails
of an IMAP account without stuffing them into the cache, and whether that
would be beneficial.
Dan describes it as:
> Third part of the fix are foreign payloads - in other words the Akonadi
> database directly referring to the maildir files, instead of having copies
> of the maildir emails in file_db_data. As it is now, when you open an email
> in KMail, Akonadi checks the DB, talks to the resource, the resource finds
> the maildir message and uploads it to Akonadi, which copies it to
> file_db_data and sends it to the client, which then reads it from
> file_db_data (and the copy is eventually removed by Akonadi after some
> timeout), which probably also triggers the Indexing Agent. With foreign
> payloads Akonadi would simply have a path to the actual maildir file, so
> that KMail would access the file directly in the maildir, saving at least
> another dozen or so SQL queries and massively reducing IO and copying (and
> disk-space waste).
It's also in Phabricator already:
[Akonadi] Foreign payload
https://phabricator.kde.org/T630
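Just to illustrate what this would mean on the client side: with a foreign
payload the database would hand KMail a file path, and reading the message
boils down to a plain file read, instead of the copy chain through
file_db_data described above. A minimal sketch with a made-up path:

    #include <QByteArray>
    #include <QFile>
    #include <QString>

    // Read an RFC 822 message straight from the maildir file the database
    // points at - no copy in file_db_data, no roundtrip through the server.
    QByteArray readMaildirMessage(const QString &path)
    {
        QFile file(path);
        if (!file.open(QIODevice::ReadOnly)) {
            return QByteArray();
        }
        return file.readAll();
    }

    // e.g. readMaildirMessage(QStringLiteral(
    //          "/home/user/Maildir/cur/1521370000.12345.example:2,S"));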
Last but not least we have:
> 4. and server-side change recording, which, as Dan will tell you, fixes
> everything
Let me share Dan's description of this feature with you:
> It's a major new change that will save our souls, this planet and possibly
> the entire universe.
>
> It's related to how Akonadi solves the problem when you change something
> (mark an email as read etc.) but the IMAP resource is offline, so it can't
> upload the change to the IMAP server. Right now the Resource remembers the
> change (this is called "change recording") and when it goes online it
> "replays" the change to the IMAP server.
>
> Server-side change recording is a major plan to record the changes on the
> Akonadi Server and have the Resource request those changes when it can
> replay them. This, among other things, will allow changes to be replayed
> again when they failed to be replayed the first time, as well as keep
> consistency between the notification and the actual state of the changed
> Entity.
So this is also the fix for the following: if a resource fails to replay a
change, you end up with "item has no rid", which means the item exists only
inside the database and nowhere else. So this really has an impact on the
consistency between Akonadi as a cache and the resources it manages.
This is:
[Akonadi] Server-side change recording
https://phabricator.kde.org/T638
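For context, this is roughly what the resource side of change replay looks
like today, as far as I understand it. A minimal sketch assuming the KF5
ResourceBase API (the class name and the omitted upload code are made up, and
the exact include paths may differ between versions): the change recorder
inside the resource hands the recorded change to itemChanged() when the
resource is able to replay it, and the resource confirms with
changeCommitted(). If that confirmation never happens, the change is stuck,
which is exactly what server-side change recording is meant to fix.

    #include <AkonadiAgentBase/AgentBase>
    #include <AkonadiAgentBase/ResourceBase>
    #include <AkonadiCore/Item>
    #include <QByteArray>
    #include <QSet>

    // Sketch of a resource replaying a recorded change. The mandatory
    // overrides (retrieveCollections() etc.) are omitted for brevity.
    class ExampleMailResource : public Akonadi::ResourceBase,
                                public Akonadi::AgentBase::Observer
    {
    public:
        explicit ExampleMailResource(const QString &id)
            : Akonadi::ResourceBase(id) {}

    protected:
        void itemChanged(const Akonadi::Item &item,
                         const QSet<QByteArray> &parts) override
        {
            Q_UNUSED(parts);
            // ... push the change to the backend, e.g. the IMAP server ...

            // Confirm that the change was replayed. If this is never
            // reached, the recorded change stays pending on the resource.
            changeCommitted(item);
        }
    };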
> The good news is that these four changes have the potential to remove the
> reason for the majority of all KDEPIM related bug reports. And some of these
> changes are partly done already.
>
> The challenge is: these changes are big, scary and have a good potential for
> regressions. But above all, they need time. A lot of time for one freelance
> KDEPIM developer alone.
And of course, all of what I wrote does not yet solve the above issue. It's
still a huge effort and we appreciate any help on these topics. The thing with
this is: I am not sure whether it's feasible to divide each of those tasks
into subtasks in a way that would allow volunteers to help with parts of the
implementation. I will be asking Dan about this as well.
Of course, these four changes aren't the only ways to improve Akonadi, but
from what I understand so far, these four would have a really huge impact.
Thank you,
--
Martin