[Kde-pim] "Detected inconsistency in local cache"

Christian Mollekopf chrigi_1 at fastmail.fm
Fri Sep 4 08:45:58 BST 2015


On Thursday 03 September 2015 16.00:20 David Faure wrote:
> Hi Christian,
> 

Hi David,

> (one year later ...)
> 

sadly =/

> On Tuesday 23 September 2014 10:08:10 Christian Mollekopf wrote:
> > On Monday 22 September 2014 22.37:52 David Faure wrote:
> > > akonadi_imap_resource_0(19688)/kdepimlibs (kimap) RetrieveItemsTask::startRetrievalTasks: Starting retrieval for "INBOX"
> > > akonadi_imap_resource_0(19688)/kdepimlibs (kimap) RetrieveItemsTask::onFinalSelectDone: Starting message retrieval. Elapsed(ms): 70
> > > akonadi_imap_resource_0(19688)/kdepimlibs (kimap) RetrieveItemsTask::onFinalSelectDone: MessageCount: 215 Local message count: 190
> > > akonadi_imap_resource_0(19688)/kdepimlibs (kimap) RetrieveItemsTask::onFinalSelectDone: UidNext: 6346 Local UidNext: 6322
> > > akonadi_imap_resource_0(19688)/kdepimlibs (kimap) RetrieveItemsTask::onFinalSelectDone: HighestModSeq: 0 Local HighestModSeq: 0
> > > akonadi_imap_resource_0(19688) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 215 Local: 190
> > > akonadi_imap_resource_0(19688) RetrieveItemsTask::onFinalSelectDone: Refetching complete mailbox.
> > > 
> > > Can you explain what this means? The fact that I only had 190 emails
> > > locally and there were 215 on the server isn't an inconsistency, it's
> > > simply the result of not being online for one day. Surely that's not
> > > reason enough to refetch the mailbox from scratch?
> > 
> > The algorithm looks at:
> > maxAvailableMessages = remoteUidNext - localUidNext = 6346 - 6322 = 24
> > localCount = 190
> > remoteCount = 215
> > 
> > * case A: (localCount + maxAvailableMessages == remoteCount) => only new
> > messages arrived, so we fetch just the new ones and sync the flags for the
> > rest.
> > 
> > * case B: (localCount + maxAvailableMessages < remoteCount) => inconsistency
> > detected: because locally deleted items are always replayed to the server
> > first, this should never happen. It *does* happen if we failed to replay a
> > removal to the server, which is what you're running into.
> > 
> > * case C: (localCount + maxAvailableMessages > remoteCount) => messages were
> > removed on the server, and potentially new ones arrived as well. We fetch
> > the new messages in the interval localUidNext:remoteUidNext and fetch flags
> > for the rest to detect the removed messages.
> > 
> > You are running into case B. Unfortunately I don't know a better way of
> > recovering from a failure to replay a message removal, and I think that
> > failure is the only thing we can actually fix. It is absolutely necessary
> > that we catch case B; otherwise we end up fetching flags-only for the
> > missing messages, and items with flags but no payload reappear in Akonadi.
> > 
> > If you have any ideas on how to improve the algorithm, please let me know.
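
To make the three cases concrete, the decision boils down to something like
the following. This is a minimal, self-contained sketch with illustrative
names, not the actual RetrieveItemsTask code:

// Sketch of the case selection described above; names are made up.
#include <iostream>

enum SyncStrategy {
    FetchNewAndSyncFlags,     // case A: only new messages arrived
    RefetchCompleteMailbox,   // case B: inconsistency, cache can't be trusted
    FetchNewAndDetectRemovals // case C: messages were removed on the server
};

SyncStrategy chooseStrategy(long localCount, long remoteCount,
                            long localUidNext, long remoteUidNext)
{
    // Upper bound on how many messages can have arrived since the last
    // sync: every new message consumes at least one UID.
    const long maxAvailableMessages = remoteUidNext - localUidNext;

    if (localCount + maxAvailableMessages == remoteCount)
        return FetchNewAndSyncFlags;        // case A
    if (localCount + maxAvailableMessages < remoteCount)
        return RefetchCompleteMailbox;      // case B
    return FetchNewAndDetectRemovals;       // case C
}

int main()
{
    // Your numbers: 190 + (6346 - 6322) = 214 < 215 => case B.
    std::cout << (chooseStrategy(190, 215, 6322, 6346)
                  == RefetchCompleteMailbox) << std::endl; // prints 1
}

With your numbers the sum comes out one short of the server count, which is
exactly why the resource concludes the cache is missing something and falls
back to refetching the whole mailbox.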
> 
> Well, I assume the problem isn't the algorithm, but the root cause for the
> difference in the number of messages. I don't really understand all this,
> but I just upgraded my wife's computer to kdepim 4.14.9 and to the
> akonadi_kolab_resource, and this message showed up *442* times just today
> on her machine!
> 

Yep, that sucks.

> 14:46:48 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 47 Local: 45
> 14:46:49 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 50 Local: 49
> 14:46:49 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 194 Local: 193
> 14:46:50 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 164 Local: 155
> 14:46:51 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 719 Local: 607
> 14:46:55 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 444 Local: 441
> 14:46:58 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 853 Local: 750
> 14:51:42 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 853 Local: 750
> 14:51:46 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 444 Local: 441
> 14:51:48 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 719 Local: 607
> 14:51:51 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 164 Local: 155
> 14:51:52 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 194 Local: 193
> 14:51:53 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 50 Local: 49
> 14:51:54 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 47 Local: 45
> 14:51:55 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 811 Local: 641
> 14:56:43 akonadi_kolab_resource_0(2616) RetrieveItemsTask::onFinalSelectDone: Detected inconsistency in local cache, we're missing some messages. Server: 811 Local: 641
> 
> This leads to a lot of akonadi activity and notifications, which keeps
> plasma-desktop busy (100% CPU) in the calendar applet, making plasma
> unusable for entire blocks of 10s, making my wife yell at me :-)
> 

Wow. The activity should mostly be on the resource and mysql; I'm not sure
why you're getting so many notifications.

> One thing's for sure, this debug output should include the collection number
> or name. 

True.

> But apart from that .... can you check if you see the issue on a
> KDE4 system? Does any other user of the kolab resource see this as well?
> What can be done to debug it?
> 

Your system consistently has more messages on the server than locally, so the
most likely cause is that this happens in groupware folders and the kolab
resource fails to parse certain objects (as indicated by the messages below).

You can try synchronizing individual collections to figure out where the
error happens, and then look at the content of the offending MIME messages.
The IMAP UID is the remote id of the item, so you could e.g. set up a plain
IMAP resource for the same account to look at the raw messages.
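
If you want to script that comparison, something along these lines should
print the remote ids (i.e. the IMAP UIDs) of all items in one collection.
This is an untested sketch against the Akonadi client API, with a placeholder
collection id you'd have to look up in akonadiconsole first:

// Untested sketch: print the Akonadi id and remote id (= IMAP UID) of every
// item in one collection, to compare against the UIDs on the server.
#include <QCoreApplication>
#include <QDebug>
#include <Akonadi/Collection>
#include <Akonadi/Item>
#include <Akonadi/ItemFetchJob>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // 42 is a placeholder; use the real collection id from akonadiconsole.
    Akonadi::ItemFetchJob job(Akonadi::Collection(42));
    if (!job.exec()) {
        qWarning() << "Fetch failed:" << job.errorString();
        return 1;
    }
    foreach (const Akonadi::Item &item, job.items()) {
        qDebug() << item.id() << item.remoteId();
    }
    return 0;
}

Any UID that exists on the server but is missing from that list belongs to a
message the resource dropped, which is where I'd start looking.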

Regarding the fix: the kolab resource is supposed to create an error item for
every message it fails to parse, precisely to avoid this problem. An error
item has the required type (mail, calendar, todo, ...), but contains a dummy
message ("We failed to parse this event ..." or so). So I'm not sure how that
situation can persist; it should fix itself after a single sync, which it
obviously doesn't for you.
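
The idea is roughly the following (a simplified sketch for the mail case, not
the actual kolab resource code):

// Simplified sketch of the "error item" idea: when parsing fails, hand
// Akonadi a placeholder payload of the right type instead of dropping the
// message, so local and remote counts stay in sync.
#include <QString>
#include <Akonadi/Item>
#include <KMime/Message>

Akonadi::Item makeErrorItem(const QString &remoteId)
{
    KMime::Message::Ptr msg(new KMime::Message);
    msg->subject()->fromUnicodeString(
        QLatin1String("Error: failed to parse Kolab object"), "utf-8");
    msg->contentType()->setMimeType("text/plain");
    msg->setBody("We failed to parse this object. The original is still on the server.");
    msg->assemble();

    Akonadi::Item item;
    item.setRemoteId(remoteId); // keep the IMAP UID as the remote id
    item.setMimeType(KMime::Message::mimeType());
    item.setPayload(msg);
    return item;
}

Because the error item keeps the remote id, the counts line up again on the
next sync and case B should no longer trigger for that folder.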

You can also enable more debug output by setting IMAP_TRACE=1 in the
resource's environment (e.g. "akonadictl stop", export IMAP_TRACE=1 in a
shell, then "akonadictl start" from that shell); perhaps that already gives
you a bit more insight.

Cheers,
Christian

> 
> This also happens a lot:
> 
> (10:29:12) kolabobject.cpp(353): message has no contents (we likely failed to parse it correctly)
> (10:29:12) kolabobject.cpp(205): MessageId: ""
> (10:29:12) kolabobject.cpp(206): Subject: ""
> 10:29:12 akonadi_kolab_resource_0(2616) KolabHelpers::checkForErrors: Error on item with id: -1 remote id: "66" : "message has no contents (we likely failed to parse it correctly)"
> 
> $ grep KolabHelpers::checkForErrors ~/.xsession-errors  | wc -l
> 2451
> 
> How to debug? It doesn't give me a collection number....

_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/

