[Kde-pim] Batch notifications vs. partial failures

Daniel Vrátil dvratil at kde.org
Wed Feb 17 09:18:54 GMT 2016


On Tuesday, February 16, 2016 11:13:27 PM CET Krzysztof Nowicki wrote:
> Hi,

Hi Chris,

> 
> I'm not sure if this is by design, but when handling itemMoved() I've
> noticed that there is no such thing like "abort" or "rollback". Akonadi
> seems to be moving items internally and after that notify the resource
> assuming that it will politely perform the desired actions in the backend.
> Regardless however if the resource cancels or commits the items are moved
> in the database. The only difference in committing seems to be the ability
> to set the remote id and revision. I haven't found any way to tell Akonadi
> to rollback the move in case something failed.

Although not perfect, the error handling (or lack thereof) is by design, and
there is a reason for it. The Akonadi server often will (and is designed to)
go out of sync with the remote storage - for example when you are offline. In
such a case any changes the user makes (moving emails, etc.) are immediately
stored in the database and a change notification describing the change is
sent to the resource. However, if the resource is offline (in terms of
internet connectivity), the change will be stored by the ChangeRecorder
inside ResourceBase and not delivered to the actual implementation. When the
resource goes online again, the ChangeRecorder will start replaying the
notification queue and passing it to the actual implementation. This can,
however, happen hours after the actual change, and you can even turn off
Akonadi in between without losing any of those changes. This obviously
prevents us from holding any kind of database transaction open for the
duration of the change, and unfortunately the server generates notifications
in a fire-and-forget manner, so we have no way of signaling back to the
server to "revert" the change.
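
To illustrate (just a rough sketch, not the actual EWS resource code): an
ObserverV3-based resource typically acknowledges or fails a replayed batch
move like this. MyResource and moveItemsOnServer() are made-up names;
itemsMoved(), changesCommitted() and cancelTask() are the real
ObserverV3/ResourceBase API, and the header paths assume a KF5-based Akonadi.

#include <AkonadiAgentBase/ResourceBase>   // header locations assume KF5 Akonadi
#include <AkonadiCore/Item>
#include <AkonadiCore/Collection>
#include <QString>

// Hypothetical resource; constructor and everything unrelated to moves is
// omitted here.
class MyResource : public Akonadi::ResourceBase,
                   public Akonadi::AgentBase::ObserverV3
{
public:
    void itemsMoved(const Akonadi::Item::List &items,
                    const Akonadi::Collection &source,
                    const Akonadi::Collection &destination) override
    {
        // By the time this is called the move is already recorded in the
        // Akonadi database, and the call may be replayed hours after the
        // user action, so there is no server-side transaction to roll back.
        if (moveItemsOnServer(items, source, destination)) {
            // Acknowledge the whole batch (remote IDs/revisions can be
            // updated on the items before committing).
            changesCommitted(items);
        } else {
            // Marks the task as failed; the Akonadi database keeps the
            // moved state until the affected folders are synced again.
            cancelTask(QStringLiteral("Failed to move items on the server"));
        }
    }

private:
    // Hypothetical backend call returning true on success.
    bool moveItemsOnServer(const Akonadi::Item::List &items,
                           const Akonadi::Collection &from,
                           const Akonadi::Collection &to);
};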

In the future, once we have server-side change recording, the server will be
able to revert changes that the resource has failed to apply to the remote
storage, making this system more robust.

> So the way I have done this for now is that in case some items fail to
> move, I'm issuing an ItemMoveJob to move them back to where they came
> from, which corresponds to a rollback scenario. It's not ideal, as an
> item could fail to move for various reasons, including an attempt to move
> an item that is no longer in EWS but due to some sync failure is still
> kept stale in the Akonadi database.
>
> Ideally I would want to check if the failed item still exists and only move
> it back if it does and delete it if it doesn't, but the more I think about
> it the more it smells like a nasty workaround for what should be done in a
> transactional way.

You generally don't need to do that, as the Item that has failed to move in
the backend will get synced on the next folder sync. This is a behavior that,
for example, the IMAP resource relies on.
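
Roughly (same disclaimers as the sketch above - finishMove() and the
succeeded/failed split are made up, while changesCommitted(),
synchronizeCollection() and cancelTask() are the real ResourceBase API), the
"commit the batch and let the sync fix it" policy could look like this:

// Hypothetical helper called after the backend reported per-item status.
void MyResource::finishMove(const Akonadi::Item::List &allItems,
                            const Akonadi::Item::List &failed,
                            const Akonadi::Collection &source,
                            const Akonadi::Collection &destination)
{
    if (failed.size() < allItems.size()) {
        // At least one item was moved on the server: accept the whole batch
        // in Akonadi and let a sync of the affected folders bring the failed
        // items back to where the server actually has them.
        changesCommitted(allItems);
        if (!failed.isEmpty()) {
            synchronizeCollection(source.id());
            synchronizeCollection(destination.id());
        }
    } else {
        // Nothing succeeded: fail the task and leave reconciliation to the
        // next folder sync.
        cancelTask(QStringLiteral("Moving items failed on the server"));
    }
}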

The only problem with this is newly created items, where we currently have no
way of trying to sync them again later if the initial creation on the server
fails. The item will then sit in Akonadi, but will never get synced to the
resource again (not even when changed).


Dan

> Regards
> Chris
> 
> On Wednesday, 10 February 2016 22:36:55 CET Krzysztof Nowicki wrote:
> > On Wednesday, 10 February 2016 19:00:28 CET Daniel Vrátil wrote:
> > > On Wednesday, February 10, 2016 12:50:53 PM CET Krzysztof Nowicki wrote:
> > > > Hi,
> > > 
> > > Hi!
> > 
> > Hi,
> > 
> > > > During implementation of the Akonadi Exchange EWS Resource I've
> > > > stumbled upon a dilemma on how to handle item moves/removals. EWS
> > > > supports requests that can handle multiple items, so implementing
> > > > this via the ObserverV3 interface would be the obvious choice.
> > > > However, when a number of items have been moved/removed using a
> > > > single request, the Exchange response contains status information
> > > > for each item, and it can happen that an operation will succeed for
> > > > one item but fail for another.
> > > 
> > > Indeed. This is currently a limitation of the API and the protocol.
> > > 
> > > > When studying the Akonadi Resource API I couldn't find any way to
> > > > report failures of single items and at the same time commit changes
> > > > of other ones. The only choice I see is to commit all item changes
> > > > or abort the whole operation. However, in case of a partial
> > > > success/failure this would cause Akonadi to go out of sync against
> > > > the server.
> > > 
> > > I think that Akonadi and EWS being out of sync is not a critical
> > > issue, although I would preferably take the reverse approach: if at
> > > least one of the items in the batch is successfully changed on the
> > > server, you succeed the entire batch in Akonadi. Then on the next
> > > sync the failed items will reappear in their original folders/state
> > > etc. Sucks, but better than nothing :)
> > 
> > Seems like a fair approach. A sync will actually happen instantly as the
> > move/delete operation will trigger the change event subscription, which
> > will trigger a resync of the affected folders.
> > 
> > > > Could you provide me any hints on how to resolve this?
> > > 
> > > I see two more obvious alternatives to the solution above; both suck:
> > > 
> > > 1) You use ObserverV2 without batch changes. This solves your
> > > problem, but performance-wise it sucks, especially for flag changes.
> > > 
> > > 2) You use ObserverV3 and, in case some of the items fail to store on
> > > the server, you manually revert them there and fail the entire batch
> > > in Akonadi. This way you keep Akonadi in sync with the server, but
> > > the cost of implementing this in the resource is, I think, too high.
> > 
> > ObserverV2 would not be an option as I intend to eventually migrate to
> > ObserverV4 in order to keep tags in the Exchange database.
> > 
> > > I'm currently working on a new notification system where there will
> > > be no batches, just a stream of per-item notifications that you will
> > > be able to selectively succeed or fail to process in the resource,
> > > and batch them locally just for the communication with the server.
> > 
> > Cool, that would be great in the long term.
> > 
> > Thanks for the hints.
> > 
> > Regards
> > Chris
> > 
> > > Cheers,
> > > Daniel
> > > 
> > > > Regards
> > > > Chris
> > > > 


-- 
Daniel Vrátil
www.dvratil.cz | dvratil at kde.org
IRC: dvratil on Freenode (#kde, #kontact, #akonadi, #fedora-kde)

GPG Key: 0x4D69557AECB13683
Fingerprint: 0ABD FA55 A4E6 BEA9 9A83 EA97 4D69 557A ECB1 3683