[Kde-pim] Batch notifications vs. partial failures

Daniel Vrátil dvratil at kde.org
Wed Feb 17 10:54:37 GMT 2016


On Wednesday, February 17, 2016 11:46:23 AM CET Krzysztof Nowicki wrote:
> On 2016-02-17 10:18:54, Daniel Vrátil <dvratil at kde.org> wrote:
> > Hi Chris,
> 
> Hi Daniel,
> 
> [snip]
> 
> > > So the way I have done it for now is that in case some items fail to
> > > move, I issue an ItemMoveJob to move them back to where they came
> > > from, which corresponds to a rollback scenario. It's not ideal, as an
> > > item could fail to move for various reasons, including an attempt to
> > > move an item that is no longer in EWS but, due to some sync failure,
> > > is still kept stale in the Akonadi database.
> > > 
> > > Ideally I would want to check whether the failed item still exists,
> > > move it back if it does and delete it if it doesn't, but the more I
> > > think about it the more it smells like a nasty workaround for what
> > > should be done in a transactional way.
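> > > 
> > > A minimal sketch of what I mean, assuming the EWS response tells me
> > > which items failed; failedItems and sourceCollection are just
> > > placeholder names for what the response and the original notification
> > > provide:
> > > 
> > > #include <AkonadiCore/Collection>
> > > #include <AkonadiCore/Item>
> > > #include <AkonadiCore/ItemMoveJob>
> > > #include <KJob>
> > > #include <QDebug>
> > > 
> > > void rollbackFailedMoves(const Akonadi::Item::List &failedItems,
> > >                          const Akonadi::Collection &sourceCollection,
> > >                          QObject *jobParent)
> > > {
> > >     // Move the items the server rejected back to where they came
> > >     // from; this only touches the local Akonadi database.
> > >     auto *job = new Akonadi::ItemMoveJob(failedItems, sourceCollection,
> > >                                          jobParent);
> > >     QObject::connect(job, &KJob::result, jobParent, [](KJob *job) {
> > >         if (job->error()) {
> > >             qWarning() << "Rollback move failed:" << job->errorString();
> > >         }
> > >     });
> > > }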
> > 
> > You generally don't need to do that, as the Item that has failed to move
> > in the backend will get synced on the next folder sync. This is behavior
> > that, for example, the IMAP resource relies on.
> 
> That's fair; however, a folder sync can get expensive if the folder
> contains a lot of items. In EWS there is a way to optimize this by using
> incremental syncs. This, however, relies on the sync state being identical
> on both sides, as the server will only send information about changes. If
> Akonadi goes out of sync there is no way to find out about it unless we
> ignore the sync state and do a full sync, which, as I said, can be
> expensive. This is why I'm doing all I can to avoid it.

I agree, full syncs are expensive, but they should only happen when the
local cache is inconsistent, which should not be very often, so a full sync
should rarely be needed. In most cases uploading changes to the server will
succeed and you can then just go with an incremental sync.

I don't know the EWS API, so I don't know whether this is possible, but one
trick we use in some resources is to store the timestamp of the last
collection sync in the collection's remoteRevision. In case of a failure -
like failing to update item flags - you would just update the failed item's
parent collection's remoteRevision to that item's timestamp. This way you
can still use incremental sync for the collection; you will just sync a
larger set, including the failed item.
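
Roughly like this - just a sketch against the KF5-era Akonadi client API;
the function name, jobParent and the per-item timestamp parameter are made
up, and an ISO string is only one possible way to encode the timestamp in
remoteRevision:

#include <AkonadiCore/Collection>
#include <AkonadiCore/CollectionModifyJob>
#include <KJob>
#include <QDateTime>
#include <QDebug>

// Rewind the parent collection's stored sync timestamp so that the next
// incremental sync also covers the item whose change failed to upload.
void rewindCollectionSyncState(const Akonadi::Collection &parent,
                               const QDateTime &failedItemTimestamp,
                               QObject *jobParent)
{
    Akonadi::Collection col = parent;
    col.setRemoteRevision(failedItemTimestamp.toUTC().toString(Qt::ISODate));

    auto *job = new Akonadi::CollectionModifyJob(col, jobParent);
    QObject::connect(job, &KJob::result, jobParent, [](KJob *job) {
        if (job->error()) {
            qWarning() << "Failed to update remoteRevision:"
                       << job->errorString();
        }
    });
}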

Dan


> 
> Regards
> Chris
> 
> > The only problem with this is with newly created items, where we
> > currently have no way of trying to sync them again later if the initial
> > creation on the server fails. The item will then sit in Akonadi, but
> > won't ever get synced to the resource again (not even when changed).
> > 
> > 
> > Dan
> > 
> > > Regards
> > > Chris
> > > 
> > > > On Wednesday, 10 February 2016 22:36:55 CET Krzysztof Nowicki wrote:
> > > > > On Wednesday, 10 February 2016 19:00:28 CET Daniel Vrátil wrote:
> > > > > > On Wednesday, February 10, 2016 12:50:53 PM CET Krzysztof Nowicki wrote:
> > > > > > Hi,
> > > > > 
> > > > > Hi!
> > > > 
> > > > Hi,
> > > > 
> > > > > > During implementation of the Akonadi Exchange EWS Resource I've
> > > > > > stumbled upon a dilemma on how to handle item moves/removals. EWS
> > > > > > supports requests that can handle multiple items, so implementing
> > > > > > this via the ObserverV3 interface would be the obvious choice.
> > > > > > However, when a number of items have been moved/removed using a
> > > > > > single request, the Exchange response contains status information
> > > > > > for each item, and it can happen that an operation will succeed
> > > > > > on one item but fail for another.
> > > > > 
> > > > > Indeed. This is currently a limitation of the API and the protocol.
> > > > > 
> > > > > > When studying the Akonadi Resource API I couldn't find any way
> > > > > > to report failures of single items while at the same time
> > > > > > committing the changes of the other ones. The only choice I see
> > > > > > is to commit all item changes or abort the whole operation.
> > > > > > However, in case of a partial success/failure this would cause
> > > > > > Akonadi to go out of sync with the server.
> > > > > 
> > > > > I think that Akonadi and EWS being out of sync is not a critical
> > > > > issue, although I would preferably take the reverse approach: if at
> > > > > least one of the items in the batch is successfully changed on the
> > > > > server, you succeed the entire batch in Akonadi. Then on the next
> > > > > sync the failed items will reappear in their original folders/state
> > > > > etc. Sucks, but better than nothing :)
> > > > 
> > > > Seems like a fair approach. A sync will actually happen instantly,
> > > > as the move/delete operation will trigger the change event
> > > > subscription, which will trigger a resync of the affected folders.
> > > > 
> > > > > > Could you provide me any hints on how to resolve this?
> > > > > 
> > > > > I see two more obvious alternatives to the solution above, both of
> > > > > which suck:
> > > > > 
> > > > > 1) You use ObserverV2 without batch changes. This solves your
> > > > > problem, but performance-wise it sucks, especially for flag
> > > > > changes.
> > > > > 
> > > > > 2) You use ObserverV3 (the batch hooks sketched below) and, in case
> > > > > some of the items fail to store on the server, you manually revert
> > > > > them there and fail the entire batch in Akonadi. This way you keep
> > > > > Akonadi in sync with the server, but the cost of implementing this
> > > > > in the resource is, I think, too high.
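> > > > > 
> > > > > For reference, the ObserverV3 batch hooks are roughly the overrides
> > > > > below; EwsResource is used here only as a placeholder class name
> > > > > and the include paths assume the KF5-era Akonadi headers:
> > > > > 
> > > > > #include <AkonadiAgentBase/ResourceBase>
> > > > > #include <AkonadiCore/Collection>
> > > > > #include <AkonadiCore/Item>
> > > > > #include <QByteArray>
> > > > > #include <QSet>
> > > > > 
> > > > > class EwsResource : public Akonadi::ResourceBase,
> > > > >                     public Akonadi::AgentBase::ObserverV3
> > > > > {
> > > > >     Q_OBJECT
> > > > > public:
> > > > >     // One notification per batch instead of one per item.
> > > > >     void itemsMoved(const Akonadi::Item::List &items,
> > > > >                     const Akonadi::Collection &source,
> > > > >                     const Akonadi::Collection &destination) override;
> > > > >     void itemsRemoved(const Akonadi::Item::List &items) override;
> > > > >     void itemsFlagsChanged(const Akonadi::Item::List &items,
> > > > >                            const QSet<QByteArray> &addedFlags,
> > > > >                            const QSet<QByteArray> &removedFlags) override;
> > > > > };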
> > > > 
> > > > ObserverV2 would not be an option as I intend to eventually migrate to
> > > > ObserverV4 in order to keep tags in the Exchange database.
> > > > 
> > > > > I'm currently working on a new notification system where there
> > > > > will be no batches, just a stream of per-item notifications that
> > > > > you will be able to selectively succeed or fail to process in the
> > > > > resource, and batch them locally just for the communication with
> > > > > the server.
> > > > 
> > > > Cool, that would be great in the long term.
> > > > 
> > > > Thanks for the hints.
> > > > 
> > > > Regards
> > > > Chris
> > > > 
> > > > > Cheers,
> > > > > Daniel
> > > > > 
> > > > > > Regards
> > > > > > Chris
> > > > > > 


-- 
Daniel Vrátil
www.dvratil.cz | dvratil at kde.org
IRC: dvratil on Freenode (#kde, #kontact, #akonadi, #fedora-kde)

GPG Key: 0x4D69557AECB13683
Fingerprint: 0ABD FA55 A4E6 BEA9 9A83 EA97 4D69 557A ECB1 3683