[Kde-pim] Batch notifications vs. partial failures

Krzysztof Nowicki krissn at op.pl
Wed Feb 10 21:36:55 GMT 2016


On Wednesday, 10 February 2016 19:00:28 CET, Daniel Vrátil wrote:
> On Wednesday, February 10, 2016 12:50:53 PM CET Krzysztof Nowicki wrote:
> > Hi,
> 
> Hi!

Hi,

> 
> > During implementation of the Akonadi Exchange EWS Resource I've stumbled
> > upon a dilemma on how to handle item moves/removals. EWS supports requests
> > that can handle multiple items, so implementing this via the ObserverV3
> > interface would be the obvious choice. However, when a number of items
> > have been moved/removed using a single request, the Exchange response
> > contains status information for each item, and it can happen that an
> > operation will succeed on one item but fail on another.
> 
> Indeed. This is currently a limitation of the API and the protocol.
> 
> > When studying the Akonadi Resource API I couldn't find any way to report
> > failures for individual items while still committing the changes of the
> > others. The only choice I see is to commit all item changes or abort the
> > whole operation. However, in case of a partial success/failure this would
> > cause Akonadi to go out of sync with the server.
> 
> I think that Akonadi and EWS being out of sync is not a critical issue,
> although I would prefer the reverse approach: if at least one of the items
> in the batch is successfully changed on the server, you succeed the entire
> batch in Akonadi. Then, on the next sync, the failed items will reappear in
> their original folders/state etc. Sucks, but better than nothing :)

Seems like a fair approach. A sync will actually happen almost instantly, as 
the move/delete operation will trigger the change event subscription, which 
will in turn trigger a resync of the affected folders.
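
For reference, the batch-commit decision could look roughly like this. This is 
a minimal self-contained sketch; the struct and function names are 
illustrative, not the actual EWS or Akonadi API:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical per-item result, mirroring the per-item status codes an
// EWS batch response carries (names are illustrative, not the real API).
struct ItemResult {
    int itemId;
    bool success;
};

// Decide whether to commit the whole batch in Akonadi: commit if at least
// one item was changed on the server. Items that failed on the server will
// reappear in their original folders/state on the next folder resync.
bool commitBatch(const std::vector<ItemResult> &results)
{
    return std::any_of(results.begin(), results.end(),
                       [](const ItemResult &r) { return r.success; });
}
```

So the batch is only aborted in Akonadi when every item in it failed on the 
server.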

> > Could you provide me with any hints on how to resolve this?
> 
> I see two more obvious alternatives to the solution above, both of which
> suck:
> 
> 1) You use ObserverV2 without batch changes. This solves your problem, but
> performance-wise it sucks, especially for flag changes.
> 
> 2) You use ObserverV3 and, in case some of the items fail to store on the
> server, you manually revert them there and fail the entire batch in Akonadi.
> This way you keep Akonadi in sync with the server, but the cost of
> implementing this in the resource is, I think, too high.
> 

ObserverV2 would not be an option, as I intend to eventually migrate to 
ObserverV4 in order to keep tags in the Exchange database.
> 
> I'm currently working on a new notification system where there will be no
> batches, just a stream of per-item notifications that you will be able to
> selectively succeed or fail to process in the resource and batch them
> locally just for the communication with server.
> 
Cool, that would be great in the long term.
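
If I understand the per-item model correctly, resource-side handling might 
look something like the sketch below: notifications arrive individually, get 
batched locally for a single server round-trip, and are then succeeded or 
failed one by one based on the per-item status in the response. All names 
here are hypothetical, since that API does not exist yet:

```cpp
#include <functional>
#include <vector>

// Hypothetical per-item notification: each one can be acknowledged or
// rejected individually. These names are illustrative only.
struct Notification {
    int itemId;
    std::function<void()> succeed;  // commit this item's change in Akonadi
    std::function<void()> fail;     // reject this item's change
};

// sendBatch stands in for the single EWS request covering all items; it
// returns one success flag per item, in the same order as the input ids.
void processBatch(std::vector<Notification> &batch,
                  const std::function<std::vector<bool>(const std::vector<int> &)> &sendBatch)
{
    std::vector<int> ids;
    for (const auto &n : batch)
        ids.push_back(n.itemId);

    const std::vector<bool> status = sendBatch(ids);
    for (std::size_t i = 0; i < batch.size(); ++i)
        (status[i] ? batch[i].succeed : batch[i].fail)();
}
```

That would give the resource EWS-style batching on the wire while keeping 
Akonadi's view of each item accurate.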

Thanks for the hints.

Regards
Chris
> 
> Cheers,
> Daniel
> 
> > Regards
> > Chris
> > 
> > _______________________________________________
> > KDE PIM mailing list kde-pim at kde.org
> > https://mail.kde.org/mailman/listinfo/kde-pim
> > KDE PIM home page at http://pim.kde.org/

