[Kde-pim] Batch notifications vs. partial failures

Krzysztof Nowicki krissn at op.pl
Tue Feb 16 22:13:27 GMT 2016


Hi,

I'm not sure if this is by design, but when handling itemMoved() I've noticed 
that there is no such thing as an "abort" or "rollback". Akonadi seems to 
move the items internally first and only then notify the resource, assuming 
that it will politely perform the desired actions in the backend. However, 
regardless of whether the resource cancels or commits, the items stay moved 
in the database. The only difference when committing seems to be the ability 
to set the remote id and revision. I haven't found any way to tell Akonadi to 
roll back the move in case something failed.
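
To illustrate, here's a minimal sketch of an ObserverV3-style itemsMoved() 
handler; moveItemsOnServer() is a hypothetical helper standing in for the 
actual EWS MoveItem request, and include locations may differ between 
Akonadi versions:

void EwsResource::itemsMoved(const Akonadi::Item::List &items,
                             const Akonadi::Collection &sourceCollection,
                             const Akonadi::Collection &destinationCollection)
{
    Q_UNUSED(sourceCollection)

    // All-or-nothing: the API offers no per-item failure reporting here.
    if (!moveItemsOnServer(items, destinationCollection)) {
        // Cancelling does NOT restore the items to the source collection;
        // they stay moved in the Akonadi database either way.
        cancelTask(QStringLiteral("EWS MoveItem request failed"));
        return;
    }
    // Committing merely lets the resource set the remote ids and revisions
    // of the moved items.
    changesCommitted(items);
}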

So the way I've handled this for now is that if some items fail to move, I 
issue an ItemMoveJob to move them back to where they came from, which 
corresponds to a rollback scenario. It's not ideal, as an item could fail to 
move for various reasons, including an attempt to move an item that is no 
longer in EWS but, due to some sync failure, is still kept stale in the 
Akonadi database.
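
In code the compensating move boils down to roughly this (a sketch; 
revertFailedMoves() is just an illustrative name, and how the failed items 
are collected from the EWS response is specific to the resource):

#include <AkonadiCore/ItemMoveJob>
#include <QDebug>

void EwsResource::revertFailedMoves(const Akonadi::Item::List &failedItems,
                                    const Akonadi::Collection &sourceCollection)
{
    // Compensating move: put the items that failed on the server back into
    // their original collection in Akonadi.
    auto *job = new Akonadi::ItemMoveJob(failedItems, sourceCollection, this);
    connect(job, &KJob::result, this, [](KJob *job) {
        if (job->error()) {
            // Even the "rollback" can fail; from here only the next folder
            // sync can reconcile Akonadi with the server state.
            qWarning() << "Failed to revert item move:" << job->errorString();
        }
    });
}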

Ideally I would check whether the failed item still exists on the server, 
move it back if it does and delete it if it doesn't, but the more I think 
about it the more it smells like a nasty workaround for what should be done 
in a transactional way.
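
That check-then-reconcile variant would look roughly like this, with 
ewsItemExists() being a hypothetical probe (e.g. an EWS GetItem request on 
the item's remote id):

#include <AkonadiCore/ItemDeleteJob>
#include <AkonadiCore/ItemMoveJob>

void EwsResource::reconcileFailedMove(const Akonadi::Item &item,
                                      const Akonadi::Collection &sourceCollection)
{
    if (ewsItemExists(item.remoteId())) {
        // The item is still on the server: move it back where it came from.
        new Akonadi::ItemMoveJob(item, sourceCollection, this);
    } else {
        // The item is gone from EWS: drop the stale copy from the Akonadi
        // database only.
        new Akonadi::ItemDeleteJob(item, this);
    }
}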

Regards
Chris

On Wednesday, 10 February 2016 22:36:55 CET, Krzysztof Nowicki wrote:
> On Wednesday, 10 February 2016 19:00:28 CET, Daniel Vrátil wrote:
> > On Wednesday, February 10, 2016 12:50:53 PM CET Krzysztof Nowicki wrote:
> > > Hi,
> > 
> > Hi!
> 
> Hi,
> 
> > > During the implementation of the Akonadi Exchange EWS Resource I've
> > > stumbled upon a dilemma on how to handle item moves/removals. EWS
> > > supports requests that can handle multiple items, so implementing this
> > > via the ObserverV3 interface would be the obvious choice. However, when
> > > a number of items have been moved/removed using a single request, the
> > > Exchange response contains status information for each item, and it can
> > > happen that an operation succeeds for one item but fails for another.
> > 
> > Indeed. This is currently a limitation of the API and the protocol.
> > 
> > > When studying the Akonadi Resource API I couldn't find any way to
> > > report failures for individual items while committing the changes of
> > > the others. The only choice I see is to commit all item changes or
> > > abort the whole operation. However, in case of a partial
> > > success/failure this would cause Akonadi to go out of sync with the
> > > server.
> > 
> > I think that Akonadi and EWS being out of sync is not a critical issue,
> > although I would preferably take the reverse approach: if at least one of
> > the items in the batch is successfully changed on the server, you succeed
> > the entire batch in Akonadi. Then on the next sync the failed items will
> > reappear in their original folders/state etc. Sucks, but better than
> > nothing :)
> Seems like a fair approach. A sync will actually happen instantly, as the
> move/delete operation will trigger the change event subscription, which
> will in turn trigger a resync of the affected folders.
> 
> > > Could you provide me any hints on how to resolve this?
> > 
> > I see two more obvious alternatives to the solution above, both suck:
> > 
> > 1) You use ObserverV2 without batch changes. This solves your problem, but
> > performance-wise it sucks, especially for flag changes.
> > 
> > 2) You use ObserverV3 and, in case some of the items fail to store on the
> > server, you manually revert them there and fail the entire batch in
> > Akonadi. This way you keep Akonadi in sync with the server, but the cost
> > of implementing this in the resource is, I think, too high.
> 
> ObserverV2 would not be an option, as I intend to eventually migrate to
> ObserverV4 in order to keep tags in the Exchange database.
> 
> > I'm currently working on a new notification system where there will be no
> > batches, just a stream of per-item notifications that you will be able to
> > selectively succeed or fail to process in the resource, batching them
> > locally just for the communication with the server.
> 
> Cool, that would be great in the long term.
> 
> Thanks for the hints.
> 
> Regards
> Chris
> 
> > Cheers,
> > Daniel
> > 
> > > Regards
> > > Chris


_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/

