[Kde-pim] fixing mbox - general Akonadi problem ?

Martin Koller kollix at aon.at
Wed Feb 12 17:22:40 GMT 2014


On Wednesday 12 February 2014 17:41:26 Daniel Vrátil wrote:

> When you delete a mail from MboxResource::itemRemoved() and there are more 
> items waiting for deletion in ChangeRecorder's pipeline, it's necessary to 
> modify RIDs of all emails affected by the compaction BEFORE changeProcessed() 
> is called. This should cause all other items pending deletion in 
> ChangeRecorder's pipeline to be refetched from Akonadi with updated RID.

I tried that, but it still fails: I try to update all the other moved entries with the code below,
where I retrieve the corresponding Akonadi Items via an ItemFetchJob. But during the job's
exec() event loop I just get the next call to MboxResource::itemRemoved() :-(

Is what I am doing here completely wrong?


    // change all remoteIds of the moved items (all items after the deleted one change their offset in the file)
    if ( !movedEntries.isEmpty() ) {
      qDebug() << "#movedEntries=" << movedEntries.count();
      Item::List movedItems;
      const QString colId = QString::number( mboxCollection.id() );
      const QString colRid = mboxCollection.remoteId();
      foreach (const KMBox::MBoxEntry::Pair &p, movedEntries) {
        Item movedItem;
        movedItem.setRemoteId( colId + QLatin1String("::") + colRid + QLatin1String("::") + QString::number( p.first.messageOffset() ) );
        movedItems << movedItem;
      }
      ItemFetchJob *fetchItemsJob = new ItemFetchJob( movedItems );
      fetchItemsJob->setCollection( mboxCollection );
      if ( !fetchItemsJob->exec() ) {
        cancelTask( i18n( "Could not fetch ids of items to be moved: %1", fetchItemsJob->errorString() ) );
        return;
      }

      qDebug() << "#movedEntries=" << movedEntries.count() << " fetched items=" << fetchItemsJob->items().count();
      Q_ASSERT( movedEntries.count() == fetchItemsJob->items().count() );
      for (int i = 0; i < movedEntries.count(); i++) {
        Item itemToMove = fetchItemsJob->items()[i];
        qDebug() << "item to move:" << itemToMove.remoteId() << itemToMove.id();
        const KMBox::MBoxEntry::Pair &p = movedEntries[i];
        itemToMove.setRemoteId( colId + QLatin1String("::") + colRid + QLatin1String("::") + QString::number( p.second.messageOffset() ) );
        qDebug() << "item to move new rid:" << itemToMove.remoteId();
        movedItems[i] = itemToMove;
      }

      changesCommitted( movedItems );   // is that correct ?
    }


> Porting MBox resource to ObserverV3 which supports batch deletions and flags 
> changes could help too. It should speed up the resource notably, because it 
> could do compaction after the entire batch is processed, preventing 
> ChangeRecorder from flushing and refetching the entire cache for pipelined 
> notifications after every single item being processed.

I will have a look at this once deletion finally works.

-- 
Best regards/Schöne Grüße

Martin
A: Because it breaks the logical sequence of discussion
Q: Why is top posting bad?

()  ascii ribbon campaign - against html e-mail 
/\  www.asciiribbon.org   - against proprietary attachments

Gift ideas, accessories, soaps, culinary goods: www.bibibest.at
_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/
