[Kde-pim] fixing mbox - general Akonadi problem?

Martin Koller kollix at aon.at
Wed Mar 19 21:18:39 GMT 2014


Now that my DAV resource fixes are in, I'm again looking into the mbox problems:

On Wednesday 12 February 2014 18:54:43 Daniel Vrátil wrote:
> On Wednesday 12 of February 2014 18:22:40 Martin Koller wrote:
> > On Wednesday 12 February 2014 17:41:26 Daniel Vrátil wrote:
> > > When you delete a mail from MboxResource::itemRemoved() and there are more
> > > items waiting for deletion in ChangeRecorder's pipeline, it's necessary to
> > > modify the RIDs of all emails affected by the compaction BEFORE
> > > changeProcessed() is called. This should cause all other items pending
> > > deletion in ChangeRecorder's pipeline to be refetched from Akonadi with
> > > their updated RIDs.
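Just to make sure I understand the suggested ordering, this is roughly how I
read it (a sketch only; updateMovedRemoteIds() is not a real function in the
resource, it stands for whatever mechanism rewrites the offsets):

    void MboxResource::itemRemoved( const Akonadi::Item &item )
    {
      // 1. remove the mail from the file and compact it (this shifts
      //    the offsets of all following mails)
      // 2. write the new offsets of all shifted mails back into Akonadi,
      //    i.e. update their RIDs, and wait until that is done
      updateMovedRemoteIds( movedEntries ); // hypothetical helper
      // 3. only now signal completion, so the next pending deletion in the
      //    ChangeRecorder pipeline is refetched with its updated RID
      changeProcessed();
    }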
> > 
> > I tried that, but it still fails: in the following code I want to retrieve
> > the other (moved) Akonadi Items via an ItemFetchJob so that I can modify
> > them, but during the exec() event loop I just get the next call to
> > MboxResource::itemRemoved() :-(
> > 
> > Is what I do here completely wrong?
> > 
> >       // change all remoteIds of the moved items (all items after the
> >       // deleted one change their offset in the file)
> >       if ( !movedEntries.isEmpty() ) {
> >         qDebug() << "#movedEntries=" << movedEntries.count();
> >         Item::List movedItems;
> >         const QString colId = QString::number( mboxCollection.id() );
> >         const QString colRid = mboxCollection.remoteId();
> >         foreach ( const KMBox::MBoxEntry::Pair &p, movedEntries ) {
> >           Item movedItem;
> 
> If it's not too difficult (and slow) to extract the Message-ID header from
> each message, you might want to use setGID() instead of setRemoteId(). The
> reason is that in the database the GID column has an index, while the
> remoteId column does not, so looking up the items based on GID will be faster.
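That sounds worth trying. If I read the KMBox API correctly, I can read just
the headers of each moved message to get at the Message-ID, roughly like this
(untested sketch; mMBox stands for the resource's KMBox::MBox instance):

    foreach ( const KMBox::MBoxEntry::Pair &p, movedEntries ) {
      // parse only the headers of the moved message to extract its Message-ID
      KMime::Message msg;
      msg.setHead( mMBox.readMessageHeaders( p.second ) );
      msg.parse();

      Item movedItem;
      movedItem.setMimeType( KMime::Message::mimeType() );
      // look the item up via the indexed GID column instead of the remoteId
      movedItem.setGid( QString::fromLatin1( msg.messageID()->identifier() ) );
      movedItems << movedItem;
    }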
> 
> >           movedItem.setRemoteId( colId + QLatin1String("::") + colRid +
> >             QLatin1String("::") + QString::number( p.first.messageOffset() ) );
> >           movedItems << movedItem;
> >         }
> >         ItemFetchJob *fetchItemsJob = new ItemFetchJob( movedItems );
> 
> You can optimize the fetch scope a little:
> 	fetchItemsJob->fetchScope().setCacheOnly(true);
> 	// don't fetch the MTIME attribute
> 	fetchItemsJob->fetchScope().setFetchModificationTime(false);
> 	// don't fetch the remote ID and remote revision attributes
> 	// (you don't need them now)
> 	fetchItemsJob->fetchScope().setFetchRemoteIdentification(false);
> 
> >         fetchItemsJob->setCollection( mboxCollection );
> >         if ( !fetchItemsJob->exec() ) {
> >           cancelTask( i18n( "Could not fetch ids of items to be moved: %1",
> >             fetchItemsJob->errorString() ) );
> >           return;
> >         }
> > 
> >         qDebug() << "#movedEntries=" << movedEntries.count()
> >                  << " fetched items=" << fetchItemsJob->items().count();
> 
> *Theoretically* (and I'm really not sure about it), some items could have
> been deleted on the server in the meantime, causing the fetch job to return
> fewer items.
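Good point, so a short result is not necessarily an error. I'll treat it as
non-fatal and only update the items that actually came back, along the lines
of this (sketch):

    const Item::List fetchedItems = fetchItemsJob->items();
    if ( fetchedItems.count() != movedItems.count() ) {
      // some of the moved mails may already be gone from Akonadi;
      // just log it and update the remoteIds of the items we did get
      qDebug() << "expected" << movedItems.count()
               << "items, fetched" << fetchedItems.count();
    }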

There is something happening that I do not understand:
I have an empty mbox file and defined a filter which should move every message into
the local maildir's inbox.
Now I start an external program which appends 5 mails to the mbox file (procmail-locked).
The following happens:

virtual bool MboxResource::readFromFile(const QString&)
The MBox file was changed by another program. 
virtual void MboxResource::retrieveItems(const Akonadi::Collection&)
entryListSize 5
"14::file:///home/kdetrunk/mbox::0" 
"14::file:///home/kdetrunk/mbox::645" 
"14::file:///home/kdetrunk/mbox::1290" 
"14::file:///home/kdetrunk/mbox::1935" 
"14::file:///home/kdetrunk/mbox::2580" 
itemsRetrieved. 
virtual bool MboxResource::retrieveItem(const Akonadi::Item&, const QSet<QByteArray>&)
"14::file:///home/kdetrunk/mbox::0" QSet("RFC822") 
virtual bool MboxResource::retrieveItem(const Akonadi::Item&, const QSet<QByteArray>&)
"14::file:///home/kdetrunk/mbox::645" QSet("RFC822") 
virtual bool MboxResource::retrieveItem(const Akonadi::Item&, const QSet<QByteArray>&)
"14::file:///home/kdetrunk/mbox::1290" QSet("RFC822") 
virtual void MboxResource::itemRemoved(const Akonadi::Item&)
"14::file:///home/kdetrunk/mbox::0" 
#movedEntries= 4 
Moved item: "14::file:///home/kdetrunk/mbox::645" 
Moved item: "14::file:///home/kdetrunk/mbox::1290" 
Moved item: "14::file:///home/kdetrunk/mbox::1935" 
Moved item: "14::file:///home/kdetrunk/mbox::2580" 

===> here fetchItemsJob->exec() is running. After that I get:

#movedEntries= 4  fetched items= 2 

How can it be that the fetch job returns only 2 items, even though I need to
move all 4 mails upwards in the mbox file?

Can you explain the dataflow between agents/resources, please?

I assume the 3 calls to MboxResource::retrieveItem() happen because the filter
agent is asking for the mails?
But why are there only 2 items "left" to be answered in the fetch job?
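
One thing I am considering, in case the nested event loop from exec() is part
of the problem: do the fetch asynchronously and defer changeProcessed() until
the job has finished, roughly like this (untested sketch; slotMovedItemsFetched()
would be a new slot in the resource):

    // in itemRemoved(), instead of fetchItemsJob->exec():
    ItemFetchJob *fetchItemsJob = new ItemFetchJob( movedItems, this );
    fetchItemsJob->setCollection( mboxCollection );
    connect( fetchItemsJob, SIGNAL(result(KJob*)),
             this, SLOT(slotMovedItemsFetched(KJob*)) );
    // note: do NOT call changeProcessed() here yet

    void MboxResource::slotMovedItemsFetched( KJob *job )
    {
      if ( job->error() ) {
        cancelTask( i18n( "Could not fetch ids of items to be moved: %1",
                          job->errorString() ) );
        return;
      }
      ItemFetchJob *fetchJob = qobject_cast<ItemFetchJob*>( job );
      // ... update the remoteIds of fetchJob->items() via ItemModifyJob ...
      changeProcessed(); // now let the ChangeRecorder deliver the next change
    }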

-- 
Best regards/Schöne Grüße

Martin
A: Because it breaks the logical sequence of discussion
Q: Why is top posting bad?

()  ascii ribbon campaign - against html e-mail 
/\                        - against proprietary attachments

Gift ideas, accessories, soaps, culinary specialities: www.bibibest.at
_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/