[Kde-pim] Problem with bulk fetching of items with 4.8

Shaheed Haque srhaque at theiet.org
Sun Apr 1 23:08:08 BST 2012


Can anybody summarise, or point to a summary of, the parallelism introduced
by Akonadi Jobs versus

    QMetaObject::invokeMethod(this, "myHandler", Qt::QueuedConnection);

I'm trying to work out what locking, if any, I need between these two
types of "parallel" activity. I'm assuming that Akonadi jobs are
effectively serialised, so that no locking is needed between, say, a
retrieveItem() and a scheduleCustomTask()?
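
To make the comparison concrete, here is a minimal sketch of the two
mechanisms as I understand them (the subclass, slot, and method names
are made up, and the pure virtuals of ResourceBase are omitted):

    #include <akonadi/resourcebase.h>
    #include <QMetaObject>
    #include <QVariant>

    // Inside a hypothetical Akonadi::ResourceBase subclass.
    void MyResource::kickOff()
    {
        // Fires on the next pass of the event loop, regardless of
        // whatever task the Akonadi scheduler happens to be running:
        QMetaObject::invokeMethod(this, "myHandler", Qt::QueuedConnection);

        // Goes through the resource's task scheduler, so - if my
        // serialisation assumption holds - it runs only after earlier
        // tasks such as retrieveItem() have finished (the slot must
        // call taskDone() when it completes):
        scheduleCustomTask(this, "myTask", QVariant());
    }

If the assumption holds, then only the queued myHandler() call can
actually race with a task.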

2012/2/10 Kevin Krammer <kevin.krammer at gmx.at>

> Hi Shaheed,
>
> On Saturday, 2012-02-04, Shaheed Haque wrote:
> > Hi Kevin,
> >
> > See below...
> >
> > 2012/2/4 Kevin Krammer <kevin.krammer at gmx.at>:
> > > Hi Shaheed,
> > >
> > > I am not entirely sure what problem you are trying to solve.
> > >
> > > My interpretation is that Exchange does not allow you to query for
> > > items that have been created/modified/deleted since the last sync.
> >
> > Correct.
> >
> > > Since that would mean there is no such thing as a "remote
> > > revision", why would you want to store one?
> > >
> > > Without the option of just getting the updates, you will have two
> > > sets of items, one from Akonadi and one from Exchange.
> > >
> > > Any item in E but not in A is new.
> > > Any item in E and in A is potentially modified.
> > > Any item not in E but in A has been deleted.
> >
> > Also correct.
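
Just to spell those three rules out, a minimal sketch, assuming both
sides' remote IDs fit in memory as sets (the names are made up); the
volume figures below are exactly what breaks that assumption:

    #include <QSet>
    #include <QString>

    // exchangeIds: remote IDs currently on the Exchange server (E).
    // akonadiIds:  remote IDs currently stored in Akonadi (A).
    static void classify(const QSet<QString> &exchangeIds,
                         const QSet<QString> &akonadiIds)
    {
        QSet<QString> created  = exchangeIds - akonadiIds; // in E, not in A
        QSet<QString> suspects = exchangeIds & akonadiIds; // in both: maybe modified
        QSet<QString> deleted  = akonadiIds - exchangeIds; // in A, not in E
    }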
> >
> > > But as I said, I think I don't understand the problem.
> >
> > The piece you are missing is the amount of data, and the speed with
> > which it can be fetched. On my system, I can fetch about 500 items
> > every 50 seconds, and there are about 500k items to fetch, so a full
> > download takes ~50k seconds, or about 14 hours. Both the number of
> > items and the download time mean that I cannot realistically do the
> > usual thing of building two lists for E and A, and subtracting one
> > from the other.
>
> I see, that indeed changes things.
>
> > Instead, I have this design in mind...
> >
> > 1. When I start fetching the collection, I will note the starting time
> > using a collection attribute to persist the information (in case I
> > need to restart the machine).
>
> You could alternatively use Collection::remoteRevision.
>
> > 2. I have an incremental fetch phase during which I fetch data in
> > batches (of 500 items). After each batch, I "bookmark" where I got to.
> > If I shut down my machine, then on restart I resume the fetch using
> > the bookmark.
> >
> > 3. When I get to the end (I've never actually managed to get to that
> > point yet!), I hope to delete all the items with a creation date prior
> > to the recorded start time.
> >
> > I hope that makes sense? Anyway, it is the query for this last part
> > that I am stuck on - unless someone has a better idea!
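
To make step 1 concrete, here is a sketch of how I imagine persisting
the start time, using Collection::remoteRevision as suggested above
(the method name is made up):

    #include <akonadi/collection.h>
    #include <akonadi/collectionmodifyjob.h>
    #include <QDateTime>
    #include <QString>

    // Step 1: record when this full pass began, so that the stamp
    // survives a restart. An empty remote revision means no pass is
    // currently in progress.
    void MyResource::recordSyncStart(Akonadi::Collection collection)
    {
        if (collection.remoteRevision().isEmpty()) {
            const uint start = QDateTime::currentDateTime().toTime_t();
            collection.setRemoteRevision(QString::number(start));
            new Akonadi::CollectionModifyJob(collection, this);
        }
    }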
>
> The only alternative I can come up with is updating each processed
> Akonadi item's remote revision, so that all non-updated ones remain at
> the old one.
>
> I.e.:
>
> foreach (item from Exchange) {
>     fetch the matching Akonadi item by remote id
>     depending on the result, either create or modify it
>     in both cases, set the new timestamp as the item's remote revision
> }
> Once the full sync is done, fetch all Akonadi items again and delete
> all that still have the original timestamp.
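
Fleshing that out slightly, a sketch of the final sweep (it assumes the
items were fetched with their remote revisions, and the names are made
up):

    #include <akonadi/item.h>
    #include <akonadi/itemdeletejob.h>
    #include <QString>

    // Any item still carrying a remote revision other than the current
    // sync stamp was never seen during this pass, so it must have been
    // deleted on the Exchange side.
    void MyResource::sweepStaleItems(const Akonadi::Item::List &items,
                                     const QString &syncStamp)
    {
        foreach (const Akonadi::Item &item, items) {
            if (item.remoteRevision() != syncStamp)
                new Akonadi::ItemDeleteJob(item, this);
        }
    }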
>
> Cheers,
> Kevin
>
> --
> Kevin Krammer, KDE developer, xdg-utils developer
> KDE user support, developer mentoring
>
_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/


