[Kde-pim] [Discussion] Search in Akonadi

Stephen Kelly steveire at gmail.com
Thu Aug 20 14:01:15 BST 2009


Tobias Koenig wrote:

> On Thu, Aug 20, 2009 at 11:59:45AM +0200, Volker Krause wrote:
>> Hi,
> Hej,
> 
>> >   1) Loading everything into a model, iterate over the model, filter out
>> > everything you don't need -> performance and memory problems

Agreed.

>> 
>> sure, but not worse than before. Which means it could be an acceptable
>> intermediate solution until the search problem has been solved for real.
> Right, but that could be a real performance problem, as Akonadi is
> supposed to handle more data than the previous libs.
> 
>> >   2) Having the search implemented as a separate engine, which returns only
>> > the Akonadi UIDs of the items that match -> sounds perfect
>> 
>> Yep, although I don't really like the UID-list interface. As a developer
>> you don't need UIDs, you need Items. So, I'd rather suggest an interface
>> similar to ItemFetchJob which can be configured using ItemFetchScope and
>> returns as much payload data as you need for your current task. This also
>> saves you some additional roundtrips to the Akonadi server.
> Right, that would be an additional extension.
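To make Volker's suggestion concrete, a search job configured with a fetch scope and returning Items rather than bare UIDs could have roughly this shape. This is only a sketch: none of the names below (SearchFetchJob, FetchScope, Item) are the real Akonadi API, they just illustrate the idea.

```cpp
#include <string>
#include <utility>
#include <vector>

// Hypothetical stand-in for Akonadi::ItemFetchScope: controls how much
// payload the job retrieves along with the matching UIDs.
struct FetchScope {
    bool fullPayload = false;          // fetch the whole item body?
    std::vector<std::string> parts;    // or only selected parts, e.g. "ENVELOPE"
};

// Hypothetical stand-in for Akonadi::Item.
struct Item {
    long long uid = 0;
    std::string payload;               // only filled if the scope asked for it
};

// A search job that is configured like ItemFetchJob: you hand it a query,
// set a fetch scope, and get Items back, saving extra round trips for the
// payload data.
class SearchFetchJob {
public:
    explicit SearchFetchJob(std::string query) : m_query(std::move(query)) {}

    void setFetchScope(const FetchScope &scope) { m_scope = scope; }
    const FetchScope &fetchScope() const { return m_scope; }
    const std::string &query() const { return m_query; }

private:
    std::string m_query;
    FetchScope m_scope;
};
```

The point of the shape is that callers never see raw UID lists; the payload they need comes back in one pass, controlled by the scope.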

My current idea for how search would work is that it runs as an external 
search query up to a certain point. If too many results come back from the 
remote server lookup, Nepomuk lookup or whatever, you're told how many 
results there are, but the results themselves are not fetched. There could 
possibly be a 'fetch results anyway' action. Results from the sources below 
the limit would be put into a virtual collection with a known name and shown 
as the search results.
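That threshold decision is simple to state in code. A minimal sketch, where the SearchOutcome type, the cutoff value and the forceFetch flag (modelling the 'fetch results anyway' action) are all hypothetical:

```cpp
#include <cstddef>

// Hypothetical summary of asking a backend how many items match a query.
struct SearchOutcome {
    std::size_t matchCount;  // how many items the backend reports
    bool fetched;            // whether the item payloads were actually retrieved
};

// Fetch the matching items only if the result set is small enough, or if
// the user explicitly asked for them anyway; otherwise just report the count.
SearchOutcome runSearch(std::size_t remoteMatches,
                        std::size_t fetchLimit,
                        bool forceFetch)
{
    SearchOutcome out{remoteMatches, false};
    if (remoteMatches <= fetchLimit || forceFetch)
        out.fetched = true;  // small enough (or user insisted): fetch payloads
    return out;              // otherwise only the count is shown
}
```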

http://img29.imageshack.us/img29/3851/searchmockup.png

Edulix had similar ideas and mockups for how to do searching/completion of 
bookmarks/'websites'.

As you type, the SPARQL query (or whatever) is updated, and the virtual 
collection gets different linked items etc. At some point though, once there 
are under 1000 results for example, we'd stop updating the SPARQL query and 
just do further filtering locally. That way we would cut down on some round 
trips.
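The local-filtering half of that could look something like this. It's a sketch only: the cached items are plain strings and substring matching stands in for whatever matching the real search backend would do.

```cpp
#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Once the remote query has narrowed things below the threshold, keep the
// fetched results cached and refine them locally as the user keeps typing,
// instead of re-issuing the SPARQL query on every keystroke.
std::vector<std::string> filterLocally(const std::vector<std::string> &cached,
                                       const std::string &refinedTerm)
{
    std::vector<std::string> matches;
    std::copy_if(cached.begin(), cached.end(), std::back_inserter(matches),
                 [&](const std::string &item) {
                     return item.find(refinedTerm) != std::string::npos;
                 });
    return matches;
}
```

Each extra character the user types can only shrink the result set, which is why filtering the cached results is safe once the remote query is frozen.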

That's a roundabout way of saying that I don't think the solution is 1) or 
2), but some mixture of the two.

> <snip>

As for the rest, I'm not really certain about most of the systems being 
mentioned, but if we can make this work with a Java system, I don't see any 
problem with requiring that until there is another solution for the 
politicking.

As for SPARQL being too mighty, I'm not sure that's a big problem. We can 
document how developers should use the simple queries, and maybe even make 
some convenience API for it.
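Such a convenience API could be as small as a helper that emits the SPARQL for the common full-text case. A sketch, assuming a Nepomuk-style full-text property; the ontology prefix and property name here are illustrative, not something Akonadi defines:

```cpp
#include <string>

// Hypothetical convenience wrapper: the developer supplies a search term
// and the helper builds the full-text SPARQL query, so nobody has to
// hand-write SPARQL for the simple cases.
std::string simpleFullTextQuery(const std::string &term)
{
    return "PREFIX nie: "
           "<http://www.semanticdesktop.org/ontologies/2007/01/19/nie#>\n"
           "SELECT ?item WHERE {\n"
           "  ?item nie:plainTextContent ?text .\n"
           "  FILTER regex(?text, \"" + term + "\", \"i\")\n"
           "}";
}
```

A real version would also have to escape the term before splicing it into the query, but the point is just that the "mighty" language stays hidden behind a one-argument call.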

All the best,

Steve.




_______________________________________________
KDE PIM mailing list kde-pim at kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/


