Help proposal for Akregator

Pierre pinaraf at pinaraf.info
Thu May 3 21:22:49 BST 2018


On Thursday, May 3, 2018 5:02:52 PM CEST you wrote:
> Hey,
> 
> > I plan to do that yes. I don't think it's going to be worse than what we
> > currently have (the Metakit files already store the entire articles).
> > For huge archives (>2GB maybe ?) SQLite may hit some limits and something
> > stronger like PostgreSQL could be helpful.
> But I don't see how it could not be far better than our current Metakit. I
> also think that the current system has huge performance issues, which we
> will solve by using SQL queries to order articles instead of fetching every
> article and then filtering and sorting with QSortFilterProxyModel. This
> will take a few months, but after I write a data migration solution I hope
> to be able to quickly demonstrate the final benefits using my archives.
> 
> Great that you have started the task of maintaining Akregator.
> 
> I still think we should not forget that Akonadi was created exactly for
> handling storage and accessing big data sources, as is done for mail.
> That's why it makes sense for you to share your ideas of how your new
> archive system should work. Maybe we can find a solution that a later
> Akonadi replacement would also benefit from. I am just thinking that a
> redesign of Akregator could push the code in the right direction. To be
> able to switch backends easily, it makes sense to have a clear layer
> between the SQL database and the application, and not to add direct SQL
> commands everywhere in the code...

Akregator already keeps the archiving code in plugins.
I have started a git branch with a QSQLITE-based archive backend. It is 
already working, just «a bit» buggy. Since I could not yet migrate my Metakit 
archives, I am not using it right now, but I hope to start doing so soon.
The SQL schema I have planned so far is really simple: a basic mapping of the 
archive objects and calls. It is not optimal, but I will have to migrate 
everything first to be able to inspect the real data.
A jump to Akonadi could be possible, but doing so immediately would be really 
hard. The Metakit storage had a serious impact on the API design (see the 
number of API calls I removed recently), and writing an Akonadi storage now 
would just kill the akonadiserver with a query storm. I will first have a 
complete, working, optimized SQLite storage, and then we will see about the 
next step.
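For illustration, a minimal sketch of what such a "basic mapping" schema could look like. The table and column names here are my own invention, not the actual schema from the branch, and I use Python's sqlite3 module purely because it is convenient to demonstrate with (the real backend is C++/Qt using QSQLITE):

```python
import sqlite3

# Hypothetical schema sketch -- feed/article names and columns are
# illustrative assumptions, not the branch's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feed (
    id   INTEGER PRIMARY KEY,
    url  TEXT NOT NULL UNIQUE
);
CREATE TABLE article (
    guid     TEXT NOT NULL,
    feed_id  INTEGER NOT NULL REFERENCES feed(id),
    title    TEXT,
    content  TEXT,                      -- full article body, as Metakit stores today
    status   INTEGER NOT NULL DEFAULT 0,
    pubdate  INTEGER NOT NULL,          -- unix timestamp, used for ordering
    PRIMARY KEY (feed_id, guid)
);
-- an index on the publication date lets the database order articles cheaply
CREATE INDEX article_pubdate ON article(pubdate);
""")
```

The index on `pubdate` is the important design point: it is what would let the storage layer, rather than the view model, take over sorting.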

> Btw. a QSortFilterProxyModel is a very powerful thing; you won't miss it,
> and it can be really fast if used correctly. Get in contact with Volker
> Krause and Milian Wolff; both know the tools for finding bottlenecks in the
> Qt stack very well. They have often found bottlenecks nobody expected, and
> with only some small changes the applications got a lot faster.

Oh, I never doubted the performance of our dear Qt tools :)
But no matter how performant it is, it cannot beat a no-op. If I push 
ordering and filtering down to the storage level, the model can have real 
lazy loading. Instead, it currently fetches at least three fields (status, 
publication date and guid), and uses the publication date for ordering. Even 
with Metakit it should be possible to push down the ordering. My performance 
enhancement patches make this worse memory-wise by also fetching the title, 
but doing that in a single call makes it faster. And I am tired of seeing 
Akregator eating about 1GB of RAM on my computer; I won't be happy until I 
see it below 200MB… (You have no idea how mad modern web browsers make me…)
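To make the "push down to the storage level" idea concrete, here is a sketch of the kind of query the backend could run. Again, the table layout and column names are my own illustrative assumptions, and Python's sqlite3 stands in for the QSQLITE backend: the database filters and orders, and the model fetches only one page of rows instead of every article.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE article (guid TEXT, title TEXT, status INTEGER, pubdate INTEGER);
INSERT INTO article VALUES
    ('a', 'Old post',  0, 100),
    ('b', 'New post',  0, 300),
    ('c', 'Read post', 1, 200);
""")

# Filtering and ordering happen inside SQLite; the application only
# receives the rows it will actually display (lazy loading by pages).
page = conn.execute(
    "SELECT guid, title FROM article "
    "WHERE status = 0 "               # e.g. show unread articles only
    "ORDER BY pubdate DESC "          # newest first, ordered by the database
    "LIMIT ? OFFSET ?", (10, 0)
).fetchall()
```

With an index on `pubdate`, a query like this avoids loading, sorting and filtering the whole archive in memory, which is exactly the work QSortFilterProxyModel currently has to do on the client side.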

 Pierre