[kdepim-users] Tbird versus Kmail, performance

Pablo Sanchez pablo at blueoakdb.com
Sat Nov 15 22:17:59 GMT 2014


[ Comments below, in-line ]

On 11/15/2014 05:05 PM, René J.V. Bertin wrote:
> On Saturday November 15 2014 16:38:29 Pablo Sanchez wrote:
> 
> Hi,

Hi Rene,

>> When I engage in performance tuning efforts, unless a previous
>> version's changes are minimal, I only look at the present.  It's a
>> bit easier when a problem is glaring.
> 
> I'm not exactly sure what you're saying here. 

Oh, sorry for being unclear.  What I was trying to say is that given
the set of changes from one version to the next, it's usually most
fruitful to look at the current s/w and fix it.

> I know one has to be very careful with subjective impressions like
> "everything was better before", but in this case I was pretty sure
> that the real issues started after updating to 4.14

I'm not denying the above.  I suspect your impressions are right.
Ultimately the goal is to fix the existing s/w, so that's why I tend
to focus on it (when I'm in charge of fixing P&T issues).  :)

> after Daniel's explanations from yesterday which suggest that the
> database is not the bottleneck.

The data doesn't indicate the above at all.  What I saw is that we're
hitting the database very hard.  By doing some crude sampling against
the database, I showed the queries.  These queries, as I mentioned,
are nearly unbounded:  no timestamp is used to reduce the demand they
place on the database.

I also indicated how it's possible to add a timestamp and set it when
a row is added to the table.  That timestamp can then be used to
reduce the demand against the database and improve scalability.
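
To make that concrete, here's a rough sketch of the kind of schema
change I mean (the table and column names below are invented purely
for illustration; they're not Akonadi's actual schema):

  -- Hypothetical: a timestamp MySQL fills in automatically when a
  -- row is inserted.
  ALTER TABLE mailitems
    ADD COLUMN added_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

  -- Index the new column so a bounded query can seek to the recent
  -- rows instead of scanning the whole table.
  CREATE INDEX idx_mailitems_added_at ON mailitems (added_at);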

Let me give you an example.  If you run a query against a table and
crunch through 100,000 rows (let's just ignore how much space they
require), and if all the data is in cache, then from the O/S
perspective the DBMS will appear as if it's only consuming CPU.

If you run the query repeatedly, the DBMS will be CPU bound as it
services it, over and over again.

From what I saw, we have /several/ of these queries running over and
over again.
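
The shape of the queries I sampled was roughly like this (again,
made-up names, just to show the pattern):

  -- Unbounded: every run walks all the rows in the collection,
  -- even if almost nothing changed since the previous run.
  SELECT id, subject, flags
    FROM mailitems
   WHERE collection_id = 42;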

Now, if you take the same query but add a timestamp filter to only
return the rows between the last time it ran and /now/, it may be that
we only loaded, say, 100 mail messages between queries.  Crunching 100
rows rather than 100,000 rows ... well, it's obvious, right?  Less
demand, and thus we improve scalability.
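
In SQL terms, the bounded version of that sketch would be something
like the following, where the literal timestamp stands in for
whatever "last run" marker the client remembers:

  -- Bounded: only the rows added since the last pass are examined.
  SELECT id, subject, flags
    FROM mailitems
   WHERE collection_id = 42
     AND added_at > '2014-11-15 21:00:00';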

I hope the above helps explain it.

Daniel didn't respond to me personally, so I assume either a) he's
away or b) he's not interested in my help.  :)

> Anyway, I kept akonadi at the latest version (a recent git
> clone). Talking about database options:  I came across a suggestion
> on one of the Arch wikipages to unset an innodb setting related to
> aio_write on ZFS. Not that I ever saw the error in question, but I
> guess I should figure out what that setting does (I just suspended
> my Linux rig so I don't have the exact name handy right now).

The database is not I/O bound.  It's CPU bound, so unfortunately no
amount of I/O tuning is going to help.

Cheers,
--
Pablo Sanchez - Blueoak Database Engineering, Inc
Ph:    819.459.1926         Blog:  http://pablo-blog.blueoakdb.com
iNum:  883.5100.0990.1054
