[Kst] Plotting large datasets in real time

Andrew Walker arwalker at sumusltd.com
Sat May 2 04:14:48 CEST 2009

Hi Michael,

This sounds like something that Kst should be able to handle.

That said, the CSV file format is probably the most inefficient one there
is, and it sounds as if that is exactly what you're running into.

The best approach would be to create your own file format and data source.
Kst has an extensible system for reading data files, which is pretty simple
to use.

There is fairly extensive documentation available in the Kst Handbook.


----- Original Message ----- 
From: <bug.zilla.vynce at neverbox.com>
To: <kst at kde.org>
Sent: Friday, May 01, 2009 5:31 PM
Subject: [Kst] Plotting large datasets in real time

> I have a utility that logs the memory usage of all of the processes on
> an embedded device. It currently writes one record to an ASCII CSV log
> file every second. Memory usage is logged while running a stability
> test which can last for several days. This produces on the order of 5
> million samples per day (~50 processes logging once per second).
> Memory usage can become a problem with longer runs, but the bigger
> problem right now is CPU usage. The CPU remains pegged at 100% and the
> Kst UI becomes very laggy.
>
> I've increased the 'plot update timer' to 9999ms (the max) and that
> helps a bit, but it still pegs the CPU for several seconds after each
> update. This change at least makes the UI more responsive between
> updates.
>
> I'm trying to come up with ways to reduce the number of samples and/or
> find a more efficient data file format/type. Since memory usage of a
> process can often remain fairly stable over time, there is no reason
> to log the same values over and over again. So I could just avoid
> logging values that haven't changed. In theory, this would
> significantly reduce the number of data points while still maintaining
> the same resolution. However, due to the structure of a CSV file, it
> won't actually help much -- I have to write out a full record to the
> log file every time the memory usage of *any* process has changed.
>
> Based on some tests, it looks like most of the time is being spent on
> reading/parsing the CSV file. Even if I only create one DataVector
> from the file and only plot that one curve, it still uses 100% CPU. I
> tried setting it to only read the last n records and/or to only read
> every mth record and that doesn't seem to help either.
>
> I may give Dirfiles a try, but I'm not sure I like the idea of having
> a whole directory of files for each graph instead of a single file. It
> seems like this would get around most of the issues I'm having with
> CSV files though.
>
> The logging utility is written in C and I'm free to change the logging
> format to anything. I'm using Kst 1.7.0.
>
> Is there a better way for me to handle this scenario? If Kst isn't the
> right graphing utility to use for this task, can you suggest others
> that may work better?
> Thanks,
> Michael
> _______________________________________________
> Kst mailing list
> Kst at kde.org
> https://mail.kde.org/mailman/listinfo/kst
