[Kst] 2.0.8
barth.netterfield at gmail.com
Sun Feb 23 20:53:55 UTC 2014
If you exit and restart kst, does it revert to good behavior?
Are you in "count from end" mode (how many samples?) or "read to end" mode (how
much of the file?)?
I'll try to reproduce the slowdown here.
Barth Netterfield
416-845-0946
Original Message
From: Ben Lewis
Sent: Sunday, February 23, 2014 3:41 PM
To: netterfield at astro.utoronto.ca
Reply To: kst at kde.org
Cc: kst at kde.org
Subject: Re: [Kst] 2.0.8
Hi Barth,
Your fix seems to have improved things somewhat, but there still seems to be a
problem. It would be much easier to test if Peter's console log were enabled.
Since your fix I have noticed no long delays between updates, which is great
and suggests that the original problem has been solved. However, the period
between updates gets longer as the data file grows.
For example, with an update interval of 1000 ms, I count 40 updates per minute
at the start of a log file but only 6 updates per minute after 8 hours of
logging. The UI starts off reasonably responsive (panning, zooming, etc.) but
after 8 hours it is too slow for practical use. The decline in update rate is
noticeable after only ten minutes of logging, and the rate keeps falling as the
file grows.
I suspect that if the console log were enabled I would not see the complete
read of Vector 1 (as in the past), so something else must be causing the update
periods to slow down and make the system unresponsive.
Regards, Ben
On 23/02/2014 7:15 AM, Barth Netterfield wrote:
> Ben,
>
> I have pushed a hack. See if it improves the situation. If it does, I can
> try to clean it up a bit.
>
> cbn
>
> On February 22, 2014 1:14:18 PM Barth Netterfield wrote:
>
>> The fact that the problems happen with the configuration below, and not on
>> my system, reinforces my belief that this is related to a file-system race
>> condition.
>>
>> kst decides that a file has been re-written/replaced, and needs to be re-read
>> from the beginning, when it has shrunk. Apparently, in the configuration
>> below, a file that is being written to can appear smaller than it previously
>> was. As far as I know, this cannot happen on Linux.
>>
>> As a hack (as opposed to a fix, which I suspect might require re-writing the
>> file system), we could tell the data vector to delay re-reading when the data
>> source reports back with a shrunk file, and not attempt a re-read until that
>> check has failed a few times in a row. This should (dramatically?) reduce the
>> rate of failures, but since the problems are unpredictable it won't fully fix
>> the problem.
>>
>> I will commit a hack shortly.
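
A minimal sketch of the delay-before-re-read hack described above; the
ShrunkFileGuard class, its method names, and the tolerance of three updates are
illustrative assumptions, not Kst's actual data-vector code:

    // Hypothetical guard: only treat a shrunk file as a real rewrite after it
    // has looked shrunk for several consecutive updates.
    class ShrunkFileGuard {
    public:
        // previousFrames: frame count from the last successful update.
        // reportedFrames: frame count the data source reports now.
        // Returns true only when a full re-read from the start is warranted.
        bool shouldReread(long previousFrames, long reportedFrames) {
            if (reportedFrames >= previousFrames) {
                _shrinkCount = 0;   // file grew or is unchanged: normal append
                return false;
            }
            if (++_shrinkCount < kShrinkTolerance) {
                return false;       // looks shrunk; skip this update, retry later
            }
            _shrinkCount = 0;
            return true;            // persistently shrunk: assume file replaced
        }

    private:
        static const int kShrinkTolerance = 3;  // "a few times" is arbitrary here
        int _shrinkCount = 0;
    };

As noted above, this only lowers the probability that a transiently shrunk size
triggers a full re-read; it does not remove the underlying race.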
>>
>> The second problem (reading zeros) is harder to fix (but probably caused by
>> the same thing). I will think about it.
>>
>> cbn
>>
>> On February 22, 2014 12:22:47 PM Ben Lewis wrote:
>>> I have a USB memory stick plugged into the PC where the data is generated.
>>> Data is collected in a RAM buffer and then written to a CSV file on the
>>> memory stick. The memory stick has Windows file sharing enabled so that it
>>> is accessible over the local network.
>>> Kst runs on a different PC. The shared drive (memory stick) on the remote
>>> PC is mapped to a local drive, and Kst then reads the CSV file as if it
>>> were a local file (with the update type set to "time interval"). The
>>> connection between the two PCs is either a LAN cable or an ad-hoc WiFi
>>> connection. The problem exists in both cases.
>>>
>>> Remote System (where CSV data file is generated)
>>> ----------------------------------------------------------------
>>> OS: Windows XP Embedded (32-bit)
>>> File System: Data is written to a USB memory stick, formatted with NTFS
>>>
>>> Data Accumulation Rate:
>>> fields/row: 5
>>> characters per field: 7
>>> bytes per row: 41
>>> rows per second: 1500
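
For scale, and assuming the 6 bytes per row beyond the 5 × 7 field characters
are 4 delimiters plus a CR/LF line ending, these rates work out to roughly
41 bytes/row × 1500 rows/s ≈ 61.5 kB/s, i.e. about 37 MB after ten minutes of
logging and about 1.8 GB after 8 hours, which is the size at which the update
rate is reported to have fallen to about 6 per minute.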
>>>
>>> I've attached the first 100 lines of a data file.
>>>
>>> Local System (where Kst is run)
>>> ----------------------------------------
>>> OS: Windows 7 (64-bit)
>>> File System: Remote NTFS drive is mapped to a local drive
>>> Kst: 64-bit build
>>>
>>>>> * "Out of Memory" error
>>>>> http://kde.6490.n7.nabble.com/Out-of-memory-error-td1555215.html The
>>>>> error
>>>>> message has been improved but the message still appears when it
>>>>> shouldn't.
>>>> I also can't reproduce this. Can you give me details (OS, file size)?
>>>> Is the data being updated in real time during the read? At what rate?
>>> Same as above
>>>
>>>>> * I sometimes get snippets of data missing in a live plot. If I restart
>>>>> Kst and reload the data there are no missing bits. This seems to happen
>>>>> when there is a large amount of network traffic (my data file is not on
>>>>> a local disk). This is hard to reproduce so it's probably not worth
>>>>> worrying about at this stage.
>>>> Yes.... are you using smb or nfs? Is it at all correlated with the
>>>> first bug?
>>> I'm using Windows file sharing.
>>> It could very well be related to the first bug but I have no way of
>>> telling.
_______________________________________________
Kst mailing list
Kst at kde.org
https://mail.kde.org/mailman/listinfo/kst