KStars v3.5.0 Release Date?
Hy Murveit
murveit at gmail.com
Wed Oct 21 05:24:35 BST 2020
BTW, FWIW, I started running it again. It finished the first image and went
into its star detection (3 minutes long this time) on the sub that I described
above as the 'Major problem'. I noticed that it was using 3 or 4 CPUs at near
100% to do this. I doubt that this was ever multi-threaded before. So,
apparently, in addition to the SEP library running in a separate thread, it is
also running multiple threads of its own. Both of these seem like likely
suspects for the memory issues.
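
If the partitioned extraction really does go through Qt's global thread pool
(which the "Thread (pooled)" frames in the backtraces below suggest), one quick
experiment would be to cap that pool -- just a sketch, not actual KStars code:

    #include <QThreadPool>

    void limitExtractionParallelism()
    {
        // One worker thread => the extraction partitions run one at a time,
        // which should show whether the parallelism is what eats the memory.
        QThreadPool::globalInstance()->setMaxThreadCount(1);
    }
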
Hy
On Tue, Oct 20, 2020 at 9:01 PM Hy Murveit <murveit at gmail.com> wrote:
> Started testing Jasem's change tonight, 30 minutes ago.
> TL;DR It runs better than before, but unfortunately still crashes. A few
> other bugs too.
>
> *ISSUES*
>
> *Most major problem*. It worked for a little while (a few minutes?), which is
> much better than before, when it crashed pretty much at the start of
> guiding, but then it still segv'd. I started again in gdb in hopes of getting a
> stack trace. It ran for a while... and segv'd after about 10 minutes. See the
> pretty unhelpful stack trace below.
>
> In summary, I ran it twice: it crashed after a couple of minutes the first time
> and after 10 minutes the second time. It always seems to crash in the
> guider's request for star extraction. Can we run star extraction on the
> normal thread instead of the threading we're doing now?
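>
> To illustrate what I mean (just a sketch -- detectStarsInFrame() is a made-up
> name standing in for the real extractor call; only the Qt calls are real API):
>
>     #include <QtConcurrent>
>     #include <QFuture>
>     #include <QByteArray>
>     #include <QList>
>
>     struct Star { float x = 0, y = 0, hfr = 0; };       // stand-in type
>
>     // Placeholder for the real SEP/StellarSolver entry point.
>     QList<Star> detectStarsInFrame(const QByteArray &frame);
>
>     // What we appear to do now: hand the work to a pooled thread.
>     QFuture<QList<Star>> detectAsync(const QByteArray &frame)
>     {
>         return QtConcurrent::run([frame] { return detectStarsInFrame(frame); });
>     }
>
>     // What I'm suggesting: run it on the calling thread and return when done.
>     QList<Star> detectBlocking(const QByteArray &frame)
>     {
>         return detectStarsInFrame(frame);
>     }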
>
> *Major problem*. Computing the HFR after my sub (an 8-minute Ha image) used
> to take 1-5 seconds. It needs to detect stars, calculate individual HFRs,
> etc., and I had done some work to optimize the time for this. It took 130+
> seconds for my first image with StellarSolver! I can try and help fix that somehow.
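>
> If it helps narrow that down, I can wrap the call in a timer to see where the
> 130 seconds go. A trivial sketch (the function name is just a placeholder for
> wherever the HFR pass actually starts):
>
>     #include <QElapsedTimer>
>     #include <QDebug>
>
>     void computeHFRForLoadedImage();        // placeholder for the real call
>
>     void timeHFRPass()
>     {
>         QElapsedTimer timer;
>         timer.start();
>         computeHFRForLoadedImage();
>         qDebug() << "HFR pass took" << timer.elapsed() << "ms";
>     }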
>
> *Minor problem*. I noticed that in the guideview, even though I
> checked and am 100% sure it was off, the little button called 'Detect Stars
> in Image' got enabled and there are little red
> circles over the detected stars. This is a little bug--it didn't use to be
> this way.
>
>
> Wish I had better news,
> Hy
>
> *End of GDB output including crash and backtrace.*
>
> [New Thread 0x9ec56090 (LWP 16299)]
> [Thread 0x9ec56090 (LWP 16299) exited]
> [New Thread 0x9ec56090 (LWP 16333)]
> [Thread 0x9ec56090 (LWP 16333) exited]
> [New Thread 0x9ec56090 (LWP 16364)]
> [Thread 0x9ec56090 (LWP 16364) exited]
> [New Thread 0x9ec56090 (LWP 16398)]
> [Thread 0x9ec56090 (LWP 16398) exited]
> [New Thread 0x9ec56090 (LWP 16432)]
> [Thread 0x9ec56090 (LWP 16432) exited]
> [New Thread 0x9ec56090 (LWP 16462)]
> [Thread 0x9ec56090 (LWP 16462) exited]
> [New Thread 0x9ec56090 (LWP 16492)]
> [Thread 0x9ec56090 (LWP 16492) exited]
> [New Thread 0x9ec56090 (LWP 16526)]
> [Thread 0x9ec56090 (LWP 16526) exited]
> [New Thread 0x9ec56090 (LWP 16531)]
> [Thread 0x9ec56090 (LWP 16531) exited]
> [New Thread 0x9ec56090 (LWP 16560)]
> [Thread 0x9ec56090 (LWP 16560) exited]
> [New Thread 0x9ec56090 (LWP 16587)]
>
> Thread 35 "Thread (pooled)" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0xa4239090 (LWP 5157)]
> 0xb6fbc14c in memset () from /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so
> (gdb) bt
> #0 0xb6fbc14c in memset () from
> /usr/lib/arm-linux-gnueabihf/libarmmem-v7l.so
> #1 0xb421a500 in SEP::Deblend::deblend (this=0x67985398,
> objlistin=objlistin@entry=0xa423855c, l=l@entry=0,
> objlistout=objlistout@entry=0xa4238454,
> deblend_nthresh=deblend_nthresh@entry=32,
> deblend_mincont=0.0050000000000000001, minarea=minarea@entry=5,
> lutz=0x4d348778)
> at /home/pi/Projects2/stellarsolver/stellarsolver/sep/deblend.cpp:77
> #2 0xb421ae84 in SEP::Extract::sortit (this=this@entry=0x7fef5478,
> info=info@entry=0x66ae1218,
> objlist=0xa423855c, objlist@entry=0xa4238554, minarea=5, minarea@entry=-1541175988,
> finalobjlist=0x66be0218,
> finalobjlist@entry=0x0, deblend_nthresh=32, deblend_nthresh@entry=3,
> deblend_mincont=0.0050000000000000001,
> gain=1) at /usr/include/c++/8/bits/unique_ptr.h:342
> #3 0xb421cbc0 in SEP::Extract::sep_extract (this=0x7fef5478, this@entry=0xa4238896,
> image=0x1a5e8308,
> image@entry=0xa4238948, thresh=thresh@entry=8.42488956,
> thresh_type=thresh_type@entry=1, minarea=-1541175988,
> minarea@entry=5, conv=conv@entry=0x66be01b8, convw=<optimized out>,
> convw@entry=3, convh=convh@entry=3,
> filter_type=<optimized out>, filter_type@entry=0, deblend_nthresh=32,
> deblend_cont=0.0050000000000000001,
> clean_flag=1, clean_param=1, catalog=0xa423889c, catalog@entry=0xa4238894)
> at /home/pi/Projects2/stellarsolver/stellarsolver/sep/extract.cpp:643
> #4 0xb41ebee8 in InternalSextractorSolver::extractPartition
> (this=0x76e6468, parameters=...)
> at /usr/include/c++/8/cmath:475
> #5 0xb41efa6c in non-virtual thunk to
> QtConcurrent::RunFunctionTask<QList<FITSImage::Star> >::run() ()
> at /usr/include/c++/8/bits/exception.h:63
> #6 0xb4855f30 in ?? () from /usr/lib/arm-linux-gnueabihf/libQt5Core.so.5
> #7 0xb485eb58 in ?? () from /usr/lib/arm-linux-gnueabihf/libQt5Core.so.5
> #8 0xb42c2494 in start_thread (arg=0xa4239090) at pthread_create.c:486
> #9 0xb3979578 in ?? () at ../sysdeps/unix/sysv/linux/arm/clone.S:73 from
> /lib/arm-linux-gnueabihf/libc.so.6
> Backtrace stopped: previous frame identical to this frame (corrupt stack?)
> (gdb)
>
> *Here's the end of the log. Clearly in a guide-frame's star extraction.*
>
> [2020-10-20T20:39:36.043 PDT DEBG ][ org.kde.kstars.indi] -
> EQMod Mount : "[SCOPE] GetRAEncoder() = 10576388 "
> [2020-10-20T20:39:36.060 PDT DEBG ][ org.kde.kstars.indi] -
> EQMod Mount : "[SCOPE] Current encoders RA=10576388 DE=9885062 "
> [2020-10-20T20:39:37.115 PDT DEBG ][ org.kde.kstars.indi] -
> processBLOB() mode 2
> [2020-10-20T20:39:37.122 PDT DEBG ][ org.kde.kstars.fits] -
> Reading FITS file buffer ( "1.2 MiB" )
> [2020-10-20T20:39:37.188 PDT DEBG ][ org.kde.kstars.ekos.guide] -
> Received guide frame.
> [2020-10-20T20:39:37.188 PDT DEBG ][ org.kde.kstars.ekos.guide] -
> Multistar: findTopStars 25
> [2020-10-20T20:39:37.192 PDT DEBG ][ org.kde.kstars.indi] -
> EQMod Mount : "[SCOPE] Compute local time: lst=21.51869180 (21:31:07.29) -
> julian date=2459143.65248889 "
> [2020-10-20T20:39:37.209 PDT DEBG ][ org.kde.kstars.indi] -
> EQMod Mount : "[SCOPE] GetRAEncoder() = 10576506 "
> [2020-10-20T20:39:37.211 PDT DEBG ][ org.kde.kstars.indi] -
> EQMod Mount : "[SCOPE] Current encoders RA=10576506 DE=9885062 "
>
>
> *Odd message*: I saw this as the last log line in the first crash:
> [2020-10-20T20:14:08.358 PDT WARN ][ default] -
> QObject::~QObject: Timers cannot be stopped from another thread
>
> I also saw these earlier in a log; I've never seen them before and don't know
> if they're relevant:
> [2020-10-20T20:15:57.329 PDT WARN ][ default] -
> QObject::connect: invalid null parameter
> [2020-10-20T20:15:57.329 PDT WARN ][ default] -
> QObject::connect: invalid null parameter
> [2020-10-20T20:15:58.014 PDT WARN ][ default] -
> QObject::connect: signal not found in Ekos::Mount
> [2020-10-20T20:15:58.015 PDT DEBG ][ org.kde.kstars.ekos.capture] -
> Registering new Module ( "Mount" )
> [2020-10-20T20:15:58.017 PDT WARN ][ default] -
> QObject::connect: invalid null parameter
> [2020-10-20T20:15:58.017 PDT WARN ][ default] -
> QObject::connect: invalid null parameter
> [2020-10-20T20:15:59.315 PDT WARN ][ default] -
> QObject::disconnect: Unexpected null parameter
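>
> For what it's worth, the "Timers cannot be stopped from another thread"
> warning above usually means a QObject that still owns a running QTimer got
> deleted from a thread other than the one it lives in. A minimal sketch of the
> usual remedy (the worker object here is hypothetical):
>
>     #include <QObject>
>
>     void shutDownWorker(QObject *worker)
>     {
>         // Deleting 'worker' directly from another thread triggers that warning
>         // if it still owns a running QTimer; deleteLater() instead queues the
>         // deletion in the event loop of the thread the object lives in.
>         worker->deleteLater();
>     }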
>
> On Tue, Oct 20, 2020 at 1:17 PM Robert Lancaster <rlancaste at gmail.com>
> wrote:
>
>> Yes, please, I agree! I am working on fixing up the options profile now.
>>
>> On Oct 20, 2020, at 12:11 PM, Hy Murveit <murveit at gmail.com> wrote:
>>
>> I'd like to suggest that, since the main KStars branch still core dumps
>> while guiding (every time for me) and there may be other issues with
>> flats, we push back the release date by at least two weeks. I
>> believe we need two stable weeks of user testing before any major release.
>>
>> I know Jasem and Robert are working very hard to fix this, and in no way
>> is this a knock on their work, just a desire to make this a stable release.
>>
>> I would propose that there be only bug-fix commits during those two weeks.
>>
>> Comments?
>> Hy
>>
>> On Thu, Oct 15, 2020, 9:55 PM Hy Murveit <murveit at gmail.com> wrote:
>>
>>> It core dumped again with the 'load all indexes' flag unchecked. I'm now
>>> running it in gdb.
>>> ...
>>>
>>> Also, it doesn't seem to remember that checkmark, nor the profile type
>>> (top menu under "Options Profiles")--please add that to the to-do list too.
>>> Also, it's my impression that the solver takes longer than it used to.
>>> Could it be that the binning isn't being honored?
>>> Perhaps you derived your speed by threading, and I'm defeating that by
>>> unchecking the box?
>>> ...
>>>
>>> Here's the backtrace, as it crashed again--this time in GDB.
>>>
>>> I guess I'm done testing for the night.
>>> Hy
>>>
>>> log-odds ratio 0.335153 (1.39815), 0 match, 0 conflict, 0 distractors,
>>> 39 index.
>>> RA,Dec = (318.284,63.9039), pixel scale 3.84851 arcsec/pix.
>>> log-odds ratio 0.335153 (1.39815), 0 match, 0 conflict, 0 distractors,
>>> 39 index.
>>> RA,Dec = (318.284,63.9039), pixel scale 3.84851 arcsec/pix.
>>> log-odds ratio 34.6274 (1.09262e+15), 18 match, 0 conflict, 120
>>> distractors, 37 index.
>>> RA,Dec = (325.282,58.0472), pixel scale 3.97646 arcsec/pix.
>>> Hit/miss: Hit/miss:
>>> ----+-----------+--+--+------+---------------+---+------+----+--+--+---------------+-----------+----
>>> [Thread 0x83925090 (LWP 1384) exited]
>>>
>>> Thread 38 "Thread (pooled)" received signal SIGSEGV, Segmentation fault.
>>> [Switching to Thread 0x92c2c090 (LWP 31265)]
>>> belong (corenb=corenb@entry=0, coreobjlist=coreobjlist@entry=0xb42b9afc
>>> <debobjlist>, shellnb=shellnb@entry=0,
>>> shellobjlist=0x47d6cbc) at
>>> /home/pi/Projects/stellarsolver/stellarsolver/sep/deblend.c:378
>>> 378 int xc=PLIST(cpl+cobj->firstpix,x),
>>> yc=PLIST(cpl+cobj->firstpix,y);
>>> (gdb) bt
>>> #0 belong (corenb=corenb@entry=0, coreobjlist=coreobjlist@entry=0xb42b9afc
>>> <debobjlist>,
>>> shellnb=shellnb@entry=0, shellobjlist=0x47d6cbc)
>>> at /home/pi/Projects/stellarsolver/stellarsolver/sep/deblend.c:378
>>> #1 0xb4217178 in deblend (objlistin=0xa8cb2b00, objlistin@entry=0x92c2b1c4,
>>> l=l@entry=0, objlistout=0x0,
>>> objlistout@entry=0x92c2b0c4, deblend_nthresh=75328520,
>>> deblend_nthresh@entry=0,
>>> deblend_mincont=0.0050000000000000001, minarea=minarea@entry=5)
>>> at /home/pi/Projects/stellarsolver/stellarsolver/sep/deblend.c:125
>>> #2 0xb4217b4c in sortit (info=info@entry=0xa8cc52b0,
>>> objlist=0x92c2b1c4, objlist@entry=0x92c2b1bc, minarea=5,
>>> minarea@entry=2, finalobjlist=0xa8c296d8, finalobjlist@entry=0x0,
>>> deblend_nthresh=32,
>>> deblend_nthresh@entry=3, deblend_mincont=0.0050000000000000001,
>>> gain=1)
>>> at /home/pi/Projects/stellarsolver/stellarsolver/sep/extract.c:777
>>> #3 0xb4219e80 in sep_extract (image=0x5b8d6c
>>> <NewFOV::slotDetectFromINDI()+324>, image@entry=0x8ecf2008,
>>> thresh=thresh@entry=54.3094788, thresh_type=thresh_type@entry=1,
>>> minarea=2, minarea@entry=5,
>>> conv=0xa8cc7100, convw=<optimized out>, convw@entry=3,
>>> convh=convh@entry=3, filter_type=<optimized out>,
>>> filter_type@entry=0, deblend_nthresh=32,
>>> deblend_cont=0.0050000000000000001, clean_flag=1, clean_param=1,
>>> catalog=0x92c2b514, catalog@entry=0x92c2b50c)
>>> at /home/pi/Projects/stellarsolver/stellarsolver/sep/extract.c:632
>>> #4 0xb41edc34 in InternalSextractorSolver::runSEPSextractor
>>> (this=this@entry=0xa8cb9e70)
>>> at /usr/include/c++/8/cmath:475
>>> #5 0xb41eed90 in InternalSextractorSolver::sextract (this=0xa8cb9e70)
>>> at
>>> /home/pi/Projects/stellarsolver/stellarsolver/internalsextractorsolver.cpp:94
>>> #6 InternalSextractorSolver::run (this=0xa8cb9e70)
>>> at
>>> /home/pi/Projects/stellarsolver/stellarsolver/internalsextractorsolver.cpp:137
>>> #7 0xb420ddd4 in StellarSolver::extract (this=0x9e403bd0,
>>> calculateHFR=calculateHFR@entry=true, frame=...)
>>> at /usr/include/c++/8/bits/atomic_base.h:390
>>> #8 0x00c4c4c8 in FITSSEPDetector::findSourcesAndBackground
>>> (this=0x92c2b89c, boundary=..., bg=bg@entry=0x0)
>>> at /usr/include/arm-linux-gnueabihf/qt5/QtCore/qrect.h:60
>>> #9 0x00c4a32c in QtConcurrent::StoredMemberFunctionPointerCall2<bool,
>>> FITSSEPDetector, QRect const&, QRect, SkyBackground*,
>>> decltype(nullptr)>::runFunctor() (this=0x424be80)
>>> at
>>> /usr/include/arm-linux-gnueabihf/qt5/QtConcurrent/qtconcurrentstoredfunctioncall.h:911
>>> #10 0x004df0f0 in QtConcurrent::RunFunctionTask<bool>::run
>>> (this=0x424be80)
>>> at /usr/include/arm-linux-gnueabihf/qt5/QtCore/qmutex.h:240
>>> #11 0xb4855f30 in ?? () from /usr/lib/arm-linux-gnueabihf/libQt5Core.so.5
>>> #12 0xb485eb58 in ?? () from /usr/lib/arm-linux-gnueabihf/libQt5Core.so.5
>>> #13 0xb42c2494 in start_thread (arg=0x92c2c090) at pthread_create.c:486
>>> #14 0xb397c578 in ?? () at ../sysdeps/unix/sysv/linux/arm/clone.S:73
>>> from /lib/arm-linux-gnueabihf/libc.so.6
>>> Backtrace stopped: previous frame identical to this frame (corrupt
>>> stack?)
>>> (gdb)
>>>
>>>
>>>
>>>
>>> On Thu, Oct 15, 2020 at 9:36 PM Hy Murveit <murveit at gmail.com> wrote:
>>>
>>>> OK, you knew this would happen 10 minutes after I sent that...
>>>>
>>>> log-odds ratio 0.69992 (2.01359), 0 match, 0 conflict, 0 distractors,
>>>> 17 index.
>>>> RA,Dec = (309.748,70.6162), pixel scale 4.38547 arcsec/pix.
>>>> log-odds ratio 0.69992 (2.01359), 0 match, 0 conflict, 0 distractors,
>>>> 17 index.
>>>> RA,Dec = (309.748,70.6162), pixel scale 4.38547 arcsec/pix.
>>>> log-odds ratio 6.76328 (865.472), 0 match, 0 conflict, 0 distractors,
>>>> 14 index.
>>>> RA,Dec = (327.788,50.727), pixel scale 4.50653 arcsec/pix.
>>>> log-odds ratio 6.76328 (865.472), 0 match, 0 conflict, 0 distractors,
>>>> 14 index.
>>>> RA,Dec = (327.788,50.727), pixel scale 4.50653 arcsec/pix.
>>>> log-odds ratio 53.7431 (2.18938e+23), 18 match, 0 conflict, 100
>>>> distractors, 37 index.
>>>> RA,Dec = (325.28,58.0466), pixel scale 3.97665 arcsec/pix.
>>>> Hit/miss: Hit/miss:
>>>> +------+----+--------++------------+-+---+-----+--------++--+-----------------++--------------+-----
>>>> Segmentation fault
>>>>
>>>>
>>>> It's hard to say what caused it, but if I had to guess, I'd say it
>>>> ran out of memory.
>>>> It happened during a meridian flip, and perhaps StellarSolver is too
>>>> greedy with memory.
>>>> I'll uncheck that 'load all indexes' flag and continue.
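>>>>
>>>> If memory really is the culprit, maybe the 'load all indexes in memory'
>>>> option could be guarded by a free-memory check at runtime. Roughly something
>>>> like this (just a sketch, Linux-only, not actual StellarSolver code):
>>>>
>>>>     #include <sys/sysinfo.h>
>>>>
>>>>     // True if at least 'requiredMiB' of RAM is currently free.
>>>>     bool enoughFreeMemory(unsigned long long requiredMiB)
>>>>     {
>>>>         struct sysinfo info;
>>>>         if (sysinfo(&info) != 0)
>>>>             return false;                   // be conservative on error
>>>>         unsigned long long freeBytes =
>>>>             (unsigned long long)info.freeram * info.mem_unit;
>>>>         return freeBytes / (1024ULL * 1024ULL) >= requiredMiB;
>>>>     }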
>>>>
>>>> Hy
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Oct 15, 2020 at 9:23 PM Hy Murveit <murveit at gmail.com> wrote:
>>>>
>>>>> Some good news.
>>>>>
>>>>> Tonight, for the first time, I was able to start up the latest KStars and
>>>>> StellarSolver (built from source as of an hour before I wrote this) on my
>>>>> RPi4 without issues (so far).
>>>>> I've polar aligned, plate solved, focused, and am now guiding... so far all
>>>>> good. I have never been able to get to this point with this new software,
>>>>> so there's definitely been progress.
>>>>>
>>>>> Still need to bang on it for several nights, I'm sure, but thanks for
>>>>> the hard work!
>>>>>
>>>>> FWIW, I am not using any of the safeguards I was asked to put in (e.g.
>>>>> I have "Use Scale" and "Load All Indexes In Memory" both checked).
>>>>>
>>>>> Hy
>>>>>
>>>>> On Wed, Oct 14, 2020 at 12:35 PM Hy Murveit <murveit at gmail.com> wrote:
>>>>>
>>>>>> Rob,
>>>>>>
>>>>>> Thanks for your hard work on this!
>>>>>>
>>>>>> As we discussed privately, I have been unable to get plate-solving to
>>>>>> work with the internal StellarSolver on my RPi4.
>>>>>> Do you think this is unique to me--that is, have others been able to
>>>>>> do so? If this is a general problem, then I think Oct 22 is too aggressive a
>>>>>> release target, as we should give our advanced users at least a few weeks
>>>>>> of using the software more-or-less successfully on their target
>>>>>> environments. If this is something specific to me, but others are
>>>>>> successfully plate-solving on RPi's, then perhaps I'm over-reacting.
>>>>>>
>>>>>> FWIW, I just tried to send the full bug report (the one I sent you) out to
>>>>>> this devel list, but it got blocked (I guess one of my screenshots was too
>>>>>> large).
>>>>>> Hopefully the moderator will accept my thread soon, and it will be
>>>>>> available to all.
>>>>>>
>>>>>> Until that happens, where should we post bugs related to 3.5.0? This
>>>>>> thread? New threads on kstars-devel?
>>>>>>
>>>>>> Again, I want to emphasize that this is a huge undertaking and I
>>>>>> really appreciate the effort,
>>>>>> Hy
>>>>>>
>>>>>>
>>>>>> On Wed, Oct 14, 2020 at 5:47 AM Robert Lancaster <rlancaste at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hey guys,
>>>>>>>
>>>>>>> I also agree 100% that we are not ready for Oct. 15th. There are still
>>>>>>> several loose ends that need to be tied up on my end, as well as
>>>>>>> documentation to write up on how to use StellarSolver. Since it is a major release of
>>>>>>> KStars (3.5.0 vs 4.4.4), I think major changes in the UI and the functions
>>>>>>> are perfectly fine, but I agree we don’t want to change everything on the
>>>>>>> user without warning, so we need time to write up documentation and make
>>>>>>> sure everything is clear and simple for the user to use. So please let me
>>>>>>> know what is confusing so that we can try to improve that. As for your
>>>>>>> suggestion that we allow the old code to work during a transition period,
>>>>>>> the issue with that is that the old code really isn't compatible. But
>>>>>>> that doesn’t mean that we couldn’t make the UI in the Align module *LOOK*
>>>>>>> like it used to with the combo boxes in align. I put the options for which
>>>>>>> source extraction method, which solving method, and which profile to use
>>>>>>> inside the Options dialog because I thought people would access them less
>>>>>>> often than the other settings in the align module and I also want those to
>>>>>>> be available if we plate solve in other places in KStars. But we could
>>>>>>> also put those same options at the bottom in the align module as well,
>>>>>>> which would make it look close to what it used to look like. There is no
>>>>>>> reason they couldn’t be in both places. It might lead to a better
>>>>>>> transition.
>>>>>>>
>>>>>>> On that note I also agree that we weren’t really ready to put all
>>>>>>> this in KStars master right away, especially because people rely on the
>>>>>>> master branch of the repo for nightly builds, and we needed to test it
>>>>>>> thoroughly before integrating it. I was hoping for more of a testing time
>>>>>>> period where people could test this on my fork or maybe in a different
>>>>>>> branch in the main repo and we could have a nice discussion about how to
>>>>>>> improve it before putting it in the official KStars Master repo. But
>>>>>>> let's not be too hard on Jasem for that; he is a great guy and works
>>>>>>> incredibly hard to support KStars and INDI and us. It has caused a little
>>>>>>> more pressure on me to fix all these problems and get it all integrated,
>>>>>>> and maybe that was a good thing, because I had mostly finished
>>>>>>> StellarSolver back in June, and I was not integrating it into KStars very
>>>>>>> fast at all. This has pretty much lit a fire under me to get it done. We
>>>>>>> already did complain to him about putting it in the main repo so quickly,
>>>>>>> but it really has worked to get things up and running. So instead of
>>>>>>> focusing on that, now that it's done, let's work really hard to polish
>>>>>>> everything up and get it ready for release. I think we could be on track
>>>>>>> for an Oct 22nd release. Please keep testing and I will keep trying to
>>>>>>> perfect things with StellarSolver and its integration with KStars for the
>>>>>>> release. And I am sorry if the integration has caused anybody any issues
>>>>>>> because it wasn’t ready yet.
>>>>>>>
>>>>>>> Thank you very much,
>>>>>>>
>>>>>>> Rob
>>>>>>
>>>>>>
>>