So the really awesome part is how well it does with blind solving, not so much how well it does with informed solves. Try solving a JPEG image or two. With typical astrometry.net, the times can be several minutes for the same images that solve in several seconds in StellarSolver. For traditional solves that have position and scale information, there might be a slight improvement, but really it's insignificant.

For the development of StellarSolver, mainly I just asked myself: how can we make plate solving better? I originally embarked on this project back in February, when I was trying to solve the problem we kept having on Macs where the astrometry.net package wouldn't work because people's Python installation was messed up. There were sometimes 4 or 5 different python3 installations on people's computers, none of them configured right, and it was crazy. I found that we could use SExtractor to extract the stars and avoid Python altogether, so we implemented that. Then I started getting reports that not only did this fix the problem, it made plate solving MUCH faster. So I realized that SEP would work really well with astrometry.net, and I started trying to tweak all the parameters to make it solve faster. Then I got to thinking that if the solver were an internal library like SEP, it would be even faster. Then I got to thinking that if it were an internal library, we could run it on Windows, and I realized we could avoid all the temp files. And then it spiraled on from there. So I decided to not only make it an internal library, but to make it a library that can use different methods of solving, all driven the same way with the same parameters, so that we can learn from that and make it even faster. Then I got the parallelization idea. Over the course of the project, I went from a blind solving time of sometimes 2-3 minutes down to a blind solving time of 1 second.

Unfortunately, when it came time to integrate it into KStars this summer, I found that I really needed to take a break after all this work and work on some other projects, so it sat for several months. I finally came back to it in October and we integrated it. And yes, I know there were some problems with the integration, primarily because we found we had to make a whole bunch of changes AFTER it got merged into Master, which wasn't very good.
But I think it is almost back to as good as it was in the summer, and it is now working in KStars.

On Nov 10, 2020, at 11:49 AM, Wolfgang Reissenberger <sterne-jaeger@openfuture.de> wrote:

Hi Robert,
I hope it comes across: I am deeply impressed by your approach for StellarSolver!

Yes, indeed, it was the parallelization for solving, and reading your explanation I see that it is necessary to test things under the stars. The test I did this afternoon was re-solving an already solved image, so the fact that it solved super fast is not representative (but it was impressive nonetheless!)

Regarding memory handling: that goes far beyond my knowledge, and I can't assess what you and Jasem built there. But I think being conservative in predicting how much memory is available is always good, at least for the default behavior. If a user intentionally drives it to the edges, he or she should know what they are doing…

Wolfgang

On Nov 10, 2020, at 3:50 PM, Robert Lancaster <rlancaste@gmail.com> wrote:

Hi Wolfgang,

I just want to clarify something you said here. There are a couple of "parallel" things, which can be a little confusing, so I want to make sure we are talking about the same thing. The confusion comes from the terminology that astrometry.net uses.

1. Load all Indexes in Memory / load all indexes "in parallel". This is the inParallel option for astrometry.net. In the options I tried to call this "Load all Indexes in Memory" to avoid confusion with the Parallel Algorithm. It has nothing to do with parallelization across different threads or processors; it has to do with memory management. The astrometry.net solver can load the indexes and search them one after the other, or it can try to load all the indexes at once and then solve. The second option is much, much faster, but it comes with risk: astrometry.net does NOT check to see if it has enough RAM before it tries to solve, and they have big warnings in the documentation about using this option. If you don't have enough RAM, it could use all the RAM and crash.

I programmed StellarSolver to check the available RAM prior to starting the solve. If there is not enough RAM, it is supposed to turn off the option. The user can also disable the option entirely, so that there is never a problem. But you really do want the option turned on if your system can handle it. We had some issues earlier with the RAM calculation, and I think the inParallel option causes the greatest crash risk, so I would really like it if somebody could look over the code that determines whether there is enough RAM and see if it is good now. One thought I have is that we could make the calculation more conservative and change the option to have three choices: Auto, On, or Off. That way, if a user is really brave, or convinced they have enough RAM, they could turn the option on regardless of the risk; if they are risk-averse, they could turn it off; and most users could just leave it on Auto. What do you think?
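To make concrete the kind of check I mean, here is a rough sketch, NOT the actual StellarSolver code (the helper names are made up, the safety margin is arbitrary, and this Linux-only version reads /proc/meminfo; a real implementation would query available memory differently per platform): estimate the available RAM, add up the index file sizes, and only allow loading everything at once if there is a comfortable margin.

    #include <QDir>
    #include <QFile>
    #include <QFileInfo>
    #include <QString>
    #include <QStringList>

    // Hypothetical helper: available physical memory in bytes
    // (Linux-only sketch, reads MemAvailable from /proc/meminfo).
    static qint64 availableRAM()
    {
        QFile meminfo("/proc/meminfo");
        if (!meminfo.open(QIODevice::ReadOnly | QIODevice::Text))
            return -1;
        while (!meminfo.atEnd())
        {
            const QString line = QString::fromLatin1(meminfo.readLine());
            if (line.startsWith("MemAvailable:"))
            {
                const QStringList parts = line.simplified().split(' ');
                if (parts.size() >= 2)
                    return parts.at(1).toLongLong() * 1024; // reported in kB
            }
        }
        return -1;
    }

    // Hypothetical helper: total size of the astrometry index files on disk,
    // roughly what gets pulled into memory when they are all loaded at once.
    static qint64 indexFilesSize(const QString &indexFolder)
    {
        qint64 total = 0;
        const QFileInfoList files =
            QDir(indexFolder).entryInfoList(QStringList() << "index-*.fits", QDir::Files);
        for (const QFileInfo &info : files)
            total += info.size();
        return total;
    }

    // Only allow "load all indexes in memory" if available RAM exceeds the
    // index size by a safety margin; otherwise fall back to loading the
    // indexes one after the other.
    static bool canLoadAllIndexesInMemory(const QString &indexFolder, double margin = 1.5)
    {
        const qint64 ram = availableRAM();
        return ram > 0 && ram > static_cast<qint64>(indexFilesSize(indexFolder) * margin);
    }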
2. Parallelization Algorithm for solving. I am assuming this second option is what you meant in your email. This one is entirely my own creation and is what makes StellarSolver stellar. Modern computers really have great capacity for computing in parallel, and using that capability gives a HUGE performance boost, even on a Pi, since the Pi has 4 cores.

I programmed StellarSolver to have two different parallel algorithms: one that solves simultaneously at multiple "depths" and one that solves simultaneously at different scales. If you set it to Auto, it will select the appropriate one based on whether you specified the scale or the position (or neither). If the image has both scale AND position, it does NOT solve in parallel and goes back to solving with a single thread.
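In rough pseudo-C++, the Auto selection amounts to something like this. It is a simplified sketch with made-up names, not the actual implementation, and the exact mapping of hints to algorithms is only illustrative:

    // Simplified sketch of the Auto selection logic (illustrative names only).
    enum class ParallelStrategy
    {
        MultiDepths,   // several threads, each searching a different range of star-list depths
        MultiScales,   // several threads, each searching a different range of image scales
        SingleThread   // no parallel solving
    };

    struct SolveHints
    {
        bool hasScale    = false;  // pixel scale / field size supplied?
        bool hasPosition = false;  // approximate RA/DEC supplied?
    };

    static ParallelStrategy chooseStrategy(const SolveHints &hints)
    {
        // Scale AND position known: the search space is already small,
        // so parallel solving is skipped and a single thread is used.
        if (hints.hasScale && hints.hasPosition)
            return ParallelStrategy::SingleThread;

        // Scale known but position unknown: split the work by depth.
        if (hints.hasScale)
            return ParallelStrategy::MultiDepths;

        // Otherwise (blind solve, or position only): split the work by scale.
        return ParallelStrategy::MultiScales;
    }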
When Jasem wanted me to de-thread StellarSolver and make it so that just the solvers are threads, I had to make a bunch of changes, and one change I forgot was to make the star extraction that runs before parallel solving asynchronous. That means that when doing a parallel solve, it might look like things have frozen for a moment during the star extraction, before the solver threads start up. I have already fixed this, but the fix is in the releaseExperiment branch of StellarSolver, not in Master. I would like to get this fix integrated before we release, but I will need to test it thoroughly first, as I mentioned in a previous email. I am wondering if this freezing behavior is what caused the "crash" you observed? Roughly, the fix moves the extraction into the thread pool, something like the sketch below.
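(A minimal sketch only, assuming QtConcurrent; extractStars() and launchSolverThreads() are hypothetical stand-ins, not the actual functions in the releaseExperiment branch.)

    #include <QtConcurrent>
    #include <QFutureWatcher>
    #include <QObject>

    // Hypothetical stand-ins for the real routines.
    static void extractStars()        { /* SEP-based star extraction (potentially slow) */ }
    static void launchSolverThreads() { /* start the parallel solver threads */ }

    void startParallelSolve()
    {
        // Run the extraction in the global thread pool so the GUI thread never blocks.
        auto *watcher = new QFutureWatcher<void>();
        QObject::connect(watcher, &QFutureWatcher<void>::finished, [watcher]()
        {
            launchSolverThreads();   // only start the solvers once extraction has finished
            watcher->deleteLater();
        });
        watcher->setFuture(QtConcurrent::run(&extractStars));
    }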
Thanks,

Rob

On Nov 10, 2020, at 8:03 AM, Wolfgang Reissenberger <sterne-jaeger@openfuture.de> wrote:

OK, I did a quick check on my RPi4 with the Parallel Algorithm set to "Auto", and it works super fast! But since it is daytime, I can only test the "Load and Slew" option. So maybe the WCS info in the file gave hints that are not present for a normal capture-and-slew or sync.

I need to check it under real conditions, which might be tricky due to the fog hanging around here…

Wolfgang

On Nov 10, 2020, at 11:16 AM, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:

Alright, let's look at this:

1. Parallel algorithm: this is related to the SOLVER, not image partitioning. It should work fine on the RPi4, and the checks are more reliable now, as Robert worked on that.
2. WCS Polar Align: can this be reproduced with simulators?

--
Best Regards,
Jasem Mutlaq

On Tue, Nov 10, 2020 at 10:48 AM Wolfgang Reissenberger <sterne-jaeger@openfuture.de> wrote:

It wasn't that bad. The problem was that KStars went to 100% CPU usage and died (or I killed it, I do not exactly remember). I'll try to reproduce it...

On Nov 10, 2020, at 8:45 AM, Hy Murveit <murveit@gmail.com> wrote:

OK, well I believe it was fixed a week ago, so if you can still recreate it, you should report it. It should be fixed before release if it is still freezing the Pi.

Hy

On Mon, Nov 9, 2020 at 11:42 PM Wolfgang Reissenberger <sterne-jaeger@openfuture.de> wrote:

OK, I have to check it. The problem occurred only a few days ago, and I think I'm always on the bleeding edge...

On Nov 10, 2020, at 8:38 AM, Hy Murveit <murveit@gmail.com> wrote:

Wolfgang: I believe Rob and/or Jasem fixed the issue with the parallel algorithm bringing down the RPi4 a while back. I have the solver on auto parallelism and load-all-indexes-in-memory, and it seems to work fine (and in parallel). Similarly, for star extraction, Jasem implemented a threaded extraction that also automatically determines how many threads to use, and it seems fine on the RPi4.

Eric: I believe these parallel options are the defaults, so hopefully users won't need to configure things like this. For star detection, I don't believe you can turn it off. For star detection, Jasem splits the frame before detection (into at most num-threads parts, 4 for the RPi4). For align, I'm not sure how Rob divided things. Roughly, I picture the detection-side partitioning like the sketch below (my reading of it, not the actual code).
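A minimal sketch, assuming QtConcurrent and made-up names:

    #include <QtConcurrent>
    #include <QList>
    #include <QRect>
    #include <QThread>

    struct Star { float x; float y; float flux; };

    // Stand-in for the real SEP-based extraction over one band of the image.
    static QList<Star> extractStarsInRegion(const QRect &region)
    {
        Q_UNUSED(region);
        return {};
    }

    // Split the frame into one horizontal band per thread, extract stars in each
    // band concurrently, then merge the per-band results.
    static QList<Star> extractStarsPartitioned(int width, int height)
    {
        const int threads = qMax(1, QThread::idealThreadCount());  // e.g. 4 on an RPi4
        QList<QRect> bands;
        const int bandHeight = height / threads;
        for (int i = 0; i < threads; ++i)
        {
            const int top = i * bandHeight;
            const int h   = (i == threads - 1) ? height - top : bandHeight;
            bands.append(QRect(0, top, width, h));
        }

        const QList<QList<Star>> partial =
            QtConcurrent::blockingMapped(bands, &extractStarsInRegion);
        QList<Star> allStars;
        for (const QList<Star> &stars : partial)
            allStars.append(stars);
        return allStars;
    }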
Hy

On Mon, Nov 9, 2020 at 11:07 PM Wolfgang Reissenberger <sterne-jaeger@openfuture.de> wrote:

Hi all,
I think we are close to finishing the release. I personally would opt to wait another week and keep an eye on stability.

Maybe we should take another look at whether the default settings in the StellarSolver profiles work a) for typical camera/scope combinations and b) on all platforms.

For example, on my RPi I needed to change the Parallel Algorithm to "None" because parallelism brought KStars down. Is the default setting "None", and did I change it at some point? With all the new parameters, I would prefer having a robust default setup and leaving it to the user to optimize for speed.

@Jasem: please take a closer look at MR!122, since it fixed 4(!) regressions I introduced with my capture-counting fix MR!114. Hopefully we now have at least proper coverage with automated tests...

Wolfgang

On Nov 9, 2020, at 10:04 PM, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:

Hello Folks,

So back to this topic: any major blockers to the KStars 3.5.0 release now?

1. Remote Solver should be fixed now.
2. StellarSolver Profiles are more optimized now.
3. Handbook not updated yet, but we can probably work on this shortly.
4. Couple of pending MRs to take care of.

How about Friday the 13th?

--
Best Regards,
Jasem Mutlaq

On Thu, Nov 5, 2020 at 3:41 AM Robert Lancaster <rlancaste@gmail.com> wrote:

Hi Eric,

Ok, so then we would be changing the way we do version numbering with this, right?
I believe we currently add features in each new iteration (3.4.1, 3.4.2, etc.),
and when something is really big, like StellarSolver, we make it a big release like 3.5.0.

With this new paradigm, we wouldn't put new features into the master of the main 3.5 branch.
Instead we would work on a new 3.6 branch, and bug fixes would go into the 3.5 branch
to make each new minor release, like 3.5.1, 3.5.2, etc.

Do I have this correct?

If this is right, then it would take longer before users see new features in the main branch, but the
tradeoff is that the main branch would have a LOT more stability. I see this as a big positive.

Thanks,

Rob

> On Nov 4, 2020, at 5:54 PM, Eric Dejouhanet <eric.dejouhanet@gmail.com> wrote:
> 
> Hello Hy,
> 
> Version 3.5.0 is only the beginning of the 3.5.x series, with more
> bugfixes on each iteration (and possibly, only bugfixes).
> So I have no problem leaving unresolved issues in 3.5.0.
> 
> For instance, the Focus module now has a slight and unforeseeable
> delay after the capture completes.
> The UI reflects the end of the capture only, not the end of the detection.
> This makes the Focus UI test quite difficult to tweak, as running an
> average of the HFR over multiple frames now has an unknown duration.
> Right now, the test tries to click the capture button too soon in 2
> out of 10 attempts.
> But this won't block 3.5 in my opinion (and now that I've understood the
> problem, I won't work on it immediately).
> 
> In terms of reporting problems, the official way is still bugs.kde.org,
> but there's quite a cleanup/follow-up to do there.
> I'd say we can use issues on invent.kde.org to discuss planned
> development around a forum/bugzilla issue or an invent proposal (like
> agile stories).
> There are milestones associated with several issues (although I think
> they should be reviewed and postponed).
> And we can certainly write a punchlist: check the board at
> https://invent.kde.org/education/kstars/-/milestones/3
> 
> On Wed, Nov 4, 2020 at 10:38 PM, Hy Murveit <murveit@gmail.com> wrote:
>> 
>> Eric,
>> 
>> I would add to your list:
>> 
>> - KStars Handbook (review/update sections to reflect 3.5.0) and finally (perhaps manually if necessary) put the latest handbook online.
>> 
>> - Review the extraction settings. I spent a bit of time looking at the default HFR settings, and based on some experimentation (truth be told, with a limited amount of data) adjusted things a little differently than my first guess (which was basically Focus' settings).
>> Rob: My intuition is that I should adjust the default StellarSolver star-extraction settings for Focus and Guide as well in stellarsolverprofile.cpp. I don't know whether you've already verified them and want to release them as they are, or whether they are a first shot and you'd welcome adjustment?
>> 
>> Also, Eric, I suppose I should be adding these things here: https://invent.kde.org/education/kstars/-/issues
>> Is that right? Sorry about that--ok, after this thread ;) But seriously, your email is a good summary, and from that link
>> it doesn't seem as easy to see which items are "must do by 3.5.0" and which are "nice to have someday".
>> A 3.5.0 punchlist would be a nice thing to have.
>> 
>> Hy
>> 
>> On Wed, Nov 4, 2020 at 12:58 PM Eric Dejouhanet <eric.dejouhanet@gmail.com> wrote:
>>> 
>>> Hello,
>>> 
>>> Where do we stand now in terms of bugfixing towards 3.5.0?
>>> 
>>> - StellarSolver has all features in, and 1.5 is finally out on Jasem's PPA.
>>> - However, GitLab CI still complains about that lib package (see
>>> https://invent.kde.org/education/kstars/-/jobs/75941)
>>> - Unit tests are being fixed progressively; mount tests are down to
>>> ~20 minutes (yeees!)
>>> - From my tests, the remote Astrometry INDI driver is no longer usable
>>> from Ekos.
>>> - The issue raised with flat frames is confirmed fixed (at least by me).
>>> - Meridian flip is OK (but I did not have enough time to test TWO flips in a row).
>>> - Memory leaks are still being investigated in Ekos.
>>> - There is an issue when duplicating an entry in a scheduler job,
>>> where the associated sequence is copied from the next job.
>>> 
>>> Could we get a 3.6 branch where we would merge development of new features,
>>> and keep master for bugfixing 3.5.x until we merge the 3.6 features in?
>>> (We'd still have to port bugfixes from master to 3.6.)
>>> I don't think the opposite, master for 3.6 and a separate living
>>> 3.5.x branch, is doable in the current configuration (build, PPAs, MRs...).
>>> 
>>> --
>>> -- eric.dejouhanet@gmail.com - https://astronomy.dejouha.net
> 
> 
> -- 
> -- eric.dejouhanet@gmail.com - https://astronomy.dejouha.net