I just became free from work for a few days and I thought I'd try to get my MRs in for 3.5.0. Looks like I missed the tag :-)

Regards
Akarsh

On Fri, Nov 20, 2020 at 18:18, Hy Murveit <murveit@gmail.com> wrote:
>
> > git log
> commit bed10ad934e8b60c36da5a3bfeaa8c8e8284e384 (HEAD -> master, upstream/master)
> Author: Jasem Mutlaq <mutlaqja@ikarustech.com>
> Date:   Sat Nov 21 02:49:47 2020 +0300
>
>     Marking stable release for 3.5.0
>
> Woohoo! Congratulations!!
>
> On Sat, Nov 14, 2020 at 9:04 PM Hy Murveit <murveit@gmail.com> wrote:
>
>> Jasem,
>>
>> The build is broken.
>>
>> To get things to compile I needed to comment out
>> lines 46, 48, 859, and 864 of align.h.
>> These are related to your recent commits.
>>
>> Hy
>>
>> PS: IMHO it's better to remove all those lines you commented out in the
>> recent commits.
>> You can always retrieve them in git.
>>
>> On Sat, Nov 14, 2020 at 7:46 PM Robert Lancaster <rlancaste@gmail.com>
>> wrote:
>>
>>> Or did you say the solve succeeded with whatever profile you used?
>>> Sorry, this email thread is missing part of the message and I may have
>>> misinterpreted it. Maybe this image was in response to your message about
>>> the parallel solvers not shutting down that I already responded to?
>>>
>>> On Nov 14, 2020, at 10:43 PM, Robert Lancaster <rlancaste@gmail.com>
>>> wrote:
>>>
>>> Hi Wolfgang, I tried solving this image with my Small Scale Solving
>>> profile and it failed. I noticed that your stars are fairly small and it
>>> was downsampling by 3. So I tried turning off downsampling entirely and
>>> it succeeded in about 3 seconds. If you are having trouble with failed
>>> solves, you can try disabling the auto-downsample function and try 1 or 2
>>> for the downsample.
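[A toy sketch of why that advice works: stars only a few pixels wide can shrink below the extractor's detection floor when downsampled by 3, so backing the factor off toward 1 or 2 keeps them detectable. The floor and back-off rule below are illustrative assumptions, not StellarSolver's actual heuristics.]

    #include <cstdio>

    // Toy model of the downsample trade-off: back the factor off toward
    // 1 or 2 whenever it would shrink small stars below an assumed
    // detection floor.
    int pickDownsample(double starFwhmPx, int autoChoice)
    {
        const double minUsableFwhm = 1.5; // assumed floor, in pixels
        int factor = autoChoice;
        while (factor > 1 && starFwhmPx / factor < minUsableFwhm)
            --factor;
        return factor;
    }

    int main()
    {
        // With ~3 px stars, an auto choice of 3 collapses them to ~1 px,
        // so the sketch backs off to 2.
        std::printf("downsample = %d\n", pickDownsample(3.0, 3));
    }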
>>> On Nov 14, 2020, at 6:44 PM, Wolfgang Reissenberger
>>> <sterne-jaeger@openfuture.de> wrote:
>>>
>>> Try this one:
>>>
>>> https://drive.google.com/file/d/1QAq19iQjdqe_YJNuNCcOyWHaoyHQGxcE/view?usp=sharing
>>>
>>> On Nov 14, 2020 at 23:57, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:
>>>
>>> Got a link to the image?
>>>
>>> A user sent me this log:
>>>
>>> [2020-11-14T02:18:16.415 UTC WARN ][ default] - QObject::startTimer:
>>> Timers can only be used with threads started with QThread
>>> [2020-11-14T02:18:16.443 UTC WARN ][ default] - QtDBus: cannot relay
>>> signals from parent Phonon::AbstractAudioOutput(0x4cfbe30 "") unless they
>>> are emitted in the object's thread QThread(0xcf9258 ""). Current thread
>>> is QThread(0x507d2a8 "").
>>> [2020-11-14T02:18:16.444 UTC WARN ][ default] - QtDBus: cannot relay
>>> signals from parent QObject(0x4cfbe30 "") unless they are emitted in the
>>> object's thread QThread(0xcf9258 ""). Current thread is
>>> QThread(0x507d2a8 "").
>>> [2020-11-14T02:18:16.485 UTC WARN ][ default] - QObject::~QObject:
>>> Timers cannot be stopped from another thread
>>>
>>> Anyone seen anything like this? It appears to be related to Phonon
>>> playing notification sounds and not an internal error in KStars.
>>>
>>> --
>>> Best Regards,
>>> Jasem Mutlaq
>>>
>>> On Sat, Nov 14, 2020 at 11:02 PM Wolfgang Reissenberger
>>> <sterne-jaeger@openfuture.de> wrote:
>>>
>>>> Robert, all,
>>>> I had the issue again when trying to solve a wide-field image around
>>>> NGC 6888, which contains very dense star fields. I am using the 1-Default
>>>> profile without any change.
>>>>
>>>> If I leave the "Parallel Algorithm" option from the Astrometry
>>>> Parameters on "Auto", KStars solves the image very fast, but it stays at
>>>> 100% CPU. It seems that the threads running in parallel were hanging.
>>>>
>>>> I am using the following versions:
>>>> KStars: 57c44d05c3e1f9895d84c7f4f73950975e8eddb7
>>>> StellarSolver: 2d7eba6685c1bcd77c0525e88b3d24b2fcd474a9
>>>>
>>>> Anything I could test right now?
>>>>
>>>> Wolfgang
>>>>
>>>> On Nov 10, 2020 at 15:50, Robert Lancaster <rlancaste@gmail.com> wrote:
>>>>
>>>> Hi Wolfgang,
>>>>
>>>> I just want to clarify something you said here. There are a couple of
>>>> parallel things, which can be a little confusing, so I want to make sure
>>>> we are talking about the same things. The cause of the confusion is the
>>>> terminology that astrometry.net uses.
>>>>
>>>> 1. Load all Indexes in Memory / Load all indexes in Parallel. This is
>>>> the inParallel option for astrometry.net. In the options I tried to call
>>>> this "Load all Indexes in Memory" to avoid the confusion with the
>>>> Parallel Algorithm. This has nothing to do with parallelization across
>>>> threads or processors; it has to do with memory management. The
>>>> astrometry.net solver can load the indexes and search them one after the
>>>> other, or it can try to load all the indexes at once and then solve. The
>>>> second option is much, much faster, but it comes with risk:
>>>> astrometry.net does NOT check whether it has enough RAM before it tries
>>>> to solve, and the documentation carries big warnings about using this
>>>> option. If you don't have enough RAM, it could use all the RAM and crash.
>>>>
>>>> I programmed StellarSolver to check the available RAM prior to starting
>>>> the solve. If there is not enough RAM, it is supposed to turn off the
>>>> option. The user can also disable the option entirely, so that there is
>>>> never a problem. But you really do want the option turned on if your
>>>> system can handle it. We had some issues earlier with the RAM
>>>> calculation. I think the "inParallel" option carries the greatest crash
>>>> risk. I would really like it if somebody could look over the code for
>>>> determining whether there is enough RAM and see if it is good now. One
>>>> thought I have is that we could make the calculation more conservative
>>>> and change the option to have three choices: Auto, On, or Off. That way,
>>>> a user who is really brave, or convinced they have enough RAM for sure,
>>>> could turn the option on regardless of the risk; a risk-averse user
>>>> could turn it off; and most users could just leave it on Auto. What do
>>>> you think?
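[The RAM check and the proposed three-way option reduce to a few lines. A hedged sketch only: the halving margin and every name here are assumptions for illustration, not StellarSolver's actual code.]

    #include <QFileInfo>
    #include <QStringList>

    // The three-way setting proposed above.
    enum class InParallelMode { Off, On, Auto };

    // Sum the index file sizes and require them to fit comfortably in
    // free RAM. The factor-of-two margin is the kind of "more
    // conservative" calculation suggested above; it is an assumption.
    static bool indexesFitInRam(const QStringList &indexFiles, qint64 freeRamBytes)
    {
        qint64 needed = 0;
        for (const QString &path : indexFiles)
            needed += QFileInfo(path).size();
        return needed < freeRamBytes / 2;
    }

    bool useInParallel(InParallelMode mode, const QStringList &indexFiles,
                       qint64 freeRamBytes)
    {
        switch (mode)
        {
            case InParallelMode::Off:  return false;
            case InParallelMode::On:   return true; // brave user overrides the check
            case InParallelMode::Auto: return indexesFitInRam(indexFiles, freeRamBytes);
        }
        return false;
    }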
>>>> 2. Parallelization Algorithm for solving. I am assuming this second
>>>> option is what you meant in your email. This one is entirely of my
>>>> creation and is what makes StellarSolver stellar. Modern computers
>>>> really have great capacity for computing in parallel, and using it gives
>>>> a HUGE performance boost, even on a Pi, since the Pi has 4 processors.
>>>>
>>>> I programmed StellarSolver with two different parallel algorithms: one
>>>> that solves simultaneously at multiple "depths" and one that solves
>>>> simultaneously at different scales. If you set it to Auto, it will
>>>> select the appropriate one based on whether you specified the scale or
>>>> position (or neither). If the image has both scale AND position, it does
>>>> NOT solve in parallel and goes back to solving with a single thread.
>>>>
>>>> When Jasem wanted me to de-thread StellarSolver and make it so that
>>>> just the solvers are threads, I had to make a bunch of changes, and one
>>>> change I forgot was to make the star extraction before parallel solving
>>>> asynchronous. That means that when doing a parallel solve, it might look
>>>> like things have frozen for a moment during the star extraction before
>>>> the threads start up. I have already fixed this, but it is in the
>>>> releaseExperiment branch of StellarSolver, not in master. I would like
>>>> to get this fix integrated before we release, but I will need to test it
>>>> thoroughly first, as I mentioned in a previous email. I am wondering if
>>>> this freezing behavior was what caused the "crash" you observed?
>>>>
>>>> Thanks,
>>>>
>>>> Rob
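[Rob's Auto rule fits in one small decision function. The thread states the rule (both constraints known: single-threaded; otherwise one of the two parallel modes) but not which constraint selects which mode, so the mapping and names below are guesses for illustration only.]

    // Illustrative sketch of the Auto selection rule described above.
    // The enum and the scale/position-to-mode mapping are assumptions.
    enum class ParallelAlgo { SingleThread, MultiDepths, MultiScales };

    ParallelAlgo chooseAlgorithm(bool scaleKnown, bool positionKnown)
    {
        if (scaleKnown && positionKnown)
            return ParallelAlgo::SingleThread; // search space already small
        if (scaleKnown || positionKnown)
            return ParallelAlgo::MultiDepths;  // guess: parallelize over depths
        return ParallelAlgo::MultiScales;      // guess: parallelize over scales
    }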
>>>> On Nov 10, 2020, at 8:03 AM, Wolfgang Reissenberger
>>>> <sterne-jaeger@openfuture.de> wrote:
>>>>
>>>> OK, I did a quick check on my RPi4 with Parallel Algorithm set to
>>>> "Auto", and it works super fast! But since it is daytime, I can only
>>>> test the "Load and Slew" option. So maybe the WCS info in the file gave
>>>> hints that are not present for a normal capture-and-slew or sync.
>>>>
>>>> I need to check it under real conditions, which might be tricky due to
>>>> the fog hanging around here…
>>>>
>>>> Wolfgang
>>>>
>>>> On Nov 10, 2020 at 11:16, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:
>>>>
>>>> Alright, let's look at this:
>>>>
>>>> 1. Parallel algorithm: this is related to the SOLVER, not image
>>>> partitioning. It should work fine on the RPi4, and the checks are more
>>>> reliable now, as Robert has worked on that.
>>>> 2. WCS Polar Align: can this be reproduced with simulators?
>>>>
>>>> --
>>>> Best Regards,
>>>> Jasem Mutlaq
>>>>
>>>> On Tue, Nov 10, 2020 at 10:48 AM Wolfgang Reissenberger
>>>> <sterne-jaeger@openfuture.de> wrote:
>>>>
>>>>> It wasn't that bad. The problem was that KStars went to 100% CPU usage
>>>>> and died (or I killed it, I do not exactly remember). I'll try to
>>>>> reproduce it...
>>>>>
>>>>> On Nov 10, 2020 at 08:45, Hy Murveit <murveit@gmail.com> wrote:
>>>>>
>>>>> OK, well I believe it was fixed a week ago, so if you can still
>>>>> recreate it, you should report it.
>>>>> It should be fixed before release if it is still freezing the Pi.
>>>>>
>>>>> Hy
>>>>>
>>>>> On Mon, Nov 9, 2020 at 11:42 PM Wolfgang Reissenberger
>>>>> <sterne-jaeger@openfuture.de> wrote:
>>>>>
>>>>>> OK, I have to check it. The problem occurred only a few days ago, and
>>>>>> I think I'm always on the bleeding edge...
>>>>>>
>>>>>> On Nov 10, 2020 at 08:38, Hy Murveit <murveit@gmail.com> wrote:
>>>>>>
>>>>>> Wolfgang: I believe Rob and/or Jasem fixed the issue with the parallel
>>>>>> algorithm bringing down the RPi4 a while back.
>>>>>> I have the solver on auto parallelism and load all indexes in memory,
>>>>>> and it seems to work fine (and in parallel).
>>>>>> Similarly, for star extraction, Jasem implemented a threaded
>>>>>> extraction that also automatically determines how many threads to use
>>>>>> and seems fine on the RPi4.
>>>>>>
>>>>>> Eric: I believe these parallel options are the defaults. Hopefully
>>>>>> users won't need to configure things like this.
>>>>>> For star detection, I don't believe you can turn it off.
>>>>>> For star detection, Jasem split the frame before detection (into at
>>>>>> most num-threads parts -- 4 for the RPi4; see the sketch below).
>>>>>> For align, I'm not sure how Rob divided things.
>>>>>>
>>>>>> Hy
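[A minimal sketch of that frame split. The horizontal banding is an assumption; the thread only says the frame is divided into at most num-threads parts.]

    #include <QRect>
    #include <QThread>
    #include <QVector>
    #include <algorithm>

    // Divide an image into at most numThreads horizontal bands, one per
    // extraction worker. The integer arithmetic distributes any remainder
    // rows across the bands.
    QVector<QRect> splitFrame(int width, int height,
                              int numThreads = QThread::idealThreadCount())
    {
        const int parts = std::max(1, std::min(numThreads, height));
        QVector<QRect> bands;
        for (int i = 0; i < parts; ++i)
        {
            const int y0 = height * i / parts;
            const int y1 = height * (i + 1) / parts;
            bands.append(QRect(0, y0, width, y1 - y0)); // one band per worker
        }
        return bands;
    }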
>>>>>> On Mon, Nov 9, 2020 at 11:07 PM Wolfgang Reissenberger
>>>>>> <sterne-jaeger@openfuture.de> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>> I think we are close to finishing the release. I personally would
>>>>>>> opt to wait for another week and keep an eye on stability.
>>>>>>>
>>>>>>> Maybe we should take another look at whether the default settings in
>>>>>>> the StellarSolver profiles work a) for typical camera/scope
>>>>>>> combinations and b) for all platforms.
>>>>>>>
>>>>>>> For example, with my RPi I needed to change the Parallel Algorithm
>>>>>>> to "None" because parallelism brought KStars down. Is the default
>>>>>>> setting "None" and did I change it at some point? With all the new
>>>>>>> parameters I would prefer having a robust setup and leaving it to the
>>>>>>> user to optimize for speed.
>>>>>>>
>>>>>>> @Jasem: please take a closer look at MR!122, since it fixed 4(!)
>>>>>>> regressions I introduced with my capture counting fix, MR!114.
>>>>>>> Hopefully now we have at least proper coverage with automated tests...
>>>>>>>
>>>>>>> Wolfgang
>>>>>>>
>>>>>>> On Nov 9, 2020 at 22:04, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:
>>>>>>>
>>>>>>> Hello Folks,
>>>>>>>
>>>>>>> So back to this topic: any major blockers to the KStars 3.5.0
>>>>>>> release now?
>>>>>>>
>>>>>>> 1. The remote solver should be fixed now.
>>>>>>> 2. StellarSolver profiles are more optimized now.
>>>>>>> 3. The Handbook is not updated yet, but we can probably work on this
>>>>>>> shortly.
>>>>>>> 4. A couple of pending MRs to take care of.
>>>>>>>
>>>>>>> How about Friday the 13th?
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards,
>>>>>>> Jasem Mutlaq
>>>>>>>
>>>>>>> On Thu, Nov 5, 2020 at 3:41 AM Robert Lancaster
>>>>>>> <rlancaste@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Eric,
>>>>>>>>
>>>>>>>> OK, so then we would be changing the way we do version numbering
>>>>>>>> with this, right?
>>>>>>>> I believe now we typically add features in each new iteration:
>>>>>>>> 3.4.1, 3.4.2, etc.,
>>>>>>>> and when it is really big, like StellarSolver, we make it a big
>>>>>>>> release like 3.5.0.
>>>>>>>>
>>>>>>>> With this new paradigm, we wouldn't put new features into the
>>>>>>>> master of the main 3.5 branch.
>>>>>>>> Instead we would work on a new 3.6 branch, and bug fixes would go
>>>>>>>> into the 3.5 branch
>>>>>>>> to make each new minor release, like 3.5.1, 3.5.2, etc.
>>>>>>>>
>>>>>>>> Do I have this correct?
>>>>>>>>
>>>>>>>> If this is right, it would take longer before users see new
>>>>>>>> features in the main branch, but the
>>>>>>>> tradeoff is that the main branch would have a LOT more stability.
>>>>>>>> I see this as a big positive.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Rob
>>>>>>>>
>>>>>>>> > On Nov 4, 2020, at 5:54 PM, Eric Dejouhanet
>>>>>>>> <eric.dejouhanet@gmail.com> wrote:
>>>>>>>> >
>>>>>>>> > Hello Hy,
>>>>>>>> >
>>>>>>>> > Version 3.5.0 is only the beginning of the 3.5.x series, with more
>>>>>>>> > bugfixes on each iteration (and possibly only bugfixes).
>>>>>>>> > So I have no problem leaving unresolved issues in 3.5.0.
>>>>>>>> >
>>>>>>>> > For instance, the Focus module now has a slight and unforeseeable
>>>>>>>> > delay after the capture completes.
>>>>>>>> > The UI reflects the end of the capture only, not the end of the
>>>>>>>> > detection.
>>>>>>>> > This makes the UI Focus test quite difficult to tweak, as running
>>>>>>>> > an average of the HFR over multiple frames now has an unknown
>>>>>>>> > duration.
>>>>>>>> > Right now, the test tries to click the capture button too soon in
>>>>>>>> > 2 out of 10 attempts (see the sketch below).
>>>>>>>> > But this won't block 3.5 in my opinion (and now that I understand
>>>>>>>> > the problem, I won't work on it immediately).
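[The race Eric describes can be sidestepped in the test by polling a readiness predicate instead of clicking on a fixed schedule. A generic sketch only; the names are illustrative, not KStars' test API.]

    #include <chrono>
    #include <functional>
    #include <thread>

    // Poll until the detection result lands or a timeout expires, so the
    // test never clicks the capture button while detection is still
    // running. Returning false fails the test instead of racing.
    bool waitFor(const std::function<bool()> &ready,
                 std::chrono::milliseconds timeout,
                 std::chrono::milliseconds poll = std::chrono::milliseconds(100))
    {
        const auto deadline = std::chrono::steady_clock::now() + timeout;
        while (std::chrono::steady_clock::now() < deadline)
        {
            if (ready())
                return true;
            std::this_thread::sleep_for(poll);
        }
        return false;
    }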
>>>>>>>> >
>>>>>>>> > In terms of reporting problems, the official way is still
>>>>>>>> > bugs.kde.org, but there's quite a cleanup/follow-up to do there.
>>>>>>>> > I'd say we can use issues on invent.kde.org to discuss planned
>>>>>>>> > development around a forum/Bugzilla issue or an invent proposal
>>>>>>>> > (like agile stories).
>>>>>>>> > There are milestones associated with several issues (although I
>>>>>>>> > think they should be reviewed and postponed).
>>>>>>>> > And we can certainly write a punchlist: check the board at
>>>>>>>> > https://invent.kde.org/education/kstars/-/milestones/3
>>>>>>>> >
>>>>>>>> > On Wed, Nov 4, 2020 at 22:38, Hy Murveit <murveit@gmail.com> wrote:
>>>>>>>> >>
>>>>>>>> >> Eric,
>>>>>>>> >>
>>>>>>>> >> I would add to your list:
>>>>>>>> >>
>>>>>>>> >> - KStars Handbook: review and update sections to reflect 3.5.0,
>>>>>>>> >> and finally (perhaps manually if necessary) put the latest
>>>>>>>> >> handbook online.
>>>>>>>> >>
>>>>>>>> >> - Review the extraction settings. I spent a bit of time looking
>>>>>>>> >> at the default HFR settings and, based on some experimentation
>>>>>>>> >> (truth be told, with a limited amount of data), adjusted things a
>>>>>>>> >> little differently than my first guess (which was basically
>>>>>>>> >> Focus' settings).
>>>>>>>> >> Rob: My intuition is that I should adjust the default
>>>>>>>> >> StellarSolver star-extraction settings for Focus and Guide as
>>>>>>>> >> well in stellarsolverprofile.cpp. I don't know whether you've
>>>>>>>> >> already verified them and want to release them as they are, or
>>>>>>>> >> whether they are a first shot and you'd welcome adjustment?
>>>>>>>> >>
>>>>>>>> >> Also, Eric, I suppose I should be adding these things here:
>>>>>>>> >> https://invent.kde.org/education/kstars/-/issues
>>>>>>>> >> Is that right? Sorry about that -- OK, after this thread ;) But
>>>>>>>> >> seriously, your email is a good summary, and from that link it
>>>>>>>> >> doesn't seem as easy to see which items are "must do by 3.5.0"
>>>>>>>> >> and which are "nice to have someday".
>>>>>>>> >> A 3.5.0 punchlist would be a nice thing to have.
>>>>>>>> >>
>>>>>>>> >> Hy
>>>>>>>> >>
>>>>>>>> >> On Wed, Nov 4, 2020 at 12:58 PM Eric Dejouhanet
>>>>>>>> >> <eric.dejouhanet@gmail.com> wrote:
>>>>>>>> >>>
>>>>>>>> >>> Hello,
>>>>>>>> >>>
>>>>>>>> >>> Where do we stand now in terms of bugfixing towards 3.5.0?
>>>>>>>> >>>
>>>>>>>> >>> - StellarSolver has all features in, and 1.5 is finally out on
>>>>>>>> >>> Jasem's PPA.
>>>>>>>> >>> - However, GitLab CI still complains about that lib package (see
>>>>>>>> >>> https://invent.kde.org/education/kstars/-/jobs/75941)
>>>>>>>> >>> - Unit tests are being fixed progressively; mount tests are
>>>>>>>> >>> down to ~20 minutes (yeees!)
>>>>>>>> >>> - From my tests, the remote Astrometry INDI driver is no longer
>>>>>>>> >>> usable from Ekos.
>>>>>>>> >>> - The issue raised with flat frames is confirmed fixed (at
>>>>>>>> >>> least by me).
>>>>>>>> >>> - Meridian flip is OK (but I did not have enough time to test
>>>>>>>> >>> TWO flips in a row).
>>>>>>>> >>> - Memory leaks are still being investigated in Ekos.
>>>>>>>> >>> - There is an issue when duplicating an entry in a scheduler
>>>>>>>> >>> job, where the associated sequence is copied from the next job.
>>>>>>>> >>>
>>>>>>>> >>> Could we get a 3.6 branch where we merge development of new
>>>>>>>> >>> features, and keep master for bugfixing 3.5.x until we merge the
>>>>>>>> >>> 3.6 features in?
>>>>>>>> >>> (We'd still have to port bugfixes from master to 3.6.)
>>>>>>>> >>> I don't think the opposite -- master for 3.6 and a separate
>>>>>>>> >>> living 3.5.x -- is doable in the current configuration (build,
>>>>>>>> >>> PPAs, MRs...).
>>>>>>>> >>>
>>>>>>>> >>> --
>>>>>>>> >>> -- eric.dejouhanet@gmail.com - https://astronomy.dejouha.net
>>>>>>>> >
>>>>>>>> > --
>>>>>>>> > -- eric.dejouhanet@gmail.com - https://astronomy.dejouha.net