
List:       kstars-devel
Subject:    Re: KStars v3.5.0 Release Date?
From:       Akarsh Simha <akarshsimha@gmail.com>
Date:       2020-11-21 2:22:31
Message-ID: CA+9k5txAejcPH2PuJSSW-Vb43Xav5HTELbbjj-faZTAVrY6M+Q@mail.gmail.com

I just became free from work for a few days and I thought I'd try to get my
MRs in for 3.5.0. Looks like I missed the tag :-)

Regards
Akarsh


On Fri, Nov 20, 2020 at 6:18 PM, Hy Murveit <murveit@gmail.com> wrote:

>
>
>
>
> > git log
> commit bed10ad934e8b60c36da5a3bfeaa8c8e8284e384 (HEAD -> master, upstream/master)
> Author: Jasem Mutlaq <mutlaqja@ikarustech.com>
> Date:   Sat Nov 21 02:49:47 2020 +0300
>
>     Marking stable release for 3.5.0
>
>
> Woohoo! Congratulations!!
>
> On Sat, Nov 14, 2020 at 9:04 PM Hy Murveit <murveit@gmail.com> wrote:
>
>> Jasem,
>>
>> Build is broken.
>>
>> To get things to compile I needed to comment out:
>>    lines 46, 48, 859, and 864 of align.h
>> These are related to your recent commits.
>>
>> Hy
>>
>> PS IMHO it's better to remove all those lines you commented out in the
>> recent commits.
>> You can always retrieve them in git.
>>
>> On Sat, Nov 14, 2020 at 7:46 PM Robert Lancaster <rlancaste@gmail.com>
>> wrote:
>>
>>> Or did you say the solve succeeded with whatever profile you used?
>>> Sorry this email thread is missing part of the message and I may have
>>> misinterpreted it.  Maybe this image was in response to your message about
>>> the parallel solvers not shutting down that I already responded to?
>>>
>>> On Nov 14, 2020, at 10:43 PM, Robert Lancaster <rlancaste@gmail.com>
>>> wrote:
>>>
>>> Hi Wolfgang, I tried solving this image with my Small Scale Solving
>>> profile and it failed.  I noticed that your stars are fairly small and the
>>> solver was downsampling by 3.  So I tried turning off downsampling entirely,
>>> and it succeeded in about 3 seconds.  If you are having trouble with failed
>>> solves, try disabling the auto downsample function and use 1 or 2 for the
>>> downsample factor instead.
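A rough illustration of why aggressive downsampling hurts images with small stars. The function names and the 1.5 px detectability floor below are assumptions for illustration, not actual StellarSolver values:

```cpp
// Sketch: each downsample step divides a star's apparent size in pixels.
double downsampledDiameter(double starDiameterPx, int factor)
{
    return starDiameterPx / factor;
}

bool likelyDetectable(double starDiameterPx, int factor)
{
    // Assumed floor: below ~1.5 px a star blends into single-pixel noise
    // and the extractor can no longer distinguish it from hot pixels.
    return downsampledDiameter(starDiameterPx, factor) >= 1.5;
}
```

So a 3 px star downsampled by 3 ends up around 1 px and is likely lost, which matches the behavior Robert describes above.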
>>>
>>> On Nov 14, 2020, at 6:44 PM, Wolfgang Reissenberger <
>>> sterne-jaeger@openfuture.de> wrote:
>>>
>>> Try this one:
>>>
>>> https://drive.google.com/file/d/1QAq19iQjdqe_YJNuNCcOyWHaoyHQGxcE/view?usp=sharing
>>>
>>>
>>> On Nov 14, 2020, at 11:57 PM, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:
>>>
>>> Got a link to the image?
>>>
>>> A user sent me this log:
>>>
>>> [2020-11-14T02:18:16.415 UTC WARN ][                       default] -
>>> QObject::startTimer: Timers can only be used with threads started with
>>> QThread
>>> [2020-11-14T02:18:16.443 UTC WARN ][                       default] -
>>> QtDBus: cannot relay signals from parent
>>> Phonon::AbstractAudioOutput(0x4cfbe30 "") unless they are emitted in the
>>> object's thread QThread(0xcf9258 ""). Current thread is QThread(0x507d2a8
>>> "").
>>> [2020-11-14T02:18:16.444 UTC WARN ][                       default] -
>>> QtDBus: cannot relay signals from parent QObject(0x4cfbe30 "") unless they
>>> are emitted in the object's thread QThread(0xcf9258 ""). Current thread is
>>> QThread(0x507d2a8 "").
>>> [2020-11-14T02:18:16.485 UTC WARN ][                       default] -
>>> QObject::~QObject: Timers cannot be stopped from another thread
>>>
>>> Anyone seen anything like this? It appears to be related to Phonon
>>> playing notification sounds and not an internal error for KStars.
>>>
>>> --
>>> Best Regards,
>>> Jasem Mutlaq
>>>
>>>
>>>
>>> On Sat, Nov 14, 2020 at 11:02 PM Wolfgang Reissenberger <
>>> sterne-jaeger@openfuture.de> wrote:
>>>
>>>> Robert, all,
>>>> I had the issue again when trying to solve a wide field image around
>>>> NGC6888, which contains very dense star fields. I am using the 1-Default
>>>> profile without any change.
>>>>
>>>> If I leave the "Parallel Algorithm" option from the Astrometry
>>>> Parameters on "Auto", KStars solves the image very fast, but CPU usage
>>>> remains at 100%. It seems the solver threads running in parallel were hanging.
>>>>
>>>> I am using the following versions:
>>>> KStars: 57c44d05c3e1f9895d84c7f4f73950975e8eddb7
>>>> StellarSolver: 2d7eba6685c1bcd77c0525e88b3d24b2fcd474a9
>>>>
>>>> Anything I could test right now?
>>>>
>>>> Wolfgang
>>>>
>>>> On Nov 10, 2020, at 3:50 PM, Robert Lancaster <rlancaste@gmail.com> wrote:
>>>>
>>>> Hi Wolfgang,
>>>>
>>>> So I just want to clarify something you said here: there are a couple
>>>> of parallel things, and that can be a little confusing, so I just want to
>>>> make sure we are talking about the same things.  The cause of the confusion
>>>> is the terminology that astrometry.net uses.
>>>>
>>>> 1. Load all Indexes in Memory / Load all indexes in Parallel.  This is
>>>> the inParallel option for astrometry.net.   In the options I tried to
>>>> call this "Load all Indexes in Memory" to attempt to avoid the confusion
>>>> with the Parallel Algorithm.  This has nothing to do with parallelization
>>>> in different threads or processors.  It has to do with memory management.
>>>> The astrometry.net solver can load the indexes and search them one
>>>> after the other, or it can try to load all the indexes at once and then
>>>> solve.  The second option is much, much faster, but comes with risk:
>>>> astrometry.net does NOT check whether it has enough RAM before it
>>>> tries to solve. They have big warnings in the documentation about using
>>>> this option.  If you don't have enough RAM, it could use all the RAM and
>>>> crash.
>>>>
>>>> I programmed StellarSolver to check the available RAM prior to starting
>>>> the solve.  If there is not enough RAM, it is supposed to turn off the
>>>> option.  The user can also disable the option entirely, so that there is
>>>> never a problem.  But you really do want the option turned on if your
>>>> system can handle it.  We had some issues earlier about the RAM
>>>> calculation.  I think the "inParallel" option causes the greatest crash
>>>> risk.  I would really like it if somebody could look over the code for
>>>> determining enough RAM and see if it is good now.  One thought that I have
>>>> is that we can make the calculation more conservative and we could change
>>>> the option to have three choices: Auto, On, or Off.  That way, if a user is
>>>> really brave, or convinced they have enough RAM for sure, they could turn
>>>> the option on regardless of the risk; if they are risk-averse, they could
>>>> turn it off; but most users could just leave it on Auto.  What do you think?
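A minimal sketch of the kind of RAM gate described above, with the proposed Auto/On/Off tri-state. The names and the 50% headroom margin are assumptions for illustration, not the actual StellarSolver code:

```cpp
#include <cstddef>

// Hypothetical sketch of the gate for astrometry.net's "inParallel"
// ("Load all Indexes in Memory") option, with the proposed tri-state.
enum class InParallelMode { Auto, On, Off };

// Decide whether to load every index file into RAM at once.
// availableMB: free physical memory; indexTotalMB: combined index size.
bool shouldLoadAllIndexes(std::size_t availableMB,
                          std::size_t indexTotalMB,
                          InParallelMode mode)
{
    if (mode == InParallelMode::On)  return true;   // user overrides the check
    if (mode == InParallelMode::Off) return false;  // user is risk-averse
    // Auto: be conservative -- require headroom beyond the raw index size,
    // since astrometry.net itself never checks and would crash on exhaustion.
    const std::size_t marginMB = indexTotalMB / 2;  // 50% headroom (assumed)
    return availableMB >= indexTotalMB + marginMB;
}
```

The conservative margin is the tunable part: a stricter Auto check trades some speed for never tripping astrometry.net's unchecked allocation.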
>>>>
>>>> 2. Parallelization Algorithm for solving.   I am assuming this second
>>>> option is what you meant in your email.  This one is entirely of my
>>>> creation and is what makes StellarSolver stellar.  Modern computers really
>>>> have great capacity for computing in parallel, and using this capability
>>>> gives a HUGE performance boost, even on a Pi, since the Pi has 4
>>>> cores.
>>>>
>>>> I programmed StellarSolver to have 2 different parallel algorithms, one
>>>> that solves simultaneously at multiple "depths" and one that solves
>>>> simultaneously at different scales.  If you set it to Auto, it will select
>>>> the appropriate one based on whether you specified the scale or position
>>>> (or neither).  If the image has both scale AND position, it does NOT solve
>>>> in parallel and goes back to solving with a single thread.
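The Auto selection described above could be sketched like this. Which parallel algorithm pairs with which known constraint is an assumption here; only the both-known, single-thread rule is stated explicitly in the thread:

```cpp
// Hypothetical sketch of Auto mode choosing between StellarSolver's two
// parallel algorithms (names assumed for illustration).
enum class SolveStrategy { SingleThread, ParallelDepths, ParallelScales };

SolveStrategy chooseStrategy(bool scaleKnown, bool positionKnown)
{
    // Both constraints known: the search space is already small, so the
    // solver falls back to a single thread (stated explicitly above).
    if (scaleKnown && positionKnown)
        return SolveStrategy::SingleThread;
    // Scale known: parallelize across star-list "depths" (assumed mapping).
    if (scaleKnown)
        return SolveStrategy::ParallelDepths;
    // Otherwise: parallelize across candidate image scales (assumed mapping).
    return SolveStrategy::ParallelScales;
}
```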
>>>>
>>>> When Jasem wanted me to de-thread StellarSolver and make it so
>>>> that just the solvers are threads, I had to make a bunch of changes and one
>>>> change I forgot was to make the star extraction before parallel solving
>>>> asynchronous.  That does mean that when doing a parallel solve, it might
>>>> look like things have frozen for a moment during the star extraction before
>>>> the threads start up.  I have already fixed this, but it is in the
>>>> releaseExperiment branch of StellarSolver, not in Master.  I would like to
>>>> get this fix integrated before we release, but I will need to test this
>>>> thoroughly first as I mentioned in a previous email.  I am wondering if
>>>> this freezing behavior was what caused the "crash" you observed?
>>>>
>>>> Thanks,
>>>>
>>>> Rob
>>>>
>>>>
>>>> On Nov 10, 2020, at 8:03 AM, Wolfgang Reissenberger <
>>>> sterne-jaeger@openfuture.de> wrote:
>>>>
>>>> OK, I did a quick check on my RPi4 with Parallel Algorithm set to
>>>> "Auto" - and it works super fast! But since it is daytime, I can only test
>>>> the "Load and Slew" option. So maybe the WCS info in the file gave hints
>>>> that are not present for normal capture and slew or sync.
>>>>
>>>> I need to check it under real conditions, which might be tricky due to
>>>> the fog hanging around here…
>>>>
>>>> Wolfgang
>>>>
>>>> On Nov 10, 2020, at 11:16 AM, Jasem Mutlaq <mutlaqja@ikarustech.com> wrote:
>>>>
>>>> Alright, let's look at this:
>>>>
>>>> 1. Parallel algorithm: This is related to SOLVER, not image
>>>> partitioning. It should work fine on Rpi4 and the checks are more reliable
>>>> now as Robert worked on that.
>>>> 2. WCS Polar Align: Can this be reproduced with simulators?
>>>>
>>>> --
>>>> Best Regards,
>>>> Jasem Mutlaq
>>>>
>>>>
>>>>
>>>> On Tue, Nov 10, 2020 at 10:48 AM Wolfgang Reissenberger <
>>>> sterne-jaeger@openfuture.de> wrote:
>>>>
>>>>> It wasn't that bad. The problem was that KStars went to 100% CPU usage
>>>>> and died (or I killed it, I don't exactly remember). I'll try to reproduce
>>>>> it...
>>>>>
>>>>> On Nov 10, 2020, at 8:45 AM, Hy Murveit <murveit@gmail.com> wrote:
>>>>>
>>>>> OK, well I believe it was fixed a week ago, so if you can still
>>>>> recreate it, you should report it.
>>>>> It should be fixed before release if it is still freezing the Pi.
>>>>>
>>>>> Hy
>>>>>
>>>>> On Mon, Nov 9, 2020 at 11:42 PM Wolfgang Reissenberger <
>>>>> sterne-jaeger@openfuture.de> wrote:
>>>>>
>>>>>> OK, I have to check it. The problem occurred only a few days ago and
>>>>>> I think I'm always on bleeding edge...
>>>>>>
>>>>>> On Nov 10, 2020, at 8:38 AM, Hy Murveit <murveit@gmail.com> wrote:
>>>>>>
>>>>>> Wolfgang: I believe Rob and/or Jasem fixed the issue with parallel
>>>>>> algorithm bringing down the RPi4 a while back.
>>>>>> I have the solver on auto parallelism and load all indexes in memory,
>>>>>> and it seems to work fine (and in parallel).
>>>>>> Similarly, for star extraction, Jasem implemented a threaded
>>>>>> extraction that also automatically determines how many threads to use and
>>>>>> seems fine on the RPi4.
>>>>>>
>>>>>> Eric: I believe these parallel options are the defaults. Hopefully
>>>>>> users won't need to configure things like this.
>>>>>> For star detection, I don't believe you can turn it off.
>>>>>> For star detection Jasem split the frame before detection (into at
>>>>>> most num-threads parts--4 for the RPi4).
>>>>>> For align, I'm not sure how Rob divided things.
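The frame split described above (at most num-threads parts before detection) could look roughly like this. Names are assumed for illustration, and the actual KStars partitioning may differ, e.g. it may use 2D tiles rather than row bands:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: split a frame into at most maxThreads horizontal
// bands so each band can be star-extracted on its own thread.
struct Band
{
    std::size_t yStart;  // first row of the band
    std::size_t rows;    // number of rows in the band
};

std::vector<Band> partitionFrame(std::size_t height, std::size_t maxThreads)
{
    // Never create more bands than there are rows.
    const std::size_t parts = maxThreads < height ? maxThreads : height;
    std::vector<Band> bands;
    std::size_t y = 0;
    for (std::size_t i = 0; i < parts; ++i)
    {
        // Distribute remainder rows so band sizes differ by at most one.
        const std::size_t rows = height / parts + (i < height % parts ? 1 : 0);
        bands.push_back({y, rows});
        y += rows;
    }
    return bands;
}
```

In a real implementation the bands would also need a few rows of overlap so stars straddling a boundary are not cut in half, which this sketch omits.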
>>>>>>
>>>>>> Hy
>>>>>>
>>>>>> On Mon, Nov 9, 2020 at 11:07 PM Wolfgang Reissenberger <
>>>>>> sterne-jaeger@openfuture.de> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>> I think we are close to finishing the release. I personally would
>>>>>>> opt to wait for another week and keep an eye on stability.
>>>>>>>
>>>>>>> Maybe we should take another look at whether the default settings in the
>>>>>>> StellarSolver profiles work a) for typical camera/scope combinations and b)
>>>>>>> for all platforms.
>>>>>>>
>>>>>>> For example, with my RPi I needed to change the Parallel Algorithm
>>>>>>> to "None" because parallelism brought KStars down. Is the default setting
>>>>>>> "None", and did I change it at some point? With all the new parameters I
>>>>>>> would prefer having a robust setup and leaving it to the user to optimize
>>>>>>> for speed.
>>>>>>>
>>>>>>> @Jasem: please take a closer look at MR!122, since it fixed 4(!)
>>>>>>> regressions I introduced with my capture counting fix MR!114. Hopefully now
>>>>>>> we have at least a proper coverage with automated tests...
>>>>>>>
>>>>>>> Wolfgang
>>>>>>>
>>>>>>> On Nov 9, 2020, at 10:04 PM, Jasem Mutlaq <mutlaqja@ikarustech.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>> Hello Folks,
>>>>>>>
>>>>>>> So back to this topic, any major blockers to the KStars 3.5.0
>>>>>>> release now?
>>>>>>>
>>>>>>> 1. Remote Solver should be fixed now.
>>>>>>> 2. StellarSolver Profiles are more optimized now.
>>>>>>> 3. Handbook not updated yet, but we can probably work on this
>>>>>>> shortly.
>>>>>>> 4. Couple of pending MRs to take care of.
>>>>>>>
>>>>>>> How about Friday the 13th?
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards,
>>>>>>> Jasem Mutlaq
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Nov 5, 2020 at 3:41 AM Robert Lancaster <rlancaste@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Eric,
>>>>>>>>
>>>>>>>> Ok so then we would be changing the way we do version numbering
>>>>>>>> with this, right?
>>>>>>>> I believe we currently add features in each new iteration
>>>>>>>> (3.4.1, 3.4.2, etc.),
>>>>>>>> and when something is really big, like StellarSolver, we make it a major
>>>>>>>> release like 3.5.0.
>>>>>>>>
>>>>>>>> With this new paradigm, we wouldn't put new features into the
>>>>>>>> master of the main 3.5 branch;
>>>>>>>> instead we would work on a new 3.6 branch, and bug fixes
>>>>>>>> would go into the 3.5 branch
>>>>>>>> to make each new minor release, like 3.5.1, 3.5.2, etc.
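That model can be sketched with plain git commands. The branch and tag names, commit messages, and the cherry-pick workflow below are assumptions for illustration, not the project's actual release procedure:

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q .
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "Marking stable release for 3.5.0"
git branch 3.5                  # stable series: bugfix-only from here on
# new feature work continues on the default branch toward 3.6
git commit -q --allow-empty -m "feature: something new for 3.6"
git commit -q --allow-empty -m "fix: a bug also present in 3.5"
fix=$(git rev-parse HEAD)
# port the bugfix back to the stable branch and cut a minor release
git checkout -q 3.5
git cherry-pick --allow-empty -x "$fix"
git tag 3.5.1
```

The `-x` flag records the original commit hash in the ported commit, which keeps the bookkeeping between the two branches traceable.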
>>>>>>>>
>>>>>>>> Do I have this correct?
>>>>>>>>
>>>>>>>> If this is right, then it would be longer before users see new
>>>>>>>> features in the main branch, but the
>>>>>>>> tradeoff is that the main branch would have a LOT more stability.
>>>>>>>> I see this as a big positive.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>>
>>>>>>>> Rob
>>>>>>>>
>>>>>>>> > On Nov 4, 2020, at 5:54 PM, Eric Dejouhanet <
>>>>>>>> eric.dejouhanet@gmail.com> wrote:
>>>>>>>> >
>>>>>>>> > Hello Hy,
>>>>>>>> >
>>>>>>>> > Version 3.5.0 is only the beginning of the 3.5.x series, with more
>>>>>>>> > bugfixes on each iteration (and possibly, only bugfixes).
>>>>>>>> > So I have no problem leaving unresolved issues in 3.5.0.
>>>>>>>> >
>>>>>>>> > For instance, the Focus module now has a slight and unforeseeable
>>>>>>>> > delay after the capture completes.
>>>>>>>> > The UI reflects the end of the capture only, not the end of the
>>>>>>>> detection.
>>>>>>>> > This makes the UI Focus test quite difficult to tweak, as running
>>>>>>>> an
>>>>>>>> > average of the HFR over multiple frames now has an unknown
>>>>>>>> duration.
>>>>>>>> > Right now, the test is trying to click the capture button too
>>>>>>>> soon in 2
>>>>>>>> > out of 10 attempts.
>>>>>>>> > But this won't block 3.5 in my opinion (and now that I understood
>>>>>>>> the
>>>>>>>> > problem, I won't work on it immediately).
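One common way for such a test to cope with an unknown detection delay is a bounded poll instead of a fixed wait. A minimal sketch; the helper names here are assumptions, not the actual KStars test API:

```cpp
#include <functional>

// Hypothetical sketch: poll a "detection finished" predicate with a
// bounded timeout instead of clicking as soon as the capture ends.
// sleepMs is injected so tests can fake the clock deterministically.
bool waitFor(const std::function<bool()> &done, int timeoutMs, int stepMs,
             const std::function<void(int)> &sleepMs)
{
    for (int waited = 0; waited <= timeoutMs; waited += stepMs)
    {
        if (done())
            return true;      // HFR result is in; safe to click capture
        sleepMs(stepMs);
    }
    return false;             // detection never finished: fail the test
}
```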
>>>>>>>> >
>>>>>>>> > In terms of reporting problems, the official way is still
>>>>>>>> bugs.kde.org,
>>>>>>>> > but there's quite a cleanup/followup to do there.
>>>>>>>> > I'd say we can use issues in invent.kde.org to discuss planned
>>>>>>>> > development around a forum/bugzilla issue or invent proposal (like
>>>>>>>> > agile stories).
>>>>>>>> > There are milestones associated with several issues (although I
>>>>>>>> think
>>>>>>>> > they should be reviewed and postponed).
>>>>>>>> > And we can certainly write a punchlist: check the board at
>>>>>>>> > https://invent.kde.org/education/kstars/-/milestones/3
>>>>>>>> >
>>>>>>>> > On Wed, Nov 4, 2020 at 10:38 PM, Hy Murveit <murveit@gmail.com>
>>>>>>>> wrote:
>>>>>>>> >>
>>>>>>>> >> Eric,
>>>>>>>> >>
>>>>>>>> >> I would add to your list:
>>>>>>>> >>
>>>>>>>> >> - KStars Handbook (review update sections to reflect 3.5.0) and
>>>>>>>> finally (perhaps manually if necessary) put the latest handbook online.
>>>>>>>> >>
>>>>>>>> >> - Review the extraction settings. I spent a bit of time looking
>>>>>>>> at the default HFR settings, and based on some experimentation (truth be
>>>>>>>> told, with a limited amount of data) adjusted things a little differently
>>>>>>>> than my first guess (which was basically Focus's settings).
>>>>>>>> >> Rob: My intuition is that I should adjust the default
>>>>>>>> StellarSolver star-extraction settings for Focus and Guide as well in
>>>>>>>> stellarsolverprofile.cpp. I don't know whether you've already verified
>>>>>>>> them, and want to release them as they are, or whether they are a first
>>>>>>>> shot and you'd welcome adjustment?
>>>>>>>> >>
>>>>>>>> >> Also, Eric, I suppose I should be adding these things here:
>>>>>>>> https://invent.kde.org/education/kstars/-/issues
>>>>>>>> >> Is that right? Sorry about that--ok, after this thread ;) But
>>>>>>>> seriously, your email is a good summary, and from that link
>>>>>>>> >> it doesn't seem as easy to see which are "must do by 3.5.0" and
>>>>>>>> which are "nice to have someday".
>>>>>>>> >> A 3.5.0 punchlist would be a nice thing to have.
>>>>>>>> >>
>>>>>>>> >> Hy
>>>>>>>> >>
>>>>>>>> >> On Wed, Nov 4, 2020 at 12:58 PM Eric Dejouhanet <
>>>>>>>> eric.dejouhanet@gmail.com> wrote:
>>>>>>>> >>>
>>>>>>>> >>> Hello,
>>>>>>>> >>>
>>>>>>>> >>> Where do we stand now in terms of bugfixing towards 3.5.0?
>>>>>>>> >>>
>>>>>>>> >>> - StellarSolver has all features in, and 1.5 is finally out at
>>>>>>>> Jasem's PPA.
>>>>>>>> >>> - However Gitlab CI still complains about that lib package (see
>>>>>>>> >>> https://invent.kde.org/education/kstars/-/jobs/75941)
>>>>>>>> >>> - Unit tests are being fixed progressively; mount tests are
>>>>>>>> down to
>>>>>>>> >>> ~20 minutes (yeees!)
>>>>>>>> >>> - From my tests, the remote Astrometry INDI driver is not usable
>>>>>>>> >>> anymore from Ekos.
>>>>>>>> >>> - The issue raised with flat frames is confirmed fixed (at
>>>>>>>> least by me).
>>>>>>>> >>> - Meridian flip is OK (but I didn't have enough time to test TWO
>>>>>>>> flips in a row).
>>>>>>>> >>> - Memory leaks are still being investigated in Ekos.
>>>>>>>> >>> - There is an issue when duplicating an entry in a scheduler
>>>>>>>> job,
>>>>>>>> >>> where the associated sequence is copied from the next job.
>>>>>>>> >>>
>>>>>>>> >>> Could we get a 3.6 branch where we will merge development of
>>>>>>>> new features?
>>>>>>>> >>> And master for bugfixing 3.5.x until we merge 3.6 new features
>>>>>>>> in?
>>>>>>>> >>> (we'd still have to port bugfixes from master to 3.6)
>>>>>>>> >>> I don't think the opposite, master for 3.6 and a separate living
>>>>>>>> >>> 3.5.x, is doable in the current configuration (build, ppas,
>>>>>>>> MRs...).
>>>>>>>> >>>
>>>>>>>> >>> --
>>>>>>>> >>> -- eric.dejouhanet@gmail.com - https://astronomy.dejouha.net
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > --
>>>>>>>> > -- eric.dejouhanet@gmail.com - https://astronomy.dejouha.net
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>

[Attachment #3 (text/html)]

<div dir="ltr"><div>I just became free from work for a few days and I thought I&#39;d try to \
get my MRs in for 3.5.0. Looks like I missed the tag \
:-)</div><div><br></div><div>Regards</div><div>Akarsh</div><div><br></div></div><br><div \
class="gmail_quote"><div dir="ltr" class="gmail_attr">Am Fr., 20. Nov. 2020 um 18:18  Uhr \
schrieb Hy Murveit &lt;<a \
href="mailto:murveit@gmail.com">murveit@gmail.com</a>&gt;:<br></div><blockquote \
class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid \
rgb(204,204,204);padding-left:1ex"><div dir="ltr"><blockquote style="margin:0px 0px 0px \
40px;border:medium none;padding:0px"><i>&gt; git log<br>commit \
bed10ad934e8b60c36da5a3bfeaa8c8e8284e384 (HEAD -&gt; master, upstream/master)<br>Author: Jasem \
Mutlaq &lt;<a href="mailto:mutlaqja@ikarustech.com" \
target="_blank">mutlaqja@ikarustech.com</a>&gt;<br>Date:    Sat Nov 21 02:49:47 2020 +0300<br>  \
Marking stable release for 3.5.0</i></blockquote><div><br></div><div>Woohoo! \
Congratulations!!</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On \
Sat, Nov 14, 2020 at 9:04 PM Hy Murveit &lt;<a href="mailto:murveit@gmail.com" \
target="_blank">murveit@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" \
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div \
dir="ltr"><div>Jasem,</div><div><br></div><div>Build is broken.</div><div><br></div><div>To get \
things to compile I needed to comment out:</div><div>     lines 46, 48 859, 864 of \
align.h<br></div><div>These are related to your recent \
commits.</div><div><br></div><div>Hy</div><div><br></div><div>PS IMHO it&#39;s better to remove \
all those lines you commented out in the recent commits.</div><div>You can always retrieve them \
in git.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Nov \
14, 2020 at 7:46 PM Robert Lancaster &lt;<a href="mailto:rlancaste@gmail.com" \
target="_blank">rlancaste@gmail.com</a>&gt; wrote:<br></div><blockquote class="gmail_quote" \
style="margin:0px 0px 0px 0.8ex;border-left:1px solid \
rgb(204,204,204);padding-left:1ex"><div>Or did you say the solve succeeded with whatever \
profile you used?   Sorry this email thread is missing part of the message and I may have \
misinterpreted it.   Maybe this image was in response to your message about the parallel \
solvers not shutting down that I already responded to?<br><div><br><blockquote \
type="cite"><div>On Nov 14, 2020, at 10:43 PM, Robert Lancaster &lt;<a \
href="mailto:rlancaste@gmail.com" target="_blank">rlancaste@gmail.com</a>&gt; \
wrote:</div><br><div><div>Hi Wolfgang,   I tried solving this image with my Small Scale Solving \
profile and it failed.   I noticed that your stars are fairly small and it was downsampling by \
3.      So I tried turning off downsampling entirely and it succeeded in about 3 seconds.   If \
you are having trouble with failed solves, you can try disabling the auto downsample function \
and try 1 or 2 for the downsample.   <br><div><br><blockquote type="cite"><div>On Nov 14, 2020, \
at 6:44 PM, Wolfgang Reissenberger &lt;<a href="mailto:sterne-jaeger@openfuture.de" \
target="_blank">sterne-jaeger@openfuture.de</a>&gt; wrote:</div><br><div><div><span>Try this \
one:</span><div><a href="https://drive.google.com/file/d/1QAq19iQjdqe_YJNuNCcOyWHaoyHQGxcE/view?usp=sharing" \
target="_blank">https://drive.google.com/file/d/1QAq19iQjdqe_YJNuNCcOyWHaoyHQGxcE/view?usp=sharing</a></div><div><br></div><div><br><blockquote \
type="cite"><div>Am 14.11.2020 um 23:57 schrieb Jasem Mutlaq &lt;<a \
href="mailto:mutlaqja@ikarustech.com" \
target="_blank">mutlaqja@ikarustech.com</a>&gt;:</div><br><div><div dir="ltr">Got a link to the \
image?<div><br></div><div>A user sent me this \
log:</div><div><br></div><div>[2020-11-14T02:18:16.415 UTC WARN ][                              \
default] - QObject::startTimer: Timers can only be used with threads started with \
QThread<br>[2020-11-14T02:18:16.443 UTC WARN ][                                  default] - \
QtDBus: cannot relay signals from parent Phonon::AbstractAudioOutput(0x4cfbe30 &quot;&quot;) \
unless they are emitted in the object&#39;s thread QThread(0xcf9258 &quot;&quot;). Current \
thread is QThread(0x507d2a8 &quot;&quot;).<br>[2020-11-14T02:18:16.444 UTC WARN ][              \
default] - QtDBus: cannot relay signals from parent QObject(0x4cfbe30 &quot;&quot;) unless they \
are emitted in the object&#39;s thread QThread(0xcf9258 &quot;&quot;). Current thread is \
QThread(0x507d2a8 &quot;&quot;).<br>[2020-11-14T02:18:16.485 UTC WARN ][                        \
default] - QObject::~QObject: Timers cannot be stopped from another \
thread</div><div><br></div><div>Anyone seen anything like this? It appears to be related to \
Phonon playing notification sounds and not an internal error for KStars.</div><div><br \
clear="all"><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>--</div><div>Best \
Regards,<br>Jasem Mutlaq<br></div><div><br></div></div></div></div></div></div><br></div></div><br><div \
class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Nov 14, 2020 at 11:02 PM Wolfgang \
Reissenberger &lt;<a href="mailto:sterne-jaeger@openfuture.de" \
target="_blank">sterne-jaeger@openfuture.de</a>&gt; wrote:<br></div><blockquote \
class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid \
rgb(204,204,204);padding-left:1ex"><div>Robert, all,<div>I had the issue again when trying to \
solve a wide field image around NGC6888, which contains very dense star fields. I am using the \
1-Default profile without any change.</div><div><br></div><div>If I leave the „Parallel \
Algorithm" option from the Astrometry Parameters on „Auto", Kstars solves the image very \
fast, but remains on 100%. It seems that the in parallel running threads were \
hanging.</div><div><br></div><div>I am using the following versions:</div><div>KStars:  \
57c44d05c3e1f9895d84c7f4f73950975e8eddb7</div><div>StellarSolver:  \
2d7eba6685c1bcd77c0525e88b3d24b2fcd474a9</div><div><br></div><div>Anything I could test right \
now?</div><div><br></div><div>Wolfgang<br><div><br><blockquote type="cite"><div>Am 10.11.2020 \
um 15:50 schrieb Robert Lancaster &lt;<a href="mailto:rlancaste@gmail.com" \
target="_blank">rlancaste@gmail.com</a>&gt;:</div><br><div><div><div>Hi \
Wolfgang,</div><div><br></div><div>So I just want to clarify something you said here, there are \
a couple of parallel things and that can be a little confusing, so I just want to make sure we \
are talking about the same things.   The cause of the confusion is the terminology that <a \
href="http://astrometry.net/" target="_blank">astrometry.net</a>  \
uses</div><div><br></div><div>1.  <span>Load all Indexes in Memory /</span>  Load all indexes \
in Parallel.   This is the inParallel option for <a href="http://astrometry.net/" \
target="_blank">astrometry.net</a>.    In the options I tried to call this "Load all Indexes in \
Memory" to attempt to avoid the confusion with the Parallel Algorithm.   This has nothing to do \
with parallelization in different threads or processors.   It has to do with memory management. \
The <a href="http://astrometry.net/" target="_blank">astrometry.net</a>  solver can load the \
indexes and search them one after the other, or it can try to load all the indexes at once and \
then solve.   The second option is much much faster, but comes with risk.   <a \
href="http://astrometry.net/" target="_blank">astrometry.net</a>  does NOT check to see if it \
has enough RAM before it tries to solve,   They have big warnings in the documentation about \
using this option.   If you don't have enough RAM, it could use all the RAM and \
crash.</div><div><br></div><div>I programmed StellarSolver to check the available RAM prior to \
starting the solve.   If there is not enough RAM, it is supposed to turn off the option.   The \
user can also disable the option entirely, so that there is never a problem.   But you really \
do want the option turned on if your system can handle it.   We had some issues earlier about \
the RAM calculation.   I think the "inParallel" option causes the greatest crash risk.   I \
would really like it if somebody could look over the code for determining enough RAM and see if \
it is good now.   One thought that I have is that we can make the calculation more conservative \
and we could change the option to have 3 choices, Auto, on, or off.   So that if a user is \
really brave, or convinced they have enough RAM for sure, they could turn the option on \
regardless of the risk, If they are risk averse, they could turn it off, but most users could \
just leave it on auto.   What do you think?</div><div><br></div><div>2. Parallelization \
Algorithm for solving.   <span>  I am assuming this second option is what you meant in your \
email.   </span>This one is entirely of my creation and is what makes StellarSolver stellar.   \
Modern computers really have great capacity for computing in parallel and it causes a HUGE \
performance boost to use this capability, even on a Pi, since the PI has 4 processors.  \
</div><div><br></div><div>I programmed StellarSolver to have 2 different parallel algorithms, \
one that solves simultaneously at multiple "depths" and one that solves simultaneously at \
different scales.   If you set it to Auto, it will select the appropriate one based on whether \
you specified the scale or position (or neither).   If the image has both scale AND position, \
it does NOT solve in parallel and goes back to solving with a single \
thread.</div><div><br></div><div>When Jasem wanted to me to de-thread the StellarSolver and \
make it so that just the solvers are threads, I had to make a bunch of changes and one change I \
forgot was to make the star extraction before parallel solving asynchronous.   That does mean \
that when doing a parallel solve, it might look like things have frozen for a moment during the \
star extraction before the threads start up.   I have already fixed this, but it is in the \
releaseExperiment branch of StellarSolver, not in Master.   I would like to get this fix \
integrated before we release, but I will need to test this thoroughly first as I mentioned in a \
previous email.   I am wondering if this freezing behavior was what caused the "crash" you \
observed?</div><div><br></div><div>Thanks,</div><div><br></div><div>Rob</div><div><br></div><div><br></div><div><blockquote \
type="cite"><div>On Nov 10, 2020, at 8:03 AM, Wolfgang Reissenberger &lt;<a \
href="mailto:sterne-jaeger@openfuture.de" target="_blank">sterne-jaeger@openfuture.de</a>&gt; \
wrote:</div><br><div><div>OK, I did a quick check on my RPi4 with Parallel Algorithm set to \
„Auto" - and it works super fast! But since it is daytime, I can only test the „Load and \
Slew" option. So maybe the WCS info in the file gave hints that are not present for normal \
capture and slew or sync.<div><br></div><div>I need to check it under real conditions, which \
might be tricky due to the fog hanging around \
here…</div><div><br></div><div>Wolfgang<div><div><blockquote type="cite"><div>Am 10.11.2020 \
um 11:16 schrieb Jasem Mutlaq &lt;<a href="mailto:mutlaqja@ikarustech.com" \
target="_blank">mutlaqja@ikarustech.com</a>&gt;:</div><br><div><div dir="ltr">Alright, \
let&#39;s look at this:<div><br></div><div>1. Parallel algorithm: This is related to SOLVER, \
not image partitioning. It should work fine on Rpi4 and the checks are more reliable now as \
Robert worked on that.</div><div>2. WCS Polar Align: Can this be reproduced with \
simulators?</div><div><br clear="all"><div><div dir="ltr"><div dir="ltr"><div><div \
dir="ltr"><div>--</div><div>Best Regards,<br>Jasem \
Mutlaq<br></div><div><br></div></div></div></div></div></div><br></div></div><br><div \
class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Nov 10, 2020 at 10:48 AM Wolfgang \
Reissenberger &lt;<a href="mailto:sterne-jaeger@openfuture.de" \
target="_blank">sterne-jaeger@openfuture.de</a>&gt; wrote:<br></div><blockquote \
class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid \
rgb(204,204,204);padding-left:1ex"><div>It wasn't that bad. The problem was that KStars went to \
100% CPU usage and died (or I killed it, do not exactly remember). I'll try to reproduce \
it...<br><div><br><blockquote type="cite"><div>Am 10.11.2020 um 08:45 schrieb Hy Murveit &lt;<a \
href="mailto:murveit@gmail.com" target="_blank">murveit@gmail.com</a>&gt;:</div><br><div><div \
dir="ltr">OK, well I believe it was fixed a week ago, so if you can still recreate it, you \
should report it.  <div>It should be fixed before release if it is still freezing the \
Pi.</div><div><br></div><div>Hy</div></div><br><div class="gmail_quote"><div dir="ltr" \
class="gmail_attr">On Mon, Nov 9, 2020 at 11:42 PM Wolfgang Reissenberger &lt;<a \
href="mailto:sterne-jaeger@openfuture.de" target="_blank">sterne-jaeger@openfuture.de</a>&gt; \
wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px \
solid rgb(204,204,204);padding-left:1ex"><div>OK, I have to check it. The problem occurred only \
a few days ago and I think I'm always on bleeding edge...<br><div><br><blockquote \
type="cite"><div>On 10.11.2020 at 08:38, Hy Murveit &lt;<a \
href="mailto:murveit@gmail.com" target="_blank">murveit@gmail.com</a>&gt; wrote:</div><br><div><div \
dir="ltr">Wolfgang: I believe Rob and/or Jasem fixed the issue with the parallel algorithm bringing \
down the RPi4 a while back.<div>I have the solver on auto parallelism and load all indexes in \
memory, and it seems to work fine (and in parallel).</div><div>Similarly, for star extraction, \
Jasem implemented a threaded extraction that also automatically determines how many threads to \
use and seems fine on the RPi4.</div><div><br></div><div>Eric: I believe these parallel options \
are the defaults. Hopefully users won&#39;t need to configure things like this.</div><div>For \
star detection, I don&#39;t believe you can turn it off.</div><div>For star detection, Jasem \
splits the frame before detection (into at most num-threads parts--4 on the \
RPi4).</div><div>For align, I&#39;m not sure how Rob divided \
things.</div><div><br></div><div>Hy</div></div><br><div class="gmail_quote"><div dir="ltr" \
class="gmail_attr">On Mon, Nov 9, 2020 at 11:07 PM Wolfgang Reissenberger &lt;<a \
href="mailto:sterne-jaeger@openfuture.de" target="_blank">sterne-jaeger@openfuture.de</a>&gt; \
wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px \
solid rgb(204,204,204);padding-left:1ex"><div>Hi all,<div>I think we are close to finishing the \
release. I personally would opt to wait for another week and keep an eye on \
stability.</div><div><br></div><div>Maybe we should take another look at whether the default settings \
in the StellarSolver profiles work a) for typical camera/scope combinations and b) for all \
platforms.</div><div><br></div><div>For example, with my RPi I needed to change the Parallel \
Algorithm to &quot;None&quot; because parallelism brought KStars down. Is the default setting \
&quot;None&quot;, or did I change it at some point? With all the new parameters I would prefer a \
robust setup and leave it to the user to optimize for speed.</div><div><br></div><div>@Jasem: please \
take a closer look at MR!122, since it fixed 4(!) regressions I introduced with my capture counting \
fix MR!114. Hopefully we now have at least proper coverage with automated \
tests...</div><div><br></div><div>Wolfgang</div><div><div><br><blockquote type="cite"><div>On \
09.11.2020 at 22:04, Jasem Mutlaq &lt;<a href="mailto:mutlaqja@ikarustech.com" \
target="_blank">mutlaqja@ikarustech.com</a>&gt; wrote:</div><br><div><div dir="ltr">Hello \
Folks,<div><br></div><div>So back to this topic, any major blockers to the KStars 3.5.0 release \
now?</div><div><br></div><div>1. Remote Solver should be fixed now.</div><div>2. StellarSolver \
Profiles are more optimized now.</div><div>3. Handbook not updated yet, but we can probably \
work on this shortly.</div><div>4. A couple of pending MRs to take care \
of.</div><div><br></div><div>How about Friday the 13th?</div><div><br></div><div><div><div \
dir="ltr"><div dir="ltr"><div><div dir="ltr"><div>--</div><div>Best Regards,<br>Jasem \
Mutlaq<br></div><div><br></div></div></div></div></div></div><br></div></div><br><div \
class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Nov 5, 2020 at 3:41 AM Robert \
Lancaster &lt;<a href="mailto:rlancaste@gmail.com" target="_blank">rlancaste@gmail.com</a>&gt; \
wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px \
solid rgb(204,204,204);padding-left:1ex">Hi Eric,<br> <br>
OK, so we would be changing the way we do version numbering with this, right?<br>
I believe we currently add features in each new iteration: 3.4.1, 3.4.2, etc.,<br>
and when a change is really big, like StellarSolver, we make it a major release like 3.5.0.<br>
<br>
With this new paradigm, we wouldn't put new features into master on the main 3.5 branch;<br>
instead we would work on a new 3.6 branch, and bug fixes would go into the 3.5 branch<br>
to make each new minor release: 3.5.1, 3.5.2, etc.<br>
<br>
Do I have this correct?<br>
<br>
If this is right, then it would take longer before users see new features in the main branch, but \
the <br> tradeoff is that the main branch would have a LOT more stability. I see this as a \
big positive.<br> <br>
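The branching model discussed in this thread could be sketched with plain git operations. The following is a minimal illustration in a throwaway repository; the branch and tag names, the demo identity, and the merge step are assumptions for the sketch, not KStars' actual release tooling:

```python
import subprocess, tempfile

# Hedged sketch of the proposed model: master carries the 3.5.x bugfix line,
# while a separate 3.6 branch takes new feature work.
def git(*args, repo=None):
    """Run git with a throwaway identity and return its stdout."""
    cmd = ["git", "-c", "user.name=demo", "-c", "user.email=demo@example.com"]
    if repo:
        cmd += ["-C", repo]
    out = subprocess.run(cmd + list(args), check=True,
                         capture_output=True, text=True)
    return out.stdout

repo = tempfile.mkdtemp()
git("init", "-q", "-b", "master", repo)     # -b needs git >= 2.28
git("commit", "-q", "--allow-empty",
    "-m", "Marking stable release for 3.5.0", repo=repo)
git("tag", "v3.5.0", repo=repo)

git("branch", "3.6", repo=repo)             # new features target the 3.6 branch

git("commit", "-q", "--allow-empty", "-m", "some bugfix", repo=repo)
git("tag", "v3.5.1", repo=repo)             # bugfix-only minor release from master

git("checkout", "-q", "3.6", repo=repo)     # port the 3.5.x fix over to 3.6
git("merge", "-q", "master", repo=repo)

tags = git("tag", repo=repo).split()
```

The cost Rob notes is visible in the sketch: feature work on 3.6 stays invisible to master users until 3.6 is merged back, while every 3.5.x fix must be ported across.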
Thanks,<br>
<br>
Rob<br>
<br>
&gt; On Nov 4, 2020, at 5:54 PM, Eric Dejouhanet &lt;<a href="mailto:eric.dejouhanet@gmail.com" \
target="_blank">eric.dejouhanet@gmail.com</a>&gt; wrote:<br> &gt; <br>
&gt; Hello Hy,<br>
&gt; <br>
&gt; Version 3.5.0 is only the beginning of the 3.5.x series, with more<br>
&gt; bugfixes on each iteration (and possibly, only bugfixes).<br>
&gt; So I have no problem leaving unresolved issues in 3.5.0.<br>
&gt; <br>
&gt; For instance, the Focus module now has a slight and unpredictable<br>
&gt; delay after the capture completes.<br>
&gt; The UI reflects the end of the capture only, not the end of the detection.<br>
&gt; This makes the UI Focus test quite difficult to tweak, as running an<br>
&gt; average of the HFR over multiple frames now has an unknown duration.<br>
&gt; Right now, the test tries to click the capture button too soon in 2<br>
&gt; out of 10 attempts.<br>
&gt; But this won&#39;t block 3.5 in my opinion (and now that I understood the<br>
&gt; problem, I won&#39;t work on it immediately).<br>
&gt; <br>
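The timing problem described above is the classic fixed-delay-versus-event-wait issue: the test must not click capture until detection has actually finished, but detection now takes an unknown time. A minimal sketch, assuming a hypothetical detection-finished callback rather than the real KStars/Qt signal:

```python
import threading

# Hedged illustration (not actual KStars test code): rather than clicking a
# fixed delay after capture ends, wait for an explicit "detection finished"
# event, however long detection takes.
detection_done = threading.Event()

def on_detection_finished():
    """Hypothetical handler; a real test would hook this to the signal
    emitted when HFR detection completes."""
    detection_done.set()

def safe_to_click_capture(timeout=10.0):
    """Block until detection completes or the timeout expires.
    Returns True when clicking the capture button is safe."""
    return detection_done.wait(timeout)

# Simulate detection completing 0.2 s after the capture finished:
threading.Timer(0.2, on_detection_finished).start()
ready = safe_to_click_capture()
```

With this pattern the unknown per-frame detection duration no longer matters; the test only fails if detection genuinely never completes within the timeout.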
&gt; In terms of reporting problems, the official way is still <a href="http://bugs.kde.org/" \
rel="noreferrer" target="_blank">bugs.kde.org</a>,<br> &gt; but there&#39;s quite a \
cleanup/followup to do there.<br> &gt; I&#39;d say we can use issues in <a \
href="http://invent.kde.org/" rel="noreferrer" target="_blank">invent.kde.org</a> to discuss \
planned<br> &gt; development around a forum/bugzilla issue or invent proposal (like<br>
&gt; agile stories).<br>
&gt; There are milestones associated with several issues (although I think<br>
&gt; they should be reviewed and postponed).<br>
&gt; And we can certainly write a punchlist: check the board at<br>
&gt; <a href="https://invent.kde.org/education/kstars/-/milestones/3" rel="noreferrer" \
target="_blank">https://invent.kde.org/education/kstars/-/milestones/3</a><br> &gt; <br>
&gt; On Wed, Nov 4, 2020 at 22:38, Hy Murveit &lt;<a href="mailto:murveit@gmail.com" \
target="_blank">murveit@gmail.com</a>&gt; wrote:<br> &gt;&gt; <br>
&gt;&gt; Eric,<br>
&gt;&gt; <br>
&gt;&gt; I would add to your list:<br>
&gt;&gt; <br>
&gt;&gt; - KStars Handbook (review/update sections to reflect 3.5.0) and finally (perhaps \
manually if necessary) put the latest handbook online.<br> &gt;&gt; <br>
&gt;&gt; - Review the extraction settings. I spent a bit of time looking at the default HFR \
settings and, based on some experimentation (truth be told, with a limited amount of data), \
adjusted things a little differently than my first guess (which was basically Focus&#39; \
settings).<br> &gt;&gt; Rob: My intuition is that I should adjust the default StellarSolver \
star-extraction settings for Focus and Guide as well in stellarsolverprofile.cpp. I don&#39;t \
know whether you&#39;ve already verified them, and want to release them as they are, or whether \
they are a first pass and you&#39;d welcome adjustments?<br> &gt;&gt; <br>
&gt;&gt; Also, Eric, I suppose I should be adding these things here: <a \
href="https://invent.kde.org/education/kstars/-/issues" rel="noreferrer" \
target="_blank">https://invent.kde.org/education/kstars/-/issues</a><br> &gt;&gt; Is that \
right? Sorry about that--ok, after this thread ;) But seriously, your email is a good summary, \
and from that link<br> &gt;&gt; it doesn&#39;t seem as easy to see which are &quot;must do by \
3.5.0&quot; and which are &quot;nice to have someday&quot;.<br> &gt;&gt; A 3.5.0 punchlist \
would be a nice thing to have.<br> &gt;&gt; <br>
&gt;&gt; Hy<br>
&gt;&gt; <br>
&gt;&gt; On Wed, Nov 4, 2020 at 12:58 PM Eric Dejouhanet &lt;<a \
href="mailto:eric.dejouhanet@gmail.com" target="_blank">eric.dejouhanet@gmail.com</a>&gt; \
wrote:<br> &gt;&gt;&gt; <br>
&gt;&gt;&gt; Hello,<br>
&gt;&gt;&gt; <br>
&gt;&gt;&gt; Where do we stand now in terms of bugfixing towards 3.5.0?<br>
&gt;&gt;&gt; <br>
&gt;&gt;&gt; - StellarSolver has all features in, and 1.5 is finally out at Jasem&#39;s \
PPA.<br> &gt;&gt;&gt; - However GitLab CI still complains about that lib package (see<br>
&gt;&gt;&gt; <a href="https://invent.kde.org/education/kstars/-/jobs/75941" rel="noreferrer" \
target="_blank">https://invent.kde.org/education/kstars/-/jobs/75941</a>)<br> &gt;&gt;&gt; - \
Unit tests are being fixed progressively; mount tests are down to<br> &gt;&gt;&gt; ~20 \
minutes (yeees!)<br> &gt;&gt;&gt; - From my tests, the remote Astrometry INDI driver is not \
usable<br> &gt;&gt;&gt; anymore from Ekos.<br>
&gt;&gt;&gt; - The issue raised with flat frames is confirmed fixed (at least by me).<br>
&gt;&gt;&gt; - Meridian flip is OK (but I did not have enough time to test TWO flips in a row).<br>
&gt;&gt;&gt; - Memory leaks are still being investigated in Ekos.<br>
&gt;&gt;&gt; - There is an issue when duplicating an entry in a scheduler job,<br>
&gt;&gt;&gt; where the associated sequence is copied from the next job.<br>
&gt;&gt;&gt; <br>
&gt;&gt;&gt; Could we get a 3.6 branch where we will merge development of new features?<br>
&gt;&gt;&gt; And master for bugfixing 3.5.x until we merge 3.6 new features in?<br>
&gt;&gt;&gt; (we&#39;d still have to port bugfixes from master to 3.6)<br>
&gt;&gt;&gt; I don&#39;t think the opposite, master for 3.6 and a separate living<br>
&gt;&gt;&gt; 3.5.x, is doable in the current configuration (build, PPAs, MRs...).<br>
&gt;&gt;&gt; <br>
&gt;&gt;&gt; --<br>
&gt;&gt;&gt; -- <a href="mailto:eric.dejouhanet@gmail.com" \
target="_blank">eric.dejouhanet@gmail.com</a> - <a href="https://astronomy.dejouha.net/" \
rel="noreferrer" target="_blank">https://astronomy.dejouha.net</a><br> &gt; <br>
&gt; <br>
&gt; <br>
&gt; -- <br>
&gt; -- <a href="mailto:eric.dejouhanet@gmail.com" \
target="_blank">eric.dejouhanet@gmail.com</a> - <a href="https://astronomy.dejouha.net/" \
rel="noreferrer" target="_blank">https://astronomy.dejouha.net</a><br> <br>
</blockquote></div>
</div></blockquote></div><br></div></div></blockquote></div>
</div></blockquote></div><br></div></blockquote></div>
</div></blockquote></div><br></div></blockquote></div>
</div></blockquote></div><br></div></div></div></div></blockquote></div><br></div></div></blockquote></div><br></div></div></blockquote></div>
 </div></blockquote></div><br></div></div></blockquote></div><br></div></div></blockquote></div><br></div></blockquote></div>
 </blockquote></div>
</blockquote></div>


