
List:       gentoo-dev
Subject:    Re: [gentoo-dev] minimalistic emerge
From:       Kent Fredric <kentfredric () gmail ! com>
Date:       2014-08-08 21:33:30
Message-ID: CAATnKFDTUybqEBtYw2edFaLdFnqv0RCY_zvGU=oYU6AOnLXFqw () mail ! gmail ! com

On 9 August 2014 08:52, Igor <lanthruster@gmail.com> wrote:

>  Hello Kent,
>
> Friday, August 8, 2014, 9:29:54 PM, you wrote:
>
> But it's possible to fix many problems even now!
>
> What would you tell if something VERY simple is implemented like -
> reporting
> every emerge failed due to slot conflict back home with details for
> inspection?
>
>
>
>
> --
> Best regards,
> Igor
> lanthruster@gmail.com
>


Yes. As I said, INSTALLATION metrics reporting is easy enough to do.

I use those sorts of tools EXTENSIVELY with the CPAN platform, and I have
valuable reports on what failed, what the interacting components were, and
what systems the failures and passes occur on.

So I greatly appreciate this utility.

Automated bug reports, however, prove to be a waste of time; a lot of
failures are entirely spurious, the result of user error.

So a metrics system that simply aggregates automated reports from end users,
observed as a side channel alongside bugs, proves more beneficial in
practice.
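To make the idea concrete, the client side of such a system could be as
small as this sketch. Everything here is hypothetical: the endpoint URL,
the payload fields, and the hook itself are illustrative, not an existing
portage interface.

```python
import json
import platform
import urllib.request


def build_failure_report(package, phase, log_excerpt):
    """Assemble an out-of-band metrics report for a failed emerge.

    All field names are illustrative, not a real portage schema.
    """
    return {
        "package": package,          # e.g. "www-client/firefox-31.0"
        "phase": phase,              # e.g. "compile", "depgraph"
        "log_excerpt": log_excerpt,  # last lines of the build log
        "arch": platform.machine(),
        "kernel": platform.release(),
    }


def submit_report(report, endpoint="https://metrics.example.org/report"):
    """POST the report to a (hypothetical) collection endpoint.

    Submission failures are deliberately swallowed: metrics are
    fire-and-forget feedback, and must never break the build.
    """
    data = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass
```

The important design point is the except clause: a reporting channel that
can itself fail an install would be worse than no channel at all.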

Usually all maintainers need here is a daily or even weekly digest mail
summarising all the packages they're involved with, with their failure
summaries and links to the failures. (For example, here is one of the
report digests I received: http://i.imgur.com/WISqv15.png , and one of the
reports it links to:
http://www.cpantesters.org/cpan/report/ed7a4d9f-6bf3-1014-93f0-e557a945bbef
)

And for such a system, you don't need to apply rate limiting, because
multiple reports from a single individual are entirely inconsequential:
you're not forced to deal with them like normal bugs; they're simply
out-of-band feedback you can read when you have the time.
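Server-side, building such a digest is essentially a group-by over the
collected reports. A sketch, assuming reports arrive as simple dicts with
"package", "status", and "report_url" keys (an assumed schema, not a real
one from cpantesters or anywhere else):

```python
from collections import defaultdict


def build_digest(reports, maintainer_packages):
    """Summarise pass/fail counts per package for one maintainer.

    reports: iterable of dicts with "package", "status" ("pass" or
    "fail"), and "report_url" keys -- a hypothetical schema.
    maintainer_packages: set of package names this maintainer owns.
    """
    summary = defaultdict(lambda: {"fail": 0, "pass": 0, "links": []})
    for r in reports:
        pkg = r["package"]
        if pkg not in maintainer_packages:
            continue
        summary[pkg][r["status"]] += 1
        if r["status"] == "fail":
            # keep links only for failures, so the digest points
            # straight at what needs reading
            summary[pkg]["links"].append(r["report_url"])
    return dict(summary)
```

Rendering that dict into a weekly mail per maintainer is then a trivial
templating job.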

And you can then make sense of the content of that report using your inside
expertise, and potentially file a relevant bug report based on the extracted
information, or use the report's context to request more detail from its
submitter.

But the point remains that this technology is _ONLY_ effective for install
time metrics, and is utterly useless for tracking any kind of failure
that emanates from the *USE* of the software.

If my firefox installation segfaults, nothing is there watching for that to
file a report.

If firefox does something odd like renders characters incorrectly due to
some bug in GPU drivers ( actual issue I had once ), nothing will be
capable of detecting and reporting that.

Those things are still "bugs", and still "bugs in packages", and still
"bugs in packages that can be resolved by changing dependencies", but they
are completely impossible to test for in advance as part of the
installation toolchain.

But I'm still very much on board with "have the statistics system". I use
it extensively, as I've said, and it is very much one of the best tools I
have for solving problems. (The very distribution of the problems can
itself be used to isolate bugs.)

For instance,
http://matrix.cpantesters.org/?dist=Color-Swatch-ASE-Reader%200.001000

Those red lights told me that I had a bug on platforms where perl's
floating point precision is reduced.

In fact, *automated* factor analysis pinpointed the probable cause faster
than I ever could:

http://analysis.cpantesters.org/solved?distv=Color-Swatch-ASE-Reader-0.001000
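At its core, that kind of analysis just asks which environment attribute
best separates the passes from the failures. A deliberately naive sketch
(the real cpantesters analysis is far more sophisticated; the report
schema below is invented for illustration):

```python
def best_failure_factor(reports, factors):
    """Return the (factor, value) pair whose presence shows the
    largest gap in failure rate, plus that gap.

    reports: dicts with a "status" key ("pass"/"fail") plus one key
    per factor, e.g. {"status": "fail", "uselongdouble": "n", ...}.
    factors: factor names to examine.
    """
    def fail_rate(rs):
        return sum(r["status"] == "fail" for r in rs) / len(rs)

    best, best_gap = None, 0.0
    for f in factors:
        for v in {r[f] for r in reports}:
            with_v = [r for r in reports if r[f] == v]
            without = [r for r in reports if r[f] != v]
            if not with_v or not without:
                continue  # factor value doesn't split the data
            gap = fail_rate(with_v) - fail_rate(without)
            if gap > best_gap:
                best, best_gap = (f, v), gap
    return best, best_gap
```

With enough reports, a factor like "perl built without long doubles"
floats straight to the top, which is exactly the kind of answer the
analysis page above gives automatically.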

The main blockers are simply:

- Somebody has to implement this technology
- That requires time and effort
- People have to be convinced of its value
- Integration must happen at some level somehow somewhere in the portage
toolchain(s)
- People must opt in to this technology in order for the reports to happen
- And only then can this start to deliver meaningful results.



-- 
Kent

*KENTNL* - https://metacpan.org/author/KENTNL



