Hello Kent,

Saturday, August 9, 2014, 1:33:30 AM, you wrote:


Yes. As I said, INSTALLATION metrics reporting is easy enough to do. 

I use those sorts of tools EXTENSIVELY with the CPAN platform, and I have valuable reports on what failed, what the interacting components were, and what systems the failures and passes occur on.

So I greatly appreciate this utility.

Automated bug reports, however, prove to be a waste of time: a lot of the failures are in fact entirely spurious, the result of user error.

So a metrics system that simply aggregates automated reports from end users, treated as a side channel alongside normal bugs, proves more beneficial in reality.

Usually all maintainers need here is a daily or even weekly digest mail summarising all the packages they're involved with, their failure summaries, and links to the failures. ( For example, here is one of the report digests I received: 
http://i.imgur.com/WISqv15.png , and one of the reports it links to: http://www.cpantesters.org/cpan/report/ed7a4d9f-6bf3-1014-93f0-e557a945bbef )


That's exactly what I think is missing from portage.

Yes, CPAN has always been reliable. Feedback is what enables adaptation.

Another thing to learn from Perl is a command-line bug reporting tool that files bugs directly on 
the bug tracking website. A great tool - with Gentoo you usually go to support, then post your 
emerge environment and answer questions that a reporting module launched locally would know better 
than you do. That drains a lot of time from the bug tracking team - they always have to ask the 
same questions that reporters miss.
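
For illustration, here is a rough sketch of what such a reporter could look like on the Gentoo side,
assuming a Bugzilla 5-style REST endpoint; the product/component values, the payload fields and the
API-key handling are my guesses, not an existing tool:

    #!/usr/bin/env python3
    # Rough sketch of a command-line reporter in the spirit of Perl's tools:
    # gather the local emerge environment automatically and attach it to the
    # bug, so the bug wranglers don't have to ask for it.
    import json
    import subprocess
    import urllib.request

    BUGZILLA = "https://bugs.gentoo.org/rest/bug"   # Bugzilla 5-style REST endpoint (assumed)

    def collect_environment() -> str:
        """Capture `emerge --info`, the data maintainers always have to ask for."""
        return subprocess.run(
            ["emerge", "--info"], capture_output=True, text=True, check=True
        ).stdout

    def file_bug(summary: str, description: str, api_key: str) -> int:
        payload = {
            "product": "Gentoo Linux",          # assumed product/component/version values
            "component": "Current packages",
            "version": "unspecified",
            "summary": summary,
            "description": description + "\n\n" + collect_environment(),
        }
        req = urllib.request.Request(
            BUGZILLA,
            data=json.dumps(payload).encode(),
            headers={
                "Content-Type": "application/json",
                "X-BUGZILLA-API-KEY": api_key,  # Bugzilla REST API-key header
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["id"]        # Bugzilla returns the new bug id

    if __name__ == "__main__":
        bug_id = file_bug("dev-lang/foo-1.2.3 fails to build",
                          "Build log attached below.", "MY_API_KEY")
        print(f"Filed bug #{bug_id}")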




And for such a system, you don't need to apply rate limiting, because multiple reports from a single individual prove to be entirely inconsequential: you're not forced to deal with them like normal bugs, they are simply out-of-band feedback you can read when you have the time.

And you can then make sense of the content of such a report using your insider expertise, and potentially file a relevant bug report based on the extracted information, or use the report's context to request more details from its submitter.

But the point remains that this technology is _ONLY_ effective for install-time metrics, and is utterly useless for tracking any kind of failure that emanates from the *USE* of software.


True, because what we are addressing is portage stabilization, not the system as a whole. 
But portage hell accounts (by my estimate) for about 80% of all Gentoo user problems. 

System stabilization after updates could also be improved if there were a way to minimize dependencies, i.e. 
- not to pull in updates unless they are necessary for the target assembly.

In the longer term, there might be ways to assess what happened after an update at the functional level. 
For example, if a daemon didn't start after an update, that's a clear indication of a problem, at least 
with backward compatibility. 

When an administrator troubleshoots a system he follows an algorithm, a pattern; it's a complex pattern, 
but it could be programmed to some extent - see the sketch below.
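
Here is a minimal sketch of one such programmed check, assuming OpenRC and a snapshot of the running 
services taken before the update; the service names are just examples:

    # Rough sketch: confirm that services which were running before the update
    # are still running afterwards.  Uses OpenRC's `rc-service <name> status`,
    # which exits non-zero when the service is stopped or crashed.
    import subprocess

    def service_running(name: str) -> bool:
        result = subprocess.run(["rc-service", name, "status"],
                                capture_output=True, text=True)
        return result.returncode == 0

    def check_after_update(services_before: list[str]) -> list[str]:
        """Return the services that were up before the update but are not now."""
        return [svc for svc in services_before if not service_running(svc)]

    if __name__ == "__main__":
        # hypothetical snapshot taken before `emerge -uDN @world`
        before = ["sshd", "nginx", "postgresql-16"]
        for svc in check_after_update(before):
            print(f"WARNING: {svc} is no longer running after the update "
                  f"- possible backward-compatibility breakage")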




If my firefox installation segv's, nothing is there watching for that to file a report.

If firefox does something odd like renders characters incorrectly due to some bug in GPU drivers ( actual issue I had once ), nothing will be capable of detecting and reporting that.


It could be done, but the effort is unreasonable.




Those things are still "bugs", and are still "bugs in packages", and still "bugs in packages that can be resolved by changing dependencies", but they are nevertheless completely impossible to test for, as part of the installation toolchain, before they happen.

But I'm still very much on board with "have the statistics system". I use it extensively, as I've said, and it is very much one of the best tools I have for solving problems. ( the very distribution of the problems can itself be used to isolate bugs. 

For instance, 
http://matrix.cpantesters.org/?dist=Color-Swatch-ASE-Reader%200.001000 


Very nice!




Those red lights told me that I had a bug on platforms where perl's floating point precision is reduced.

In fact, *automated* factor analysis pinpointed the probable cause faster than I ever could: 

http://analysis.cpantesters.org/solved?distv=Color-Swatch-ASE-Reader-0.001000


Great - once the stats are there, and with growing experience, new tools could be written to 
automatically analyze the data and make decisions, as sketched below. 
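
For instance, a first cut at such an analysis could be as simple as comparing per-factor failure rates 
against the overall failure rate. A sketch, with an assumed report structure:

    # Sketch: for every recorded factor (arch, float precision, ...) compare the
    # failure rate among reports carrying that factor with the overall failure
    # rate, and flag the factors that stand out.
    from collections import defaultdict

    def suspicious_factors(reports, threshold=0.25):
        """reports: dicts like {"result": "FAIL", "factors": {"arch": "x86", ...}}"""
        totals = defaultdict(lambda: [0, 0])      # (factor, value) -> [fails, count]
        overall_fails = overall = 0
        for r in reports:
            failed = int(r["result"] == "FAIL")
            overall += 1
            overall_fails += failed
            for key, value in r["factors"].items():
                totals[(key, value)][0] += failed
                totals[(key, value)][1] += 1
        base_rate = overall_fails / overall
        return sorted(
            ((key, value, fails / count)
             for (key, value), (fails, count) in totals.items()
             if fails / count - base_rate > threshold),
            key=lambda item: item[2], reverse=True)

    if __name__ == "__main__":
        sample = [
            {"result": "FAIL", "factors": {"arch": "x86",   "long_double": "no"}},
            {"result": "PASS", "factors": {"arch": "amd64", "long_double": "yes"}},
            {"result": "PASS", "factors": {"arch": "amd64", "long_double": "yes"}},
            {"result": "FAIL", "factors": {"arch": "x86",   "long_double": "no"}},
        ]
        for key, value, rate in suspicious_factors(sample):
            print(f"{key}={value}: {rate:.0%} failure rate")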




Just the main blockers are:

- Somebody has to implement this technology
- That requires time and effort
- People have to be convinced of its value
- Integration must happen at some level somehow somewhere in the portage toolchain(s)
- People must opt in to this technology in order for the reports to happen
- And only then can this start to deliver meaningful results.



IMHO, seriously, it could be done if ONLY the portage dev team would implement 
an interface CAPABLE of HTTP reporting. Once the interface is there, even if it's turned off 
by default, server-side statistics become feasible. Personally, I don't see any future for 
this system unless it's coded into portage. Today, portage support without the server side; 
tomorrow, the server side. 
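
To make the ask concrete, here is a minimal sketch of what that opt-in client side could look like; 
the enable knob, the endpoint URL and the payload fields are all made up for illustration:

    # Sketch of an opt-in reporter that portage (or a hook/wrapper around emerge)
    # could call with the outcome of a build, POSTing one JSON record over HTTP.
    import json
    import os
    import platform
    import urllib.request

    REPORT_URL = "https://stats.example.org/report"   # hypothetical collection server

    def send_report(package: str, version: str, phase: str, success: bool) -> None:
        if os.environ.get("PORTAGE_HTTP_REPORTING") != "1":   # off unless opted in (made-up knob)
            return
        payload = {
            "package": package,
            "version": version,
            "phase": phase,                 # e.g. "compile", "install"
            "success": success,
            "arch": platform.machine(),
        }
        req = urllib.request.Request(REPORT_URL,
                                     data=json.dumps(payload).encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pass                            # reporting must never break the build

    if __name__ == "__main__":
        send_report("dev-lang/foo", "1.2.3", "compile", success=False)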


-- 
Best regards,
 Igor                            
mailto:lanthruster@gmail.com