Re: [gentoo-dev] minimalistic emerge

Hello Kent,

Saturday, August 9, 2014, 1:33:30 AM, you wrote:

> Yes. As I said, INSTALLATION metrics reporting is easy enough to do.
>
> I use those sorts of tools EXTENSIVELY with the CPAN platform, and I have
> valuable reports on what failed, what the interacting components were, and
> what systems the failures and passes occur on. So I greatly appreciate
> this utility.
>
> Automated bug reports, however, prove to be a waste of time; a lot of
> failures are in fact entirely spurious, the result of user error.
>
> So a metrics system that simply aggregates automated reports from end
> users, observed as a side channel to bugs, proves more beneficial in
> reality.
>
> Usually all maintainers need here is a daily or even weekly digest mail
> summarising all the packages they're involved with, with their failure
> summaries and links to the failures. ( For example, here is one of the
> report digests I received: http://i.imgur.com/WISqv15.png , and one of
> the reports it links to:
> http://www.cpantesters.org/cpan/report/ed7a4d9f-6bf3-1014-93f0-e557a945bbef )


It's exactly what I think is missing with portage.

Yes, CPAN was always reliable. Feedback is what enables adaptation.

Another thing to learn from Perl is its command-line bug reporting tool,
which files bugs directly on the project's bug tracking website. A great
tool: with Gentoo you usually go to support, post your emerge environment,
and answer questions that a reporting module running locally would know
better than you do. That drains a lot of the bug tracking team's time; they
always have to ask the same questions that reporters miss.
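
To make that concrete, here is a rough sketch of what such a local reporting
tool could look like. Everything in it is hypothetical - the 'gbug' name,
the endpoint URL, the field names; only 'emerge --info' is a real command:

#!/usr/bin/env python3
# 'gbug' - hypothetical sketch of a local bug reporting tool.
# It captures the emerge environment itself, so the bug wranglers
# don't have to ask for it afterwards. The endpoint URL and field
# names are made up; only 'emerge --info' is a real command.
import json
import subprocess
import sys
import urllib.request

def collect_environment() -> str:
    # The output users are normally asked to paste into bug reports.
    return subprocess.run(["emerge", "--info"],
                          capture_output=True, text=True, check=True).stdout

def submit_report(package: str, description: str) -> None:
    report = {
        "package": package,
        "description": description,
        "environment": collect_environment(),
    }
    req = urllib.request.Request(
        "https://bugs.example.org/api/report",   # hypothetical endpoint
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print("submitted, HTTP status", resp.status)

if __name__ == "__main__":
    # e.g.: gbug www-client/firefox "segfaults on startup"
    submit_report(sys.argv[1], sys.argv[2])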




> And for such, you don't need to apply rate limiting, because multiple
> reports from a single individual prove to be entirely inconsequential: you
> are not forced to deal with them like normal bugs, they are simply
> out-of-band feedback you can read when you have the time.
>
> And you can then make sense of the content of that report using your
> insider expertise and potentially file a relevant bug report based on the
> extracted information, or use the context of that report to request more
> context from its submitter.
>
> But the point remains that this technology is _ONLY_ effective for
> install-time metrics, and is utterly useless for tracking any kind of
> failure that emanates from the *USE* of software.


True, because what we are addressing is portage stabilization, not the
system as a whole. But portage hell accounts (my estimate) for about 80% of
all Gentoo user problems.

The stabilization of the system after updates could be improved if there
were a way to minimize dependencies, i.e.
- not to pull in updates unless they are necessary for the target assembly
(see the sketch just below).
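
A toy sketch of that principle, with an invented package graph:

# Toy illustration: pull in only what the target actually needs,
# instead of updating the whole world set. Real portage resolution
# (SLOTs, USE flags, blockers) is far more complex; the graph below
# is invented.
DEPS = {
    "www-client/surf": ["x11-libs/gtk+", "net-libs/webkit-gtk"],
    "net-libs/webkit-gtk": ["x11-libs/gtk+"],
    "x11-libs/gtk+": [],
    "app-editors/vim": [],   # unrelated: must not be pulled in
}

def needed_for(target: str) -> set:
    # Transitive closure of the target's dependencies.
    seen, stack = set(), [target]
    while stack:
        pkg = stack.pop()
        if pkg not in seen:
            seen.add(pkg)
            stack.extend(DEPS.get(pkg, []))
    return seen

print(sorted(needed_for("www-client/surf")))
# ['net-libs/webkit-gtk', 'www-client/surf', 'x11-libs/gtk+']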

In the long run, there might be ways to assess what happened after an update
at the function level. For example, if a daemon doesn't start after an
update, that's a clear indication of a problem, at least with backward
compatibility.

When an administrator troubleshoots a system, he follows an algorithm, a
pattern; it's a complex pattern, but it could be programmed to some extent.
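
A first step in that direction could be as small as this sketch (the service
list is an assumption, as is relying on OpenRC here):

# Sketch: after an update, re-check that the daemons the admin cares
# about still run. A daemon that is down right after an upgrade is a
# strong hint of a backward-compatibility break. Assumes OpenRC,
# where 'rc-service <name> status' exits 0 when the service is up.
import subprocess

SERVICES = ["sshd", "nginx"]   # assumed list of critical services

def service_up(name: str) -> bool:
    return subprocess.run(["rc-service", name, "status"],
                          capture_output=True).returncode == 0

def check_after_update(updated_packages):
    for svc in SERVICES:
        if not service_up(svc):
            print("%s is down after updating %s;"
                  " suspect a backward-compatibility break"
                  % (svc, ", ".join(updated_packages)))

check_after_update(["net-misc/openssh-9.8_p1"])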




> If my firefox installation segv's, nothing is there watching for that to
> file a report.
>
> If firefox does something odd, like rendering characters incorrectly due
> to some bug in the GPU drivers ( an actual issue I had once ), nothing
> will be capable of detecting and reporting that.


It could be done, but the effort would be unreasonable.




> Those things are still "bugs", and are still "bugs in packages", and still
> "bugs in packages that can be resolved by changing dependencies", but they
> are completely impossible to test for in advance of them happening as part
> of the installation toolchain.
>
> But I'm still very much on board with "have the statistics system". I use
> it extensively, as I've said, and it is very much one of the best tools I
> have for solving problems. ( The very distribution of the problems can
> itself be used to isolate bugs.
>
> For instance:
> http://matrix.cpantesters.org/?dist=Color-Swatch-ASE-Reader%200.001000 )


Very nice!




> Those red lights told me that I had a bug on platforms where perl floating
> point precision is reduced.
>
> In fact, *automated* factor analysis pinpointed the probable cause faster
> than I ever could:
>
> http://analysis.cpantesters.org/solved?distv=Color-Swatch-ASE-Reader-0.001000


Great. Once the stats are there, new tools could be written, with growing
experience, to automatically analyze the data and make decisions.
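
For example, a toy sketch of the kind of analysis that becomes possible once
the reports are aggregated (the sample data is invented, and this is only a
crude failure-rate comparison, not real factor analysis):

# Compare failure rates per recorded factor value to surface the
# factor most correlated with failure - the way the cpantesters
# analysis above pinpointed reduced floating point precision.
from collections import defaultdict

reports = [   # invented sample data: (factors, passed)
    ({"arch": "x86_64", "long_double": "yes"}, True),
    ({"arch": "x86_64", "long_double": "no"},  False),
    ({"arch": "arm",    "long_double": "yes"}, True),
    ({"arch": "arm",    "long_double": "no"},  False),
]

def failure_rates(reports):
    stats = defaultdict(lambda: [0, 0])   # (factor, value) -> [fails, total]
    for factors, passed in reports:
        for key, value in factors.items():
            stats[(key, value)][1] += 1
            if not passed:
                stats[(key, value)][0] += 1
    return {k: fails / total for k, (fails, total) in stats.items()}

for (factor, value), rate in sorted(failure_rates(reports).items(),
                                    key=lambda kv: -kv[1]):
    print("%s=%s: %.0f%% failures" % (factor, value, 100 * rate))
# long_double=no comes out at 100%, flagging the probable cause.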




> Just the main blockers are:
>
> - Somebody has to implement this technology
> - That requires time and effort
> - People have to be convinced of its value
> - Integration must happen at some level, somehow, somewhere in the portage
>   toolchain(s)
> - People must opt in to this technology in order for the reports to happen
> - And only then can this start to deliver meaningful results.



IMHO, seriously, it could be done if only the portage dev team would
implement an interface capable of HTTP reporting. Once the interface is
there, even turned off by default, server-side statistics become feasible.
Personally, I don't see any future for this system unless it's coded into
portage. Today, portage support without the server side; tomorrow, the
server side.
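
To show how small the portage-side piece could be, here is a sketch of such
an opt-in interface. To be clear, the flag, the endpoint, and the hook are
all made up; nothing like this exists in portage today:

# Hypothetical opt-in reporting interface: off by default, and when
# enabled, a single HTTP POST with the merge result.
import json
import urllib.request

REPORTING_ENABLED = False   # hypothetical knob, off by default
REPORT_URL = "https://stats.example.org/submit"   # hypothetical endpoint

def report_merge_result(package, version, success, log_excerpt=""):
    # Imagined to be called by portage after each merge attempt.
    if not REPORTING_ENABLED:
        return   # opted out: never phone home
    payload = json.dumps({
        "package": package,
        "version": version,
        "success": success,
        "log": log_excerpt[-4000:],   # cap what gets sent
    }).encode("utf-8")
    req = urllib.request.Request(REPORT_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=5)
    except OSError:
        pass   # reporting must never break the actual merge

report_merge_result("dev-lang/perl", "5.20.0", success=False)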


-- 
Best regards,
 Igor
mailto:lanthruster@gmail.com