List:       wikitech-l
Subject:    [Wikitech-l] Growing our testing, bug reporting, & triaging community
From:       Sumana Harihareswara <sumanah@wikimedia.org>
Date:       2012-03-30 3:10:08
Message-ID: 4F752410.6020709@wikimedia.org

Summary: Chris McMahon is following up on improving Labs as a testing
environment, improving continuous integration, and learning from how
the mobile team gets and uses bug reports.  Alolita and I are gathering
data about how our engineering teams currently take in bug reports
across many communication channels.


A few of us just had a chat about some upcoming efforts to engage our
community in systematic testing (QA) -- see
https://www.mediawiki.org/wiki/User_talk:Cmcmahon#Community_Testing.2FQA
and http://www.mediawiki.org/wiki/Mobile_QA/Spec for the
conversation-starters.  I figured some folks on this list would be
interested in some plans, ideas, and questions coming out of that.  A
non-comprehensive summary:

* Chris thinks his biggest priority for improving MediaWiki's
testability overall is to ensure Labs is a stable, robust, and
consistent environment, so it's reasonable to point a firehose of
testing at the beta deployment cluster
<http://labs.wikimedia.beta.wmflabs.org/>.
 Unless we ensure a consistent and clean environment, most of the bug
reports will be the result of environment problems instead of the
MediaWiki bugs we want to find.  So that's what he's focusing most of
his time on.  (This limits the time he has available for manual testing
of individual features, but he also makes time to work on editor
engagement, Timed Media Handler testing, and a limited number of other
key projects.)

Of course automated testing is also key, so Chris is working with
Antoine on deploying from Jenkins to the beta cluster -- see
https://www.mediawiki.org/wiki/QA/testing and
https://www.mediawiki.org/wiki/Thread:Talk:Continuous_integration/also_deploy_from_Jenkins_to_beta_cluster%3F .
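
To make that idea concrete, here's a rough Python sketch of the kind of
post-build deploy step such a Jenkins job might run.  The host name,
paths, and deploy user below are placeholders I made up for
illustration, not the real Labs setup; the actual mechanism is still
being worked out in the threads above.

#!/usr/bin/env python
"""Hypothetical Jenkins post-build step: push the checkout that just
passed CI to the beta cluster and run MediaWiki's schema updater.
Host, paths, and user are placeholders, not the real Labs setup."""

import subprocess
import sys

BETA_HOST = "deploy@beta-cluster.example.wmflabs"        # placeholder host
SRC_DIR = "/var/lib/jenkins/workspace/mediawiki-core/"   # CI checkout
DEST_DIR = "/srv/mediawiki/"                             # docroot on beta

def run(cmd):
    print("running: " + " ".join(cmd))
    subprocess.check_call(cmd)

def main():
    # Copy the tree that just passed the automated tests to the beta host.
    run(["rsync", "-az", "--delete", SRC_DIR, BETA_HOST + ":" + DEST_DIR])
    # Apply any pending schema changes on the beta wiki.
    run(["ssh", BETA_HOST,
         "php", DEST_DIR + "maintenance/update.php", "--quick"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit("deploy step failed: %s" % err)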

* Our mobile team has a pretty stable, if time-intensive, system for
getting its apps tested.  They release a new version of their app every
2 weeks.  Several days beforehand, they release a beta.  They email
mobile-l, tweet, etc. to raise awareness, but the best way to actually
get testers is to personalize the boilerplate email and send it
personally to each of the roughly 20 people who've shown interest in
testing (see the sketch after this item).  It takes a few minutes to
send those emails, and then 12-15 hours to respond to the feedback and
dig into problems.  Feedback and conversation usually come from about 5
of those 20 people, and happen in IRC (so it's not as easy to delegate,
do asynchronously, point other people to, etc.).  The mobile team also
tries to cover some other feedback channels:
https://www.mediawiki.org/wiki/User:Yuvipanda/Mobile_feedback_avenues
Yuvi works on this 100% every release cycle, so it doesn't scale --
they couldn't really handle more testers if they came.

Chris likes that the mobile team is clear and specific in directing its
testers on what to test, and has reasonable constraints on time, number
of testers, and test environment.  For the next mobile release cycle
(starting on April 6th) Chris will shadow Yuvi to see what he does and
to start helping out.
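
To give a flavour of that mailing step, here's a minimal Python sketch
of how the personalized invitations could be scripted.  The tester
list, addresses, template text, and SMTP host are all made-up examples,
not what the mobile team actually uses.

#!/usr/bin/env python
"""Sketch of the 'personalize the boilerplate and mail each tester' step.
Names, addresses, template, and SMTP host are illustrative only."""

import smtplib
from email.mime.text import MIMEText

TEMPLATE = """Hi {name},

We've just put up the beta for the next mobile app release ({version}).
Last time you helped us with {last_area} -- it would be great if you
could give this build a spin before {deadline} and tell us what breaks.

Thanks!
"""

TESTERS = [
    {"name": "Alex", "email": "alex@example.org",
     "last_area": "login on Android 2.2"},
    {"name": "Sam", "email": "sam@example.org",
     "last_area": "uploads over 3G"},
]

def send_invites(version, deadline, smtp_host="localhost"):
    server = smtplib.SMTP(smtp_host)
    for tester in TESTERS:
        # Fill the boilerplate with this tester's details.
        body = TEMPLATE.format(version=version, deadline=deadline, **tester)
        msg = MIMEText(body)
        msg["Subject"] = "Please test the %s mobile beta" % version
        msg["From"] = "mobile-team@example.org"
        msg["To"] = tester["email"]
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()

if __name__ == "__main__":
    send_invites(version="1.1-beta", deadline="April 4")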

* The internationalisation/localisation team gets feedback (including
bug reports and feature requests) through various channels (IRC, Village
Pumps, Bugzilla, private email, Twitter, mailing lists, etc.), and it's
time-intensive to gather, aggregate, triage, curate, and respond to it.
 Alolita is following up with the i18n team to get more details on that
process -- where does the feedback come from, how do they gather and
aggregate it, and how much time does it take?  I'll be following up
with product managers to get similar step-by-step guides from other
projects.  That way we can figure out how much time it's taking, how to
split up that workload among product managers and QA, and whether we can
get some quick wins in systematizing this process and doing it smarter.
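
Just to illustrate the aggregation idea (not how the i18n team actually
works today), here's a small Python sketch that pulls feedback items
from a couple of stubbed-out channel fetchers into a single sorted
triage list; real fetchers would talk to Bugzilla, Twitter, mailing
list archives, and so on.

#!/usr/bin/env python
"""Sketch of merging feedback from several channels into one triage list.
The channel fetchers are stubs with made-up example items."""

import csv

def fetch_bugzilla():
    # Stub: a real fetcher would query the Bugzilla API for recent reports.
    return [{"channel": "bugzilla", "reported": "2012-03-28",
             "summary": "input method breaks on IE8"}]

def fetch_irc_log():
    # Stub: a real fetcher would scan logged IRC channels for keywords.
    return [{"channel": "irc", "reported": "2012-03-29",
             "summary": "web fonts load slowly on hi.wikipedia"}]

CHANNELS = [fetch_bugzilla, fetch_irc_log]

def aggregate(out_path="i18n-feedback.csv"):
    items = []
    for fetch in CHANNELS:
        items.extend(fetch())
    items.sort(key=lambda item: item["reported"])  # oldest first for triage
    with open(out_path, "w") as out:
        writer = csv.DictWriter(out, fieldnames=["reported", "channel",
                                                 "summary"])
        writer.writeheader()
        writer.writerows(items)
    return len(items)

if __name__ == "__main__":
    print("%d feedback items written" % aggregate())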

(Separately from the feedback *aggregation* effort, various folks are
investigating feedback *mechanisms* integrated into user-facing things.
 See https://www.mediawiki.org/wiki/Extension:MoodBar/Feedback for some
thoughts on this.)

* We really need help curating Bugzilla.  Mark's Bug Squads can't come
soon enough!  :-)  And we aim to grow testing leaders in the community.
 So if you're interested in stepping up and going from nitpicker to
LEADER of nitpickers, we have some tasks ready for you.  :D

* Mozilla has regular testing events
(https://quality.mozilla.org/category/events/month) and lists bugs in an
etherpad as they're reported:
https://etherpad.mozilla.org/testday-20120329 .  Time-limited test
efforts like that are nice; if you can't constrain the number of
channels people use to report things, you can at least constrain *time*
and thus the amount of flow that comes through, so you can actually
follow up more effectively!  Chris doesn't think we quite have the
social and technical infrastructure in place to properly support a
testing event like this yet, but that's a goal.

Thanks to Tomasz, Yuvi, Alolita, and Chris for contributing to this
discussion.

Additional reading:

https://fedoraproject.org/wiki/User:Adamwill/It_boots_ship_it

-- 
Sumana Harihareswara
Volunteer Development Coordinator
Wikimedia Foundation


