
List:       kde-usability
Subject:    Aktions and anonymous usage statistics for KDE
From:       "Leo Spalteholz" <leo.spalteholz () gmail ! com>
Date:       2007-01-31 23:55:42
Message-ID: 7c98fd570701311555w3e0aeeemcf93472b90911846 () mail ! gmail ! com

Hi All,

I'm writing to reply to a post from almost a year ago made by Zak
Jensen, in which he introduced his "Aktions" framework to, among other
things, gather anonymous usage statistics from KDE users to help with
usability decisions
(http://lists.kde.org/?l=kde-usability&m=114244633417013&w=2).  The
idea of collecting anonymous usage statistics was (perhaps
unintentionally) shot down by Ellen Reitmayr as not being all it's
cracked up to be
(http://lists.kde.org/?l=kde-usability&m=114277640515080&w=2).
Basically, although Ellen says that anonymous statistics would be
helpful in some limited cases, they are not the panacea that most
people would like to believe.  This seems to have led to the Aktions
project dropping support for generating these statistics (I'm not sure
if the project itself is still alive).

While I agree that anonymous statistics do not negate the need for
user tests, I strongly believe that they would make the reports
produced by openusability much more convincing and help identify what
the worst usability problems are.  Note that while I don't have as
much usability evaluation experience as Ellen, I have taken several
courses on HCI, ergonomics, and usability study design, and I have
conducted some user studies to evaluate software.

I'm replying to this post:
http://lists.kde.org/?l=kde-usability&m=114277640515080&w=2

> Even worse, information about the user is missing! How do you know if the people \
> who sent in their data belong to your targetted user group? Maybe all your data \
> comes from the developers themselves, how should you know??

Volume determines this.  If you get several thousand responses, then
you know they are not just from developers.  Also, the process needs
to be automatic.  If people are asked exactly ONCE if they would like
to participate, and not bothered with prompts after that, then you
will get a wide variety of users, not just the developers and
technical users.

I find it odd to worry that the returned user data might not match
your targeted user group.  If the users submitting data are not the
same as your targeted user group (assuming you have a large volume of
submissions), then your assumption about your target user group is
wrong.

> Even if Microsoft used these mechanisms successfully, the method does not convince
> me. Of course you'll get some basic data, but after all, they also had to \
> supplemented it by extensive user testing and observation of users in their work \
> environment to gain a usable application.

No one is saying this will replace user testing.  This is just a tool
to guide user testing and provide statistical backing to reports.

> And here we get to my second, even bigger concern: A software that is
> *potentially* able to call home, is not trustworth for many users. I very
> well remember when Microsoft XP was launched: The first thing everybody who
> got a new PC running XP was told by his peers was to switch off the 'call
> home' function. Still, people were unsure if it really was switched off. If a
> software has the power to observe you and send the data to an unkown
> recipient, a user cannot be sure if a simple button to switch off the
> function really works.

I have to strongly disagree about this.  You claim that most people
turned off the "call home function" (I assume you mean the anonymous
usage statistics collected by Microsoft's customer experience
improvement program as well as the crash data you can send to them).
First of all, this feature is opt-in, so there is no need or way to
turn it off.  Also, this directly contradicts Microsoft's claim of
1.3 billion recorded sessions for Microsoft Office 2003 alone
(http://blogs.msdn.com/jensenh/archive/2006/04/05/568947.aspx).  That
is 1.3 billion sessions in roughly the 2.5 years since Office 2003
launched, while a large percentage of Office users were still on older
versions.
Personally, I never opted in to this system, but if I knew it went
towards helping out KDE, I would definitely do it.  Of course this is
a wild assumption, but I think more people would feel inclined to help
out KDE than a huge corporation like Microsoft.  Even if that is not
true, the facts show that many people are willing to opt-in to systems
like this.

As for the trust issue, I never saw anyone express any concern over
their usage data being sent away.  First of all, people know that it
is opt-in.  People know that it is anonymous, and people trust that it
will not happen if you don't opt-in (at least from major vendors).
Yes, there were stories on Slashdot filled with comments by paranoid
readers clamouring about invasions of privacy and conspiracy theories,
but I have never heard a non-technical user even raise this as a
concern.  Since KDE is open source, it is very easy to ensure that the
data is not collected if the feature hasn't been enabled.
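To make that auditability concrete, here is a minimal sketch (purely illustrative; the "[statistics]" section, the "send_anonymous_usage_data" key, and the submit callback are my own assumptions, not an existing KDE config format or API) of how the submission path can be gated so nothing is sent unless the user has explicitly opted in:

```python
# Illustrative sketch only: the config section/key names below are
# assumptions, not an existing KDE format.  The point is that the
# default is "off": no data can leave the machine unless the user has
# explicitly enabled the flag.
import configparser

def opted_in(config: configparser.ConfigParser) -> bool:
    # A missing section or missing key counts as "no".
    return config.getboolean("statistics", "send_anonymous_usage_data",
                             fallback=False)

def maybe_submit(config, payload, submit):
    """Call submit(payload) only if the user has opted in."""
    if not opted_in(config):
        return False
    submit(payload)
    return True
```

Because the default is "off", anyone auditing the source only has to verify that every submission call goes through this one gate.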

Also, you have to admit that the traditional way of testing a few
users on a very specific task is also not ideal, and has many problems
threatening the validity of the results:
1.  Your users.  Since KDE doesn't have the resources to conduct tests
with large groups, individual variability plays a huge role in the
results.  If you have few users in your test group, then your results
are very specific and cannot be generalized enough to justify changing
the application for everyone.
2.  Your task.  It is very difficult to create a task that represents
a user's work realistically.  I have read many papers involving user
tests, and what usually happens is that experimenters limit the task
so much to obtain a controlled environment that the results are then
hard to generalize.
3.  Your experiment.  Taking users out of their work environment and
having them perform a task carries a host of implications with it.
Your users are not as motivated as they are with their real work, and
they may modify their behaviours because they know they are being
watched.  Of course there are ways to mitigate these effects, but they
are difficult to get right.

If you look at some of the reports on openusability.org (example:
http://www.openusability.org/reports/get_file.php?group_id=55&repid=43#search=%22kmail%20openusability%20%22mailing%20list%22%22)
you will find statements such as "The handling of mailing lists is a
feature that perhaps 1% of users expect".  These PFTA statistics
(Pulled From Thin Air) form the basis of whole sections and change
recommendations for programs.  If you don't provide some facts to back
up your assumptions, you are losing credibility amongst developers,
and they will be reluctant to implement your proposals.  If, on the
other hand, you can say that, "from the data of 4000 users, mailing
lists are only used by 1% and even then, only in 5% of sessions", you
have a much stronger position.
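The kind of statistic quoted above is cheap to compute once anonymized session records exist. A rough sketch (the record format of user id, session id, and features used is my own assumption, not an existing log format):

```python
# Hypothetical sketch: computing "X% of users use feature F, in Y% of
# their sessions" from anonymized session records.  The record layout
# is an assumption for illustration.
from collections import defaultdict

def feature_usage(records, feature):
    """records: iterable of (user_id, session_id, features_used)."""
    sessions_by_user = defaultdict(int)
    feature_sessions_by_user = defaultdict(int)
    for user, _session, features in records:
        sessions_by_user[user] += 1
        if feature in features:
            feature_sessions_by_user[user] += 1
    users_with_feature = feature_sessions_by_user.keys()
    pct_users = 100.0 * len(users_with_feature) / len(sessions_by_user)
    # Among users of the feature, the average share of their sessions
    # in which the feature appears.
    shares = [100.0 * feature_sessions_by_user[u] / sessions_by_user[u]
              for u in users_with_feature]
    pct_sessions = sum(shares) / len(shares) if shares else 0.0
    return pct_users, pct_sessions

records = [
    ("u1", "s1", {"compose", "mailing-list"}),
    ("u1", "s2", {"compose"}),
    ("u2", "s1", {"compose"}),
    ("u3", "s1", {"compose"}),
]
```

With real volume behind it, "mailing lists are used by 1% of users, in 5% of their sessions" becomes a measured fact rather than a guess.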

I think KDE is in a unique position to implement* a system that is
better than those available on Microsoft Windows: it can be less
obtrusive (a single opt-in covering all KDE applications is possible)
and it can give users a sense of contributing where they normally
would not.

Cheers,
Leo

* Technical issues aside, I just wanted to argue that this is a good
idea.   Whether or not someone wants to implement it is another issue
entirely.
_______________________________________________
kde-usability mailing list
kde-usability@kde.org
https://mail.kde.org/mailman/listinfo/kde-usability


