Where is the data?

I'm sorry I couldn't read through the whole GNOME Survey v4 thread, but
it was just too long. What I did read, though, is that data was
collected and exists.
Now I'd like to simply ask: where is it?
Where can we developers find real, cold numbers backing up the designs
we're asked to implement?
Where can we find hard facts justifying the NOTABUG and WONTFIX
resolutions we apply to the most contested and heated issues?
I'm not a designer, so I may not understand all the papers you provide
in support, and I may not understand the rules and laws of Human
Computer Interaction, as you call it. But I understand numbers, and I
would be convinced by seeing that 66% of people find this method of
working more productive, or that 3 out of 5 tested users were able to
discover the functionality without guidance, or that all 8 people
interviewed did not use the feature that was just removed.
Note the small numbers: I know that user testing is limited in scope,
and I know that reaching a broader audience would require money that
the foundation doesn't have. But if you insist on saying that all the
necessary feedback has already been acquired, you need to make it
public somewhere. You need to base your decisions on market share, not
technical purity, because that is our goal (imho).
I continuously read people complaining about GNOME 3, around the web
and on gnome-shell-list (because, unfortunately, much of the
complaining is about "the shell way"), and the answers, when provided,
just repeat the same design assertions, to the point that some
subscribers are fed up with writing them.
As a specific example of unscientific user testing: I got a friend of
mine to try GNOME 3 at the Desktop Summit, and when it was time to
shut down he just asked me, because he found no way to do it and
thought it was a bug. I'm sorry, but I had no explanation, so I just
said "the designers said so", and I similarly had none when some KDE
hackers asked me about the same problem.
I know that what I write, following the guidelines and the mockups, is
right. But the people providing feedback don't always agree with that,
and if I myself cannot understand the reason, how can I explain it to
them?
I understand that some features in 3.0 were "design experiments",
because we have the whole 3.* cycle to improve. But if the results of
those experiments (that is, people's feedback) are not analyzed
thoroughly, how can we be sure that the design is right? Or, on the
other hand, how can I see that the feedback is being listened to, if
decisions are never reverted?

Sorry for this long mail, and sorry for contributing to the
desktop-devel noise, but I've been waiting to ask these questions for
too long. I hope that a clear and factual answer will avoid a lengthy
discussion.

Giovanni
