Re: [Usability] spatial nautilus concerns



> The suggestion that a UI of this complexity can be definitively designed
> via testing isn't really accurate. Seth discusses some of the issues
> here: http://www.gnome.org/~seth/blog/onusabilitytesting

I absolutely agree that you cannot design by testing. Nor can testing in and of itself tell you where to take an existing UI. But testing can definitely inform an issue significantly, particularly one such as this. And we have an opportunity for a good test because so many of the options are implemented.

> But just think about all the factors. What will the backgrounds be of
> the test subjects? Will you test both people who are used to the
> interface and people new to it?

Presumably yes, several user types would be worthwhile. We'd choose users based on what the main target groups are. We might favor novices slightly since they're potentially less likely to configure their systems, but would certainly want more experienced users as well. We might also conduct tests on new users, have them use GNOME for a while, and then have them come back for another test to see how their behavior changed.

> What tasks will you give them to measure?

This is obviously not an easy thing to figure out, but there are folks who do this for a living. I've done some basic usability tests, but I'm a designer rather than a tester, so ideally I'd defer this step to someone more knowledgeable.

> Is the metric speed of finding a single file, how well people
> can use removable media, how the UI encourages a usable organization of
> the filesystem, how well the user can explain the intended user model,
> how people feel subjectively about their experience, or what?

Again, someone with a more complete background could answer this better, but common metrics are things like:
- Did the user complete the task?
- Does the user think he completed the task?
- How long did it take?
- Where did he have obvious trouble?
- How did the user feel subjectively? (as you suggest)

Your other suggestion, trying to get at the user's mental model, also seems worthwhile. Combining these qualitative and quantitative measurements can yield some pretty useful data. And even just watching users during the test, without taking notes, can sometimes be tremendously helpful.

> There's a ton of judgment involved here, it's not just a matter of
> reading the raw unbiased numbers.

Absolutely. But there are techniques for doing this. If we can find some good testers willing to help out, that would certainly be ideal, so that the test is set up and the data interpreted properly. From there it's up to designers to use the results to inform revisions.

> *Of course* it would be valuable to run a usability test and see where
> people stumble. But I think it would be setting the wrong expectation
> if we thought it would definitively resolve this particular debate. It
> will only provide some additional data points on the specific cases
> tested and specific questions the test asked.

I agree that it may never resolve the debate. There may be multiple ways to interpret the data. But at least the debate will be a bit more informed :-)

--Dave



