Re: [Usability] Prototyping the next generation panel



Hi,

<snip>
> You're still not understanding me, as you're disagreeing with something 
> else entirely.
</snip>

Sorry if that's the case. It's true that I've struggled to understand
some of what you've been suggesting. It seems that we've been talking in
terms of different research designs. You're proposing some kind of
brainstorming session with users, right?! I was thinking more in terms
of testing and prototyping. If I'm still misunderstanding you, perhaps
you could describe the kind of research you're proposing in a little
more detail?

> >>>>>> The biggest problem that's going to occur is that most people have an 
> >>>>>> extremely strong preconceived notion of how a desktop should behave, 
> >>>>>> and while you want to have representation from this group, it can't be the 
> >>>>>> only group that's represented if you really want to improve the 
> >>>>>> desktop experience.
> >>>>>>         
> >>>>>>             
> >>>>> Very true... probably makes it more important for us to consider the 
> >>>>> learnability of any proposed designs, too.  We can't necessarily 
> >>>>> expect people to "get it" the first time they use a completely new 
> >>>>> desktop, but if they're comfortable and productive within a week, then 
> >>>>> we might be onto something.  If it takes them six months, we're 
> >>>>> probably not.
> >>>>>       
> >>>>>           
> >>> It'll be important to get background information from participants about
> >>> their previous desktop experience. Conducting observations, we should
> >>> also be looking out to see at which points people's behaviour is
> >>> directed by habits picked up from existing systems.
> >>>   
> >>>       
> >> It would be a mistake to deny that.
> >>     
> >>>   
> >>>       
> >>>>>> Fortunately, I recently discovered a clever solution to this. You 
> >>>>>> could weed a lot of these people out by building a survey with a very 
> >>>>>> open-ended question that suggests an answer. Then you can see who 
> >>>>>> responds with the suggested answer.
> >>>>>>
> >>>>>> An example survey question where the responses usually echoed the 
> >>>>>> suggestion, though with exceptions:
> >>>>>> --------------
> >>>>>> What interests you in the field of Computer Science? Why?
> >>>>>> (Ex. Do you enjoy creating things? Do you enjoy knowing how stuff 
> >>>>>> works?)
> >>>>>> --------------
> >>>>>> Then you can find the people who don't simply echo, and at the same 
> >>>>>> time you can still represent the people who do echo; it's just easier 
> >>>>>> to identify the people who don't echo this way.
> >>>>>>         
> >>>>>>             
> >>>>> Sounds like a good idea-- the screener questionnaire is certainly an 
> >>>>> important part of selecting participants for any study.  But I've 
> >>>>> always been lucky enough to have other people around to do that part, 
> >>>>> so I'm not really all that qualified to comment :)
> >>>>>       
> >>>>>           
> >>> Personally, I'm unsure about how useful a pre-questionnaire would be in
> >>> this particular respect, since I wouldn't expect there to be a simple
> >>> relationship between people's conscious understandings and how they use
> >>> the desktop. Let's sit them down at the prototype with a task (or
> >>> whatever it is that we end up using) and see what happens. If we need
> >>> to, we can ask questions about their actions either as they go or
> >>> afterwards.
> >>>   
> >>>       
> >> I wouldn't consider it very expensive to find out if you're unsure, and 
> >> I would bet my life that there is a very strong connection. In fact, 
> >> with a pre-questionnaire, you can save a lot of time and money (if 
> >> you're spending money to do this).
> >>     
> >
> > On the connection between self-understandings and behaviour - I think
> > we'll have to agree to disagree! :) (Though I would say that my take on
> > this relationship has been well documented in the past - indeed, the
> > proposition that there isn't a simple link between the two forms the
> > basis of many major academic research traditions.)
> >
> > On a more practical note: if, as you say, the majority of people will not
> > react well to a redesigned desktop environment, then we shouldn't ignore
> > them - we should pay these people special attention in order to
> > understand how we can make sure the redesign is as well suited to them
> > as possible.
> >
> >   
> I'm not saying that a majority of people won't react well to a new 
> desktop environment. I'm just saying that a lot of people would have 
> no idea how they would change the desktop environment so that it would 
> better suit them, because they've been doing the same thing for so 
> long.

I was assuming that the kind of research we'd be doing would be along
the lines of testing either current desktop environments or prototypes -
and thought your previous comments were in relation to that. What you
said makes more sense in relation to brainstorming sessions.
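
On the "echo" screener idea quoted further up: if we ever did use a
question like that, spotting responses that merely repeat the suggested
answer wouldn't have to be done entirely by hand. Purely as an
illustration - the overlap measure, the 0.6 threshold and the function
names below are my own assumptions, not a proposal - a first pass could
look something like this:

--------------
# Rough sketch (Python): flag survey responses that mostly echo the
# example answer embedded in the question.

def token_overlap(prompt, response):
    # Fraction of the response's words that also appear in the prompt.
    prompt_words = set(prompt.lower().split())
    response_words = response.lower().split()
    if not response_words:
        return 0.0
    echoed = sum(1 for word in response_words if word in prompt_words)
    return echoed / len(response_words)

# The hint text from the example question quoted earlier in the thread.
SUGGESTED = ("Do you enjoy creating things? "
             "Do you enjoy knowing how stuff works?")

def flag_echoes(responses, threshold=0.6):
    # Split responses into (echoed, original) by overlap with the hint.
    echoed, original = [], []
    for response in responses:
        if token_overlap(SUGGESTED, response) >= threshold:
            echoed.append(response)
        else:
            original.append(response)
    return echoed, original
--------------

Anything flagged as an echo would still need a human to read it, of
course - this would only sort the pile.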

> >>> I'd have concerns about classifying people according to a predefined
> >>> schema (if that is what you're suggesting). In this situation, it would
> >>> be much better to generate our own groupings through observation and
> >>> analysis, rather than relying on pre-existing conceptualisations of
> >>> behaviour.
> >>>   
> >>>       
> >> People are already classified according to predefined schema.
> >>     
> >
> > Yes - but if we can produce our own analyses, rather than relying on
> > existing ones, then that might help with producing a desktop experience
> > that is truly original. Plus, I don't like the idea of reproducing other
> > people's classifications. ;)
> >
> >   
> I'm not saying not to produce your own analyses, just not to ignore past 
> analyses. Ignoring past analyses is also extremely costly to the entire 
> development process.

Sure. We should be critical of the analyses we draw on, however.

> >> Go ask a 
> >> marketer for a software or hardware company if you have any doubts. You 
> >> think Dell would sell a gaming PC if they didn't segment the market to 
> >> include "gamers?" Do you think they would sell laptops if they didn't 
> >> segment the market to include "mobile users?"
> >>
> >> Some categories:
> >> --Power users
> >> --Casual users
> >> --Mobile users
> >> --Desktop users
> >> --Home users
> >> --Enterprise users
> >> --Hardcore gamers
> >> --Casual gamers
> >> --Studio users
> >> --Users with disabilities (visually impaired/hard of hearing/motor-impaired)
> >>
> >> Yes, there are a million ways to categorize users. You want to segment 
> >> users based on how they use their desktop. Before you do that, you want 
> >> to segment users based on what they'd like to be able to do with their 
> >> desktop. Some of these groups are a lot bigger than others. A 
> >> pre-questionnaire allows you to make sure all of, or most of, the market 
> >> segments are represented with minimal cost (i.e. you won't end up 
> >> interviewing fourteen power users and a single casual user if 15 is the 
> >> size of your sample). The best resource to aid you in segmenting the 
> >> user population, and designing a pre-questionnaire, once again, would be 
> >> someone in a marketing department.
> >>     
> >
> > I just don't think that marketing research is helpful in relation to the
> > redesign effort - its purpose is quite different from what we need.
> >   
> A marketing department is far more attuned to research tools such as 
> surveys than most others in a business. The way the questions are worded 
> in a survey will drastically affect the response pattern. Marketers 
> usually have the most experience in an organization at wording surveys, 
> so I'm just saying if you're making a survey and you don't have much 
> experience doing so, try asking someone in a marketing department. If 
> you're looking to understand how to segment users for the purpose of 
> your study, same thing. Marketing departments are very good at those 
> sorts of things because it's part of their day-to-day activities.

I wasn't questioning the expertise or competence of market research
departments...
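
That said, if we did end up running a pre-questionnaire ourselves, the
"fourteen power users and a single casual user" problem mentioned above
could be handled with a simple quota check at recruitment time. Again,
just a sketch - the segment labels, quota numbers and function name are
invented for illustration, not a proposal:

--------------
# Rough sketch (Python): quota-based recruitment from pre-questionnaire
# responses, so that every segment we care about is represented.

from collections import Counter

QUOTAS = {"power": 5, "casual": 5, "mobile": 3, "accessibility": 2}

def select_participants(respondents, quotas=QUOTAS):
    # respondents: list of (name, segment) pairs in sign-up order.
    # Returns a sample that respects the per-segment quotas.
    selected = []
    counts = Counter()
    for name, segment in respondents:
        if counts[segment] < quotas.get(segment, 0):
            selected.append((name, segment))
            counts[segment] += 1
    return selected
--------------

The hard part would obviously be defining the segments and wording the
questions in the first place, which is where your point about marketing
departments comes in.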

> >> If you're asking an interview question such as "how do you improve the 
> >> desktop experience?", most responses will be bound to the user's past 
> >> experiences. There's a segment of users that aren't nearly as bound to 
> >> their past experiences and they would likely be a more reliable resource 
> >> for brainstorming, whereas the other group would likely be a more 
> >> reliable source to measure practicality.
> >>     
> >
> > I wouldn't expect research participants to have the kind of specialist
> > knowledge that would allow them to answer interview questions like that.
> > Instead, I would imagine that exploring new possibilities would come out
> > of a dialogue between researcher and participant - going through
> > exercises with a research participant, you could talk about particular
> > difficulties with participants and suggest possible solutions, for
> > example. This is one place where involving developers would be great -
> > researchers could act as translators between developers and users within
> > an iterative design process (open source approach to research,
> > anyone?!). 
> >
> >   
> "how do you improve the desktop experience?"
> Yeah, I guess the wording is off on that. I should have worded it, "how 
> would you." It doesn't take a specialist to brainstorm new ideas. I'm 
> just proposing a way of being more productive with brainstorming.
> 
> In any case, there's a problem with the method you described above. 
> Suggesting something to a participant can very easily bias the 
> participant toward your own ideas, and it's usually accidental when 
> that happens. It's like when the police or prosecutors pose leading 
> questions. It's best for responses to be unfiltered, preferably 
> unaffected by prior suggestions, and to have as many as possible. 
> There's plenty that's already been suggested, but it doesn't seem 
> complete by any means, and it could turn out to be a lesser solution.

If we introduced potential solutions at the end of a prototyping/testing
session, then bias shouldn't be a problem. Any suggestions you'd make
would be in response to problems encountered by the participant during
the observation. This would ground them in the participant's usage.

> > <snip> 
> >   
> >> What I'm trying to get at is 
> >> that the question isn't necessarily how do you use your desktop, but in 
> >> what ways can you use your desktop outside of the boundaries of what's 
> >> already defined that would make it more useful to you. Then you focus on 
> >> HOW an idea can be made both practical and as usable as possible.
> >>     
> > </snip>
> >
> > I completely agree - we need to ensure that the research we do doesn't
> > uncritically reproduce existing design patterns. We also need to be able
> > to explore new design possibilities. I think we can do this by using
> > observations of desktop usage (or prototype usage) as a starting point
> > from which to talk with users about what they do with their machines.
> > That ordering (practice, then speech) is an important one, IMO, since it
> > grounds the discussion. You can say - 'why did you do that?', or 'why
> > didn't you do that?'. Also, it is worth remembering that analysis would
> > in no way be restricted to the description of the behaviour that was
> > observed.
> >   
> This is what should be done at the beginning of any project:
> 1. Define the problem(s)
> 2. Determine requirements
> 
> Where have some problems already been defined?
> http://www.vuntz.net/journal/2008/10/22/494-desktop-shell-from-the-user-experience-hackfest-general-overview
> http://live.gnome.org/Boston2008/GUIHackfest/WindowManagementAndMore
> 
> I didn't actually go to Boston, so I have no idea what the direction is 
> and why. For that reason, I believe it should be discussed a bit more. 
> The problems are not documented nearly well enough, nor is the solution.

Yes, we could do with some more work being done in this area.

In terms of a testing/prototyping approach, I can think of two
possibilities:

1. We make problem definition an outcome of the research we do.

2. We restrict ourselves to testing the design ideas that are currently
being developed.

The former would involve testing the current GNOME desktop; the latter,
testing prototypes of the new designs.

Best,

Allan


