[HIG] More on inductive UI design

Since I posted a link to the Inductive UI Design article the other day,
'tis only fair that I post the follow-up comments of the guy who wrote
it... even if he does work for You Know Who  :o)


CALUM BENSON, Usability Engineer       Sun Microsystems Ireland
mailto:calum benson ireland sun com    Desktop Engineering Group
http://www.sun.ie                      +353 1 819 9771

Any opinions are personal and not necessarily those of Sun Microsystems
--- Begin Message ---
As the original architect behind the model described in "Microsoft
Inductive User Interface Guidelines", I'd like to respond to recent
postings here about that article. I
would also like to ask for help in ascertaining whether a specific idea
in the model is novel or, if not, tracking down references to existing
work in that area.

I should begin by apologizing for the article's lack of citations. The
article was not intended to be a scholarly work, and much of it was
collected from internal papers and specifications that were not
originally intended for a public audience. Moreover, the article was
published on the MSDN (Microsoft Developer Network) site and directed at
Microsoft's ISV developer community as a practical how-to manual and
case study. Since many people within Microsoft and its ISV community are
not very familiar with HCI research, the article used pedagogical
language that has had the unfortunate and unintended side-effect of
being perceived by some HCI people as "smarmy". The article's tone and
failure to recognize previous research should not be interpreted as an
attempt by Microsoft to claim credit that is justifiably due elsewhere.

Having said that, I've read assertions on both CHI-WEB and SIGIA-L that
the document contains nothing new and simply restates existing research.
I'm willing to accept that some of the document's points are already
well-known to HCI researchers, but I'm not sure that this is true of the
entire document. I'm open to being convinced -- I'm simply not yet aware
of work in the particular area described below. If you are aware of
existing research or work in this area, please let me know so that: 1)
future discussions of inductive UI can recognize that research, and 2)
Microsoft can learn from other contributions of that research.

After using this UI model within Microsoft for several years, I think
its most significant contribution is its recommendation that tight
constraints around the clarity of a page title can drive an interface's
design. While the notion of task-based user interfaces has been around
for a long time, I've found that discussions of task-based interfaces
exhort designers to design screens around a task, but typically leave
open the question of what constitutes a single task. In other words, it's
unclear how much or how little can comfortably fit on a screen and still
have that screen qualify as being task based. The answer proposed by the
inductive UI model is that a single task is something that can be stated
(ideally, directly on the screen) in a single concise question or
statement in natural language. This is admittedly still a subjective
description of a task, but a definition that in practice seems to be
easy to keep in mind during ideation, easy to check during evaluation,
and ultimately has the desired effect of stimulating the production of
clearer UI.

In my opinion, the result of this model is not so much a new statement
of design principles, but rather a (new?) design process which we refer
to as inductive UI design. As part of this process, we use a simple
exercise to evaluate the clarity of a page's design and obtain some
indication of its predicted usability. Team members are encouraged to
imagine themselves looking over the shoulder of a friend or colleague
who, while using the screen in question, asks for instructions. The team
member then imagines reciting to their friend or colleague the page's
proposed task statement (which will in actuality appear to the end user
as the page's title). If the statement sounds awkward, useless,
meaningless, confusing, or long-winded, then we assume the page's design
will be weak and we explore alternative titles. Often this requires us
to consider a different decomposition of the user interaction into a new
sequence of pages.

In formulating task statements as titles, we adhere to a fairly strict
set of writing guidelines, some of which are explained in the MSDN
article (requiring a verb, avoiding conjunctions, using natural
language, etc.). In our experience, these guidelines have proven
effective at eliciting from a design team more meaningful titles than
they would produce without those constraints. We look to the team's
writers, as experts in wordcraft, to formulate possible task statements
that satisfy our guidelines. We also look to the writers, as experts in
explaining things to people, to judge what a normal person is capable of
understanding. As we develop task statements we insist on consensus
among all members of the design team. We have learned through trial and
error that a failure to agree on the *exact* wording of a task statement
generally means either that the members of the group do not all share
the same understanding of the task, or that we have not yet identified
the task's real significance to the user (e.g., we are focusing on
technological details and not on consumer needs).
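For illustration only, here is a minimal sketch of how such title guidelines might be checked mechanically. The verb list, conjunction list, and word limit below are my own assumptions chosen to make the example concrete; they are not the actual criteria used at Microsoft:

```python
# Hypothetical heuristic check of a proposed task-statement title against
# guidelines of the kind described above (requires a verb, avoids
# conjunctions, stays concise). All thresholds and word lists here are
# illustrative assumptions, not the article's actual rules.

CONJUNCTIONS = {"and", "or", "but"}        # a joined title suggests more than one task
IMPERATIVE_VERBS = {"pick", "choose", "select", "type", "enter",
                    "create", "change", "delete", "name"}  # tiny sample list
MAX_WORDS = 10                             # "concise" taken here to mean ~10 words

def critique_title(title: str) -> list[str]:
    """Return a list of guideline violations for a proposed page title."""
    words = [w.strip("?.,").lower() for w in title.split()]
    problems = []
    if not words or words[0] not in IMPERATIVE_VERBS:
        problems.append("should start with a verb stating the task")
    if CONJUNCTIONS & set(words):
        problems.append("conjunction suggests the page covers more than one task")
    if len(words) > MAX_WORDS:
        problems.append("too long-winded to recite to a user")
    return problems

# A clear single-task title passes; a compound, verb-less one does not.
print(critique_title("Pick a name for your account"))
print(critique_title("Configure account settings and network parameters"))
```

Of course, a mechanical check like this is no substitute for the team consensus described above; it only automates the most superficial of the constraints.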

The idea of using natural language in an interface is not new.
Fundamentally, the inductive UI model can be viewed as a conversation
with the user, and hence a relatively more social UI of the sort
encouraged by researchers like Nass and Reeves. However, I'm not aware
of existing social UI work that includes specific recommendations for
directly mapping a conversation with a user into a sequence of pages
that follow a particular structure. To my knowledge, the use of a page
title as a litmus test for design clarity is a novel contribution of the
inductive UI model. It is this essential verifiability that makes the
inductive model useful to me as someone working on large commercial
software teams. Without such a specific recommendation, the preexisting
research in task-based or social interfaces offers (in my opinion)
insufficient practical guidance to a large team building a product.

Regarding the topic of modes -- which several people have correctly
identified as an inevitable result of this style of UI design -- I can
only refer to anecdotal evidence that users seem to accept modes when
they occur in the context of relatively infrequent operations. For
example, the Home version of Microsoft Windows XP contains an inductive
control panel called "User Accounts" that lets a person (often a parent)
create and manage the user accounts of multiple people (e.g., family
members) sharing a single computer. The page-based inductive design of
this control panel is slightly more modal than that of an earlier
dialog-based control panel (which is still visible in the Professional
version of Windows XP when the computer belongs to a network domain).
Nevertheless, users seem to accept these modes with little complaint or
even notice, perhaps because setting up or changing user accounts is
such a rare activity. The occasions on which users perform these tasks
may be so infrequent that they forget exactly how to do so between
occasions. It would seem they don't mind stepping through short page
sequences if each page presents a very simple decision; they may
ultimately spend less time than they would in a less modal interface
that was harder to apprehend. At this time I can't offer concrete data
recommending how
infrequent a task must be for an inductive UI to be acceptable, but the
limited research we've done in this area suggests that even a task
performed every few weeks may be infrequent enough for the user to
benefit from this approach. When something is done more frequently
(e.g., daily), the user may learn to anticipate the next step in the
sequence and ultimately come to feel bogged down by the user interface
rather than led through it.

In my experience it is possible to design collections of inductive pages
in which the main sequence is kept very short so that users can complete
a basic operation quickly, while advanced or curious users can choose to
take advantage of links or detours to additional features ("secondary
tasks" in the MSDN article). This helps mitigate the overly plodding and
modal feeling of, say, traditional wizard-based UI.
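The structure described above, a short main sequence with optional detours, can be modeled very simply. The following sketch and its page titles are my own illustration, not taken from the article or any shipping product:

```python
# Illustrative model (my own, not from the article): an inductive flow as
# a short main sequence of pages, where each page may also link out to
# optional secondary-task pages without lengthening the main path.

from dataclasses import dataclass, field

@dataclass
class Page:
    title: str                     # the task statement shown as the page title
    secondary: list["Page"] = field(default_factory=list)  # optional detours

def main_path(pages: list["Page"]) -> list[str]:
    """Titles a user sees when completing only the basic operation."""
    return [p.title for p in pages]

# Hypothetical three-page account-creation flow; the detour page is only
# visited by users who follow its link, so the basic path stays short.
flow = [
    Page("Pick a name for your account"),
    Page("Pick a picture for your account",
         secondary=[Page("Browse for more pictures")]),
    Page("Confirm your new account"),
]

print(main_path(flow))  # the basic operation remains three pages long
```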

In any event, while the inductive UI design process may not be new, its
formulation as a set of guidelines may be. Other design teams have
undoubtedly independently created similar processes for designing
interfaces. E-commerce sites like Amazon.com and Ofoto.com present
interfaces that guide the user through complex operations (e.g.,
ordering something) with sequences of pages that have clear titles often
phrased as sentences in natural language. Surely the teams creating
these products employ a set of principles similar to those mentioned
here and in the MSDN article. Nevertheless, at this point I am unaware
of any group that has formulated these principles into a coherent model
for UI design, and specifically I am unaware of work that has focused so
much on the precise statement of a task as a title. If you are aware of
such work, I'm interested in hearing about it. Please send references
directly to me; if I receive a significant number of responses, I'll
summarize and repost.


Jan Miksovsky
UI Architect, Microsoft Windows
Microsoft Corporation

This posting is provided "AS IS" with no warranties, and confers no
rights.

--- End Message ---
