Re: Self Documenting Interfaces
- From: "Dan Kaminsky" <effugas best com>
- To: "Preben Randhol" <randhol dusken4 samfundet ntnu no>
- Cc: <gnome-gui-list gnome org>
- Subject: Re: Self Documenting Interfaces
- Date: Fri, 24 Jul 1998 04:34:31 -0700
-----Original Message-----
From: Preben Randhol <randhol@dusken4.samfundet.ntnu.no>
To: Dan Kaminsky <effugas@best.com>
Date: Friday, July 24, 1998 3:46 AM
Subject: Re: Self Documenting Interfaces
>* "Dan Kaminsky" <effugas@best.com
>|
>|
>| So, if it's so great on paper, why is it so awful acted out on screen? :-)
>| Isn't it easier to prevent the paper->screen mistranslation? Really, you
>| should have both...as the screen does stuff, the lines of text describing
>| what's next to do come up.
>
>You have it on screen. You have it in the help-browser window next to
>the app. The only difference is that it is *you* who controls the
>mouse/keyboard, not the machine! And you can learn while you work. You
>don't have to stop working while you wait for the computer to move on
>to the next button or whatever.
It's a dark secret of UIs that overlapping windows suck, and that the only
reason every app isn't always maximized is that under-apps perform the
role of peripheral vision. Help files often occlude, and not nicely either.
Anyway, the record system can operate at the pace of the user using
"breakpoints": the user sees an action happen along with a textual or
aural description of what just happened, and is then given a chance to do
it for themselves. The user can choose to fast-forward, rewind, or close
the screenplay help system at any time.
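For what it's worth, here is a minimal sketch of that breakpoint-driven
playback loop, boiled down to a console program. Everything in it is an
assumption for illustration: the names (TutorialStep, run_screenplay) and
the stdio prompts are hypothetical, and a real GNOME implementation would
drive the app's actual widgets rather than printing text.

/* Sketch of the "screenplay" help loop: show a recorded action plus its
 * description, stop at a breakpoint so the user can try it, and let the
 * user fast-forward, rewind, or quit at any time. Hypothetical names. */
#include <stdio.h>

typedef struct {
    const char *action;      /* what the recording demonstrates        */
    const char *description; /* text (or narration) shown to the user  */
} TutorialStep;

static void run_screenplay(const TutorialStep *steps, int n_steps)
{
    int i = 0;
    char cmd[16];

    while (i < n_steps) {
        /* Breakpoint: describe what just happened, then wait for the user. */
        printf("\n[step %d/%d] %s\n    %s\n", i + 1, n_steps,
               steps[i].action, steps[i].description);
        printf("(n)ext, (f)ast-forward, (r)ewind, (q)uit > ");

        if (scanf("%15s", cmd) != 1 || cmd[0] == 'q')
            return;                      /* close the help system    */
        else if (cmd[0] == 'f')
            i = n_steps - 1;             /* jump to the last step    */
        else if (cmd[0] == 'r' && i > 0)
            i--;                         /* replay the previous step */
        else
            i++;                         /* user did it; move on     */
    }
}

int main(void)
{
    const TutorialStep steps[] = {
        { "Open the File menu",      "Click 'File' on the menu bar." },
        { "Choose 'Save As...'",     "Pick a name and location."     },
        { "Confirm the save dialog", "Press 'OK' to write the file." },
    };
    run_screenplay(steps, 3);
    return 0;
}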