3d today instead of when MS wants to go...
- From: Ian Bicking <bickiia@earlham.edu>
- To: gnome-list@gnome.org
- Subject: 3d today instead of when MS wants to go...
- Date: Thu, 8 Oct 1998 20:06:31 -0400 (EDT)
Hibbard M. Engler writes:
> Ian - fluff + illusion. Let's stray the hardware a little bit beyond the
> college student's $200.00 486 for one second.
> It should be possible to use some I-gogs (the real 3d kind) semi-
> effectively, although their resolution is limited.
> Another possibility is to attach two CCDs to the I-gogs and run those into
> the video cards as background. Now instead of pretend reality, you have
> "augmented reality". As for real estate, the ability to rotate, move to
> another room, dive into or pull back from a set of documents is much more
> useful for switching application sets than the 9-screen trick that most
> WMs do.
OK, let's really analyze this:
The idea of the interface is to express information. Everything else
is, I think, a means of reaching that goal, and of doing so in a way
that works with how we perceive.
Not only are screens two-dimensional, our vision is too. We impose a
sense of three-dimensionality onto what we perceive, but we still are
seeing 2D.
So how much information can we push through our visual input? Well,
to be strict we have this many states:
x_res * y_res * color_depth
If we add I-gogs we can multiply that by 2 -- no more, because we've
only provided two screens where there was one. It isn't another
dimension.
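To make that concrete, here's a quick back-of-the-envelope version of
the expression above in Python. The resolution and color depth are
just numbers I picked for illustration:

    # Rough sketch of the state count as written above, with assumed
    # numbers: a 1024x768 display at 16-bit color.
    x_res, y_res = 1024, 768
    color_depth = 2 ** 16              # distinct colors at 16 bits/pixel

    single_screen = x_res * y_res * color_depth
    with_goggles = 2 * single_screen   # two screens, not a new dimension

    print(single_screen)               # 51539607552
    print(with_goggles)                # 103079215104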
Now, of course, we can't actually perceive that many different
states. We can't distinguish between them all, and they don't all
have the internal organization we require to help merge our internal
models with these external models.
All said, we can probably perceive something on the order of a dozen
colors. If we put them next to each other, we can perceive maybe
three times that many (e.g., I don't know exactly what the colors of
the focused and unfocused windows are -- they are both kind of
blue -- but with both on screen I can tell the difference). If we try
to go beyond that we are either dealing with vague, more intuitive
information (a population density map, for instance), or we're asking
too much of the user.
Resolution -- well, I'm fairly close to maxed out on my 15" monitor at
1024x768. I'd imagine you could triple the width and double the
height and still look at it all -- but beyond that, most of the extra
area falls well into the periphery (a 15" monitor already covers
something like 30 degrees of view).
Even then, we can only make sense of a far more limited number of
organizations of those pixels. A few lines and shapes, and the glyphs
we've all become used to (letters and icons, mostly -- and the icons
tend to push it a bit).
Then, probably the biggest limitation -- focus. I have this whole
screen here, but I'm probably only able to pay attention to about
three or four square inches of dense information (such as writing).
Only when I look at the much more sparse information -- like window
positioning -- can I usefully understand the entire screen. Right now
I have four windows open, but the only way I can understand what's
going on in each of them is to look at each one individually.
Having those different levels is very helpful -- it lets me move my
focus comfortably and easily between these different contexts. If I'm
reading a book (or a web page) and I stop reading for a moment to look
at a figure, I won't easily be able to move back to my previous
context, because it isn't nicely partitioned -- it's just a mass of
text. So these windows have provided some useful organization of
information.
Perhaps 3D could do something similar -- provide another level of
information orthogonal to other forms. But I'm not sure how you'd
actually go about that. "Diving in" to an application certainly
would not be helpful -- the computer isn't letting you focus on what
you want to and move comfortably between points of focus. It's doing
the focusing for you -- a very bad idea. If you've allowed the user to
interact with the information without having to interact with the
computer, you've achieved something.
Maybe you can use depth perception to help the user partition their
focus -- by moving to different depths, different parts of the screen
(or whatever you call the underlying 2D representation) can be focused
on.
However, I don't think you can add much to the pure informational
content. A three-dimensional spreadsheet, for instance, just can't be
done (at least not right). If you imagine it as a cube, you can't
view the inside of the cube no matter what view you have. Viewing the
outside is just a contorted manner of 2D -- 2D folded around a 3D
object. To really present this information properly you may need a
time component, and that gets really messy (interaction-wise) -- the
flexibility of static information is very useful. Basically you'll
have to resort to a more context-sensitive manner of information
representation.
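Just to illustrate what I mean by the cube problem, here's a toy
sketch (the numbers and names are made up): any way of putting a 3D
sheet on a flat screen comes down to picking a 2D slice or projection
of it, and the interior cells are never all visible at once.

    # A toy 4x4x4 "spreadsheet" of numbers, indexed as sheet[z][y][x].
    sheet = [[[x + 10 * y + 100 * z for x in range(4)]
              for y in range(4)]
             for z in range(4)]

    def slice_at(z):
        """Return the 2D layer at depth z -- all a flat screen can show."""
        return sheet[z]

    for row in slice_at(2):
        print(row)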
Personally, I think improvements in interface presentation are best
made by looking at the issue of focus -- how to best help the user
keep track of their attention, how to express information without
demanding focus, etc. Things like translucence could certainly be a
help for this.
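As a concrete example of the translucence idea -- just a minimal
sketch of the standard "over" blending formula, not tied to any
particular toolkit or to how GNOME would actually do it:

    def blend(fg, bg, alpha):
        """Mix a foreground RGB pixel over a background one.
        alpha = 1.0 is opaque, 0.0 fully transparent."""
        return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

    # A mostly transparent overlay barely disturbs what is under it:
    print(blend((255, 255, 255), (20, 20, 80), 0.25))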
<------------------------------------------------------------------->
< Ian Bicking | bickiia@earlham.edu >
< drawer #419 Earlham College | http://www.cs.earlham.edu/~bickiia >
< Richmond, IN 47374 | (765) 973-2824 >
<------------------------------------------------------------------->