Re: [Usability] Touchscreen and gestures



From a technological standpoint that's true: you can replace every
function with a gesture. But I'm not convinced that people really know
which gesture maps to which specific task. A gestural interface replaces
direct manipulation ("direct" because a mouse has only a few functions)
with the perhaps more natural movement of hand and fingers. Provocatively
put, usability would be forced to invent appropriate gestures instead of
making an interface accessible. Of course, this could be an improvement
if the gestures are easy to remember or are as intuitive as the task
requires. Does anyone know of a scientific analysis of this topic?

On Friday, 29 April 2011 at 20:33:05, you wrote:
> Hello Heiko,
> 
> On Fri, April 29, 2011 19:18, Heiko Tietze wrote:
> > I disagree with the idea of coevolution. If touch input is captured
> > reliably (which I doubt), it still lacks precision.
> 
> Yes it does, but most of that can be alleviated by making elements (or
> rather their areas of sensitivity) bigger, or by using established
> gestures. I’m not an expert at touch interfaces, but here’s my take:
> > For instance, I cannot imagine how to resize a window
> 
> Either increase the sensitive area of the window borders or, better in
> my view, use a second finger to resize while the first one is holding
> the title bar (sketched below). This can be done while or after moving.
> 
> > or how to place a cursor on a certain position, not to mention tooltips
> 
> Both run into the problem that there is no hover state on touch
> interfaces. I haven’t thought about that yet. How do mobile and tablet
> operating systems handle tooltips?
> 
> > drag 'n drop
> 
> Just like on Android and iOS: by long-pressing the item and then moving
> it (sketched below).
> 
> > Additionally, more clicks are needed to start a program without
> > menus (as with the new Shell concept or Unity).
> 
> How so? The start menu had submenus, and both the Gnome and Unity menus
> reveal big icons after one click.
> 
> 
> As I see it, desktop operating systems can in general benefit from the
> simplicity and single-tasking that mobile and tablet operating systems
> require. As I said to Jakub before, it seems to me that Gnome 3 took a
> lot of design cues from WebOS – and that’s a good thing.
> 
> > As far as I can see, conventional operations are currently being
> > "translated" into a new world. Functions get a gestural analogy and
> > some design adaptations.
> > I'd like to suggest a split. On a desktop PC with keyboard and mouse
> > we have sophisticated procedures that should be kept. On a device
> > without these inputs, functionality should be changed completely. I
> > don't have a final solution yet, unfortunately, but there seems to be
> > a lot of potential. A window could always be maximized, with any
> > subwindow or frame shown as an overlay; drag 'n drop could be
> > partially replaced by "select and point" (to name one easy example).
> > My idea is not to have up to 40 gestures (like Microsoft) but to
> > reduce functionality. Or do I worry too much?
> > 
> > Kind regards,
> > Heiko.
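
To make the suggestions above a bit more concrete: the "second finger
resizes while the first one holds the title bar" idea roughly amounts to
tracking two touch points. Here is a minimal sketch in plain Python;
TouchPoint, Window and the touch_* callbacks are made-up stand-ins, not
the API of any real toolkit.

    from dataclasses import dataclass

    @dataclass
    class TouchPoint:
        touch_id: int
        x: float
        y: float

    @dataclass
    class Window:
        x: float = 0.0
        y: float = 0.0
        width: float = 800.0
        height: float = 600.0

        def titlebar_hit(self, x, y, bar_height=40.0):
            # Generous hit area: touch lacks the precision of a mouse,
            # so the sensitive region is bigger than the visible bar.
            return (self.x <= x <= self.x + self.width
                    and self.y <= y <= self.y + bar_height)

    class MoveResizeGesture:
        # First finger on the title bar moves the window; a second
        # finger put down anywhere resizes while the first one holds.

        def __init__(self, window):
            self.window = window
            self.hold = None      # finger holding the title bar
            self.resize = None    # optional second finger

        def touch_down(self, tp):
            if self.hold is None and self.window.titlebar_hit(tp.x, tp.y):
                self.hold = tp
            elif self.hold is not None and self.resize is None:
                self.resize = tp

        def touch_move(self, tp):
            if self.hold and tp.touch_id == self.hold.touch_id:
                # Move the window by the first finger's delta.
                self.window.x += tp.x - self.hold.x
                self.window.y += tp.y - self.hold.y
                self.hold = tp
            elif self.resize and tp.touch_id == self.resize.touch_id:
                # Grow or shrink by the second finger's delta.
                self.window.width = max(
                    100.0, self.window.width + tp.x - self.resize.x)
                self.window.height = max(
                    100.0, self.window.height + tp.y - self.resize.y)
                self.resize = tp

        def touch_up(self, tp):
            if self.hold and tp.touch_id == self.hold.touch_id:
                self.hold = self.resize = None
            elif self.resize and tp.touch_id == self.resize.touch_id:
                self.resize = None

    # Example: finger 1 grabs the title bar, finger 2 resizes.
    w = Window()
    g = MoveResizeGesture(w)
    g.touch_down(TouchPoint(1, 100, 20))
    g.touch_down(TouchPoint(2, 700, 500))
    g.touch_move(TouchPoint(2, 750, 560))   # window is now 850 x 660

Note that moving finger 1 and finger 2 can be interleaved freely, which
covers the "while / after moving" case.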

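The long-press drag 'n drop mentioned above comes down to a small state
machine that arms once the finger has rested long enough without moving.
Again only a sketch: the thresholds are assumptions, and a real
implementation would arm the drag from a timer rather than waiting for
the next move event.

    import time

    LONG_PRESS_SECONDS = 0.5   # typical mobile threshold (assumption)
    SLOP_PIXELS = 10.0         # movement allowed before the press cancels

    class LongPressDrag:
        IDLE, PRESSING, DRAGGING = range(3)

        def __init__(self):
            self.state = self.IDLE
            self.down_at = 0.0
            self.origin = (0.0, 0.0)

        def touch_down(self, x, y, now=None):
            self.state = self.PRESSING
            self.down_at = time.monotonic() if now is None else now
            self.origin = (x, y)

        def touch_move(self, x, y, now=None):
            now = time.monotonic() if now is None else now
            if self.state == self.PRESSING:
                moved = max(abs(x - self.origin[0]),
                            abs(y - self.origin[1]))
                if moved > SLOP_PIXELS:
                    # Moved too early: treat as a scroll, not a drag.
                    self.state = self.IDLE
                elif now - self.down_at >= LONG_PRESS_SECONDS:
                    self.state = self.DRAGGING   # item is picked up
            if self.state == self.DRAGGING:
                print("dragging item to (%.0f, %.0f)" % (x, y))

        def touch_up(self, x, y):
            if self.state == self.DRAGGING:
                print("dropped at (%.0f, %.0f)" % (x, y))
            self.state = self.IDLE

    # Example: press, hold half a second, then move and drop.
    d = LongPressDrag()
    d.touch_down(100, 100, now=0.0)
    d.touch_move(102, 101, now=0.6)   # armed: prints "dragging item ..."
    d.touch_move(200, 180, now=0.7)
    d.touch_up(200, 180)              # prints "dropped at (200, 180)"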

