Re: cut and paste in gnome-terminal



Greetings,

Good points, Mario. This is where the goals of well-behaved apps (of which, I believe, we have a much higher percentage in GNOME than in Windows) meet the reality of end-user needs.

I don't think MouseKeys is the right kludge for us. I don't know that we can guarantee a specific number of pixels of mouse movement per key press (especially relative to the content, at any of a variety of screen resolutions, etc.).

I think the right kludge involves using Gnopernicus' (or Orca's) knowledge of what is on the screen and moving the mouse either relative to that knowledge (the preferred kludge variant), or as an absolute pixel percentage relative to the window (e.g. an Orca script that clicks the mouse 33% of the way down in this window and 7% of the way across).
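
For illustration, here is a minimal sketch of that second variant, written
against the AT-SPI Python bindings (pyatspi) rather than Orca's actual
scripting API.  The window title, the 33%/7% offsets and the naive
child-walking are placeholder assumptions for the example, not anything
Orca ships:

import pyatspi

def click_at_fraction(window_title, down=0.33, across=0.07):
    """Synthesize a left click at a point expressed as a fraction of a
    window's height and width, using AT-SPI to find the window extents."""
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        if app is None:
            continue
        for frame in app:
            if frame is not None and frame.name == window_title:
                # Desktop-relative bounding box of the window.
                ext = frame.queryComponent().getExtents(pyatspi.DESKTOP_COORDS)
                x = ext.x + int(ext.width * across)
                y = ext.y + int(ext.height * down)
                # 'b1c' asks AT-SPI to synthesize a button-1 click at (x, y).
                pyatspi.Registry.generateMouseEvent(x, y, 'b1c')
                return True
    return False

# Hypothetical usage: click 33% of the way down and 7% across in a window.
click_at_fraction('Some Application Window')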


Make sense?


Peter Korn
Sun Accessibility team

Mario Lang wrote:
Bill Haneman <Bill Haneman Sun COM> writes:


Mario said, of the assertion that cut-and-paste is a function of the
application, not the terminal:


That sounds more like an excuse than a solution to me  :-)



I know what you mean.  However, it is true that many console
applications provide their own keystrokes for text selection, which is
the core feature you really need here.


From personal experience, there are two types of cut and paste.
One is application-specific, like working inside Emacs and
moving pieces of text around.  The other comes up when I need
to cut something from one terminal window and insert it into another (URLs,
email addresses or example command lines come to mind).

In those cases, I'd rather have a generic method which does not
rely on the application currently running.  For instance, 'screen'
provides cut&paste too, but it is useless if I want to paste
the text into another terminal or application.

So IMHO we are going in circles here: saying apps are responsible for providing
cut&paste is just not enough.


In general if an application already provides a feature, trying to
introduce it into the application's operating environment (in this
case the terminal emulator) can be a recipe for conflicts.


I still think it would be much easier and more consistent to
introduce some keyboard-based selection method into gnome-terminal
directly, rather than trying to add cut&paste to all sorts of
legacy terminal applications.


Basically every keystroke combination you can think of is already used
by some terminal application, so no matter what key combination we
choose for 'keyboard selection', it would conflict with _something_.

I've suggested (in bugzilla bug #78291) that keyboard selection should
be modal, for that reason - i.e. the terminal user could invoke some
key sequence to toggle 'selection mode' on and off.  Of course that
key sequence itself is likely to conflict with something else, but we
already have hotkeys for menus in gnome-terminal, so I think it would
be reasonable to add this functionality to the gnome-terminal menus.


Sounds good to me.


But the lack of keynav cut-and-paste is something that comes up rather
frequently in applications.  I agree that the applications _should_
provide keyboard navigation for all these features, but the reality is
that putting a feature like this into gnopernicus and/or orca (via the
AccessibleText API) would probably still be helpful.  I object
somewhat more strenuously to relying on mouse-motion synthesis to help
us - I know that for instance JAWS provides this, and gnopernicus is
supposed to provide some mouse synthesis too, but I consider a
mouse-only application to be quite broken.
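
As a rough sketch of the AccessibleText route, here is what the raw
pyatspi calls could look like; the role check, offsets and tree walk below
are assumptions made for the example, not actual gnopernicus or Orca code.
The point is that the screen reader can read a text range straight out of
the terminal and hand it to the clipboard without any mouse involvement:

import pyatspi

def find_terminal(acc):
    """Depth-first search of an accessible tree for a terminal object."""
    if acc is None:
        return None
    if acc.getRole() == pyatspi.ROLE_TERMINAL:
        return acc
    for child in acc:
        found = find_terminal(child)
        if found is not None:
            return found
    return None

def terminal_text_range(start, end):
    """Fetch characters [start, end) from the first terminal on the desktop
    via the AccessibleText (Text) interface."""
    desktop = pyatspi.Registry.getDesktop(0)
    for app in desktop:
        term = find_terminal(app)
        if term is not None:
            text = term.queryText()              # AccessibleText interface
            end = min(end, text.characterCount)
            return text.getText(start, end)
    return None

# Hypothetical usage: read the first 80 characters of the terminal's
# contents; a screen reader could then place this string on the clipboard.
print(terminal_text_range(0, 80))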


That is easy to say from the viewpoint of someone who isn't limited
by accessibility problems in the first place :-).  In reality, if your boss
tells you you need to use a certain application, every trick,
no matter how icky it is, is going to be helpful.  You are just not
able to wait until some accessibility problems have been resolved
upstream (and in the case of commercial apps, you probably
would wait forever).  As an example, a friend of mine recently
bought a wireless package from his mobile phone provider to
access the internet from his laptop.  The special connection
utility they wrote is absolutely inaccessible.  The only way he found to
actually use it was to write a JAWS script which positions the mouse
pointer at an exact screen location he had to work out with
a sighted user's help beforehand, and then generates a click.
Now, one might argue that this is not pretty; OTOH, the alternative
for him would have been to simply not use that product, and that
is sometimes not an option.


We have the 'MouseKeys' feature via the numberpad which can be used to
move the mouse around, but for blind users the feedback for this is
very poor, and as Kenny has pointed out, the use of the numberpad for
this conflicts with gnopernicus keybindings.  Mouse motion often has
side effects, so I think that moving the 'mouse cursor' by default is
a bad idea - it's better in my opinion to use this as a method of last
resort.


Of course, but these "last resort" methods also have to work somehow
in reality...  Not that I am trying to argue for hacks like this,
but reality shows that as a disabled user, you sometimes have
no other choice.  I once participated in a seminar
about Lotus Notes 4.something.  Back then, we really *had* to
use the JAWS mouse motion feature to get any work done.  Without it,
the seminar (for blind users) would have needed to be canceled.


We will
need mouse motion at some point anyway (some Windows apps come to mind where
the only way around certain accessibility problems is to go and use
the mouse somehow).  I kind of like the approach that JAWS
is taking here: you can switch between two cursors, the
editing cursor and the mouse.  If you are in mouse mode, your arrow
keys move the mouse, and the display (braille or speech) follows.
Actually, in that mode you can even use the real mouse to navigate,
since your braille display will follow it around.
So ideally, I'd just switch to mouse mode, go where I need to start my cutting,
hold shift, and move the cursor to where my cut ends...
This could also be useful for people with certain physical disabilities.
I once knew a girl who had real difficulties holding her hands still.
Mouse usage was not easy for her, since the pointer used to jump
around on the screen.  She even got a special keyboard where the
keys sat in little holes, so that she wouldn't accidentally miss them.
A simple cursor-based mouse movement method would probably have helped her.
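
The "hold shift and extend" gesture described above could, at the AT-SPI
level, be backed by a synthesized button-1 drag.  A minimal sketch,
assuming desktop coordinates supplied by the screen reader's own review
cursor (the numbers at the bottom are placeholders):

import time
import pyatspi

def drag_select(x1, y1, x2, y2, steps=10):
    """Select text on screen by synthesizing a button-1 drag from
    (x1, y1) to (x2, y2), in desktop coordinates."""
    reg = pyatspi.Registry
    reg.generateMouseEvent(x1, y1, 'b1p')      # press button 1 at the start
    for i in range(1, steps + 1):
        # Move in small absolute steps so the application sees a drag.
        x = x1 + (x2 - x1) * i // steps
        y = y1 + (y2 - y1) * i // steps
        reg.generateMouseEvent(x, y, 'abs')
        time.sleep(0.01)
    reg.generateMouseEvent(x2, y2, 'b1r')      # release button 1, selection done

# Placeholder coordinates; a real implementation would take them from
# wherever the user has moved the review/mouse cursor.
drag_select(100, 200, 400, 200)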



We do have this, but since it uses the numberpad, gnopernicus needs to
provide some alternate keybindings in order for both MouseKeys and
gnopernicus to be useful at the same time.  Note that MouseKeys' use of
the numberpad is specified via X keymaps, so it makes more sense to
modify the gnopernicus keybindings than to try making MouseKeys use
something else.


Excuse my ignorance, but can't the screen reader just
bind certain keys to "Mouse motion" and forward these requests
to the MouseKeys thing whenever the user invokes those?  At least for me, this
way of solving things would make much more sense than having to reconfigure
all gnopernicus keybindings.  As a user of assistive technologies, I don't
particularly care whether MouseKeys or Gnopernicus or Orca
is actually moving my mouse; what I want is a fairly consistent
set of keybindings.  If I think about how I'd explain the current
situation to a newbie-ish blind user of GNOME, I think
they'd be confused.  For me, conceptually, MouseKeys is something
that the screen reader should expose to the user, not the X keymap.
What if, say, I use a completely different input device to control things?
Would I need to separately configure MouseKeys to use that device to
be able to use the mouse?
I see where all this comes from, i.e. MouseKeys should be
usable even if the user does not use a screen reader at all;
OTOH, I think the two should be able to cooperate a bit more nicely.
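
For what it is worth, nothing stops the screen reader from synthesizing the
pointer motion itself instead of forwarding to MouseKeys, which would keep
the keybinding entirely under its control.  A minimal sketch, with the step
size and direction names chosen arbitrarily for the example and the actual
key-grabbing left out:

import pyatspi

# Pixels per keypress: an arbitrary choice for this example, not a value
# that MouseKeys, gnopernicus or Orca actually defines.
STEP = 8

def move_pointer(direction):
    """Move the mouse pointer one step in the given direction by
    synthesizing a relative motion event via AT-SPI."""
    dx, dy = {
        'left':  (-STEP, 0),
        'right': (STEP, 0),
        'up':    (0, -STEP),
        'down':  (0, STEP),
    }[direction]
    # 'rel' means relative pointer motion; the screen reader, not the X
    # keymap, decides which keystroke ends up calling this.
    pyatspi.Registry.generateMouseEvent(dx, dy, 'rel')

# Hypothetical usage from a screen reader's key handler:
move_pointer('right')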





