[orca-list] Request for feedback from Orca low-vision users.

Hi all,

We would like some feedback from our Orca low-vision users on
what they would like Orca to do in the following scenario.

As you know, Orca has a "say all" function whereby it will speak/braille the whole of a text document.

As say-all progresses through the document, we would like to provide low-vision users with some kind of visual feedback showing exactly where in the document you currently are.

Now, what form should that feedback take? We can easily set the
text caret to the currently spoken word. Alternatively, should
we try to highlight the current word, or underline it, or put
a rectangle around it, or ... ?
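For reference, here is a rough sketch of what the caret option could
look like through AT-SPI. This is not Orca's actual say-all code; the
object and offset names are illustrative, and it assumes we already
know the accessible object for the document and the character offset
of the word currently being spoken.

    def move_caret_to_spoken_word(obj, word_offset):
        # obj: a pyatspi accessible for the document being read.
        # word_offset: character offset of the word say-all is
        # currently speaking. Both are assumed to come from the
        # say-all traversal.
        try:
            text = obj.queryText()       # AT-SPI Text interface
        except NotImplementedError:
            return False                 # object exposes no text
        # Moving the caret makes most applications scroll the word
        # into view, which gives low-vision users a visible anchor.
        return text.setCaretOffset(word_offset)

The highlight/underline/rectangle options would instead need us to draw
on top of the application, so they are more work but potentially more
visible; that is exactly the trade-off we would like your opinions on.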

Your input would be very much appreciated.

Thanks.
