Re: [orca-list] hoping to recycle bits of orca for speech recognition project



I have been keeping an eye on it, but it's still quite primitive compared to NaturallySpeaking. It's mostly built for command and control. Personally, I'm bothered by the claim that "it can replace your mouse and keyboard". Whenever I see that, it's quite clear the person is not thinking about a useful speech user interface.

I'm also afraid that, like previous speech recognition on Linux projects, it is driven by one person, and when they lose interest, the disabled people counting on it are basically screwed. I would feel one heck of a lot better if there were a five- or ten-year commitment by a well-funded charity pushing the development forward. And yes, I have the same worry with Nuance. Accessibility for them is a side effect. They ignore us, they change things in ways that make it difficult to build our own accessibility capabilities, and quite frankly, I'm expecting any day now they will say "use NaturallySpeaking our way or no way."

The third reason for doing things my way is that I have a goal: a small box which contains all of my accessibility needs. I don't care what platform I work with as long as my little box can drive it. You see, we've been doing accessibility enhancements the wrong way. Instead of retrofitting accessibility onto every complete machine the disabled person needs to use, we should give them their accessibility box, and every work machine should have a small layer that lets the accessibility box couple in and drive it.

Right now, with my virtual machine and bridge program, I can put my virtual machine on a portable hard drive, move it to any other machine with KVM, and that machine is made accessible in a matter of minutes. So if I put my energy into making something accessible, that's where I'm going to do it. I hope someday to take a Surface Pro 3, run NaturallySpeaking on it, and use Windows only as the basic speech recognition engine and UI display for speech. I've had rough prototypes of this working in a virtual machine environment, and I will tell you, it works amazingly well.

So whenever Simon or whatever Linux-based speech recognition environment comes along in a fully useful form, it shouldn't take much for my bridge code to work with it, and I could wipe Windows off the Surface Pro 3, install Linux, and still have a portable accessibility box.

So I will pose this question to you and the rest of the Orca community: would you find it easier to make a small bridge piece like I'm doing for speech recognition, and run Orca on a tablet that can connect to any desktop machine, so that the accessibility is available without a lot of overhead?

-- eric




On Mon, Sep 21, 2015 at 11:11 AM, Alex Midence <alex midence gmail com> wrote:

Have you tried Simon? It takes speech input. It might do what you want without you having to reinvent the wheel.

 

Thanks.

Alex M


From: orca-list [mailto:orca-list-bounces gnome org] On Behalf Of eric @ eggo
Sent: Monday, September 21, 2015 9:55 AM
To: orca-list gnome org
Subject: [orca-list] hoping to recycle bits of orca for speech recognition project

 

tl;dr: need help figuring out how to determine which window is in focus.

 

--

I got fed up with needing speech recognition to deal with my disability and only being able to use Windows. So I wrote a bridge that takes the results of speech recognition from a Windows-based virtual machine and injects them into the input queue of Linux. It works pretty well, as you can see by my dictating directly into Geary.
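To give a rough idea of the Linux side, here is a minimal sketch of the approach (not my actual bridge code; the port and the newline-delimited protocol are made up for illustration, and it leans on xdotool to synthesize the key events):

    # Minimal sketch: receive recognized text from the Windows VM over TCP
    # and inject it into whatever X11 window has focus, via xdotool.
    # The port and newline-delimited protocol are illustrative only.
    import socket
    import subprocess

    HOST, PORT = "0.0.0.0", 8765  # hypothetical bridge endpoint

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()

    buf = b""
    while True:
        data = conn.recv(4096)
        if not data:
            break
        buf += data
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            # "xdotool type" sends the text to the currently focused window
            subprocess.call(["xdotool", "type", "--", line.decode("utf-8")])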

 

The next step is to communicate which application/window has focus to the recognition environment, so that I can activate the right grammar. Somebody on the GNOME accessibility list pointed me here, saying there should be some common code I could repurpose, and I would appreciate the help.
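For what it's worth, the AT-SPI2 Python bindings (pyatspi) that Orca is built on look like they can report this. Here is a minimal sketch of the kind of thing I'm after, assuming an AT-SPI-enabled session (I haven't confirmed this is how Orca itself tracks focus):

    import pyatspi

    def on_window_activate(event):
        # event.source is the window that just gained focus;
        # host_application is the application that owns it
        app = event.host_application
        print("focus -> app: %s, window: %s" % (app.name, event.source.name))

    # Listen for window activation events across the whole desktop
    pyatspi.Registry.registerEventListener(on_window_activate, "window:activate")
    pyatspi.Registry.start()  # blocks and pumps AT-SPI events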

 

thanks!


