[jg@pa.dec.com: Re: GNOME and speech recognition]



------- Start of forwarded message -------
Resent-Cc: recipient list not shown: ;
MBOX-Line: From gnome-gui-list-request@gnome.org  Sun Mar 14 23:04:56 1999
Date: Sun, 14 Mar 1999 19:55:58 -0800
From: jg@pa.dec.com (Jim Gettys)
To: talin@acm.org, dzol@virtual-yellow.com
Cc: gnome-gui-list@gnome.org
Subject: Re: GNOME and speech recognition
Content-Type: text/plain
Resent-From: gnome-gui-list@gnome.org
Resent-Reply-To: gnome-gui-list@redhat.com
X-Mailing-List: <gnome-gui-list@gnome.org> archive/latest/509
X-Loop: gnome-gui-list@gnome.org
Precedence: list
Resent-Sender: gnome-gui-list-request@gnome.org
X-URL: http://www.gnome.org


1) the Linux market is now getting large enough that there is serious
hope that a speech recognition vendor (e.g. Dragon or IBM) might
port its software.

2) the amount of work to build a speech recognition system is large;
the amount of specialized knowledge is huge (in other words, without
the right people, you can't hope to do it on your own).

3) Speech recognition in the X Window System is an OLD topic:
been there, done that...

OK, who has been there and done that, and where can you find out
about it?

a) who: Bob Scheifler, who, with me, started X in the first place.
He had to do this due to carpal tunnel syndrome, which caused him serious
trouble some years ago.
b) where can you find out about it: Look over old X conference proceedings.
c) Bob made a nice video: "Hands Off X!".

Believe it or not, the hooks required in a toolkit are VERY small.

In short, have the toolkit decorate the window with properties giving
the class and instance information of the window.  This allows an
independent helper application to go and find the right window to
send synthesized input events to.
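
For concreteness, here is a minimal sketch of the toolkit side, assuming
plain Xlib and the standard WM_CLASS property as the carrier of the
instance/class pair (the function name tag_toplevel is made up; a real
toolkit would do this wherever it creates its top-level windows, and
Bob's scheme may have used an additional property beyond WM_CLASS):

    /* Hypothetical helper: stamp a top-level window with the
     * instance/class pair that an external agent can match on. */
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    static void tag_toplevel(Display *dpy, Window w,
                             const char *instance, const char *klass)
    {
        XClassHint hint;
        hint.res_name  = (char *) instance;  /* e.g. argv[0] or the -name value */
        hint.res_class = (char *) klass;     /* e.g. "Gnumeric" */
        XSetClassHint(dpy, w, &hint);        /* writes the WM_CLASS property */
    }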

As to what to put in the toolkit library, go look at the Xt sources;
it is only a few lines of code (setting the right property with the right
information).  I think it may be #ifdef'ed out in Xt by default, though
one could argue it should probably be on by default.

I believe this is important at this time: we don't want N different ways
to do this, it's been done before, it is easy, and the sooner it is in
the toolkit, the fewer legacy applications will have to be updated to
add support.

And this support is generic for controlling applications, extending to
input methods other than speech (buttons, stylus, etc.), wherever some
application may need to send input events to other applications and
know where to send them.  Bob's application should still be around
somewhere.  He had to kludge things with a PC to get it all to work,
but now that PCs are fast, Linux sells in the millions, and there is no
love lost between the likes of IBM and Microsoft, let's lower the
inertial barrier to speech recognition and other input technologies.
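
As a rough sketch of the controlling side, assuming the WM_CLASS
property above (find_by_class is a made-up name; a real helper would
also have to cope with reparenting window managers, and would probably
inject events with the XTEST extension rather than XSendEvent, since
many clients ignore events flagged as synthetic):

    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <string.h>

    /* Walk the window tree looking for a window whose WM_CLASS class
     * matches; returns None if nothing matches. */
    static Window find_by_class(Display *dpy, Window w, const char *klass)
    {
        XClassHint hint;
        Window root, parent, *kids, found = None;
        unsigned int nkids, i;

        if (XGetClassHint(dpy, w, &hint)) {
            int match = hint.res_class && strcmp(hint.res_class, klass) == 0;
            if (hint.res_name)  XFree(hint.res_name);
            if (hint.res_class) XFree(hint.res_class);
            if (match)
                return w;
        }
        if (XQueryTree(dpy, w, &root, &parent, &kids, &nkids)) {
            for (i = 0; i < nkids && found == None; i++)
                found = find_by_class(dpy, kids[i], klass);
            if (kids)
                XFree(kids);
        }
        return found;
    }

Once the right window is found, the recognizer just aims its
synthesized key and button events at it; that is all such a helper
needs from the toolkit.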

I'm very interested in seeing this happen; I'd like to see X/GNOME/Linux
on PDAs without keyboards and mice, and they need this hook.  Said PDA
(Itsy) even has enough speed (200 MHz) that it might be able to do
recognition someday.  Having to use Xt apps would be a bummer...

                                - Jim Gettys


- -- 
        FAQ: Frequently-Asked Questions at http://www.gnome.org/gnomefaq
         To unsubscribe: mail gnome-gui-list-request@gnome.org with 
                       "unsubscribe" as the Subject.
------- End of forwarded message -------


