Build a speech generating device



Hello,

A close relative of mine suffers from ALS, a motor neuron disease that
leads to the loss of speech as well as general immobility. Since only a
few speech generating devices (SGDs) are available on the market, and
those are as limited as they are expensive, I plan to build a custom SGD
using a tablet computer as a basis and available free software
components.

Since I am not (yet) very familiar with the field of accessibility
technology, I hope to get some hints from the community about which
components are best suited for this project.

The primary components I have identified as necessary are:

      * a virtual keyboard with word prediction 
      * pre-defined text snippets 
      * a speech synthesizer backend (for German language output) 
      * a frontend to the speech synthesizer

For the speech synthesizer, I currently plan to use OpenMary [1], since
its output quality is significantly better than espeak's, even when
espeak is combined with mbrola voices.
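
To make this concrete, here is a rough Python sketch of how I imagine
talking to a locally running OpenMary server. The default port (59125)
and the /process parameters are what I gathered from the MARY
documentation, so they may need adjusting for other versions; the
temporary file path and the use of aplay are just my own placeholders:

#!/usr/bin/env python3
# Rough sketch only: assumes a local OpenMary/MARY TTS server on its
# default port (59125) with the documented /process HTTP interface.
# The parameter names below come from that documentation and may
# differ between versions.
import subprocess
import urllib.parse
import urllib.request

MARY_URL = "http://localhost:59125/process"

def speak(text, locale="de"):
    """Ask the MARY server for a WAV rendering of `text` and play it."""
    params = urllib.parse.urlencode({
        "INPUT_TEXT": text,
        "INPUT_TYPE": "TEXT",
        "OUTPUT_TYPE": "AUDIO",
        "AUDIO": "WAVE_FILE",
        "LOCALE": locale,
    })
    wav_data = urllib.request.urlopen(MARY_URL + "?" + params).read()
    with open("/tmp/sgd-utterance.wav", "wb") as wav_file:
        wav_file.write(wav_data)
    # aplay is just the simplest thing at hand; GStreamer would be nicer.
    subprocess.call(["aplay", "/tmp/sgd-utterance.wav"])

if __name__ == "__main__":
    speak("Guten Morgen, wie geht es dir?")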

For the speech synthesizer frontend, I plan either to adapt gespeaker,
adding OpenMary support and as-you-type playback, or to build a custom
solution.

The pre-defined text snippets could be built either into the virtual
keyboard (as Onboard does) or into the speech synthesizer frontend.
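
To illustrate what I have in mind for the frontend and the snippets,
here is a very rough GTK 3 (PyGObject) sketch that reuses the speak()
helper from the snippet above. The phrases are placeholders only; a real
frontend would also need as-you-type playback and proper snippet
management:

# Very rough frontend sketch (GTK 3 via PyGObject). Pressing Enter
# speaks the typed text; the buttons speak pre-defined snippets.
# speak() is the helper from the previous sketch; paste it into the
# same file, or replace it with `speak = print` for a quick UI test.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

SNIPPETS = ["Ja.", "Nein.", "Ich brauche bitte Hilfe."]  # placeholders

def on_entry_activate(entry):
    speak(entry.get_text())
    entry.set_text("")

window = Gtk.Window(title="SGD frontend sketch")
box = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=6)

entry = Gtk.Entry()
entry.connect("activate", on_entry_activate)   # Enter speaks the text
box.pack_start(entry, False, False, 0)

for phrase in SNIPPETS:
    button = Gtk.Button(label=phrase)
    button.connect("clicked", lambda _btn, p=phrase: speak(p))
    box.pack_start(button, False, False, 0)

window.add(box)
window.connect("destroy", Gtk.main_quit)
window.show_all()
Gtk.main()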

So the area where I currently need the most advice is virtual keyboards.
I have tried to evaluate the available options; there is quite a lot of
information scattered across the internet, but I found it difficult to
get a clear picture of the usability and state of the different
projects.

So far, I have found the following projects that might be worth
evaluating:

      * Caribou, as the future GNOME virtual keyboard of choice. There
        has been an effort to integrate presage as a prediction engine,
        but I am not sure about the state of this effort (a small sketch
        of what I would expect from such an engine follows after this
        list). 
      * Onboard, which is the current Ubuntu solution and for which a
        branch with prediction support exists. But here, too, I am not
        sure about the state of this branch. 
      * Maliit, as the MeeGo solution, which seems to be quite solid,
        but prediction support would have to be added (and might also
        use presage). 
      * OpenAdaptxt, which has a prediction engine at its core, but I am
        unsure whether it is already a usable solution or an evolving
        project that merely provides a basis for future development. 
      * Dasher, as a completely different approach. It might be a good
        replacement for a regular virtual keyboard once mobility
        decreases to a level where a regular keyboard is hard to handle.
        But it does not seem to be very well maintained, it is quite
        unstable, and I did not manage to get all of its functionality
        working.
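
To clarify what I mean by prediction support: this is not presage, just
a toy frequency-based completer to illustrate the interface I would
expect the keyboard to get from a prediction engine (letters typed so
far in, ranked completions out, ideally learned from the user's own
texts):

# Toy word completer, only to illustrate the interface I would expect
# from a prediction engine such as presage: prefix in, ranked
# candidates out. A real engine would use n-gram statistics and learn
# continuously from the user's texts.
from collections import Counter

class ToyPredictor:
    def __init__(self, corpus_words):
        self.frequencies = Counter(w.lower() for w in corpus_words)

    def predict(self, prefix, max_suggestions=3):
        prefix = prefix.lower()
        candidates = [w for w in self.frequencies if w.startswith(prefix)]
        candidates.sort(key=lambda w: self.frequencies[w], reverse=True)
        return candidates[:max_suggestions]

predictor = ToyPredictor("ich habe heute Durst ich habe Hunger".split())
print(predictor.predict("ha"))   # -> ['habe']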

I would be glad if you could share your impressions of which of the
available solutions is the most usable and best matches my
requirements.

Of course, I'm also grateful for any hints and advice on the topic in
general, on experiences with SGDs, and so on.

Thanks,
Frederik


P.S.: I have cross-posted this message to gnome-accessibility-list and
ubuntu-accessibility, since I am interested in feedback from both
communities. I hope this is okay. I am not interested in project
politics; I am just looking for an available solution that helps me
achieve what I am aiming at.


[1] http://mary.dfki.de/


