Re: Call for projects for grants



Phew - thanks for the heads up on this, Peter. I wasn't on the foundation list until a few hours ago, so I missed David's original e-mail.

 * Perfect a free software eye tracker program like OpenGazer (needs a
*lot* of work to be usable & stable)

There's also MouseTrap, which tackles the simpler problem of head tracking. It was a successful "GNOME Outreach Program: Accessibility" project and is also being continued as an HFOSS project, so it might already have enough coverage. It would be nice to see the OpenGazer and MouseTrap folks collaborate on code or algorithms through the open source community we have.

 * Open voices - doing quality synthetic voices is a lot of work, major
research project & lots of time in a sound studio with specialised
actors. Funding one (or several) in various languages would be useful.

There are a few different ways to tackle speech synthesis. The one you allude to above is generally concatenative synthesis, which is what you get with systems such as Festival, Cepstral, etc. The voice data tends to be very large, sometimes hundreds of MB for the more natural sounding voices. While these voices sound very natural at normal speaking rates, they tend to become less intelligible at the higher speaking rates blind users are accustomed to. In addition, the performance of these voices (e.g., time to speak, time to stop speaking, etc.) can be poor. Finally, getting good data not only requires picking the right voice talent, but also many hours of aligning the recorded data properly so as to get good units. It's very expensive.
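To make the "units" idea concrete, here's a toy sketch of the core concatenative operation: splicing prerecorded fragments together with a short crossfade at each joint. This is not from any real engine; the unit names and sample values are made up, and real units are recorded waveforms thousands of samples long.

```python
# Toy unit inventory: in a real engine each unit is a recorded
# waveform fragment selected from hours of aligned speech; here
# they are short stand-in sample lists keyed by diphone name.
units = {
    "h-e": [0.1, 0.3, 0.2],
    "e-l": [0.2, 0.4, 0.1],
    "l-o": [0.3, 0.2, 0.0],
}

def concatenate(unit_names, crossfade=1):
    """Join units, averaging `crossfade` samples at each boundary
    to hide the splice -- the core concatenative idea."""
    out = list(units[unit_names[0]])
    for name in unit_names[1:]:
        nxt = units[name]
        # overlap the tail of the output with the head of the next unit
        for i in range(crossfade):
            out[-crossfade + i] = (out[-crossfade + i] + nxt[i]) / 2
        out.extend(nxt[crossfade:])
    return out

speech = concatenate(["h-e", "e-l", "l-o"])
```

The hard (and expensive) part in practice isn't this splice, it's building and labeling the unit inventory well enough that the joins are inaudible.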

Another kind of speech synthesis is formant synthesis, which is what you find in systems such as DECtalk, IBMTTS/ViaVoice, and eSpeak. These voices are less natural sounding, but are more intelligible at higher rates of speed. A good friend of mine, who also happens to be blind, often gives the analogy that these engines are the equivalent of fonts: they obviously aren't made by humans, but they are a lot easier to read quickly.
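For contrast, formant synthesis needs no recorded units at all, which is why these engines stay tiny and fast. A minimal sketch of the basic idea (the frequencies, bandwidth, and constants here are illustrative, not taken from any particular engine): a pitch-rate impulse train excites one resonant filter per formant.

```python
import math

def formant_vowel(formants, duration=0.05, rate=16000, f0=120):
    """Crude formant synthesis: an impulse train at the pitch (f0)
    excites a bank of damped two-pole resonators, one per formant."""
    n = int(duration * rate)
    period = int(rate / f0)
    # impulse-train stand-in for the glottal source
    source = [1.0 if i % period == 0 else 0.0 for i in range(n)]
    out = [0.0] * n
    for freq in formants:
        bw = 80.0                       # fixed bandwidth in Hz (illustrative)
        r = math.exp(-math.pi * bw / rate)
        a1 = 2 * r * math.cos(2 * math.pi * freq / rate)
        a2 = -r * r
        y1 = y2 = 0.0
        for i, x in enumerate(source):  # y[n] = x[n] + a1*y[n-1] + a2*y[n-2]
            y = x + a1 * y1 + a2 * y2
            out[i] += y
            y2, y1 = y1, y
    return out

# an /a/-like vowel from three typical formant frequencies
samples = formant_vowel([700, 1200, 2600])
```

Because everything is computed from a handful of parameters, speeding up is just shortening segment durations, which is why these voices hold up at the high rates blind users prefer.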

As Peter mentions, there is AEGIS work to expand the language coverage of eSpeak. So, a prime candidate might be to work with the eSpeak author (Jonathan Duddington) on something such as allowing pluggable vocal tract models, and then work to decrease the 'harshness' of the engine that users have complained about. This would improve the acceptability or 'listenability' of eSpeak, while the AEGIS work would improve the locale coverage. With these two things, we could potentially get a very good open source synthesis solution that is small, fast, and intelligible at high speaking rates for a lot of the world's languages.

Internet + accessibility:

 * Integrate an eBook library like Gutenberg Library or Bookshare into
the desktop - integrate well with Orca to make a book reader

The good folks at Benetech (creators of Bookshare) are using Mozilla Foundation funding to create a DAISY (and eBook) reader as an extension to Firefox. Called DAISYfox, it was demoed at the CSUN Conference on Technology and Persons with Disabilities last month, and is coming along nicely.

I helped get DAISYfox off the ground via Mozilla grant funding and was really pleased to see the results at CSUN. We've worked closely with them to make sure that it works with desktop screen readers (e.g., Orca is at the top of my list ;-)) as well as being self-voicing if needed.

After a great meeting at CSUN, we're looking to get additional funding from the Mozilla Foundation to carry the work forward; future work involves several interesting areas, such as managing your local DAISY library, integration/search for Bookshare and other content providers, etc. If that falls through, however, I strongly believe in DAISYfox and think it is a great candidate for a grant.

I also want to second Alberto's suggestion of GtkWebKit accessibility support. This effort could use more folks engaged in it...

I've set up a WebKit "hackfest" for this Thursday, which is more of an opportunity for us GNOME a11y folks to teach the WebKit folks how to use GNOME accessibility tools such as Accerciser and Orca to analyze the way WebKit exposes things to assistive technologies via AT-SPI. My goal is to "teach them how to fish" and to let them compare their implementation to the Gecko implementation in Firefox. From there, we can ask them to scope out the work to support AT-SPI and ARIA and then see how much work they think is necessary to do the job.

For other ideas:

Eitan Isaacson will be mentoring an HFOSS intern project for a braille transcription package. This project will exercise the liblouis package a fair amount and should flush out many issues and answer many questions. From there, we can take this knowledge and apply it to something such as doing braille transcription directly in OpenOffice via a to-be-written extension. Having the transcription software directly in the content generation tool is the right spot, IMO, and is something that is probably worthy of a grant.
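As a rough illustration of what a transcription package has to do at the very simplest level, here's a toy uncontracted ("grade 1") letter-to-cell mapping using the Unicode braille block. This is my own sketch, not liblouis code; liblouis exists precisely because real transcription, with contractions, punctuation, and formatting, is far harder than this.

```python
# Base dot patterns as bit masks (dot 1 = 0x01 ... dot 6 = 0x20) for a-j.
# Braille reuses these rows: k-t add dot 3, and u-z (except w) add dots 3 and 6.
_AJ = [0x01, 0x03, 0x09, 0x19, 0x11, 0x0B, 0x1B, 0x13, 0x0A, 0x1A]

CELLS = {" ": 0x00}
for i, bits in enumerate(_AJ):
    CELLS[chr(ord("a") + i)] = bits             # a-j
    CELLS[chr(ord("k") + i)] = bits | 0x04      # k-t: add dot 3
for i, ch in enumerate("uvxyz"):
    CELLS[ch] = _AJ[i] | 0x24                   # u, v, x, y, z: add dots 3 and 6
CELLS["w"] = 0x3A                               # w joined the alphabet later

def to_braille(text):
    """Uncontracted (grade 1) transcription to Unicode braille cells
    (U+2800 block, where the code point encodes the dot pattern)."""
    return "".join(chr(0x2800 + CELLS[c]) for c in text.lower())

print(to_braille("hello"))  # prints ⠓⠑⠇⠇⠕
```

Contracted (grade 2) braille replaces whole words and letter groups with shorter signs depending on context, which is exactly the rule-table problem liblouis solves and the intern project will exercise.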

The Bonobo/CORBA deprecation for GNOME 3.0 will require a rewrite or replacement of gnome-speech. Luke Yelavich from Canonical is looking for support from his management for doing the work. Without that support, we're going to be stuck. This is an interesting project with good value and a cross-desktop (e.g., GNOME and KDE) opportunity.

The current on-screen keyboard solution (GOK) could use some updating, and perhaps there's an opportunity to make it support a larger audience (e.g., mobile devices with touch screens but no keyboards) as well as to integrate it better with the desktop.

The deafness and learning disability communities also don't have great coverage in GNOME. The VizAudio work that Bryen Yunashko is mentoring for HFOSS will hopefully yield some good results for the deafness community, but more work would be welcome, I'm sure. We could do some things for the learning disability community in Orca via highlighting and trimming of speech output, etc., but there's nobody lined up to do the work right now.

Finally, please check out our already-prepared page of GNOME accessibility desires, at http://live.gnome.org/Accessibility/GetInvolved

Indeed. This is a good spot for ideas and has more than I mentioned above. The GNOME accessibility community tries to keep this up to date, but it would also be good to drop a message to gnome-accessibility-list@gnome.org before starting something or if you have questions. It's a friendly group. :-)

Will
GNOME a11y lead

