Re: GNOME 2 Sound Architecture and APIs?



On Thu, 12 Apr 2001, Bill Haneman wrote:

> The big issue for accessibility, at least, is latency.  Presumably the
> sound event framework allows sounds to be played on a near-real-time
> basis, provided they have been pre-loaded.

As far as GNOME sound events go, they are just responses to UI events
and have very lax latency requirements. Caching the sound samples in the
sound server itself is useful, but not required to implement GNOME sound
events.
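
(For the curious, the caching bit is already in libesd; the sketch below
is roughly all a sound-event implementation needs. The function names
are the libesd sample-cache calls as I remember them from esd.h, the
.wav path is made up, and error handling is omitted.)

/* Cache a UI sound in the esd server once, then trigger it on each
 * UI event without re-sending the sample data. */
#include <esd.h>

static int esd_fd   = -1;
static int click_id = -1;

void sound_events_init (void)
{
    esd_fd = esd_open_sound (NULL);      /* connect to the local esd */
    if (esd_fd < 0)
        return;

    /* upload the sample into the server's cache; returns a sample id */
    click_id = esd_file_cache (esd_fd, "gnome",
                               "/usr/share/sounds/click.wav");
}

void sound_events_click (void)
{
    if (esd_fd >= 0 && click_id >= 0)
        esd_sample_play (esd_fd, click_id);  /* tiny request, low latency */
}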

> I'm not sure if this helps or hurts text-to-speech, since usually text
> to speech sends the clip at the same time as the request to play it.
> Rather than scheduling it to be played at a certain time, the client
> wants it played "ASAP".

That is basic sound server functionality: the server presents a
virtualized sound device, which is the "right way" to do it.
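
I.e. the speech engine just opens a stream to the server and write()s
samples as it produces them, and the server mixes and plays whatever
arrives. Rough sketch - the format and rate here are only an example of
what a speech engine might use:

/* "Play it ASAP": the client gets a socket back from libesd and just
 * write()s PCM into it. */
#include <esd.h>
#include <unistd.h>

int speech_open_stream (void)
{
    esd_format_t fmt = ESD_BITS16 | ESD_MONO | ESD_STREAM | ESD_PLAY;

    /* falls back to /dev/dsp if no esd server is running */
    return esd_play_stream_fallback (fmt, 16000, NULL, "tts");
}

void speech_emit (int fd, const short *samples, size_t n_samples)
{
    write (fd, samples, n_samples * sizeof (short));
}

void speech_stop (int fd)
{
    close (fd);             /* stop emitting: just stop writing */
}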

> text-to-speech will require of the sound subsystem:
>
> latency <= 100 ms
> ability to stop sound emission
> ability to readily mix sound requests to play concurrently

I think the existing esound solution handles #3 fine, and isn't horribly
far off on #1. AFAICS #2 is mostly a problem for the application - if
you don't want sound to be output, don't output it!
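
(For completeness: if the sound is a cached sample rather than a stream,
libesd also has a stop call for it, IIRC, so even that case can be cut
off from the client side. Sketch:)

/* Cutting sound off from the client side: for a stream, stop writing
 * and close the fd; for a cached sample, ask the server to stop it. */
#include <esd.h>
#include <unistd.h>

void hush (int esd_fd, int stream_fd, int sample_id)
{
    if (stream_fd >= 0)
        close (stream_fd);                    /* stream: stop feeding it */
    if (sample_id >= 0)
        esd_sample_stop (esd_fd, sample_id);  /* cached sample */
}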

-- Elliot
The truth knocks on my door, and I say
"Go away. I'm looking for the truth"
...and so it goes away.



