Re: Audio Server Strategy Idea Braindump


I didn't reply yet because a lot of these issues are still burning in
the back of my head.

But ...

> Here's the part that I don't like. I know about GNOME API stability in
> the D&DP and all that mess. But this is not the solution, at least not a
> long-term one. You've just said you want to drop ESD. Then drop it!
> Everyone wants to drop it anyway, why would we keep it in. Because it's
> easy? Nah.

The esound API doesn't need to be dropped, IMO, just extended.  For a
lot of applications, the esound API is fine.  For the rest, the
API can be extended to give them what they need.

> This basically means that you stick to some of the disadvantages of ESD:
> * it provides a far-too-simple API for advanced audio handling.
>    - It has no knowledge over multi-channel audio apart from stereo
>      (5:1, anyone?).

Can be added.

>    - It provides no API for getting latency or queue timings, so any 
>      form of exact A/V sync handling (like we love to do in GStreamer)
>      can only be accomplished using ugly hacks. Even then, only
>      marginally.

Can be added.

> * ESD actually loves to convert everything you send to it to stereo, 
>   44,1kHz, 16bit. You'd probably fix this, but you didn't explicitely
>   mention that so I thought I'd mention this. Just so you know. :). 

See the command-line options; it can handle more formats than that.  For
esound itself, as for any sound server, it makes perfect sense to stick
to one output format and convert incoming audio to it.
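
To illustrate "convert incoming audio to one output format": here is a
minimal sketch (not esound's actual code) of the kind of conversion a
server does when a client's sample rate differs from the output rate.
A real server would use a proper band-limited converter, not linear
interpolation; `resample_linear` is a hypothetical name.

```c
#include <stddef.h>
#include <stdint.h>

/* Naive linear-interpolation resampler for mono 16-bit PCM.
 * Returns the number of output samples written (at most out_cap). */
static size_t
resample_linear (const int16_t *in, size_t in_len, int in_rate,
                 int16_t *out, size_t out_cap, int out_rate)
{
  size_t out_len = (size_t) ((double) in_len * out_rate / in_rate);
  if (out_len > out_cap)
    out_len = out_cap;
  for (size_t i = 0; i < out_len; i++) {
    /* Position of this output sample in input-sample coordinates. */
    double pos = (double) i * in_rate / out_rate;
    size_t j = (size_t) pos;
    double frac = pos - j;
    int16_t a = in[j];
    int16_t b = (j + 1 < in_len) ? in[j + 1] : in[j];
    out[i] = (int16_t) (a + frac * (b - a));
  }
  return out_len;
}
```
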

Basically, here's what I feel:

a) esound has a bunch of valuable properties for a desktop sound
server.  These are properties that need to be kept, whatever the end
solution.  These include:
- caching of samples (so, for a complete desktop, where each application
uses the same "yes" and "no" sound, this can lower delay between
clicking and hearing, and reduce memory usage)
- network transparency
- automatic shutdown if wanted
- software mixing
- the API already in place
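
The sample-caching point is worth spelling out: the win is that a named
event sound crosses the wire and is stored only once, however many
applications trigger it.  A hypothetical server-side sketch of the idea
(the names `cache_sample` and `sample_getid` are mine, not esound's,
though esound's real API is similar in spirit):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Server-side sample cache: clients register an event sound once under
 * a name; later requests just send the name, so the PCM data is
 * transferred and stored only once. */
#define MAX_SAMPLES 32

typedef struct {
  char     name[64];
  int16_t *data;
  size_t   frames;
} cached_sample;

static cached_sample cache[MAX_SAMPLES];
static int n_cached = 0;

/* Store a copy of the sample; returns an id, or -1 if the cache is full. */
static int
cache_sample (const char *name, const int16_t *data, size_t frames)
{
  if (n_cached >= MAX_SAMPLES)
    return -1;
  cached_sample *s = &cache[n_cached];
  strncpy (s->name, name, sizeof (s->name) - 1);
  s->name[sizeof (s->name) - 1] = '\0';
  s->data = malloc (frames * sizeof (int16_t));
  memcpy (s->data, data, frames * sizeof (int16_t));
  s->frames = frames;
  return n_cached++;
}

/* Look up a previously cached sample by name; -1 if unknown. */
static int
sample_getid (const char *name)
{
  for (int i = 0; i < n_cached; i++)
    if (strcmp (cache[i].name, name) == 0)
      return i;
  return -1;
}
```
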

b) The things we want to fix in esound:
- being able to query for playback delay (so you can synchronize with
video playback)
- being able to synchronize more than one audio stream
- configure-time backend choosing
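
The delay query itself is simple arithmetic once the server exposes how
much audio it still has queued for a stream; the hard part is adding the
protocol round-trip.  A sketch of the computation (the function name is
hypothetical, not a proposed API):

```c
#include <stddef.h>

/* How far behind real time a client is, in milliseconds, given how
 * many bytes the server still has queued for its stream.
 * bytes_per_frame is channels * bytes-per-sample, e.g. 4 for
 * 16-bit stereo. */
static unsigned int
playback_delay_ms (size_t queued_bytes, int rate, int bytes_per_frame)
{
  size_t frames = queued_bytes / bytes_per_frame;
  return (unsigned int) (frames * 1000 / rate);
}
```

With this, an A/V player can delay its video output by the reported
amount instead of guessing.
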

c) The things we might want to add to esound:
- the ability to throw compressed sound at it, both for caching sounds
and for network transparency

Basically, my POV is that the brokenness in esound, and the things that
people complain about, can be fixed by adding API.

Now, as for an end solution, I see two possibilities.

In both options, esound would stay ABI-compatible on the library level,
i.e. all old programs would be able to keep working.  Also, in both
options, esound would get new pieces of API to fix the problems it has
now, i.e. synchronization and delay/latency querying.

a) esound gets a rewrite and uses GStreamer internally for resampling,
mixing, and output; this buys it the ability to take compressed
files/streams as input

b) esound gets a rewrite, with API being added and maybe a pluggable
output backend - or using libao by default - but no new dependencies
(except for maybe a decent sample rate converter, ugh).  The emphasis
would be on weeding out the crap, and moving all the
platform/hardware/output code out of the program.
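
"Pluggable output backend" would boil down to something like the
following sketch: the server core talks to the output layer only
through a small vtable, so OSS/ALSA/Sun/libao code can live in
separate, selectable modules.  All names here are hypothetical, and
the "null" backend just discards audio as a stand-in.

```c
#include <stddef.h>

/* The only interface the server core sees for audio output. */
typedef struct audio_backend {
  const char *name;
  int    (*open)  (int rate, int channels);
  size_t (*write) (const void *buf, size_t len);
  void   (*close) (void);
} audio_backend;

/* A "null" backend that accepts and discards everything. */
static int    null_open  (int rate, int channels)
  { (void) rate; (void) channels; return 0; }
static size_t null_write (const void *buf, size_t len)
  { (void) buf; return len; }
static void   null_close (void) { }

static const audio_backend null_backend = {
  "null", null_open, null_write, null_close
};

/* Server core pushes mixed audio without knowing which backend it is. */
static size_t
server_push (const audio_backend *be, const void *buf, size_t len)
{
  return be->write (buf, len);
}
```
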


Dave/Dina : future TV today ! -
<-*- thomas (dot) apestaart (dot) org -*->
I know everybody here wants you
I know everybody here thinks he needs you
And I'll be waiting right here just to show you
how our love will blow it all away
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! -
