Re: Polypaudio for Gnome 2.10, the next steps
- From: Seth Nickell <snickell redhat com>
- To: Mike Hearn <mike navi cx>
- Cc: desktop-devel-list gnome org
- Subject: Re: Polypaudio for Gnome 2.10, the next steps
- Date: Mon, 22 Nov 2004 19:01:21 -0500
On Mon, 2004-11-22 at 22:46 +0000, Mike Hearn wrote:
> On Mon, 22 Nov 2004 17:31:30 -0500, Havoc Pennington wrote:
> > This is passing the buck a bit but I'd like to hear from some of the
> > teams that would be using this - Helix, GStreamer, Rhythmbox, Sound
> > Juicer, whoever.
> A slightly lame question from me, but is there general consensus amongst
> multimedia developers (I'm not one) that the desktop-level sound server is
> architecturally the right way to go?
> I've been suggesting that this is a problem better solved lower in the
> OS layer-cake for a while but never really got any feedback on that, so
> I'm guessing it's either a bad idea or nobody really cares either way.
Substantially, I agree with Mike. It seems like this problem should be
solved at the Alsa layer. In fact, Alsa already has this implemented; we
just need to have it set up by default on cards that don't support
hardware mixing.
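For reference, the Alsa feature in question is the dmix plugin, which
multiplexes several PCM streams in software before they reach the
hardware device. A minimal sketch of what "set up by default" could look
like in a per-user ~/.asoundrc (device names and parameters here are
illustrative, not a tested configuration):

```
# Sketch of an ~/.asoundrc routing the default PCM through dmix
# so multiple applications can play sound concurrently in software.
pcm.!default {
    type plug
    slave.pcm "dmixer"
}

pcm.dmixer {
    type dmix
    ipc_key 1024          # arbitrary shared-memory key for the mixer
    slave {
        pcm "hw:0,0"      # first card, first device -- adjust per machine
        rate 48000
    }
}
```

With something like this in place, applications keep using the ordinary
Alsa PCM API and never need to know that mixing happens in software.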
IMO the big problem with esound wasn't the implementation (which
certainly had some issues), but the fundamental approach. To function, a
sound server has to become the sound API. If anything doesn't use the
sound server API, things break. Having a new set of sound APIs just to
add the "software mixing" feature (which was traditionally in the
hardware anyway, and still is on many cards) seems silly. Software
mixing should be an internal implementation detail that doesn't affect
the sound APIs.
That mixing is no longer done in hardware but has been punted to the
host system on low-cost cards feels like a driver-layer issue, not
something that should be solved by an intermediary sound API. To give a
crude analogy, this feels like proposing that the host-CPU bits of
Winmodem support should be handled in the application layer.
People will undoubtedly raise "remote terminal" issues as a reason to go
with a sound server approach. While we *should* make sure terminal
services work, they aren't the primary target, and we shouldn't be
centering the design around them. It seems like the right layer to
attack remote audio issues is in gstreamer anyway while the data is
still compressed and can be more readily transported across the network
(this approach could possibly also be helpful in maintaining audio/video
sync across the network).
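To sketch the idea, a GStreamer pipeline could ship the still-encoded
stream over the network and decode only on the terminal's side. The
element names below are illustrative of the approach rather than a
tested recipe:

```
# Sketch only: on the host, send the compressed Ogg stream as-is
# over TCP rather than decoding locally.
gst-launch filesrc location=song.ogg ! tcpserversink port=5000

# On the remote terminal, receive, demux, and decode near the speakers:
gst-launch tcpclientsrc host=apphost port=5000 \
    ! oggdemux ! vorbisdec ! audioconvert ! alsasink
```

The point is that only a few hundred kilobits per second of Vorbis data
cross the network, instead of uncompressed PCM, and decode timing stays
local to the machine doing playback.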