Re: Polypaudio for Gnome 2.10, the next steps



On Mon, 22 Nov 2004 19:20:27 -0500, Seth Nickell wrote:
>> Actually grabbing the audio at the gstreamer layer has the problem that
>> only gstreamer apps can do remote audio which would not make the LTSP
>> people very happy. 
> 
> As opposed to streaming uncompressed audio? That seems like a bad idea
> to me...
 
I guess my proposal email wasn't clear enough. What'd happen is this:

Each app starts up and begins playing audio. The alsalib configuration is
a chain of plugins, a bit like gstreamer except much less advanced. The
chain looks like this:

app -> tee -> [dmix] -> plughw

app is the program
tee is the mythical, as-yet-unwritten plugin described below
[dmix] means dmix/asym is inserted only if the hardware needs software
mixing, otherwise it's left out
plughw is the alsa plugin that writes audio out to the sound card
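
In ~/.asoundrc terms the chain might be wired up roughly like this. This is
only a sketch: the "tee" plugin doesn't exist yet so its stanza is invented,
and the dmix block is the usual optional software-mixing step:

```text
# Hypothetical wiring -- the "tee" plugin is not written yet
pcm.!default {
    type tee                # invented plugin: pass through, maybe divert
    slave.pcm "dmixed"      # next link in the chain
}

pcm.dmixed {
    type dmix               # only needed if the hardware can't mix
    ipc_key 1024
    slave.pcm "plughw:0,0"  # finally out to the card
}
```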

The tee plugin (name may already be taken, didn't check!) would do nothing
in the common case; when the $SPEAKER environment variable is set, it would
instead write the audio to a socket like /tmp/audio-mike/12345, where
12345 is the pid. This is uncompressed raw audio from the app.

A gstreamer app can monitor this directory using FAM; when a new socket
appears, it connects to it and starts pulling in the raw audio. It then
mixes the streams from the different apps together, compresses the result
using whatever gst pipeline you like, and forwards it on to the remote
terminal.

Or you could save the audio of a specific app to a file:

  socketsrc location=/tmp/audio-mike/`pidof foobar-app` ! oggenc !
  vorbismux ! filesink

I'm sure a GStreamer guru will whack me now for totally making up a
pipeline and element names, but you get the idea.

So the audio going over the wire would be compressed according to whatever
policy you choose. The uncompressed audio only ever moves around inside the
same computer, which should not be too bad.

thanks -mike



