Re: [Ekiga-list] PTLIB alsa plugin status
- From: Alec Leamas <leamas alec gmail com>
- To: Ekiga mailing list <ekiga-list gnome org>
- Subject: Re: [Ekiga-list] PTLIB alsa plugin status
- Date: Fri, 27 Feb 2009 00:30:21 +0100
Derek Smithies wrote:
> Hi,
> On Thu, 26 Feb 2009, Alec Leamas wrote:
>> Just to sort this out, trying to understand the overall requirements.
> Which is a very reasonable thing to do, and it is a good question.
>> And with the idea that using threads is perfectly reasonable in this
>> context :-)
> Excellent. This idea is very reasonable.
>> - Let's focus on the playback case, leaving any read aside (which
>> refers to a different alsa device).
> Good idea.
>> - This should mean that while one thread (A) is closing its
>> playback device, another thread (B) starts writing to it.
> Yes, thread B is writing audio to the playback device. Thread B is
> collecting the data off the jitter buffer,
> decoding the audio using the specified codec,
> sending the data to the playback device.
> Thread B is stopped by a bad write to the playback device. Typically,
> a bad write to the playback device is caused by the playback device
> being closed.
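(So, if I read that right, thread B's loop is roughly the following; the
names are mine, just pseudo-C to have something concrete:)

for (;;) {
    frame = jitter_buffer_get();      /* collect data off the jitter buffer */
    pcm_data = codec_decode(frame);   /* decode using the specified codec */
    if (playback_write(pcm_data) < 0) /* bad write => device was closed... */
        break;                        /* ...which stops thread B */
}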
Hm... a write operation could be guaranteed to return in finite time
(using non-blocking io + snd_pcm_wait). So couldn't the close method
just mark the channel as closing, leaving the dirty work to the "writer"
thread and thus avoiding the locks? (Which, otoh, really isn't a big
issue in this scenario.) If required, opening could be handled in the
same way, I guess. This would also have the advantage that the thread
could process the jitter buffer data in parallel with the alsa output,
without needing to wait for the IO to complete. Wouldn't this give more
accurate timing? Also, avoiding blocking io is a Good Thing IMHO.
Something like (non-blocking pcm assumed):

while (1) {
    if (closing) {
        close();
        return;
    }
    chunk = process_jitter_buffer();
    ret = snd_pcm_wait(pcm, timeout_ms);     /* returns 0 on timeout */
    if (ret == 0) {
        /* close or snd_pcm_prepare()... */
    } else {
        snd_pcm_writei(pcm, chunk, nframes); /* non-blocking write */
    }
}
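(snd_pcm_wait() returns a positive value when the device can accept more
data and 0 on timeout, so the timeout branch is where a stalled or
closed device gets handled.)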
> In the gui thread, the user clicked hangup, and a close down thread is
> created. This close down thread runs independently of the gui (so it
> does not hold up the gui, and responses work ok) and makes a call to
> OpalCall::Clear() (which is a structure inside Opal) which then goes
> through all the RTP threads (including audio) and closes the devices.
> Since the Open, Close and Read/Write operations are atomic, there is
> no possibility of one happening while the other happens and breaking
> things.
> The Opal thread which does the call to device Open then goes on to
> launch the read/write threads. So read/writes don't run before the
> device is open.
Thanks. So there are never any io operations in parallel, but there are
parallel io/close operations. I think I understand. A good explanation,
BTW. I might submit a patch with documentation to the plugin base class
clarifying this, basically what we have here.
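Something like this is what I'd put in the docs (a minimal sketch of the
contract as I understand it, using a pthread mutex; the names are mine,
not Opal's):

#include <pthread.h>
#include <stdbool.h>

struct channel {
    pthread_mutex_t lock;  /* serializes Open/Close against Read/Write */
    bool open;
};

/* Write is atomic with respect to Close: it takes the same lock. */
static bool channel_write(struct channel *c /*, buffer, length, ... */)
{
    bool ok;
    pthread_mutex_lock(&c->lock);
    ok = c->open;          /* writing to a closed channel is the
                              "bad write" that stops thread B */
    /* ... if ok, hand the data to the device here ... */
    pthread_mutex_unlock(&c->lock);
    return ok;
}

static void channel_close(struct channel *c)
{
    pthread_mutex_lock(&c->lock);
    c->open = false;       /* the next write fails, thread B exits */
    pthread_mutex_unlock(&c->lock);
}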
> I don't think this aspect of the Opal design is a problem. The
> problem we are trying to address is the reason for the buffering -
> why is there a 100ms delay???
Yes. I *think* I've seen five periods hardcoded somewhere...
> Answer:
> There are two entities that I have seen which can "store" audio to
> give you a delay.
> The jitter buffer, which can store seconds of audio.
> There are status variables in the jitter buffer which indicate how
> long it is buffering audio for.
As I suspected. Thanks also for this. So basically we have network
latency, the jitter/echo cancellation buffer and the device/alsa buffer,
all in total preferably in the 150-200 ms range (say, 50 ms network +
80 ms jitter buffer + 60 ms alsa would land at 190 ms). If there is no
echo cancellation, the alsa buffer (if larger) could also act as jitter
buffer. But not if fancy things like echo cancellation should be
performed (?).
> The sound device. Opal sets the sound device (on linux) to store 2
> buffers of audio, which is (at most) 2 x 30ms.
> One of the 30ms buffers is the buffer currently being written to the
> sound device.
> The second 30ms buffer is the next buffer to be written.
> The buffering depth is set by the call to
> devices/audiodev.cpp: bool PSoundChannel_EKIGA::SetBuffers (PINDEX
> size, PINDEX count)
> size is <= 480 (480 is for a 30ms long buffer. GSM uses 20ms.)
> count is typically 2 (windows uses 3 or more)
> It "is" possible that this call is not happening at the right time. I
> doubt this, but you could verify this with a review of the logs.
> If this command was being missed, the sound device would get whatever
> value it defaults to.
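For reference, the arithmetic behind those numbers, as I understand it
(assuming 8 kHz, 16-bit mono PCM; a sketch, not Opal code):

#include <stdio.h>

int main(void)
{
    const int sample_rate = 8000;   /* samples per second (narrowband) */
    const int bytes_per_sample = 2; /* 16-bit linear PCM */
    const int frame_ms = 30;        /* one codec frame; GSM would be 20 */

    /* "size" passed to SetBuffers(): bytes in one frame-sized buffer */
    int size = sample_rate * bytes_per_sample * frame_ms / 1000;
    printf("size  = %d bytes\n", size);                   /* 480 */
    printf("depth = %d ms (count = 2)\n", 2 * frame_ms);  /* 60 ms */
    return 0;
}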
The thing is that when looking at the alsa device from the operating
system level (in the /proc filesystem) it's clear that the buffer is 5
periods * 20 ms = 100 ms (details in the thread initiated by Andrea). So
something is not as expected... Is the simple truth that the alsa
period size doesn't match the codec chunk size? But even if so, should
it matter? "Suspicious"...
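One could double-check from inside the app what /proc shows, something
like this (assumes an already-open snd_pcm_t *pcm):

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Dump the buffering the device actually negotiated. */
static void dump_buffering(snd_pcm_t *pcm)
{
    snd_pcm_hw_params_t *hw;
    snd_pcm_uframes_t period, buffer;
    unsigned int periods;
    int dir;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_current(pcm, hw);
    snd_pcm_hw_params_get_period_size(hw, &period, &dir);
    snd_pcm_hw_params_get_buffer_size(hw, &buffer);
    snd_pcm_hw_params_get_periods(hw, &periods, &dir);
    printf("period = %lu frames, buffer = %lu frames, periods = %u\n",
           (unsigned long)period, (unsigned long)buffer, periods);
}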
> Derek.
--a