Re: [linux-audio-dev] CSL-0.1.2 Release
- From: Paul Davis <pbd Op Net>
- To: Stefan Westerfeld <stefan space twc de>
- Cc: gnome-hackers gnome org, xdg-list freedesktop org, kde-multimedia kde org, linux-audio-dev ginette musique umontreal ca
- Subject: Re: [linux-audio-dev] CSL-0.1.2 Release
- Date: Sun, 10 Jun 2001 17:09:18 -0400
>I think you are not seeing an important point here:
>
> * CSL is meant for some applications (not all applications)
> * VST-like APIs are an entirely different thing
>
>I do agree with you that for music-style low-latency applications, you want
>a callback-driven API with a common data format, such as the flow system
>aRts uses inside the sound server.
the problem with this approach is that to continue supporting the
HAL-type API that ALSA/OSS/CSL etc. represent is to miss the chance
to develop an audio API that reflects the understanding we and others
have gained over the years. Apple's recent adoption of the callback
model suggests to me that the recognition that this model is not just
for plugin APIs anymore is about to become widespread.
so, in short, i don't think that VST-like APIs are an entirely
different thing, if by that you mean "VST-like APIs are not for applications".
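
to make the contrast concrete, here's roughly the shape i mean by a
callback-driven client. every name in this sketch is made up for
illustration; it is not the CSL, aRts or ALSA API:

/* self-contained sketch of the callback model: the engine owns the
   timing and *pulls* audio from the application, instead of the
   application pushing samples at the device via write(). */
#include <stdio.h>
#include <math.h>

#define NFRAMES 256        /* frames per engine cycle */
#define SR      48000.0    /* sample rate */

typedef int (*process_cb_t)(float *out, unsigned long nframes, void *arg);

/* application side: fill the buffer the engine hands us. a trivial
   440 Hz "softsynth"; the app never decides *when* this runs, only
   *what* goes in the buffer. */
static int process(float *out, unsigned long nframes, void *arg)
{
	double *phase = (double *) arg;
	unsigned long i;
	for (i = 0; i < nframes; i++) {
		out[i] = (float) sin(*phase);
		*phase += 2.0 * M_PI * 440.0 / SR;
	}
	return 0;
}

/* engine side: in a real system this loop runs in a real-time thread
   inside the server or driver, woken by the audio hardware, and
   process() is the callback the client registered. */
int main(void)
{
	float buf[NFRAMES];
	double phase = 0.0;
	process_cb_t cb = process;
	int cycle;

	for (cycle = 0; cycle < 4; cycle++) {
		cb(buf, NFRAMES, &phase);
		/* the engine would hand buf to the hardware here */
		printf("cycle %d: buf[0] = %f\n", cycle, buf[0]);
	}
	return 0;
}

an mpg123-style app inverts this: it calls write() whenever it happens
to have data, and the device has to cope. converting between the two
models is exactly the refactoring cost i get into below.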
> Using CSL and artsd, both can peacefully
>coexist. You can
>
>1. use your callback driven low latency effects, synthesis, hd-recording with
>the aRts flowsystem inside the artsd server process
but this requires a lot of work from the application developer, who
has to build a customized (G)UI, since the GUI doesn't run in the
artsd server process.
by contrast, the multi-process model we've talked about for LAAGA
allows application developers, who already know that 80-90% of the
code for their (G)UI-ed audio applications is for the GUI and not the
audio, to continue working with the kinds of designs they use already.
if you had to alter any of the "real time" apps in the list i posted
here a couple of days ago to make them either:
* provide a plugin to run inside artsd, and have the GUI and the
plugin communicate somehow
or:
* change the internal audio model to use a callback-driven system
i'd wager that in nearly every case, the second option is vastly
simpler. the one advantage i can see to the forced process separation
of the audio part and the GUI part is that it encourages cleaner
design. however, it also creates a lot of work, which in turn
encourages people to simply not use the API at all.
writing MVC code where the Model, View and Controller may live in
different processes, and where the Model must be notified of changes
in ways that do not affect its real-time operation: this is
highly non-trivial stuff. i don't know of any example systems that can
do this; by contrast, i can think of numerous systems that use the
callback model; LAAGA's job right now, as i see it, is to extend the
callback model to multiprocess support, so that we can leverage the
possibilities of the linux kernel and develop completely independent
applications that can run in sample-sync and share data with each
other. personally, the thought of being able to run Muse and Ardour in
complete sync with each other makes me very impatient ...
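
to be concrete about "notified in ways that do not affect real-time
operation": the standard trick is a single-reader/single-writer
lock-free ring buffer carrying small command records from the GUI to
the Model. a bare-bones sketch follows; the names are made up, for
cross-process use the struct would live in shared memory (mmap), and
on SMP machines you'd additionally need memory barriers that i've
elided here:

#define RING_SIZE 64			/* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

typedef struct {
	int   param;			/* which Model parameter to change */
	float value;			/* its new value */
} model_cmd_t;

typedef struct {
	model_cmd_t buf[RING_SIZE];
	volatile unsigned write_idx;	/* advanced only by the GUI side */
	volatile unsigned read_idx;	/* advanced only by the RT side */
} cmd_ring_t;

/* GUI/Controller side: never blocks, never takes a lock; if the ring
   is full the command is dropped and the GUI can retry later */
static int ring_push(cmd_ring_t *r, const model_cmd_t *c)
{
	unsigned w = r->write_idx;
	unsigned next = (w + 1) & RING_MASK;
	if (next == r->read_idx)
		return -1;		/* full */
	r->buf[w] = *c;
	r->write_idx = next;		/* publish only after the data is in */
	return 0;
}

/* Model/RT side: called at the top of each process() cycle; no locks,
   no malloc, strictly bounded work */
static int ring_pop(cmd_ring_t *r, model_cmd_t *c)
{
	unsigned rd = r->read_idx;
	if (rd == r->write_idx)
		return -1;		/* empty */
	*c = r->buf[rd];
	r->read_idx = (rd + 1) & RING_MASK;
	return 0;
}

and that's only half the problem: the View also needs to hear about
changes the Model makes on its own, and that return path has the same
constraints.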
>2. use legacy or very simple applications such as quake, mpg123,
>window manager sounds,... outside the server, and just inject their
>input into the flow system via CSL
the ALSA PCM "shm" plugin will do this very easily for us.
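
the stanza involved is roughly the following (i'm going from memory of
alsa-lib's shm plugin, so treat the key names as approximate and check
pcm_shm before relying on them; a matching server definition for
aserver is needed as well):

# ~/.asoundrc -- approximate, verify against alsa-lib's shm plugin docs
pcm.shared {
	type shm		# shared-memory PCM, talks to aserver
	server my_server	# refers to a server.my_server section
	pcm "hw:0,0"		# PCM name as seen on the server side
}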
also, everything you've told us about aRts as a sound server suggests
that it's well designed to handle the "legacy" applications, but not
well designed right now to handle low-latency applications. this is
worrying, since it's clear from the discussions we've had here
that the move from supporting "player"-type apps to supporting
real-time softsynths, HDR systems, sequencers and so on is not just a
matter of recoding a few internal details.
>I think neither applications of type 1. nor applications of type 2. will cease
>to exist anytime soon, so having both types of APIs available is definitely
>good.
I agree. The question is merely what the APIs should actually look
like, and how much work it's worth doing to support, let alone
encourage, the development of audio apps that use the HAL-type model.
--p