Re: [linux-audio-dev] Re: AudioServer Standard?
- From: Stefan Westerfeld <stefan@space.twc.de>
- To: David Olofson <audiality@swipnet.se>
- Cc: Benno Senoner <sbenno@gardena.net>, Havoc Pennington <hp@redhat.com>, gnome-kde-list@gnome.org, linux-audio-dev@ginette.musique.umontreal.ca
- Subject: Re: [linux-audio-dev] Re: AudioServer Standard?
- Date: Fri, 24 Sep 1999 01:46:31 +0200
Hi!
On Thu, Sep 23, 1999 at 11:20:36PM +0200, David Olofson wrote:
> On Thu, 23 Sep 1999, Benno Senoner wrote:
> [...]
> > Later, when Audiality is fully functional, we could provide a
> > compatibility layer for esd- or aRts-enabled sound apps.
> > Right, David?
>
> The basic "compatibility layer" would be the /dev/dsp emulation, which
> Audiality will have as well. Perhaps aRts could run as a client to
> Audiality as a quick'n'dirty solution to get it all integrated? That way,
> applications expecting aRts would still work, and the whole system could
> still be integrated without having to port everything to one of the engines.
Not really. aRts requires really tight timing for tasks like full-duplex
audio processing, hard disk recording, realtime MIDI synthesis (I don't
want to wait more than a few ms after pressing a key on my MIDI keyboard),
etc.
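(To put numbers on "a few ms": assuming 44.1 kHz and two 128-frame
fragments, the buffering alone already costs 2 * 128 / 44100 = ~5.8 ms,
before any extra server hop adds buffers of its own.)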
For that reason, hooking aRts up to another audio server engine (such as
esd) is probably impossible. You could build a "link-me-in" /dev/dsp
realtime virtualization server, which then loads both Audiality and aRts
as shared libs. Alternatively, you could probably build Audiality as an
aRts plugin, or aRts as an Audiality plugin.
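For reference, the usual trick behind such a /dev/dsp layer is to
interpose open(2) with LD_PRELOAD (esddsp works this way). A minimal
sketch; connect_to_engine() is a made-up placeholder, not real Audiality
or aRts code:

    /*
     * dspshim.cpp -- LD_PRELOAD sketch of a /dev/dsp emulation layer.
     *
     *   g++ -shared -fPIC dspshim.cpp -o dspshim.so -ldl
     *   LD_PRELOAD=./dspshim.so some_oss_app
     */
    #include <cstdarg>
    #include <cstring>
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <sys/socket.h>

    /* Hypothetical placeholder: a real shim would connect to the
     * engine's socket and negotiate format/fragment settings. */
    static int connect_to_engine()
    {
        return socket(AF_UNIX, SOCK_STREAM, 0);
    }

    extern "C" int open(const char *path, int flags, ...)
    {
        /* Look up the real open(2) so everything else passes through. */
        typedef int (*open_fn)(const char *, int, ...);
        static open_fn real_open = (open_fn) dlsym(RTLD_NEXT, "open");

        mode_t mode = 0;
        if (flags & O_CREAT) {      /* a mode only accompanies O_CREAT */
            va_list ap;
            va_start(ap, flags);
            mode = (mode_t) va_arg(ap, int);
            va_end(ap);
        }

        if (strcmp(path, "/dev/dsp") == 0)
            return connect_to_engine();  /* divert the app to the mixer */

        return real_open(path, flags, mode);
    }

A complete shim would of course also have to intercept ioctl(), write()
and close() on the returned fd - that is where the real work is.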
I think all those ideas sound strange, and they only show that it makes no
sense to build two projects with exactly the same goal, purpose and
technology. What is the point of having two fully featured realtime
multimedia processing engines running? Normally, people run one Linux
kernel and one X server, for instance.
> [...]
> > It shares many concepts with Audiality, but I don't know anything about
> > the design.
>
> Indeed, and I'll look more closely at the aRts code. Perhaps there is time
> and work to be saved by reusing parts of aRts in the new engine. (Which
> might be Audiality, or something else - my efforts so far have been in
> areas other than hacking actual engine code.) It depends on the coding
> style, and on how well aRts fits the new plug-in API currently in the
> design stage.
>
> [...]
> > > + network transparency
> > >
> > > Since aRts uses CORBA for almost everything, it is network transparent.
> > > On the other hand, some audio server functionality implemented in aRts
> > > uses TCP to transfer the signal, for instance from your external
> > > (non-aRts-module-like) mp3 player to aRts. This TCP stuff is also
> > > network transparent.
> >
> > Network transparency is OK, but we need to separate the two cases:
> > if source and destination are on the same machine, use IPC/shmem to
> > exchange data; if not, use sockets or the like.
> > But CORBA seems a bit slow to me for a high-performance event system.
>
> I wouldn't worry much about the actual implementation. However, what do
> the APIs look like, WRT dependencies on other APIs and standards? I'd
> rather stick with very low level stuff, especially in the plug-in API.
> Partly for performance reasons, partly because it makes it easier to port
> to exotic environments like DSP farms, clusters and real-time kernels
> like RTLinux.
>
> Also, I'm starting to prefer C to C++, at least for this kind of stuff...
> Perhaps I've read too much Linux kernel code? ;-) (And I was an asm
> die-hard for a few years, back when I hacked on the Amiga...)
The aRts plugins are plain C++ - CORBA would be much too slow for that.
It's just that things like session management, distribution, flow graphs,
etc. are handled over a CORBA interface. That way you can, for instance,
build flow graphs with a visual editor, while the synthesis server itself
isn't linked against Qt, X11 and the like.
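To make the split concrete: a plugin in such a scheme is just a C++ class
with a block-processing method. The names below are illustrative (loosely
modelled on aRts, not its exact API); the scheduler wires up the port
buffers and calls calculateBlock() once per audio block, in dependency
order, and no CORBA call ever sits in that loop:

    // Illustrative sketch, not the real aRts module API.
    class SynthModule {
    public:
        virtual ~SynthModule() {}
        // Called once per block by the flow graph scheduler, at audio rate.
        virtual void calculateBlock(unsigned long samples) = 0;
    };

    class Gain : public SynthModule {
    public:
        const float *in;   // port buffers, assigned by the scheduler
        float *out;        // before calculateBlock() is invoked
        float level;

        void calculateBlock(unsigned long samples)
        {
            for (unsigned long i = 0; i < samples; i++)
                out[i] = level * in[i];
        }
    };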
> [...]
Of course I don't know what exactly you want to do in Audiality and how,
but from the discussion and from the webpage it seems to me that your
goals are very close to the goals of aRts. On the other hand, aRts has
been under development for about two years, and it already has many of the
things you say you want to achieve. And more.
For instance, it will integrate really nicely into the next version of
kooBase (which will be called Brahms), and it has really decent flow graph
management, audio server functionality (which is why this topic came up),
etc.
Of course it isn't perfect, and there are quite a few things that could be
done a lot better. But it is a program you can install, play with, change
a few lines of, recompile, and immediately see the effect. If you want a
new plugin API and a new signal flow scheduler, you'll have to change a
bit more, but you can still keep the whole framework, GUI stuff, flow
graph stuff, etc., which doesn't care about that kind of thing.
So why don't you consider just working on aRts? It's open source, and your
ideas and code are always welcome. I think Linux should try to solve each
problem once, right, and together, and then build on what is there. That
also seems to have happened for desktops: we now have GNOME and KDE, most
people simply joined one of the two projects, and so we have two very nice
solutions there.
That's what should happen for audio, too.
Cu... Stefan
--
-* Stefan Westerfeld, stefan@space.twc.de (PGP!), Hamburg/Germany
KDE Developer, project infos at http://space.twc.de/~stefan/kde *-