Re: aRts and Gnome



   Hi!

On Sun, Nov 14, 1999 at 10:38:58PM -0500, Havoc Pennington wrote:
> My understanding of the problems with using the audio device directly is:
> 
>  - only one app can use it at a time
>  - it doesn't work over the network
>  - it is platform-specific
> 
> I know nothing about sound - but can you give a brief summary of how aRts
> addresses these issues; also, what is the benefit of the additional stuff
> in aRts? Why shouldn't we have a simple server that does the above stuff,
> and then aRts is one client of that server? In short, what does the
> additional complexity of aRts buy us?
> 
> Honest questions - basically I'm asking for a summary of the reasons KDE
> decided to go with aRts, what problems are being addressed, etc.

To give a short and intuitive reply, I'll try the following analogy:

Suppose Linux had one device for graphics output, /dev/graphics. You could
write blocks of pixels into it, and they would be displayed on the screen.
The same restrictions as in your example would apply to it:

 - only one app can use it at a time
 - it doesn't work over the network
 - it is platform-specific

So you could start gimp, and it would run fullscreen until it terminated,
and so on. Obviously, this would be quite unsatisfying.

However, simply writing a server which e.g. allows you to switch between
the apps and is network transparent would not solve the problem
completely.

In particular, new concepts like "window" and "window management" had to be
introduced. Also, for efficiency, things like moving windows, pixmaps and
3D rendering must be done differently than "by the app".


aRts, then, is meant to become a consistent object model for streaming
multimedia components. Thus, it implements things like the object model
itself, communication between components, etc. The concept of streaming
communication, for instance, is something you don't need for X11 as much
as you do for audio.
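
To make that more concrete, here is a very rough C++ sketch of what such a
streaming component could look like. All names (AudioPort, StreamModule,
calculate) are made up for illustration - this is not the actual aRts/MCOP
interface, just the general shape of the idea:

    #include <cstddef>
    #include <vector>

    // One block of samples, produced by one component and consumed by
    // another once per scheduling cycle.
    struct AudioPort {
        std::vector<float> block;
    };

    // Base class for a streaming component: the framework moves the data
    // between connected ports, the component only transforms samples.
    class StreamModule {
    public:
        virtual ~StreamModule() {}
        virtual void calculate(const AudioPort& in, AudioPort& out) = 0;
    };

    // Example component: a simple gain (volume) effect.
    class Gain : public StreamModule {
    public:
        explicit Gain(float factor) : factor_(factor) {}
        void calculate(const AudioPort& in, AudioPort& out) {
            out.block.resize(in.block.size());
            for (std::size_t i = 0; i < in.block.size(); ++i)
                out.block[i] = in.block[i] * factor_;
        }
    private:
        float factor_;
    };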

Also, a wide range of problems arises from realtime requirements - some
designs (e.g. running a realtime effect processor in a different process
space than the process that reads/writes /dev/audio) are simply too slow.

The idea is (and that is what aRts does):

- you make a model of what a multimedia component should look like
- you write a stream-oriented communication framework (that does
  almost everything for the components)
- you perhaps write CORBA-like middleware (e.g. going with MCOP, or on
  top of CORBA) to handle communication between process spaces
- you write flow graph management, etc. (sketched below)
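
Building on the sketch above, flow graph management could look roughly like
this - again, FlowGraph and its methods are hypothetical names for
illustration, not the real aRts classes:

    #include <cstddef>
    #include <vector>

    // Hypothetical in-process flow graph: modules are chained, and each
    // scheduling cycle one audio block is pushed through the whole chain.
    // A real scheduler would handle arbitrary graphs (not just chains),
    // several stream types, and realtime deadlines.
    class FlowGraph {
    public:
        void append(StreamModule* m) { chain_.push_back(m); }

        // Run one block through the chain: each module reads the current
        // buffer and writes its output, which becomes the next input.
        void processBlock(AudioPort& buffer) {
            AudioPort tmp;
            for (std::size_t i = 0; i < chain_.size(); ++i) {
                chain_[i]->calculate(buffer, tmp);
                buffer.block.swap(tmp.block);
            }
        }
    private:
        std::vector<StreamModule*> chain_;
    };

An mp3 player, an effect processor and the soundcard output would then just
be three modules in one graph, and the server would call processBlock() in
its realtime loop.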

If all that is ready, you can easily implement something like esd on top
of it, or do more, depending on whether the user just listens to mp3s
or has higher requirements. (Composing: realtime requirement, MIDI
communication; gaming: realtime requirement; voice telephony: full
duplex + realtime - note that all these apps should be able to run at
the same time, without interfering.)


As for how aRts solves the conventional problems (the ones you gave above):

- it runs as a server - preferably as a realtime process
- it exposes a TCP interface for streaming and "simply playing wavs"
- it exposes a CORBA interface for doing other things
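
Just to illustrate the streaming part: a client for such a server is not
more complicated than writing to /dev/audio. The sketch below assumes, for
illustration only, a server that accepts raw 16 bit PCM on a TCP port - the
port number and the "no handshake" protocol are made up, the real aRts and
esd wire protocols look different:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        sockaddr_in addr;
        std::memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(16001);                  // hypothetical port
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); // server on localhost

        if (connect(fd, (sockaddr*)&addr, sizeof(addr)) < 0) {
            perror("connect");
            return 1;
        }

        // Send one second of silence as 44.1 kHz, 16 bit, mono PCM, block
        // by block - a real client would read the blocks from an mp3
        // decoder or a wav file instead.
        short block[4410] = { 0 };
        for (int i = 0; i < 10; ++i)
            write(fd, block, sizeof(block));

        close(fd);
        return 0;
    }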

All of these are naturally network transparent. But if you want a solution
for only these problems, esd is already there, isn't it?

Perhaps it is just a simple idea: if you only want to play streams (like
mp3s), doing things the way esd does is sufficient. But you'll never support
real music technology - like sequencers, synthesizers, filters - by just
offering esd. I'd really like to see more "real music technology" running
under Linux, so I work on aRts. And Brahms proves that, at least to some
degree, aRts is the right direction. But it's just a small step right now.

   Cu... Stefan
-- 
  -* Stefan Westerfeld, stefan@space.twc.de (PGP!), Hamburg/Germany
     KDE Developer, project infos at http://space.twc.de/~stefan/kde *-


