Re: midi player?

There are basically 3 types of computer sound formats:

1. Short Clip / Uncompressed - These are wav, au, and other such formats. Good
for anything under about 5 seconds; they get too big beyond that. Good for
alert sounds and other such things. Almost no processor or memory overhead.

2. Long Clip / Compressed - These are mp3's and other compressed sound formats.
These are good for longer sounds and music files. They usually carry a lot of
processor overhead because of the decompression involved.

3. Event based / Sampled - These are midi and mod sounds. To play these, a
piece of hardware or software has to convert the events and samples into
coherent output. These are also used only for longer sounds.

EsounD currently supports type 1 very well, because it allows short clips to be
stored on the client end and played back at will. It does not support type 2 as
well: the sounds are decompressed on the server and then sent as a stream of
uncompressed sound data, which puts a greater load on the server and the
network, but it still works ok. Type 3 sounds are supported in much the same way
as type 2 sounds, and have the same network bandwidth and server load drawbacks.
Also, EsounD currently ignores the fact that hardware midi and mod renderers
exist.

Proposed solutions:

Allow for pluggable client side backends:
  Allow the user to plug in whatever program he/she wants to do the sound
  rendering. This way we get the load off of the server and the network, and
  mod and midi sound rendering hardware can then be used. If the client cannot
  handle the load of a certain renderer (ex. mp3 on a slow 486), the rendering
  could still be done in software on the server. There are also plenty of
  backends out there (just look at the number of frontends available for
  mpg123 and timidity).
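A client-side backend table might look something like this (a Python sketch of the idea; mpg123 and timidity are the real programs named above, but the exact flags shown are illustrative, not verified):

```python
# Hypothetical sketch: pick an external renderer per file type, so the
# client (not the EsounD server) does the decoding/rendering.
RENDERERS = {
    ".mp3": ["mpg123", "-s"],                # decode mp3 to stdout (flags illustrative)
    ".mid": ["timidity", "-Or", "-o", "-"],  # render midi to stdout (flags illustrative)
}

def renderer_for(path):
    """Return the external command line that would render `path`, or
    None if no client-side backend is registered (in which case we
    would fall back to software rendering on the server)."""
    for ext, cmd in RENDERERS.items():
        if path.endswith(ext):
            return cmd + [path]
    return None
```

The point is only the dispatch: the daemon never needs to know how to decode anything, it just reads raw sample data from whichever helper the user configured.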

Compress the sound coming across the network:
  This would be good for type 1 and type 3 sounds (especially mod samples), but
  would be counterproductive for type 2 sounds, which are already compressed.
  Anything that uses EsounD as a simple /dev/dsp should be compressed as well.
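As a toy illustration of the idea (not a real codec - in practice you would use something like ADPCM or a proper lossless coder), delta coding a raw sample stream looks like this in Python:

```python
def delta_encode(samples):
    """Toy delta coder: send the difference between successive samples
    instead of the samples themselves.  Deltas are usually small, so a
    later entropy stage would compress them much better than raw PCM."""
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def delta_decode(deltas):
    """Invert delta_encode by running a cumulative sum."""
    prev = 0
    out = []
    for d in deltas:
        prev += d
        out.append(prev)
    return out
```

Running an already-compressed mp3 stream through a second stage like this buys nothing, which is why it only makes sense for the type 1 and type 3 cases.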

I admit that the "one sound stream per channel" idea really sucked. I was
thinking in terms of hardware that only supports a single midi or mod
output stream at once. On the AWE card, I know you can have normal sound
going out /dev/dsp while simultaneously having midis playing on the
AWE chip. In the case of hardware acceleration support, if more than one
sound format that uses that form of acceleration needs to be played, a
software renderer could be used and mixed in with everything else. I am
kicking myself right now for not thinking of this earlier.
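The fallback-and-mix step is just sample addition with clipping; a minimal sketch, assuming signed 16-bit samples:

```python
def mix(a, b, lo=-32768, hi=32767):
    """Mix two signed 16-bit sample streams by summing each pair of
    samples and clipping to the 16-bit range - roughly what a sound
    daemon does when a software renderer's output has to be merged
    with everything else going to /dev/dsp."""
    return [max(lo, min(hi, x + y)) for x, y in zip(a, b)]
```

So when the hardware midi channel is busy, the second midi stream just becomes one more software-rendered input to this mix.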

I looked at midid and NetMidi. They both basically allow bidirectional
transmission of midi commands over a network to other midi devices - bridging
with a network instead of midi cabling - whereas EsounD is intended to provide
playback of recorded sound files to an output device, a completely
unidirectional task.

In short, the server should conserve as much network bandwidth and processor
time as possible. Think about this - streaming 44.1khz 16bit stereo sound
across a network takes about 176kbyte/sec, as compared to 16kbyte/sec
(128kbit/sec) for an mp3 file. It is roughly ten times more efficient to simply
stream the file instead of the rendered sound. Even better, a midi file is
usually under 50kbytes and many mods are less than 300kbytes, so streaming the
file instead of sound data would be even more efficient in those cases. This is
why I think we should avoid having the renderers on the server at all costs.
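The arithmetic above can be checked in a couple of lines (the 128kbit/sec mp3 bitrate is just the example figure from the text):

```python
# Back-of-the-envelope bandwidth comparison from the paragraph above.
pcm_bytes_per_sec = 44100 * 2 * 2    # 44.1 kHz, 16-bit (2 bytes), stereo (2 ch)
mp3_bytes_per_sec = 128_000 // 8     # a 128 kbit/sec mp3, in bytes per second
ratio = pcm_bytes_per_sec / mp3_bytes_per_sec   # how much we save by
                                                # streaming the file instead
```

That works out to 176400 bytes/sec of raw PCM against 16000 bytes/sec of mp3, an 11:1 difference before even considering the tiny size of midi and mod files.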

Zack (lurker who is working on high speed RAM based database that will get a 
gnome frontend when the backend and CGI frontend are done)

# Zack Williams #
#                                                                 #
#   Linux is like an arcwelder - it is insanely powerful in the   #
#        right hands, but most people get by with JB Weld         #
