Re: MERGE: soundfont-support



   Hi!

On Wed, Nov 17, 2010 at 11:44:26PM +0000, Tim Janik wrote:
> Hey Stefan.
>
> What I don't understand here is what you mean by letting the user
> set up the reverb at the end of a module chain.
> If I'm not mistaken, fluidsynth loads the SF, renders it *including*
> the reverb effect, and then BSE uses the sampled rendering data.
> So the freeverb use isn't actually eliminated.

In the patch I propose to merge, fluidsynth reverb is not used, so resampling
is not needed either. A lengthy explanation of the reasons follows below.

> As I recall, our last idea was to always run fluidsynth at 44.1k or 48k
> rates, and then resample the output if beast runs at any other rate
> (and we concluded that for the time being only 88.2k and 96k would
> be sensible alternative rates, for which resampling is fast).

Right, and last time we talked about it, resampling seemed to be a good idea.
But while preparing the branch for merging I changed my mind about resampling,
and now I think it's a bad idea.

I'll try to describe the reasons in some detail, so you can come to your own
conclusion.

First of all, let's assume that the typical use case will have more than one
fluidsynth-based track (say a Piano and a Violin track). So the question
arises: how do we deliver events to fluidsynth in a way that both tracks get
synthesized?

There are two basic models:

1) We use two fluidsynth instances, one for the Piano and one for the Violin
track. That way we can, in fact, get reverberation for each of the tracks
done by fluidsynth.

However, the cost of doing so is that each fluidsynth instance has to load
its own copy of the soundfont. Since soundfonts are large and take long to
load (for 10 tracks we would have 10 * 120 MB, and a long startup time),
using a separate fluidsynth instance for each track would incur significant
memory and CPU costs. Also, since in this model we need to resample and apply
reverberation to each track separately, we would also have increased runtime
costs.

2) We use one fluidsynth instance for both tracks. Fluidsynth supports this
model explicitly, and it's what the patch currently uses. In this case, we get
two stereo audio tracks rendered by fluidsynth, which we can route through the
mixer. This is nice, because volume control or after effects (once supported by
the mixer) can be done through the mixer. We also get volume metering in the
mixer for each channel. So using one fluidsynth instance is really enough for
what we need.

But what about reverb/chorus in this model? Fluidsynth explicitly supports the
multi-track mode we're using here, so its authors thought of a solution:
instead of processing reverb and chorus for each channel separately,
fluidsynth assumes that a fraction of each channel's output should be sent
through an effects bus (one stereo bus), and that reverb should then be
computed for all channels combined. Afterwards, the fx bus is added back into
the result. That way, they minimize the CPU power required for reverb
computation: instead of using CPU power to compute - in our use case - reverb
for the Piano and reverb for the Violin, they use one extra effects track
(combining maybe 20% of the Piano output and 10% of the Violin output). That's
cheaper, especially for many channels. And it sounds exactly the same if you
add the effects bus back in, because the reverb is an LTI system.

However, this doesn't map well onto our model. We can map the two output
tracks to the mixer, but not the fx bus. For instance, if you turn down the
volume of the Piano channel in the mixer (extreme case: mute it), fluidsynth
would not notice, because the mixer runs *after* the fluidsynth processing.
Result: the Piano would still be audible in the fx bus.

Therefore, I decided to ignore the fx bus completely. Instead, the user can
add a reverb after the fluidsynth processing if desired. In fact, this is
more flexible, because the user can use any reverb (not just the fluidsynth
one, but maybe some LADSPA plugin), and adjust the amount of reverb in the
GUI, as well as the actual parameters of the reverb.

So I have not used the effects bus in the code I propose for merging. The
result is that there will be no reverb added by fluidsynth, BUT the user can
fix that easily through BEAST mechanisms we already have (song post net or
track post net).

I would also not count on the reverb/chorus settings within the soundfont
always being optimal. Maybe they add a little too much reverb for the user's
taste.

Of course there are SoundFont editors for this, but it might be more
convenient to adjust the reverb directly via mixer mechanisms. Since for
synthetic voices we need proper reverb anyway (much in the same way fluidsynth
does it, by routing 20% of synth track #1 and 10% of synth track #2 to a
separate reverb track), BEAST needs to be able to do this correctly for all
tracks. Also, the CPU-saving model of using only one reverb should be
implementable at the BEAST level.

So the bottom line is: the patch I am proposing to merge does not use
fluidsynth reverb for good reasons, and therefore it does not need resampling.

   Cu... Stefan

> On 11/09/2010 03:50 PM, Stefan Westerfeld wrote:
>>     Hi!
>>
>> I've finished putting the SoundFont support into a branch, so it can be merged
>> into the main tree. The original bug report (with some remarks) was:
>>
>> http://bugzilla.gnome.org/buglist.cgi?bug_id=576946
>>   576946 - Sound Font support for BEAST
>>
>> The last time we talked about it, the plan was to resample the output of
>> fluidsynth in case the sampling rate was 96000 or 88200, to make the reverb
>> built into fluidsynth sound the same as at sampling rates 48000 and 44100.
>> However, my code does not use the reverb. The reason is that when separating
>> multiple tracks, for instance using a Piano on one track and a Violin on
>> another track, fluidsynth will combine the reverb for both tracks, so there is
>> only one stereo output using the reverb. For beast however, you may need to use
>> different processing for each track, in the mixer. Usually you would just want
>> to regulate the volume and stereo panning in the mixer (because that's the only
>> thing our mixer can do right now), but in future beast releases you may also
>> want to put equalizer, compressor and other effects on each mixer bus
>> separately. The bottom line is: the user will be able to add reverb, chorus and
>> other effects using beast, so my recommendation is to forget about fluidsynth
>> reverb and chorus; the user can add reverb via post net, or via the mixer once
>> effects are available on that level.
>>
>> So I did not implement resampling, and I do not think it's necessary or
>> desirable either. Besides, resampling would add latency to the fluidsynth
>> tracks, and if that happens, the tracks will be out of sync with the other
>> tracks (with synthetic instruments) that do not use fluidsynth.
>>
>> Therefore, I recommend merging soundfont support as provided in the branch. It
>> will allow users to have many new good-sounding instruments, and that may be
>> just the feature we want to be advertising for beast-0.7.4; it would be a
>> justification for a new release, I suppose :-)
>>
>> Here is the code:
>>
>> repo:   http://space.twc.de/public/git/stwbeast.git
>> branch: soundfont-support
>>
>>     Cu... Stefan
>
> -- 
>
> Yours sincerely,
> Tim Janik
>
> ---
> http://lanedo.com/~timj/
> _______________________________________________
> beast mailing list
> beast gnome org
> http://mail.gnome.org/mailman/listinfo/beast

-- 
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan

