Re: MERGE: soundfont-support
- From: Stefan Westerfeld <stefan space twc de>
- To: Tim Janik <timj gnu org>
- Cc: Beast Liste <beast gnome org>
- Subject: Re: MERGE: soundfont-support
- Date: Thu, 14 Jul 2011 18:37:48 +0200
Hi!
On Wed, Jul 13, 2011 at 12:54:17PM +0200, Tim Janik wrote:
> On 24.11.2010 00:33, Stefan Westerfeld wrote:
> >2) we use one fluidsynth instance for both tracks - fluidsynth supports this
> >model explicitly, and it's what the patch currently uses. In this case, we get
> >two stereo audio tracks rendered by fluidsynth, which we can route through the
> >mixer. This is nice, because volume control or after effects (once supported by
> >the mixer) can be done through the mixer. We also get volume metering in the
> >mixer for each channel. So using one fluidsynth instance is really enough for
> >what we need.
> >
> >But what about reverb/chorus in this model? Fluidsynth explicitly supports the
> >multi track mode we're using here, so they thought of a solution: instead of
> >processing reverb and chorus for each channel separately, fluidsynth assumes
> >that a fraction of each channel's output should be sent through an effects bus
> >(one stereo bus), and then reverb should be done for all channels combined.
> >Later, we should add in the fx bus to the result. That way, they minimize the
> >cpu power required for reverb computation: instead of using cpu power to
> >compute - in our use-case - reverb for the Piano and reverb for the Violin,
> >they use one extra effects track (combining maybe 20% of the Piano output and
> >10% of the Violin output). That's cheaper, especially for many channels. And it
> >sounds exactly the same if you add the effects bus back in, because the reverb
> >is an LTI system.
> >
> >However, it doesn't map well into our model. We can map the two output tracks
> >to the mixer, but not the fx bus. For instance, if you turn down the volume of
> >the Piano channel in the mixer (extreme case: mute it), fluidsynth would not
> >notice, because the mixer is *after* the fluidsynth processing. Result: the
> >Piano would still be in the fx bus.
> >
> >Therefore, I decided to ignore the fx bus completely. Instead, I thought, the
> >user could add a reverb after the fluidsynth processing if he so desired. In
> >fact, this is more flexible, because the user can use any reverb he wants (not
> >just the fluidsynth one, but maybe some LADSPA plugin), and adjust the amount
> >of reverb at the GUI, as well as the actual parameters for the reverb.
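The LTI argument above can be checked numerically. The sketch below is a toy model, not fluidsynth code: the "reverb" is an arbitrary FIR convolution, the signals are random noise, and the 20%/10% send levels are just the ones from the example. Because convolution is linear, reverbing the summed sends on one fx bus gives the same result as running a separate reverb per channel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two dry channel outputs (mono for simplicity) and an arbitrary
# "reverb" impulse response -- any LTI system behaves the same way.
piano = rng.standard_normal(256)
violin = rng.standard_normal(256)
reverb_ir = rng.standard_normal(64)

def reverb(x):
    # An LTI reverb is just a convolution with an impulse response.
    return np.convolve(x, reverb_ir)

# Expensive way: one reverb computation per channel.
per_channel = 0.2 * reverb(piano) + 0.1 * reverb(violin)

# Cheap way (the fluidsynth fx bus): mix the sends onto one bus,
# then run a single reverb over the combined signal.
fx_bus = reverb(0.2 * piano + 0.1 * violin)

# Linearity guarantees both are identical (up to float rounding).
print(np.allclose(per_channel, fx_bus))  # True
```

The saving scales with the number of channels: one convolution instead of one per track, at no cost in output quality.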
>
> I have a few questions here:
> - Does fluidsynth still waste time computing the fx bus in your
> setup, even though you are discarding it?
No, it doesn't; I've just compared fluidsynth with effects and stwbeast with
SoundFont support using oprofile, and stwbeast doesn't spend any time on the
fluidsynth reverb and chorus functions.
> - Have you thought of ways to mix in the fx bus nevertheless if the
> user wants that anyway (i.e. getting an authentic soundfont sound
> rendered is a valid usage scenario as well)?
Of course it would be possible to somehow render the fluidsynth effects
and get the result as an extra mixer channel in beast. However, I would not
recommend that, because as soon as you do anything more than 1:1 playback,
things will go wrong. For instance, if you put a post-network on one of
the fluidsynth beast tracks, the fluidsynth effect rendering will render the
reverb without the post-network.
More unnatural behaviour follows if you use the mixer beast provides.
For instance if you solo or mute a track, the fluidsynth effects will
still include all tracks. If you use the mixer to amplify one of the
tracks, the fluidsynth effects will not change accordingly.
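The mute problem can be illustrated with the same kind of toy model (again not fluidsynth code; the signals and the 20%/10% send levels are made up). The shared fx bus is rendered inside the synth, *before* the mixer runs, so muting a track in the mixer only removes its dry channel; its reverb send is still in the bus.

```python
import numpy as np

rng = np.random.default_rng(1)
piano = rng.standard_normal(256)
violin = rng.standard_normal(256)
reverb_ir = rng.standard_normal(64)

def reverb(x):
    # Toy LTI reverb: convolution with a fixed impulse response.
    return np.convolve(x, reverb_ir)

# The synth renders the shared fx bus before the mixer runs,
# so it already contains the Piano's reverb send.
fx_bus = reverb(0.2 * piano + 0.1 * violin)

# Mixer stage: Piano muted, so only the Violin's dry channel survives.
n = 256 + 64 - 1            # output length after convolution
dry = np.zeros(n)
dry[:256] += violin

out_shared = dry + fx_bus                 # what the shared fx bus yields
out_correct = dry + reverb(0.1 * violin)  # an fx bus respecting the mute

# The difference is exactly the muted Piano's reverb send.
residual = out_shared - out_correct
print(np.allclose(residual, reverb(0.2 * piano)))  # True
```

The same mismatch appears for solo and per-track gain changes: any post-synth mixer operation is invisible to an fx bus that was computed inside the synth.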
The other option would be to use one fluidsynth instance per track. But
not only would the CPU usage be considerably higher (because each track
would have its own effects); it would also require loading the SoundFont
once per track, which for large SoundFonts (> 100M) would make the memory
consumption very high.
> - Is there a way to extract reverberation/chorus parameters of the
> soundfont, so we can write a script that'd approximate the desired
> effects via beast plugins?
I don't know if the fluidsynth API allows querying these parameters. If it
does, this would be the easiest way.
But if that fails: I've written a SoundFont parser for SpectMorph, so it should
be possible to get the information required to automatically adjust
reverb/chorus levels with beast plugins.
Cu... Stefan
Please keep the beast list on CC for mails which came from there. I'm adding
that back in now.
--
Stefan Westerfeld, Hamburg/Germany, http://space.twc.de/~stefan