Re: Soundfont patch comments



   Hi,

On Thu, Dec 08, 2016 at 12:21:23AM +0100, Tim Janik wrote:
On 07.12.2016 14:10, Stefan Westerfeld wrote:
To be honest, I have no idea how to make BseStorage allocated with new+delete.
So far it appears to be a GObject derived from BseObject, no idea which steps
need to be done to use C++ new+delete.

Ok, I see, seems tricky indeed. Just the first thoughts I have about this:
* I wonder if it needs to be a GObject at all and couldn't be transformed into a
normal virtual C++ class easily.
* If it needs to stay a BseObject, adding a struct Data with explicit ctor/dtor
as you suggest sounds like the best solution. You might go for that either way,
because even if BseStorage can become a native C++ class you'll have done half
of the porting already.

Ok, I now moved C++ members to the data structure as proposed, with

object_init:
      new (&data) Data();
object_finalize:
      data.~Data();

This allowed me to use std::shared_ptr for Blobs now, so all manual C-style
reference counting is gone. If BseStorage gets proper constructors, for
instance if it is refactored to a native C++ class, Blobs should be extremely
easy to port to that.
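To make the idiom above concrete, here is a minimal self-contained sketch of the placement-new/explicit-destructor pattern. FakeStorage, fake_storage_init and fake_storage_finalize are hypothetical stand-ins, not the actual BseStorage layout or GObject hooks:

```cpp
#include <cassert>
#include <cstdlib>
#include <memory>
#include <new>
#include <string>
#include <vector>

// Hypothetical stand-in for a GObject-allocated struct; the real BseStorage
// layout differs, this only illustrates the placement-new idiom.
struct FakeStorage {
  int c_field;                  // plain C members, zeroed by the C allocator
  struct Data {                 // C++ members live here, with a real ctor/dtor
    std::vector<std::shared_ptr<std::string>> blobs;
  } data;
};

// Mirrors object_init: the memory comes from the C allocator, so the C++
// members must be constructed explicitly with placement new.
FakeStorage*
fake_storage_init ()
{
  FakeStorage *self = static_cast<FakeStorage*> (calloc (1, sizeof (FakeStorage)));
  new (&self->data) FakeStorage::Data();
  return self;
}

// Mirrors object_finalize: run the destructor by hand, then free the C memory.
void
fake_storage_finalize (FakeStorage *self)
{
  self->data.~Data();
  free (self);
}
```

With this in place the shared_ptr members are constructed and destroyed deterministically, even though the surrounding object is still allocated by the GObject machinery.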

I doubt that the Rapicorn::Blob is what we need. If you look at the actual blob
code we ship here, the blob is a file referenced by the bse file. There are
two cases:

1) non-embedded: so basically then the blob is just a reference to
"/foo/bar/bazz.sf2", the bse file doesn't contain any soundfont data - in this
case the fluid synth engine can directly open/load the original sf2

Note, Rapicorn::Blob can refer to ondisk files.

2) embedded: the blob stored within the bse file contains the whole sf2 file,
which can be hundreds of MB. If you open that bse file, a temporary copy of the
data is written to a temp file, and the fluid synth engine can then load
"/bse-user-1234-..."

So in any case, we pass the filename (temporary or actual) to the fluid synth
engine for loading; the blob is - if you want to call it that way - a file
within a file, which allows us to make bse files self-contained without the
fluid synth API being capable of loading embedded soundfonts directly from the
bse file.
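The embedded case boils down to materializing the blob on disk. A hedged sketch of that step, with a hypothetical helper name (the actual BSE code differs, and would embed UID/PID in the filename for later cleanup):

```cpp
#include <fstream>
#include <stdlib.h>   // mkstemp
#include <string>
#include <unistd.h>   // close
#include <vector>

// Hypothetical helper: write an embedded blob to a temp file so an engine
// that can only load by filename, like fluid synth, can open it.
std::string
blob_to_temp_file (const std::vector<char> &embedded_bytes)
{
  char path[] = "/tmp/bse-blob-XXXXXX";
  int fd = mkstemp (path);      // creates the file with mode 0600
  if (fd < 0)
    return "";
  close (fd);                   // real code would write through the fd instead
  std::ofstream out (path, std::ios::binary);
  out.write (embedded_bytes.data(), embedded_bytes.size());
  return path;                  // caller hands this to the loader, unlinks later
}
```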

Yes, about that...
I think we've previously discussed possibly moving BseStorage to a *directory*,
so its individual bits are composed of real files and we just need a longer
import/export period when loading/storing bse files.
Having that in place would certainly have helped with the implementation here,
but short of that, it's important to keep this in mind, so that not too much
effort is wasted maintaining this in-file complexity.
I.e. if that seems too hard, efforts are better spent on the bse->directory
move instead, simply using external file handles for assets in the code.

Right. I believe the amount of work for allowing embedding is not too much right
now, but if bse ever uses directories, it would be simple to migrate to that.

e) If BSE really needs its own Blob, it also needs proper unit tests for its API.

I don't agree on this point. We're not talking about a generic data structure
here, which should have unit tests. We're talking about a specific API part of
BseStorage. It's not Bse::Blob, but Bse::Storage::Blob - if you want to rename
it to something that better reflects its purpose, ok. Of course if you're saying
you want to have unit tests for every part of BseStorage, then ok, we'd also
have unit tests for Bse::Storage::Blob. However, I don't think we can afford
the developer time it costs to unit test every single aspect of every single
API we have...

Not for every existing bit, that's right. But we need some basic tests for new
stuff that gets added. We need to be able to assert some basic functionality in
between releases and after frequent code changes with all the migrations we have
going on. The only way to achieve that is adding basic unit tests when new
components are introduced (and designing for that) and adding tests for
regressions we encounter. Which btw means we also need a basic unit test for the
SF2 support itself. You could start by scripting that first and then figure out
which parts of the new Blob are not indirectly tested and still need unit test
exposure.

Ok, so if we generally look at what we can test with a reasonable amount of work,
I suggest two tests:

1) add a .bse file which uses (external file) a sound font to audio tests
2) add a .bse file which includes (in bse file) a sound font to audio tests

Both are simple to add. I've added (1) to my wip/soundfont branch now. The
hardest part was stripping down fluid r3 so that it can be included; I now have
a very small

-rw------- 1 stefan stefan 84512 Dec  9 13:22 /home/stefan/src/stwbeast/tests/audio/minfluid.sf2

version which has only one preset and thus should be ideal for testing. I've
included the (MIT) license info from Debian.

In any case test (2) is very easy to add, I've just not done it yet in case
we still change something in the .bse file format. So with those we test

- loading by filename
- loading (via temp file) from embedded data
- playing/synthesis part

I'm sceptical whether there should be more tests. Although a test for saving
would be nice to have, there is no pre-existing code in bse for that. We don't
have any unit tests for BseStorage, so this may be harder to add.

@@ -108,6 +119,10 @@ bse_storage_class_init (BseStorageClass *klass)
+  bse_storage_blob_clean_files(); /* FIXME: maybe better placed in bsemain.c */

Why would we want to delete PID related temporary files on *startup*?

SoundFonts are huge (sometimes 400M or more). Not all systems clean /tmp
regularly. So we want to make sure that we don't leak temp files, even if beast
crashes, the user kills the beast process, or the system was rebooted and
doesn't clean /tmp on reboot.

So on startup it is wise to look at everything that may have been left behind
by earlier, now-dead beast processes. Therefore bse_storage_blob_clean_files()
scans /tmp for old temp files, checks whether the owning PID is still running,
and if not removes the temp file. So we shouldn't accumulate more and more
stale temp sf2 files.

The whole thing still looks flakey to me. Thoughts:
* I really want to avoid messing with other programs' data here; maybe moving
everything into tmp/beast-data-<UID>.XXXXXX/ can help with that.
* /tmp/ might very well be a *small* memfs mount, possibly residing in swap,
usually not bigger than the actual RAM size. Which means it might be the wrong
place to extract hundreds of MB.
* ~/ might be NFS mounted or encrypted, which means it might be the wrong place
to extract hundreds of MB in.

Considering the above, the location needs to be user configurable. The XDG spec
already has something in place for that [1]; i.e. we should pick
$XDG_CACHE_HOME/libbse/ - see Rapicorn::Path::cache_home().

[1] https://specifications.freedesktop.org/basedir-spec/latest

Ok - done, so it should usually go to ~/.cache/libbse/ now.

Hm, so we have BseSoundFontPreset objects that have program+bank, save and load
those values but do not expose them as properties. I think that's unprecedented
in BSE so far. But I'm not claiming I fully understand the Preset impl so far...

Presets simply hide program|bank settings. So if, for instance, the user wants
a church organ, he can say so in the UI. The track will have a pointer to
a preset which says "church organ". When the project is saved|loaded, these
preset objects are saved similar to the bsewave objects.

Finally, during playback, the program and bank integers from the preset determine
which sound gets selected, so church organ may be program 20, bank 1 or something.

So the user sees the name whereas the fluid engine sees the numbers. There is no
need to edit these, as the soundfont already determines which presets exist and
which numbers the presets have.
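As a sketch of that split, a preset object is essentially a name bound to a (bank, program) pair. Everything here is illustrative: in the real code the table comes from scanning the loaded soundfont, and the numbers below are made up:

```cpp
#include <map>
#include <string>
#include <utility>

// The user-facing side is the name; the engine-facing side is bank/program.
struct SoundFontPreset
{
  std::string name;
  int bank = -1, program = -1;  // -1: not found
};

// Hypothetical lookup over a fixed table; real presets are enumerated from
// the soundfont, so there is nothing for the user to edit.
SoundFontPreset
find_preset (const std::string &name)
{
  static const std::map<std::string, std::pair<int,int>> table = {
    { "Church Organ", { 1, 20 } },      // invented example numbers
    { "Trumpet",      { 0, 56 } },
  };
  const auto it = table.find (name);
  if (it == table.end())
    return {};
  return { name, it->second.first, it->second.second };
}
```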

Couldn't the preset be stored as a simple string then?

If presets are objects, tracks and sound font oscs have one single property
"Sound Font Preset" which determines both, the active sound font and the preset
(as the preset object always knows to which sound font it belongs). This is the
way things work now.

If presets are strings, then each track and each sound font osc would need two
properties, "Sound Font" and "Sound Font Preset", as the string 'Trumpet' no
longer determines the sound font it belongs to, because multiple loaded
soundfonts could have this preset. The SF2 support could be refactored to work
in this two-property way; I'm not sure I'd consider it more elegant. It's quite
a bit of work to change, but if you think it should be done this way, then it
should be changed now rather than later, because it will affect how the bse
files look.

bsesoundfontosc.cc:

Good that you added an extended comment about how SF2 support works with the BSE
engine, that helped a lot.
Open issues I see with that:
a) I'll probably rework the comment so it shows up in the Doxygen docs, not sure
if that should go under SoundfontOsc or some more generic "Bse Engine" topic though.
b) There's a basic design problem here with regard to locking of the SF2 osc
modules. To recap, all SF2 engine modules lock fluidsynth, the first one calls
process_fluid_L, the others block and stall concurrently processing CPUs. The
reason this problem exists is that there's an n:1 dependency here
(n*SoundfontOsc -> 1*fluidsynth) that's not reflected in the bse engine module
tree, so the scheduler cannot accommodate for that. What needs to be done is:
all n BseSoundfontOsc modules need to take 1 additional input channel (not
shown at the UI), and that channel gets connected to 1 fluidsynth bse engine
module that also has no user visible representation in the BseObject tree. This
1 fluidsynth module then calls process_fluid_L() and the bse engine scheduler
ensures that all n dependent modules are called *after* the fluidsynth module
has finished.

To be honest, this may be the "correct" solution, however I may not be able to
implement it, because my knowledge of bse internals is not good enough. This is
hardly standard procedure where I could look up how to do it from some other
code snippet.

Sure, I'll give you a hand with that. Regarding precedents, I think the
ContextMerger comes close, it creates more modules than BseObjects behind the
scenes to implement polyphony, so should be able to give us an idea how to wire
things up.

Great.

c) What's unclear to me is whether there should be 1 fluidsynth module globally
or per BseProject; is there something like a FluidsynthContext that's
instantiated per project, or is there only one (implicit) context globally?
That determines whether we need one fluidsynth module per project or globally.
(In case you wonder, simultaneous playback of multiple projects is required for
sample preview and note editing, and will be required for virtual midi keyboard
support.)

As far as I know, once per project is fine. BseSoundFontRepo exists as a child
to the project. The important thing is that we want to allow the user to use 10
instruments from the same soundfont in one project, without forcing us to load
the same soundfont 10 times. Therefore the fluid state is in BseSoundFontRepo.

However, if the user opens two projects that use the same soundfont, we will load
it twice, which is acceptable I think.

Keep in mind that loading it twice is a common case during editing.
I.e. the SF is loaded once for the project you're editing, and during note
preview in the piano roll, the current playback network (including SF) is
duplicated in a temporary project that plays the preview note.
Since that's a common case, it'd be good if there was a way to share big sound
fonts or fluidsynth contexts between two projects...

BTW, other DAWs allow only *one* project at a time to start/use the synthesis
engine, which is probably not too bad a limitation in practice. If we were to
move to that model, could we have a single global fluidsynth context that gets
shared between the active project and internal temporary assistant projects?

Sharing state between different projects is not easy to add. Right now, the
fluid state is initialized to a fixed number of channels / channel->sfont
mapping during project prepare. One channel is added for each track.

The number of channels/mapping is then considered static for the time the
project is playing. So sharing state between different projects, temporary or
non-temporary, would be problematic, because if projects switch between playing
and non-playing dynamically, the total number of channels would have to change.
However, the fluidsynth API forces us to set the number of channels at
initialization time.

I think we'll just have to live with a factor of two here. Sounds acceptable
to me at least.

As a general remark, if we were not discussing fluid synth integration but a
project that contains a LV2 plugin, like ZynAddSubFX, duplication of the plugin
for temporary assistant projects would be a terrible idea, since the ZynAddSubFX
plugin has a GUI, and there is a 1:1 mapping between the UI and the plugin
instance. So maybe a long term strategy for BEAST should consider getting rid
of temporary assistant projects altogether, in order to be better able to
integrate components that contain complete instruments|effects as plugins, LV2,
VST or similar.

If you look at Bitwig Polysynth or VST instruments, there is always one
instance per instrument (track), and this one instance interacts with the user
via UI, renders audio data, processes midi events and so on. No temporary
assistant projects here. ZynAddSubFX VST|LV2 works in Ardour5, Qtractor, Bitwig
and other DAWs.  In the long run I'd like to be able to use it in BEAST tracks.

   Cu... Stefan
-- 
Stefan Westerfeld, http://space.twc.de/~stefan

