Re: Soundfont patch comments

On 07.12.2016 14:10, Stefan Westerfeld wrote:
Note that in a few cases, I didn't fix problems yet, or I thought further
discussion is necessary. However, new wip/soundfont should be closer to mergeable
than any previous submission.

Great, thanks for the work so far. I think we need to have a call to sort some
of the areas where I don't fully grasp the SF2 specifics and you asked for help
with BSE internals.
I'll try to mark those.

You must not use the same UI label for two properties of the same object.

This is not the UI label; _("Synth Input") is the property_group.

Wow, you're right indeed.
That just hammers home how useful the IDL files are compared to the old API.

On a more general note, I really dislike introducing new C structures,
especially if they roll their own reference counting. I think what should be
done here instead is:
a) ensure BseStorage is properly C++ allocated with new+delete

To be honest, I have no idea how to make BseStorage allocated with new+delete.
So far it appears to be a GObject derived from BseObject, no idea which steps
need to be done to use C++ new+delete.

Ok, I see, seems tricky indeed. Just the first thoughts I have about this:
* I wonder if it needs to be a GObject at all and couldn't be transformed into a
normal virtual C++ class easily.
* If it needs to stay a BseObject, adding a struct Data with explicit ctor/dtor
as you suggest sounds like the best solution. You might go for that either way,
because even if BseStorage can become a native C++ class, you'll have done half
of the porting already.

From my competence level with how the object system works, the only thing that
I could offer to implement would be a workaround: add an embedded

   struct BseStorage {
     struct Data {
       /* C++ members */
     } *data;
   };

and allocate Data with new+delete.

Yes, those would go into object_init and object_finalize respectively, even
avoiding the extra allocation:

   struct Data {
     /* C++ members */
   } data;
   // object_init:
   new (&data) Data();
   // object_finalize:
   data.~Data();
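To spell out that init/finalize pairing, here is a minimal sketch (type and function names are hypothetical stand-ins, not the actual bse code): the C++ Data struct lives inline in the GObject instance struct, and instead of new/delete it is constructed with placement new in the instance init hook and destructed explicitly in finalize, so no extra heap allocation happens.

```cpp
#include <new>
#include <string>
#include <cassert>

// Hypothetical instance struct; the real BseStorage has GObject/BseObject
// fields managed by the GLib type system in front of this.
struct BseStorageLike {
  struct Data {
    std::string blob_name;  // example member that needs a real ctor/dtor
  } data;                   // embedded, not pointed-to: no extra allocation
};

// would be called from the GObject instance init function
void
storage_init (BseStorageLike *self)
{
  new (&self->data) BseStorageLike::Data(); // placement new on raw memory
}

// would be called from the GObject finalize function
void
storage_finalize (BseStorageLike *self)
{
  self->data.~Data(); // explicit destructor; GObject frees the memory itself
}
```

The key point is that the instance memory is handed to us uninitialized by the type system (mimicked below with raw `::operator new`), so the C++ member must never be touched before `storage_init` ran.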

I doubt that the Rapicorn::Blob is what we need. If you look at the actual blob
code we ship here, the blob is a file referenced by the bse file. There are
two cases:

1) non-embedded: so basically then the blob is just a reference to
"/foo/bar/bazz.sf2", the bse file doesn't contain any soundfont data - in this
case the fluid synth engine can directly open/load the original sf2

Note, Rapicorn::Blob can refer to ondisk files.

2) embedded: then the blob stored within the bse file contains the whole sf2
file, which can be hundreds of MB. If you open that bse file, a temporary copy
of the data is written to a temp file, which the fluid synth engine can then load.

So in any case, we pass the filename (temporary or actual) to the fluid synth
engine for loading; the blob is - if you want to call it that way - a file
within a file, which allows us to make bse files self contained without the fluid
synth API being capable of loading embedded soundfonts directly from the bse file.
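The embedded case above can be sketched in a few lines (function name and temp template are hypothetical, not the actual bse code): the embedded blob data is dumped into a mkstemp() file, and the resulting filename is what would be handed to fluidsynth's file-based loading, exactly like a non-embedded blob's original path.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cassert>
#include <string>
#include <vector>
#include <unistd.h>

// Hedged sketch: write an embedded sf2 blob out to a temporary file so the
// fluid synth engine can load it by filename (e.g. via fluid_synth_sfload).
std::string
blob_to_loadable_file (const std::vector<char> &embedded_data)
{
  char tmpl[] = "/tmp/bse-sf2-XXXXXX";  // hypothetical naming scheme
  int fd = mkstemp (tmpl);              // create the temporary copy
  if (fd < 0)
    return "";
  write (fd, embedded_data.data(), embedded_data.size());
  close (fd);
  return tmpl;                          // filename passed on for loading
}
```

This is also exactly the part that would vanish if bse files became directories: the sf2 would already sit on disk as a real file.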

Yes, about that...
I think we've previously discussed possibly moving BseStorage to a *directory*,
so its individual bits are composed of real files and we just need a longer
import/export period when loading/storing bse files.
Having that in place would certainly have helped with the implementation here,
but short of that, it's important to keep this in mind, so not too much effort
is wasted to maintain this in-file complexity.
I.e. if that seems too hard, efforts are better spent on the bse->directory move
instead, simply using external file handles for assets in the code.

e) If BSE really needs its own Blob, it also needs proper unit tests for its API.

I don't agree on this point. We're not talking about a generic data structure
here, which should have unit tests. We're talking about a specific API part of
BseStorage. It's not Bse::Blob, but Bse::Storage::Blob - if you want to rename
it to something that better reflects the purpose, ok. Of course if you're saying
you want to have unit tests for every part of BseStorage, then ok, we'd also
have unit tests for Bse::Storage::Blob. However, I don't think we can afford
the developer time it costs for unit testing every single aspect of every
single API we have...

Not for every existing bit, that's right. But we need some basic tests for new
stuff that gets added. We need to be able to assert some basic functionality in
between releases and after frequent code changes with all the migrations we have
going on. The only way to achieve that is adding basic unit tests when new
components are introduced (and designing for that) and adding tests for
regressions we encounter. Which btw means we also need a basic unit test for the
SF2 support itself. You could start by scripting that first and then figure out
which parts of the new Blob are not indirectly tested and still need unit test
exposure.

@@ -108,6 +119,10 @@ bse_storage_class_init (BseStorageClass *klass)
+  bse_storage_blob_clean_files(); /* FIXME: maybe better placed in bsemain.c */

Why would we want to delete PID related temporary files on *startup*?

SoundFonts are huge (sometimes 400M or more). Not all systems clean /tmp
regularly, so we want to make sure that we don't leak temp files - even if beast
crashes, the user kills the beast process, or the system was rebooted and
doesn't clean /tmp on reboot.

So it is wise to look on startup at everything that earlier, now dead beast
processes may have left behind. Therefore bse_storage_blob_clean_files()
scans /tmp for old temp files, looks if the PID is still running, and if not
removes the temp file. So we shouldn't accumulate more and more stale temp
sf2 files.
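The "is the PID still running" test at the heart of that cleanup can be done without parsing /proc, via kill() with signal 0 - a sketch of just that check (the function name is mine, not the actual bse code):

```cpp
#include <csignal>
#include <cerrno>
#include <cassert>
#include <sys/types.h>
#include <unistd.h>

// Sketch of the liveness test a cleanup pass like
// bse_storage_blob_clean_files() needs: kill (pid, 0) delivers no signal,
// it only reports whether the process exists. EPERM means it exists but
// belongs to another user, so it must still count as alive.
bool
pid_is_alive (pid_t pid)
{
  if (kill (pid, 0) == 0)
    return true;                // process exists and is ours to signal
  return errno == EPERM;        // exists, owned by someone else
}
```

A stale temp file whose embedded PID fails this test can be unlinked safely; note that PIDs are recycled, so encoding a startup timestamp alongside the PID in the filename would make the check more robust.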

The whole thing still looks flaky to me. Thoughts:
* I really want to avoid messing with other programs' data here; maybe moving
everything into /tmp/beast-data-<UID>.XXXXXX/ can help with that.
* /tmp/ might very well be a *small* memfs mount, possibly residing in swap,
usually not bigger than the actual RAM size. Which means it might be the wrong
place to extract hundreds of MB.
* ~/ might be NFS mounted or encrypted, which means it might be the wrong place
to extract hundreds of MB in.

Considering the above, the location needs to be user configurable. The XDG spec
already has something in place for that [1]; i.e. we should pick
$XDG_CACHE_HOME/libbse/ - see Rapicorn::Path::cache_home().
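For illustration, resolving that directory per the XDG Base Directory spec looks roughly like this (a sketch of what Rapicorn::Path::cache_home() plus the libbse suffix would yield; the function name here is hypothetical):

```cpp
#include <cstdlib>
#include <cassert>
#include <string>

// Pick the blob extraction directory: $XDG_CACHE_HOME if set and absolute
// (the spec says relative values must be ignored), else the ~/.cache
// fallback, with a libbse/ subdirectory appended.
std::string
libbse_cache_dir ()
{
  const char *xdg = getenv ("XDG_CACHE_HOME");
  std::string base;
  if (xdg && xdg[0] == '/')
    base = xdg;
  else
    {
      const char *home = getenv ("HOME");
      base = std::string (home ? home : "") + "/.cache";
    }
  return base + "/libbse";
}
```

That gives users a single knob (XDG_CACHE_HOME) to redirect the hundreds-of-MB extractions away from small or unsuitable filesystems.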


Hm, so we have BseSoundFontPreset objects that have program+bank, save and load
those values but do not expose them as properties. I think that's unprecedented
in BSE so far. But I'm not claiming I fully understand the Preset impl so far...

Presets simply hide program|bank settings. So for instance, if the user wants a
church organ, he can say so in the UI. The track will have a pointer to
a preset which says "church organ". When the project is saved|loaded, these
preset objects are saved similar to the bsewave objects.

Finally, during playback, the program and bank integers from the preset determine
which sound gets selected, so church organ may be program 20, bank 1 or something.

So the user sees the name whereas the fluid engine sees the numbers. There is no
need to edit these, as the soundfont already determines which presets exist and
which numbers the presets have.
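In effect the preset object is a read-only name -> (bank, program) mapping; a sketch of that shape (types and names are mine for illustration, not the BseSoundFontPreset API - the real table would be filled by enumerating the loaded soundfont's presets through fluidsynth):

```cpp
#include <map>
#include <string>
#include <cassert>

// What a preset effectively stores: the UI shows the name, the fluid
// engine consumes bank+program. Nothing here is user-editable, since the
// soundfont itself dictates which presets exist and what numbers they have.
struct PresetNumbers { int bank, program; };

using PresetTable = std::map<std::string, PresetNumbers>;

PresetNumbers
lookup_preset (const PresetTable &table, const std::string &name)
{
  auto it = table.find (name);
  return it != table.end () ? it->second : PresetNumbers { -1, -1 };
}
```

During playback, the looked-up numbers are what a call like fluid_synth_program_select() consumes, while the saved bse file only needs the name (which is why storing the preset as a simple string, as suggested above, could work).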

Couldn't the preset be stored as a simple string then?

Good that you added an extended comment about how SF2 support works with the BSE
engine, that helped a lot.
Open issues I see with that:
a) I'll probably rework the comment so it shows up in the Doxygen docs, not sure
if that should go under SoundfontOsc or some more generic "Bse Engine" topic though.
b) There's a basic design problem here wrt locking of the SF2 osc modules. To
recap, all SF2 engine modules lock fluidsynth; the first one calls
process_fluid_L, the others block and stall concurrently processing CPUs. The
reason this problem exists is that there's an n:1 dependency here (n*SoundfontOsc
-> 1*fluidsynth) that's not reflected in the bse engine module tree, so the
scheduler cannot accommodate for that. What needs to be done is: all n
BseSoundfontOsc modules need to take 1 additional input channel (not shown in
the UI), and that channel gets connected to 1 fluidsynth bse engine module that
also has no user visible representation in the BseObject tree. This 1 fluidsynth
module then calls process_fluid_L() and the bse engine scheduler ensures that
all n dependent modules are called *after* the fluidsynth module has finished.
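The scheduling idea in (b) can be illustrated in the abstract, detached from the real bse engine types (everything below is a conceptual toy, not bse code): one hidden "fluid master" node renders once per block, every SoundfontOsc node declares a dependency edge to it, and a topological evaluation then guarantees the master runs exactly once and strictly before its consumers - no lock, no stalled CPUs.

```cpp
#include <vector>
#include <functional>
#include <cassert>

// Toy dependency node: inputs are scheduler edges (the extra, non-UI input
// channel from the text), process is the per-block work.
struct Node {
  std::vector<Node*> inputs;
  std::function<void()> process;
  bool done = false;            // reset at the start of each block
};

// Naive topological evaluation: run all dependencies first, each node once.
void
run_node (Node *n)
{
  if (n->done)
    return;
  for (Node *dep : n->inputs)
    run_node (dep);
  n->process ();
  n->done = true;
}
```

In the real engine the ordering comes from the bse scheduler walking the module graph rather than from recursion, but the invariant is the same: process_fluid_L() runs once in the master module, and all n oscillator modules merely copy their channel out of the already-rendered buffers.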

To be honest, this may be the "correct" solution, however I may not be able to
implement it, because my knowledge of bse internals is not good enough. This is
hardly standard procedure where I could look up how to do it from some other
code snippet.

Sure, I'll give you a hand with that. Regarding precedents, I think the
ContextMerger comes close, it creates more modules than BseObjects behind the
scenes to implement polyphony, so it should be able to give us an idea how to wire
things up.

c) What's unclear to me is whether there should be 1 fluidsynth module globally
or per BseProject, is there something like a FluidsynthContext that's
instantiated per project, or is there only one (implicit) context globally? That
determines if there's need for one fluidsynth module per project or globally.
(In case you wonder, simultaneous playback of multiple projects is required for
sample preview and note editing, and will be required for the virtual midi
keyboard.)
As far as I know, once per project is fine. BseSoundFontRepo exists as a child
to the project. The important thing is that we want to allow the user to use 10
instruments from the same soundfont in one project, without forcing us to load
the same soundfont 10 times. Therefore the fluid state is in BseSoundFontRepo.

However, if the user opens two projects that use the same soundfont, we will load
it twice, which is acceptable I think.

Keep in mind that loading it twice is a common case during editing.
I.e. the SF is loaded once for the project you're editing, and during note
preview in the piano roll, the current playback network (including SF) is
duplicated in a temporary project that plays the preview note.
Since that's a common case, it'd be good if there was a way to share big sound
fonts or fluidsynth contexts between two projects...

BTW, other DAWs allow only *one* project at a time to start/use the synthesis
engine, which is probably not too bad a limitation in practice. If we were to
move to that model, could we have a single global fluidsynth context that gets
shared between the active project and internal temporary assistant projects?

   Cu... Stefan

Yours sincerely,
Tim Janik
Free software author.
