This is expected. Both browsepy and mod-ui need to be patched to add a new file manager entry. The path you created is correct. I can share the patches here before opening the PR, ok? But is the plugin still running, or is it crashing because of that? Because in that case you can:
comment out the mod:fileTypes entry in the ttl
change the default model in the ttl (at the bottom, under state) so that you can switch between the available models by hand
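To make those two edits a bit more concrete, here is a rough sketch of the shape of the .ttl sections involved. The URIs, the file-type string and the model file name below are placeholders, not the real ones from the rt-neural-generic bundle:

```turtle
# Hypothetical sketch only: URIs, file-type string and model file name are illustrative.
@prefix lv2:   <http://lv2plug.in/ns/lv2core#> .
@prefix atom:  <http://lv2plug.in/ns/ext/atom#> .
@prefix state: <http://lv2plug.in/ns/ext/state#> .
@prefix mod:   <http://moddevices.com/ns/mod#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .

<urn:example:rt-neural-generic#model>
    a lv2:Parameter ;
    rdfs:label "Neural model" ;
    rdfs:range atom:Path .
    # 1) the mod:fileTypes line normally sits on this parameter; comment it out so
    #    the (not yet patched) file manager is not involved:
    # mod:fileTypes "aidadspmodel" ;

<urn:example:rt-neural-generic>
    a lv2:Plugin ;
    # 2) the default state block at the bottom of the ttl decides which bundled
    #    model is loaded at startup; change the file here to switch between the
    #    available models by hand.
    state:state [
        <urn:example:rt-neural-generic#model> <models/your-model.json>
    ] .
```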
New demo

From the sound perspective I've only changed the IR, using a Bassman this time. Idea: recording a Dirac delta through the plugin's cab + EQ section should give me the companion cab + EQ for the model. Problem: the resulting IR files sound really bad and introduce a lot of gain. In case you want to try them, those IRs are here. Procedure I used to record the IR:
on the plugin, I disabled everything except the cab and the equalizer
in Reaper I have a track with the Dirac delta, routed to another track where I've instantiated the plugin. I hit "render track" and obtain a track that seems to be nothing but silence
with peak normalization an impulse becomes visible, but it is delayed by roughly 1024 samples
I trim the IR waveform by cutting the silence and obtain an 8192-sample file, which I export as 32-bit float WAV
I tried the IRs with the IR Loader Cabsim and they are loud and delayed. Something is definitely wrong there…
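For what it's worth, the delay and the extra gain could both be handled offline after rendering. Below is a minimal Python sketch of how one could trim the ~1024-sample latency and peak-normalize the rendered impulse before exporting it as 32-bit float WAV (assuming numpy and soundfile are available; file names are made up):

```python
import numpy as np
import soundfile as sf

IR_LEN = 8192  # desired IR length in samples, as in the procedure above

# Load the track rendered from Reaper (48 kHz; file name is illustrative).
ir, sr = sf.read("rendered_dirac_through_plugin.wav")
if ir.ndim > 1:
    ir = ir[:, 0]  # keep a single channel

# The impulse shows up ~1024 samples in (plugin/render latency).
# Locate the peak and cut everything before it instead of guessing the offset.
onset = int(np.argmax(np.abs(ir)))
ir = ir[onset:onset + IR_LEN]

# Peak-normalize so the IR does not add gain when loaded in a cabsim.
peak = np.max(np.abs(ir))
if peak > 0:
    ir = ir / peak

sf.write("companion_cab_ir.wav", ir.astype(np.float32), sr, subtype="FLOAT")
```

Cutting right at the peak can shave off the first couple of samples of the impulse, so in practice one might back off a few samples before the detected peak; treat this as a starting point, not a definitive procedure.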
A 48 kHz, 1024-sample file is more than enough for a cab IR.
Jan 01 02:15:03 moddwarf mod-jackd[315]: lilv_lib_open(): error: Failed to open library /root/.lv2/rt-neural-generic.lv2/rt-neural-generic.so (/lib64/libm.so.6: version `GLIBC_2.29' not found (required by /root/.lv2/rt-neural-generic.lv2/rt-neural-generic.so))
Thanks a lot! I'll try it asap. I'd like to have more crunchy amps like a Fender Blackface, Princeton Reverb, Bassman Tweed, Matchless, Vox AC30. On the high-gain side I'd like a Custom Audio OD-100, a Soldano SLO-100 and a Plexi (all with no cab). Maybe it could be worth ripping some of them from a Fractal Audio device.
CMake Error at rt-neural-generic/CMakeLists.txt:14 (add_subdirectory):
The source directory
/home/user/mod-workdir/moddwarf/build/aidadsplv2-2963a6d4996ab291e270c312c8e1ef4380aa6c9a/modules/RTNeural
does not contain a CMakeLists.txt file.
-- Found PkgConfig: /home/user/mod-workdir/moddwarf/host/usr/bin/pkg-config (found version "0.28")
-- Checking for module 'lv2>=1.10.0'
-- Found lv2, version 1.18.2
-- Configuring incomplete, errors occurred!
See also "/home/user/mod-workdir/moddwarf/build/aidadsplv2-2963a6d4996ab291e270c312c8e1ef4380aa6c9a/CMakeFiles/CMakeOutput.log".
make: *** [package/pkg-generic.mk:188: /home/user/mod-workdir/moddwarf/build/aidadsplv2-2963a6d4996ab291e270c312c8e1ef4380aa6c9a/.stamp_configured] Error 1
Oh! You've been part of the game from the beginning. In the dataset I've created, as well as in the one mentioned in the paper, there are two tracks: bass and guitar. It seems that a portion of bass helps the NN learn the modeled device, so an equal portion of bass is included in the train, test and validation splits. I would be curious to train a bass amp with the very same dataset; I guess Neural DSP has some bass stuff, don't they?

The problem here is that we need to create a virtual room where people with gear like real amps and effects can meet people with a Colab subscription like me. Otherwise, the moment I get back to my everyday job, this thread could freeze for a while.

Now we have a neural LV2 plugin without all the unnecessary JUCE stuff, plus a very small list of trained models, plus companion IRs to be used with the available IR loaders (mod cabsim, but I prefer lsp) and file manager integration. We need to decide what to do next. I think the training process could be automated; recently IK Multimedia introduced TONEX. Honestly I would prefer a real person to follow the training process of the model, but an open-source version of the utility, where you feed in files and get JSON models as output, would be awesome.
For now the training workflow is entirely external to the plugin; it is done by invoking a Python script on a Colab instance running with a GPU. I think it's time for a dedicated thread on how to train new models; here I would like to keep the plugin development stuff. I couldn't create that thread until now since I've been running a lot of tests to better understand the workflow, which is now rather simple: you record two audio files at 48 kHz, you give them to the Python script, and you take a break for the next two hours. The neural network learns to imitate the amp model, with a precision that is expressed as ESR; at the moment, with the depth of the network I'm using, I'm obtaining an ESR of 0.008-0.011, which is pretty good. The awkward thing is that until now I've trained against Neural DSP's plugins, which are neural models themselves. This is mostly because I'm a bit of a Neural fanboy, even though I hate everything about iLok, PACE and so on.

Well, the idea of a plugin that does the training is cool, but if the plugin is running on an embedded device without neural accelerators I see it as very hard to pursue. Instead, I would like to provide model generation as a web service: the user uploads the recorded files to a web interface, selects the network type, and the server sends the job to a cluster or a Colab instance. The device could still act as an audio interface, recording the dataset through the hardware, but external hardware would still be necessary (for a real amp you need a reactive load), and in my case (plugin running in my DAW) it wouldn't make sense to do the recording on the device.
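On the ESR figure: it is the error-to-signal ratio used in the neural amp modelling literature, i.e. the energy of the difference between the target recording and the network output, divided by the energy of the target (training scripts often apply a pre-emphasis filter before computing it). A minimal Python sketch of the plain, un-pre-emphasized version, just to put the 0.008-0.011 numbers in context:

```python
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    """Error-to-signal ratio: energy of the error over energy of the target.

    0.0 means a perfect match; values around 0.01 mean the residual carries
    roughly 1% of the target's energy.
    """
    err = target - prediction
    return float(np.sum(err ** 2) / np.sum(target ** 2))

# Toy example: a prediction that is 1% low in amplitude everywhere.
t = np.sin(np.linspace(0, 100, 48000))
p = 0.99 * t
print(esr(t, p))  # ~1e-4
```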
I’m no dev but will keep following this thread.
An ex-student of mine is involved with AI and deep learning. He is the CEO of Somma.ai.
I will try to reach out to him and maybe learn more about all of this.
Thanks a lot for the explanation.
You definitely deserve a beer! (or several)
Cheers
Yep, me too. Thanks for the explanation.
Do the audio files have to be recorded for real, or can I just send pink noise or a special EQ sweep through my setup and then to the script?
I have an Azure budget of around 150€ per month. Do you think it's possible to use that for the script?