To do that, the .nam file (which indeed contains a valid JSON-formatted description) should be processed to produce a file that matches RTNeural's description of a network formed by multiple Conv1D layers. I've done a similar job in the past by processing the output of an existing torch-based training script to match what RTNeural expects. Just so you know, I'm in the process of renaming that script to toRTNeural.py for better clarity.
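To make the idea concrete, here is a minimal sketch of what such a converter could look like. Note that every key used on the .nam side ("config", "layers", "channels", "kernel_size", "dilation", "weights") is a hypothetical placeholder that would need to be verified against a real .nam file, as would the exact layer description RTNeural expects:

```python
import json

def nam_to_rtneural(nam_path, out_path):
    """Sketch of a .nam -> RTNeural json converter.

    Every key read from the .nam side below is a hypothetical
    placeholder; verify against a real .nam file before relying on this.
    """
    with open(nam_path) as f:
        nam = json.load(f)

    # RTNeural-style skeleton: input shape plus an ordered list of layers
    model = {"in_shape": [None, None, 1], "layers": []}

    # Hypothetical location of the per-layer description in the .nam json
    for layer in nam["config"]["layers"]:
        model["layers"].append({
            "type": "conv1d",
            "shape": [None, None, layer["channels"]],  # hypothetical key
            "kernel_size": layer["kernel_size"],       # hypothetical key
            "dilation": layer["dilation"],             # hypothetical key
            "weights": layer["weights"],               # hypothetical key
        })

    with open(out_path, "w") as f:
        json.dump(model, f, indent=2)

nam_to_rtneural("model.nam", "model_rtneural.json")
```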
A lot depends on the architecture of the .nam model in question. I’ve been able to get “feather” models (the simplest) running on Pi 4, but not the beefier models.
Both RTNeural and the NAM code are using Eigen to do the heavy lifting. It is possible that RTNeural is more highly optimized, but I suspect that most of the performance difference is due to the more expensive network architecture of the NAM models.
I agree. As I wrote earlier, we are using RNNs since a paper investigated their use in this application as a substitute for WaveNet.
What I liked about RTNeural on first approach was the fact that it acts like a proxy, where the very same layer or processing block of a neural network is available in various backends like Eigen and XSIMD. That structure will hopefully let us leverage whatever comes next, like GPUs or neural accelerators. Currently that hardware support isn't there, but I see where RTNeural is going and I like the approach.
I agree. It would be nice to have a single, optimized codebase for playback of audio network models, and RTNeural seems like the closest thing at the moment. I’d also like to see a standard model file format that can handle both LSTM and WaveNet models.
Yes, definitely. My first analysis is that RTNeural could eventually support WaveNet, since RTNeural's input file is a rather generic one: you basically express the structure of each layer. It probably needs some testing, but in theory this is the way it should be done.
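For reference, an RTNeural model file roughly takes the shape sketched below (going from memory of what its model-export utilities emit; the field names and the exact weight layout should be double-checked against RTNeural's documentation, and the weights entry here is just a placeholder):

```json
{
  "in_shape": [null, null, 1],
  "layers": [
    {
      "type": "conv1d",
      "activation": "tanh",
      "shape": [null, null, 8],
      "kernel_size": 3,
      "dilation": 2,
      "weights": ["<kernel and bias values>"]
    }
  ]
}
```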
I'd also like that. It would be great to unite all efforts, both in development and in training models, to compete with ToneX.
edit: not compete but have an alternative.
Yes - RTNeural has support for Conv1D layers, so I think it has all of the pieces necessary.
Absolutely. Right now effort and discussion are fragmented - it would be great to have a discussion somewhere between the main "players" - NAM, GuitarML, RTNeural, AidaDSP, MOD, etc…
Since you need glibc_2.27 to run this plugin, I’m guessing it will only run on 1.13.1, when it’s out, right? Excited for it to come out on the store!
glibc is unrelated. The plugin is going to be 1.13-specific (or above, of course) since the aidadsp folder was only added in that version, plus we want to use a newer compiler in order to get more optimization/speed out of the fancy templated stuff that RTNeural makes use of.
Cool to see this project is still alive and growing.
Are there any recent sound samples to be checked?
I've read a lot of stuff that, reading between the lines, promises a lot, but the proof is in the pudding!
Is there anything that I can do to help?
Since I like to explore options for high-gain thrash/heavy metal sounds but also the smoother "gentle breakup" sounds, I can lend my ear and perhaps record some samples.
Beware though, my tech knowledge doesn't go much further than copying a plugin manually via PowerShell.
> this project is still alive and growing
It sure is alive and kicking, and since the very first PoC (August 2022) a lot of things have been improved in order to give you the best experience, which translates into:
- we added a three-band EQ plus depth/presence controls. You can position the EQ pre or post, change all the frequencies, and switch the mid EQ to bandpass mode to isolate that band and understand which frequencies you want to boost or cut in the mix
- there is a switch to disable the neural network processing. It is useful because you can A/B to better understand what the amplifier is adding to your base guitar/bass sound
- you can use snapshots to switch seamlessly from one amp to another, with little if any latency in the switch and while preserving reverb tails/delays
- we've gone through a review of the training process and dataset, since the models we used in the past were not responding well to the guitar's volume pot. This problem is now solved thanks to @spunktsch
- of course the GUI has been redesigned, especially for the Mod Audio platform, so that the eyes are satisfied too and the perceived value of the plugin is guaranteed
- we merged a PR from @falkTX into our repo (that alone should be enough), so we now basically support every existing RNN model, and future ones too
- as Felipe said, supporting all the templated code and the engine's optimizations required a major toolchain upgrade, which of course took some time to complete
- thanks again to Felipe, our plugin is now more thread-safe and compliant with the LV2 worker extension, using a non-realtime thread for model loading
- we prepared a Docker container to run the whole training process locally, with CUDA GPU support and a Jupyter notebook. Before release we need to finish testing on various OSes, so be patient, but it's coming. This is a substantial improvement over other setups where you have to manually install dependencies following a guide
- we're working closely with Mod Audio on the release of other interesting plugins; that being said, this plugin will remain free and is our (Aida DSP) gift to the Mod Audio community!!!
> Is there anything that I can do to help?
I'm going to push the new version of the models over the weekend. We can test together and share feedback if you like. In the future we will set up a TODO list so the community can help speed up the development of this new feature (neural models).
to add to @madmaxwell: we are preparing a few things and plan to show off some bits and pieces before the release.
Mainly graphics, demos and video. So if you @LievenDV or anyone else is interested in helping out, just DM me.
But be sure there is a lot of cool stuff coming for the heavier guitars. 5150 blockletter, Mezzabarba, Driftwood Amps etc.
I'm interested in helping out with both reviewing demo material and reviewing actual functionality.
I believe my marketeer/guitarist/singer combo can be somewhat helpful.
PM me with things you'd like to share: how-tos, media, testing scenarios…
I could help with anything related to graphics, demos and video. I am a bass player and I mainly use the Neural DSP Darkglass. I hope to receive my Dwarf soon (I paid the $150 voucher earlier this month).
I tried AIDA-X: it works, but I can't understand how to profile my amps or load other models. I saw that they are .json files, and I can't find a "converter" from .nam to .json.
I also tried to understand something from the discussion in this topic, but I really couldn't.
Is there an easy guide to learn how it works?
we are preparing some documentation; it will go live very soon!
Release that modelling Docker container asap. I'm absolutely hyped and can't wait to start. Also, the released Helix and Moon models sound very nice!