New neural LV2 plugin from Aida DSP, based extensively on the existing NeuralPi, reduced to the bone

Hi @falkTX, can you share your mpb mk file for this, or did you build it on device?

The .nam models contain the network structure, which is WaveNet, and the weights. We are based on RTNeural, which works in a very similar way, and the layers used in WaveNet are present in the RTNeural implementation. So my question is: wouldn’t it be possible to port the .nam files into JSON files that can be parsed with RTNeural accordingly? With very little work our plugin should be able to load them…
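
To make the question concrete, a .nam file can be opened like any other JSON document and its declared architecture inspected. A minimal sketch below uses an inline, hypothetical payload; the exact key names (`architecture`, `config`, `weights`) are an assumption based on this discussion, not a verified spec, and real files carry full WaveNet configs plus a long flat weight list.

```python
import json

# Hypothetical minimal .nam payload; key names are assumptions
# for illustration, not the verified .nam schema.
nam_text = """
{
  "version": "0.5.0",
  "architecture": "WaveNet",
  "config": {"layers": [{"channels": 16, "kernel_size": 3}]},
  "weights": [0.1, -0.2, 0.3]
}
"""

model = json.loads(nam_text)
# If the architecture really is declared here, filtering for
# WaveNet models before attempting a conversion is trivial.
is_wavenet = model.get("architecture") == "WaveNet"
n_weights = len(model.get("weights", []))
```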

There is still the problem of CPU consumption. WaveNet is heavy; what I’ve seen in the available literature is that similar, if not equal, performance is achievable with RNNs, but at lower CPU cost, which is really the point when deploying to embedded devices. We currently use RNNs in our models. The ESR and the final sound of the amps are really good.
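
For readers unfamiliar with the metric mentioned above: ESR (error-to-signal ratio) is commonly defined in the neural amp-modelling literature as the energy of the prediction error divided by the energy of the target signal. A small self-contained sketch:

```python
# ESR = sum((target - prediction)^2) / sum(target^2)
# Lower is better; 0.0 means a perfect match.
def esr(target, prediction):
    num = sum((t - p) ** 2 for t, p in zip(target, prediction))
    den = sum(t ** 2 for t in target)
    return num / den

perfect = esr([1.0, -0.5, 0.25], [1.0, -0.5, 0.25])  # 0.0
```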

That would be my question too… looking at the files, they are quite similar, and the ideas behind them seem to be the same. But if the “weights” refer to different things, this can’t work in the end without a lot of effort…

I just did a simple:

git clone https://github.com/mikeoliphant/neural-amp-modeler-lv2
cd neural-amp-modeler-lv2/build
source /path/to/mod-plugin-builder/local.env moddwarf
cmake .. -DCMAKE_BUILD_TYPE=Release
make

Performance is really subpar though; I would be quite interested in trying to load the NAM models inside the aidadsp plugin instead.

It’d be cool if you can work it out; understandably, though, if it’s a lot of work / not possible, then so be it.
Perhaps reach out to Steve on his github to ask about .nam files etc.

  • github.com/sdatkinson/neural-amp-modeler

To do that, the .nam file (which indeed contains valid JSON) should be processed to create a file that matches RTNeural’s description of a network formed by multiple conv1d layers. I did a similar job in the past by processing the output of an existing torch-based training script to match what RTNeural expects. Just so you know, I’m in the process of renaming that script to toRTNeural.py for better clarity.
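
The conversion described above could be sketched as a key-mapping pass: walk the layer configs in the .nam file and emit an RTNeural-style layer list. Both the .nam keys and the RTNeural layer schema used here are assumptions for illustration (the real toRTNeural.py script and the actual formats should be consulted), and the weight-slicing step is left as a stub.

```python
# Hypothetical .nam -> RTNeural translation sketch.
# Field names on both sides are assumptions, not verified schemas.
def nam_to_rtneural(nam: dict) -> dict:
    layers = []
    for layer_cfg in nam["config"]["layers"]:
        layers.append({
            "type": "conv1d",
            "activation": "tanh",
            "channels": layer_cfg["channels"],
            "kernel_size": layer_cfg["kernel_size"],
            "weights": [],  # slice the flat .nam weight list here
        })
    return {"in_shape": [None, 1], "layers": layers}

nam = {"config": {"layers": [{"channels": 16, "kernel_size": 3}]}}
rtneural = nam_to_rtneural(nam)
```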

1 Like

A lot depends on the architecture of the .nam model in question. I’ve been able to get “feather” models (the simplest) running on Pi 4, but not the beefier models.

Both RTNeural and the NAM code are using Eigen to do the heavy lifting. It is possible that RTNeural is more highly optimized, but I suspect that most of the performance difference is due to the more expensive network architecture of the NAM models.

I agree; like I wrote earlier, we are using RNNs since a paper investigated their use in this application as a substitute for WaveNet.

What I liked about RTNeural on first approach was that it acts like a proxy, where the very same layer or processing block of a neural network is available in various backends like Eigen and XSIMD. This structure will hopefully allow us to leverage whatever comes next, like GPUs or neural accelerators. Currently that hardware support isn’t there, but I see where RTNeural is going and I like the approach.

1 Like

I agree. It would be nice to have a single, optimized codebase for playback of audio network models, and RTNeural seems like the closest thing at the moment. I’d also like to see a standard model file format that can handle both LSTM and WaveNet models.

1 Like

Yes, definitely. My first analysis is that RTNeural could eventually support WaveNet, since the input file for RTNeural is rather generic: you basically express the structure of each layer. It probably needs some testing, but in theory this is the way it should be done.
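
The point about expressing each layer generically can be illustrated with a WaveNet-like stack of dilated conv1d layers written as a plain layer list. The field names here are illustrative assumptions, not RTNeural’s exact schema; the receptive-field calculation also hints at why WaveNet models are heavy compared to RNNs.

```python
# A WaveNet-like stack expressed as a generic per-layer description.
# Field names are assumptions for illustration only.
def wavenet_like_stack(channels=16, n_layers=4):
    return [
        {"type": "conv1d", "channels": channels,
         "kernel_size": 3, "dilation": 2 ** i}
        for i in range(n_layers)
    ]

stack = wavenet_like_stack()
# Each kernel-3 dilated conv adds (kernel_size - 1) * dilation
# samples of context, so the receptive field grows quickly.
receptive_field = 1 + sum(2 * layer["dilation"] for layer in stack)
```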

I’d also like that. It would be great to unite all efforts, both in development and in model training, to compete with ToneX.

edit: not compete but have an alternative.

1 Like

Yes - RTNeural has support for Conv1D layers, so I think it has all of the pieces necessary.

1 Like

Absolutely. Right now effort and discussion is fragmented - it would be great to have a discussion somewhere between the main “players” - NAM, GuitarML, RTNeural, AidaDSP, MOD, etc…

5 Likes

Since you need glibc_2.27 to run this plugin, I’m guessing it will only run on 1.13.1, when it’s out, right? Excited for it to come out on the store!

glibc is unrelated. The plugin is going to be 1.13-specific (or above, of course), because the aidadsp folder was only added in that version, plus we want to use a newer compiler in order to get more optimization/speed on the fancy templated stuff that RTNeural makes use of.

3 Likes

Cool to see this project is still alive and growing.

Are there any recent sound samples to check out?
I’ve read a lot of stuff that promises a lot reading between the lines, but the proof is in the pudding! :smiley:

Is there anything I can do to help?
Since I like to explore options for high-gain thrash/heavy-metal sounds but also the smoother “gentle breakup” sounds, I can lend my ear and perhaps record some samples.
Beware though, my tech knowledge doesn’t go much further than copying a plugin manually via PowerShell. :smiley:

this project is still alive and growing

It sure is alive and kicking, and since the very first PoC (August 2022) a lot of things have been improved to provide you with the best experience, which translates into:

  • we added a three-band EQ plus depth/presence controls. You can position the EQ pre or post, change all the frequencies, and switch the mid EQ to bandpass in order to isolate it from the rest and understand which frequencies you want to boost or cut in the mix
  • there is a switch to disable the neural network processing. It is useful because you can A/B to better understand what the amplifier is “adding” to your base guitar/bass sound
  • you can use snapshots to switch seamlessly from one amp to another, with little or no latency in the switch and while preserving reverb tails/delays
  • we’ve gone through a review of the training process and dataset, since the models we used in the past were not responding well to the guitar’s volume pot. This problem is now solved thanks to @spunktsch
  • of course the GUI has been redesigned, especially for the MOD Audio platform, so that the eyes are satisfied too and the perceived value of the plugin is guaranteed
  • we merged a PR from @falkTX into our repo (that alone should be enough), so now we basically support every RNN model variant out there, current and future
  • as Felipe said, supporting all the templated code and the engine optimizations required a major toolchain upgrade, which of course took some time to complete
  • thanks again to Felipe, our plugin is now more thread-safe, loading models in the LV2 worker’s non-realtime thread as the spec intends
  • we prepared a Docker container that performs the whole training process with CUDA GPU support and a Jupyter notebook running locally. Before release we need to finish testing on various OSes, so be patient, but it’s coming. This is a substantial improvement over other setups where you need to manually install dependencies following a guide
  • we’re working closely with MOD Audio on the release of other interesting plugins; that being said, this plugin will remain free and is our (Aida DSP) gift to the MOD Audio community!!!

Is there anything I can do to help?

I’m going to push the new version of the models over the weekend. We will test together and share feedback if you like. In the future we will set up a TODO list for the community, to help speed up the development of this new feature (neural models).

12 Likes

To add to @madmaxwell: we are preparing a few things and plan to show off some bits and pieces before the release,
mainly graphics, demos and video. So if you, @LievenDV, or anyone else is interested in helping out, just DM me.

But be sure there is a lot of cool stuff coming for the heavier guitars. 5150 blockletter, Mezzabarba, Driftwood Amps etc.

5 Likes

@madmaxwell
@spunktsch

I’m interested in helping out, both with reviewing demo material and with reviewing actual functionality.

I believe my marketeer/guitarist/singer combo can be somewhat helpful.

PM me with the things you’d like to share: how-tos, media, testing scenarios…

4 Likes