New neural LV2 plugin from Aida DSP, based extensively on the existing NeuralPi but reduced to the bone

Ah, that’s interesting. Have you tested the most recent update, v0.7.0?

We can’t use the official plugin, as it doesn’t support Linux or LV2.
Instead I tried GitHub - mikeoliphant/neural-amp-modeler-lv2: Neural Amp Modeler LV2 plugin implementation, which in theory works the same way, just in a simpler package.

You can fork Steve’s version or use Mike’s fork of it: iPlug2 has an LV2 branch, and it is MIT licensed. I’m certain Keith is very supportive of Steve’s implementation too, so it’d be great to see it developed within the MOD ecosystem. That being said, I’m happy either way with whatever direction this all goes in; it’s a great time to enjoy playing guitar and having more options for high-quality tones.
If you do need anything, I’m in touch with Steve and Keith as the Facebook group moderator, and I have a general interest in trying to help or support in any way I can.
All the best =)


Keith just posted some new models in the NAM Facebook group, by the way - you might enjoy trying them?

plenty of models here to try too:
github - pelennor2170/NAM_models

I don’t mean to push you out of your way or anything; this is purely in response to you saying you couldn’t get NAM models loaded or working. The latest version now only works with the single-file format (.nam), and I believe almost all of the pelennor repo is up to date.

All the best!
Dom


Hi @falkTX, can you share your mod-plugin-builder .mk file for this, or did you build it on the device?

.nam models contain the network structure (WaveNet) plus the weights. We are based on RTNeural, which works in a very similar way, and the layers used in WaveNet are also present in the RTNeural implementation. So my question is: wouldn’t it be possible to convert the .nam files into JSON files that RTNeural can parse? With very little work our plugin should then be able to load them…
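For reference, a .nam file is plain JSON, so inspecting its structure is straightforward. The field names below (“architecture”, “config”, “weights”) are what I’d expect from sdatkinson’s exporter, but should be verified against a real exported file; this is just a sketch with a made-up minimal payload:

```python
import json

# Made-up minimal payload illustrating the kind of JSON a .nam file holds;
# verify the field names against a real file exported by neural-amp-modeler.
nam_text = json.dumps({
    "version": "0.5.0",
    "architecture": "WaveNet",
    "config": {"layers": 2, "channels": 16},
    "weights": [0.1, -0.2, 0.3],
})

def inspect_nam(text):
    """Return (architecture, number of weights) from a .nam JSON string."""
    data = json.loads(text)
    return data.get("architecture"), len(data.get("weights", []))

print(inspect_nam(nam_text))  # ('WaveNet', 3)
```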

There is still the problem of CPU consumption. WaveNet is heavy; what I’ve seen in the available literature is that similar, if not equal, performance is achievable using RNNs, but with less CPU consumption, which is really the point when deploying to embedded devices. We currently use RNNs in our models. The ESR and final sound of the amps are really good.
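To put rough numbers on that trade-off, here is a back-of-the-envelope per-sample multiply-accumulate count for the two architectures. The layer sizes are illustrative only, not taken from any specific NAM or AIDA-X model, and skip/mixing layers are ignored:

```python
def lstm_macs(hidden, inputs=1):
    # One LSTM cell: 4 gates, each a matmul over the concatenated
    # [input, hidden] vector; plus a scalar output head (hidden MACs).
    return 4 * hidden * (inputs + hidden) + hidden

def wavenet_macs(layers, channels, kernel=3):
    # Each dilated conv layer costs kernel * in_channels * out_channels
    # MACs per output sample (1x1 mixing and skip paths ignored).
    return layers * kernel * channels * channels

print(lstm_macs(40))         # 6600 MACs/sample for an LSTM with 40 hidden units
print(wavenet_macs(18, 16))  # 13824 MACs/sample for 18 conv layers, 16 channels
```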

That would be my question too… Looking at the files, they are quite similar, and the ideas behind them seem to be the same. But if the “weights” refer to different things, this can’t work in the end without a lot of effort…

I just did a simple:

git clone https://github.com/mikeoliphant/neural-amp-modeler-lv2
cd neural-amp-modeler-lv2/build
source /path/to/mod-plugin-builder/local.env moddwarf
cmake .. -DCMAKE_BUILD_TYPE=Release
make

Performance is really subpar though; I would be quite interested in trying to load the NAM models inside the aidadsp plugin instead.

It’d be cool if you can work it out; however, understandably, if it’s a lot of work or not possible then so be it.
Perhaps reach out to Steve on his GitHub to ask about .nam files etc.

  • github /sdatkinson/neural-amp-modeler

To do that, the .nam file (which indeed contains a valid JSON-formatted payload) should be processed to produce a file that matches RTNeural’s description of a network formed by multiple conv1d layers. I did a similar job in the past by processing the output of an existing torch-based training script to match what RTNeural expects. Just so you know, I’m in the process of renaming this script to toRTNeural.py for better clarity.
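A skeleton of that kind of converter might look like the following. The RTNeural-side field names (“in_shape”, “layers”, per-layer “type”/“shape”/“weights”) and the .nam-side layout are assumptions to be checked against both projects; the real work, slicing the flat weight list into per-layer tensors according to the WaveNet config, is only stubbed out here:

```python
import json

def nam_to_rtneural(nam_data):
    """Hypothetical .nam -> RTNeural JSON converter skeleton (field names assumed)."""
    weights = nam_data["weights"]
    # Stub: a real converter must split `weights` into per-layer tensors
    # according to the WaveNet layout described in nam_data["config"].
    return {
        "in_shape": [None, 1],
        "layers": [
            {"type": "conv1d", "shape": [None, 1], "weights": weights},
        ],
    }

nam_data = {"architecture": "WaveNet", "config": {}, "weights": [0.1, -0.2]}
print(json.dumps(nam_to_rtneural(nam_data)))
```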


A lot depends on the architecture of the .nam model in question. I’ve been able to get “feather” models (the simplest) running on a Pi 4, but not the beefier models.

Both RTNeural and the NAM code are using Eigen to do the heavy lifting. It is possible that RTNeural is more highly optimized, but I suspect that most of the performance difference is due to the more expensive network architecture of the NAM models.

I agree; like I wrote earlier, we are using RNNs since a paper investigated their use in this application as a substitute for WaveNet.

What I liked about RTNeural on first approach was that it acts like a proxy, where the very same layer or processing block of a neural network is available in various backends like Eigen and XSIMD. That structure will let us leverage whatever comes next, hopefully GPUs or neural accelerators. Currently that hardware support isn’t there, but I see where RTNeural is going and I like the approach.


I agree. It would be nice to have a single, optimized codebase for playback of audio network models, and RTNeural seems like the closest thing at the moment. I’d also like to see a standard model file format that can handle both LSTM and WaveNet models.


Yes, definitely. My first analysis is that RTNeural could eventually support WaveNet, since the input file for RTNeural is rather generic: you basically express the structure of each layer. It probably needs some testing, but in theory this is the way it should be done.

I’d also like that. Would be great to unite all efforts both in devs and training models to compete with ToneX.

edit: not compete but have an alternative.


Yes - RTNeural has support for Conv1D layers, so I think it has all of the pieces necessary.


Absolutely. Right now effort and discussion are fragmented; it would be great to have a discussion somewhere between the main “players”: NAM, GuitarML, RTNeural, AidaDSP, MOD, etc…


Since you need glibc 2.27 to run this plugin, I’m guessing it will only run on 1.13.1 once that’s out, right? Excited for it to arrive on the store!