New neural lv2 plugin from Aida DSP, based extensively on the existing NeuralPi but stripped down to the bone

Hello, a few months ago I got myself into this crazy neural stuff for audio DSP. Since I was not fully satisfied with the NeuralPi plugin (build issues due to JUCE, built-in effects that are not needed on the MOD platform), I've pushed this

Since I have Neural DSP Archetype Plini and I think it sounds amazing, I've trained a model on the Electric Sunrise: Riff preset without parametrization (you can't change the gain and volume of the model). This is a very high-gain preset for killer riffs.

The result is really promising, although CPU consumption for now is really high. By switching jackd to a buffer size of 1024 I was able to hear it, and it sounds amazing, just like the original.

Of course this is quite subjective, so what I like about the neural approach is that it has ESR measurements embedded in the training workflow, so we can let the numbers talk.
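For readers unfamiliar with the metric: ESR (error-to-signal ratio) compares the energy of the error between the network output and the target against the energy of the target itself. A minimal sketch in Python (this is the standard definition; it is not copied from the training repo's code):

```python
import numpy as np

def esr(target: np.ndarray, output: np.ndarray) -> float:
    """Error-to-signal ratio: energy of the error divided by the
    energy of the target. 0.0 is a perfect match; lower is better."""
    return float(np.sum((target - output) ** 2) / np.sum(target ** 2))

# Toy check on a one-second sine "target"
t = np.sin(np.linspace(0, 2 * np.pi, 48000))
print(esr(t, t))        # 0.0 -> identical signals
print(esr(t, 0.5 * t))  # 0.25 -> output at half amplitude
```

In the training workflow this number is what lets different runs and presets be compared objectively.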

The images below show the response of the trained network to an input it has never seen (the test set).

Of course we have audio too; see the online archive here, go to the AI dir and pick the latest training. You can do an audio A/B comparison between:

  • aidadsp-1-target.wav in the Data/test dir, which is the original aidadsp-1-input.wav (dry) material rendered through the Neural DSP plugin (I've used the Reaper DAW on Windows 10)
  • test_out_final.wav in the Results dir, which is the output from the trained neural network

The dataset used is an original one I've put together, since I was not fully satisfied with the stock one proposed by the original developers.

I've also forked the Automated-GuitarAmpModelling repo to inject the changes needed for using this dataset.

I still have a Colab subscription, so if you're interested I can train other models; you just need to properly record the thing you want to model. I can share the details of the procedure in a separate thread.

For the lv2 plugin code all the credit goes to the original authors @keyth72 @chowdsp and others. I've just used the LSTM class, which let me come up with a simple lv2 plugin: 4-5 calls to this class and it's done. Of course, like I said before, some development work is needed to shrink CPU usage, not to mention integration with the mod-ui file manager and selection of neural model JSON files, which I would like to add in the future.
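The plugin itself is C++ on top of the authors' LSTM class, but the handful of calls it needs can be illustrated with a numpy sketch: construct/load a model, reset its state, then run one forward call per audio sample inside the run() callback. Everything below (class name, weights) is a hypothetical placeholder for illustration, not the plugin's actual API:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Single LSTM layer + dense output, one audio sample in/out.
    Weights here are random placeholders, not a trained model."""
    def __init__(self, hidden_size=16, seed=0):
        rng = np.random.default_rng(seed)
        h = self.h_size = hidden_size
        self.W = rng.standard_normal(4 * h) * 0.1        # input weights (mono input)
        self.U = rng.standard_normal((4 * h, h)) * 0.1   # recurrent weights
        self.b = np.zeros(4 * h)                         # gate biases
        self.Wd = rng.standard_normal(h) * 0.1           # dense output layer
        self.reset()

    def reset(self):
        """Clear the recurrent state (e.g. on plugin activation)."""
        self.h = np.zeros(self.h_size)
        self.c = np.zeros(self.h_size)

    def forward(self, x):
        """One sample in, one sample out: the standard LSTM cell update."""
        z = self.W * x + self.U @ self.h + self.b
        i, f, g, o = np.split(z, 4)                      # input/forget/cell/output gates
        self.c = sigmoid(f) * self.c + sigmoid(i) * np.tanh(g)
        self.h = sigmoid(o) * np.tanh(self.c)
        return float(self.Wd @ self.h)

# The plugin's per-block processing reduces to this loop:
model = TinyLSTM()
block = np.random.default_rng(1).standard_normal(128) * 0.1  # one jack period
out = np.array([model.forward(s) for s in block])
```

The per-sample recurrence is also why the CPU cost is what it is: the work cannot be batched across the buffer the way a feed-forward network could be.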

OT: mod-sdk is still a little nightmare. I've pulled the official Docker Hub image and also built the latest source in an Ubuntu 18.04 container, and I always get a thumbnail/screenshot deploy error, so @falkTX if everything else is ok I would ask for those two images, thanks!


Amazing !!!


Really nice stuff, although I understood only half of it. Do the audio samples of the results have cab emulation on?


Please do. If I want to be able to control gain and volume, how much more is there to do?


No, they don't, and they shouldn't, at least if one wants to capture only the amp's tone. Usually the cab is skipped since you may want to apply your own cab sim in cascade. Although, given the technology, why put limitations on it… we could train models both with and without the default cab used in the preset…

For the parametric models: the work needs to be done only in the forked Automated-GuitarAmpModelling, which, to simplify, is the (training) script that produces lstm-model.json. @keyth72 has already implemented parametrization in his own fork of Automated-GuitarAmpModelling; I need to cherry-pick and adjust his commits on top of my changes. I think it should be feasible…


Ok, I thought so, but good to know. If you need help with the GUI stuff just hit me up.


For the graphics I would need to finish the current package with a thumbnail and screenshot, but I'm not able to export them from mod-sdk using the official Docker container… thanks!

[Update] Trying to solve the high CPU usage issue with the original developers: I've bumped to the latest RTNeural and am following their suggestions in the next branch. Will do a recap after the tests.

[Training] While I'm preparing the dedicated thread, a few notes. On Colab Pro with an Nvidia Tesla and the current training script, it takes 2h to finish the training (with no parametrization), plus the time to record the tracks. Currently I use two tracks, bass and guitar, and to record them it is sufficient to click "render" with the given plugin settings. It is better to render than to record, since no latency is involved; it is super important that the input and target tracks are time aligned! Of course it's a whole different story if analog outboard gear needs to be recorded. In that case we rely on the DAW's latency compensation, afaik.
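Since the alignment of input and target tracks is so critical, a quick sanity check before training can save a wasted run: estimate the lag between the two tracks with a cross-correlation. A minimal sketch, assuming both tracks are already loaded as mono float arrays of equal length (the function name is mine, not from the training repo):

```python
import numpy as np

def estimate_lag(input_sig: np.ndarray, target_sig: np.ndarray) -> int:
    """Estimate how many samples target_sig is delayed relative to
    input_sig (positive = target lags). Uses full cross-correlation."""
    corr = np.correlate(target_sig, input_sig, mode="full")
    return int(np.argmax(corr) - (len(input_sig) - 1))

# Sanity check with a known 37-sample delay
rng = np.random.default_rng(1)
dry = rng.standard_normal(4096)
wet = np.concatenate([np.zeros(37), dry])[:4096]  # delayed copy of dry
print(estimate_lag(dry, wet))  # 37 -> tracks are NOT aligned
print(estimate_lag(dry, dry))  # 0  -> aligned
```

If the estimated lag is non-zero, shift one track by that many samples before handing the pair to the training script.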


phantomjs might be missing from the container.

If you have a Duo X or Dwarf, you can install the developer images, which contain mod-sdk with a working screenshot generator.

Or ignore that for now, we can help with the screenshot later on.


Hey @madmaxwell, I tried to build this on a Pi 3, but it just errors out. Are there any specific things I need to do when building?


@micahvdm can you check out

I'm using the Yocto 64-bit OS, dunfell branch, so I have GCC 9.3.0 and I'm compiling for aarch64. I have an alternate build system with a Yocto SDK I've generated that is potentially more similar to an RPi3:

1. Go to the SDK download dir and run the installer with ./ (you can provide the install dir on the command line)
2. source environment-setup-aarch64-poky-linux
3. git clone && cd aidadsp-lv2
4. mkdir build && cd build
5. cmake .. (the configure step; the sourced environment script selects the cross toolchain)
6. cmake --build .
7. make install DESTDIR="absolute path of a local dir on your PC" (this is just to deploy this dir to the target device)

Problems: in my setup it does compile and run, it's just that CPU consumption is high and forces me to use a 1024 buffer size in jackd. If I compile with the CXXFLAGS and LDFLAGS that are commented out in the recipe, it still compiles but becomes a nice white noise generator…


Below is my docker mod-sdk recreation, step by step (where the thumbnail/screenshot deploy doesn't work, just like the official Docker Hub mod-sdk). phantomjs seems to be installed properly.

docker pull ubuntu:18.04
docker run -it -d --network host -v ~/Stuff/lv2:/lv2 --name "mod-sdk" ubuntu:18.04 bash
docker exec -it mod-sdk bash
apt-get update
apt-get install build-essential liblilv-dev phantomjs python3-pil python3-pystache python3-tornado python3-setuptools python3-pyinotify
apt-get install git
cd /home
git clone
cd mod-sdk
git checkout e0a9cf283982d16ced2c01fd2cbd0de77c1b8640
python3 build
export LV2_PATH=/lv2

You may try this patch for


Hi everyone,

would you mind if I move this thread to the developer section of the forum? I think it would fit better there.


I tried the whole procedure again with your patch, but it didn't solve it. For this I think it's better to continue in another thread. We could gather python3-tornado 4.3 somewhere and use it in the container.


UPDATE: after some big support from @chowdsp @keyth72 :heart_eyes: I've switched to models with hidden_size=16 and measured a CPU usage drop of 50%. The hidden_size is specified in the config file of the training; in NeuralPi hidden_size = 20, but in the new DarkStar plugin it is 16, for reference. In the next days I will focus on cxxflags and ldflags to further improve performance, but hidden_size seems to play a major role. I will also study the drawbacks in quality when reducing this parameter.
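A back-of-envelope count shows why hidden_size matters so much: the recurrent weight matrix is hidden_size × hidden_size, so the per-sample cost grows roughly quadratically. A sketch (my own rough MAC count for a standard single-layer LSTM plus dense output, before any SIMD or compiler-flag effects):

```python
def lstm_macs_per_sample(hidden_size: int, input_size: int = 1) -> int:
    """Rough multiply-accumulate count for one LSTM step: four gates,
    each doing h x (h + input) matrix-vector work, plus the dense
    hidden -> output layer."""
    gates = 4 * hidden_size * (hidden_size + input_size)
    dense = hidden_size
    return gates + dense

for h in (20, 16):
    print(h, lstm_macs_per_sample(h))  # 20 -> 1700, 16 -> 1104
```

So hidden_size=16 needs about 65% of the raw arithmetic of hidden_size=20, which is in the same ballpark as the measured drop.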


Guys… it's done! I'm running a decent archetype-you-know preset with a 128 buffer…

Since I used cmake, everything should port to the Mod Duo build system effortlessly. Then several things can happen, depending on how interested you are. We would need to support these models in the file explorer, for example. In addition to that, I will now dedicate myself to creating new models and coming up with a decent workflow for recording trainable datasets.


Shut up and take my money!




This is huge! Neural DSP Archetype plugins would be my go-to VSTs if this platform supported them. So this news is amazing!


Please be aware that I'm not running the original Neural DSP VST! With this plugin we can load models trained on existing hardware. I trained on a Neural DSP plugin since it's way faster than actually recording a real amp, and honestly the result surprised me.