MOD presents the AIDA-X and dives into Neural Modelling

Dear users

I bring very exciting news!

We are proud to present AIDA-X, a neural model player developed by AIDA DSP, capable of loading realistic neural models of amps (and other gear) for live playing on the Dwarf, Duo X and Duo.

A big thank you to @madmaxwell and @spunktsch for the effort and love put into AIDA DSP.

Some of you might be familiar with this technology, as it has been popularised by Neural DSP, with their Quad Cortex, as well as IK Multimedia’s ToneX.

AIDA-X is currently in beta and we want to invite you to test it, so we can speed up the promotion to stable. To enable beta plugins, check this wiki article.

Using the plugin is straightforward: you just need to upload some model files to your MOD device, in the Aida DSP Models section of the file manager.

Neural modelling is very exciting in the sense that anyone here can train models and clone their own gear, and we have created a simple workflow to allow users to accomplish this easily. Check it out: AIDA-X Modeling Guide - MOD Audio website

AIDA-X will also be released for Mac and Windows, both as VST plugins and standalone, so that people can use it on their desktops and notebooks as well. Download them here. AIDA-X neural model files can be loaded in any version of the plugin.

We have prepared a page dedicated to the theme. Check it out: Neural modelling - MOD Audio website

To accommodate this exciting step, a new “Neural Modelling” category was created in the forum, together with two subcategories:

  • Neural Models: for posting model files and discussing their sound
  • Model Training: for discussing and improving the audio capturing and training methods

I invite you all to download AIDA-X and share your opinions here. We will be constantly posting new models in this forum section, so do check them out and share your experience.

MOD is focusing its efforts on this topic for the whole month of April, and we want to offer users the best possible experience. We are preparing a lot of cool things for the coming weeks and I promise we will have a remarkable month!

Happy modelling!!



Ah big news!

Ah yes, checking this out tomorrow!


Looks like this will be awesome for a lot of users!

I’m excited to learn more about it



New pages, new info, a training procedure, …
You guys have been doing some hard work!

So now we have formats that are so portable that they come in the form of JSON files (models of whatever you want) and IRs (impulse responses, typically of cab sims or reverbs).

  • I can run that plugin on my Dwarf
  • Meanwhile I run the same tech in my DAW
  • I transfer JSON and IR files from wherever I want
  • Our lead guitarist can use it in his DAW to create our mixes, while I get the identical sound via my Dwarf.

oh boy…
First of all I’m going to be exploring some models that were already made.

Can’t wait to try the modelling itself as well. (Fender Blues Deluxe, Udo Rösner Da Capo acoustic amp or even my little Joyo Zombie lunchbox amp…)
Even though the training takes some attention, the sound part is easy: record a dry and a wet signal.
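The dry and wet recordings only make a usable training pair if they line up sample-for-sample. A minimal sanity check for that, using only the standard library (`dry.wav` and `wet.wav` are placeholder filenames, not anything from the official guide):

```python
# Sketch: verify that a dry (DI) recording and its re-amped wet recording
# have matching sample rates and lengths before handing them to training.
# "dry.wav" / "wet.wav" are placeholder filenames.
import wave

def check_pair(dry_path, wet_path):
    """Return (sample_rate, n_frames) if the two files line up."""
    with wave.open(dry_path, "rb") as d, wave.open(wet_path, "rb") as w:
        assert d.getframerate() == w.getframerate(), "sample rates differ"
        assert d.getnframes() == w.getnframes(), "lengths differ"
        return d.getframerate(), d.getnframes()
```

If the two files drift apart even by a few samples, the model learns a smeared response, so a check like this can save a wasted training run.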


Oh, yes! Tomorrow is the great day!
Yesterday I installed it and tried it out. Many people in Italian modeller groups on Facebook and on Discord channels asked me about the Dwarf because of this new plugin. I already showed it on a private live stream and they were excited!
I hope you know that if this works you will have to produce many more Dwarves than you do now :rofl:

Anyway, I tried to contact AIDA DSP on Facebook; if they are Italian, they could help me find some info so I can present the plugin in a proper way.


@madmaxwell is the person you should speak to :slight_smile:


@gianfranco Create a YouTube video! This is a hot topic on YouTube; the audience should know that there is now even a piece of hardware available for the Neural DSP stuff. This is a kind of missing link. And call the video “Neural DSP in a pedal” or something like that. Maybe it goes viral…


I’m on it - as soon as the beta is over, promotional stuff drops all over the place.


One minor thing though: it’s not the stuff from Neural DSP (the company behind the Quad Cortex) - we are just using a similar technology.


Ahh… yes. I mixed it up, sorry. I meant NAM, Neural Amp Modeler. That is the hot topic on YouTube. Of course not Neural DSP.

“Neural Amp Modeler in a pedal”.
Also not quite the truth, but it is “good” marketing.


Tried out some JSON models, and found some tones I could not achieve with the regular amps approach (maybe I am just bad at tone creation though). Cool.

Chain of AIDA-X → IR CabSim → IR Reverb seems quite futuristic to me.

Are there any known hardware requirements for training models locally? Would be nice to try once I finally convince damned PyTorch that I have a CUDA GPU (dealing with cuda/tensorflow/torch dependencies is always hell).
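For anyone hitting the same wall, a minimal diagnostic for the PyTorch/CUDA question might look like this (stock PyTorch API calls; this is a generic sketch, not part of the AIDA-X tooling):

```python
# Minimal diagnostic before fighting CUDA setups: is PyTorch importable at
# all, and does it actually see a CUDA device?
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("PyTorch is not installed in this environment")
else:
    import torch
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```

If `cuda.is_available()` prints `False`, the usual culprit is a CPU-only torch wheel or a CUDA toolkit version mismatch rather than the GPU itself.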


Other than the PyTorch CUDA stuff, no. I have a fairly old notebook with a 1050 Ti and most models train in 15-20 min.


I assume you are Mac or Linux user?
As far as I can see from the Colab notebook, I’ll have to adapt it for Windows due to Linux-style file paths. But if somebody has already done it for Windows, I’ll be glad to steal the modified version.
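One common way to make such path code portable is `pathlib`, which builds paths with the right separator on every OS. A small sketch (`Configs` and `model.json` are placeholder names, not the notebook's actual layout):

```python
# pathlib composes paths portably, so hard-coded "linux-style" separators in
# a notebook can usually be replaced like this.
# "Configs" / "model.json" are placeholder names for illustration.
from pathlib import Path

base = Path("Configs")            # relative path, no drive-letter issues
model_file = base / "model.json"  # "/" joins components on any OS
print(model_file.as_posix())      # always forward slashes: Configs/model.json
```

Replacing string concatenation with `Path` objects is often enough to run a Linux-oriented notebook on Windows unchanged.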

I’m also on Windows and have it working there, either directly via WSL or Docker.

@spunktsch Wow! What a good start to the video! Very cool!

What image do you use for Docker? I hope I’ll be able to borrow some Python package versions from there. Unfortunately the WSL and Docker stuff conflicts with the VirtualBox installation I use, and I’m stuck in dependency hell while trying to set up a Windows Anaconda environment to run the Colab notebook locally. At every step the packages are incompatible with each other and complain about missing methods and properties, and I’ve already had a lethal dose of stack traces and pip reinstalls. But maybe an exact list of Python packages with versions would help.
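For extracting such a pinned list from a working environment, the standard library can do it without pip gymnastics. A sketch (the package names here are examples, not AIDA-X's actual dependency list):

```python
# Record exact installed versions so a working environment can be reproduced
# elsewhere; importlib.metadata is stdlib in Python 3.8+.
# The package names below are examples, not a real dependency list.
from importlib import metadata

for pkg in ("numpy", "scipy", "torch"):
    try:
        print(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```

Running something like this inside the known-good environment (e.g. the Docker container) gives a `requirements.txt`-style list you can install verbatim on the broken machine.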

Well done @madmaxwell and @spunktsch! lots of hard work, but oh so worth it!


docker pull aidadsp/pytorch:latest, which is based on

FROM pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime


We met some amazing people along the way who helped us and with whom we are keen to collaborate further. Just naming a few: @keyth72, who provided the very first “core” with NeuralPi; check out his brand new NeuralSeed project! @chowdsp, who implemented and maintains our inference engine, RTNeural; without it, our plugin would have been a lot more complicated. Alec (et al.), who worked on the paper at the very root of our training algorithm, and who keep studying and pushing this new technology into audio/music. And finally MOD: @falkTX, Kais and the whole team. But it’s not over, it has just begun!