The Neural Amp Modeler - NAM - has arrived!

Hi everyone,

Once again, we have exciting news!

We are thrilled to announce the release of the Neural Amp Modeler (NAM), developed by Steven Atkinson, following the successful launch of the AIDA-X neural model player in March.

NAM was initially designed for desktop use, with a strong emphasis on model accuracy and user-friendly model training. It has quickly gained popularity within the guitarist community, as mentioned previously in the forum, and has led to the creation of a comprehensive model database called ToneHunt, which already hosts thousands of models.

In the beginning, the earlier versions of NAM and its models were too resource-intensive to run on embedded devices like ours. However, with continuous software evolution and optimization efforts, the NAM team has recently introduced a new training setting that produces lightweight models suitable for the MOD Dwarf and Duo X devices.

We have collaborated with @MikeOliphant to bring his LV2 implementation of NAM to our platform, and now everyone can download it from the Plugin Store. Thank you, Mike! :blush:

Please note that you must have MOD OS 1.13.2 or a higher version installed, as it includes the NAM section in the File Manager. This is where you should place your NAM model files.

Regarding the management of model files, we strongly recommend utilizing ToneHunt as the central repository for NAM models. It is crucial that we refrain from using our Forum to share NAM models. The AIDA team is already in contact with the ToneHunt developers, and we hope that soon it will also be possible to share and download AIDA-X models from there, rendering the Forum unnecessary as a model exchange tool.

Previously, NAM models were heavier and unable to run on our devices. However, the current standard workflow allows for models that can be run on our devices, although not optimally yet. These lighter models are referred to as “nano” weights.

Both projects are actively evolving, and developments can occur rapidly. For the time being, let’s adhere to the following guidelines:

When searching for a model on ToneHunt, look for the “nano-mdl” tag. These models are specifically designed to be lightweight and compatible with our devices. You can directly visit: #nano-mdl Models | ToneHunt

If you have previously trained an AIDA model and still have the captures, it is easy to retrain them for NAM and share them on ToneHunt. Kais has provided detailed instructions in this post: Training for NAM (Neural Amp Modeler)

To contribute effectively to the communities and facilitate meaningful discussions and comparisons among different models and weights, let’s provide NAM files, both LSTM and WaveNet, to ToneHunt users. This will not only support their amazing work but also offer valuable resources for users.

As our interaction with ToneHunt gains traction, they will undoubtedly work towards better supporting and filtering different model structures and weights.

That concludes the information dump for now! :blush:

I am extremely excited about this release and anticipate an influx of new users who will undoubtedly be enthralled by this neural technology.

Below, you’ll find an appendix with my understanding of the technical details about the two projects. It provides further insight into the reasons behind the guidelines mentioned above and offers a glimpse into what lies ahead. Brace yourself for an abundance of acronyms! :smile:

Best wishes and happy neural rocking!

Gianfranco, aka The MOD Father


When training a model, we essentially capture the behaviour of a physical device. These captures can be made using different numerical structures, each with its own advantages and disadvantages, much like using JPEG and PNG for images. The three structures commonly used are LSTM, GRU, and WaveNet. Generally, LSTM and GRU require less CPU than WaveNet but sacrifice accuracy to achieve that efficiency.

In addition to the model structure, we have the concept of model size, which refers to the numerical density within the structure. This is comparable to the quality setting of a JPEG or the bitrate of an MP3. The model size directly impacts CPU usage when using the model, regardless of the structure employed. Larger models require higher CPU resources. The model size is defined during training and can range from lighter to heavier settings. It is a parameter set during training, independent of whether LSTM, GRU, or WaveNet structures are used.
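Since a model is just stored data, you can peek at what structure and size a given file uses. As a rough illustration, here is a short Python sketch for inspecting a .nam file; NAM model files are plain JSON, but the exact top-level keys used here ("architecture", "config") are an assumption and may differ between NAM versions:

```python
import json

def describe_nam_model(path):
    """Print the structure and size info stored in a .nam model file.

    .nam files are plain JSON; the key names ("architecture", "config")
    are assumed here and may vary between NAM versions.
    """
    with open(path) as f:
        model = json.load(f)
    arch = model.get("architecture", "unknown")   # e.g. "WaveNet" or "LSTM"
    config = model.get("config", {})              # size/shape parameters
    print(f"architecture: {arch}")
    print(f"config keys:  {sorted(config)}")
    return arch, config
```

Opening a model in a text editor tells you the same thing; the point is that the structure and size are properties of the file, set at training time.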

As with other areas, discussions about the quality outcomes of these parameters can be endless. While using a powerful Intel or AMD CPU allows for maximum sizes to achieve the best possible quality, many of us are willing to make compromises in exchange for convenience and practicality. Similar to the popularity of JPEGs and MP3s, “good enough” quality does not require bloated and heavy files.

Furthermore, we have seen in practice that proper capturing and training matter far more for the resulting sound quality than the model format does; if the capture is bad, no WaveNet model can make it sound good.

If you go to the beta shop now and search for “AIDA”, you will see three plugins that will be released to stable very soon. They all use LSTM / GRU, and I have yet to hear better-sounding models than these jaw-dropping plugins, thanks to the great work by @madmaxwell and @spunktsch.

NAM’s primary mission is accuracy, and its standard approach is to use WaveNet structures during training. However, NAM can also train LSTM models, albeit with some additional steps instead of using the user-friendly training colab notebook. NAM does not support GRU. The NAM plugin can seamlessly load both LSTM and WaveNet models, as long as they were trained using NAM.

On the other hand, AIDA primarily uses LSTM as the standard structure when training with the simple online colab notebook, but it can also train GRU models when using the local training method. AIDA-X can load both LSTM and GRU models interchangeably. AIDA-X does not support WaveNet.
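The compatibility rules above can be condensed into a tiny Python sketch (the mapping simply mirrors the text; it is not any official API of either project):

```python
# Which model structures each player can load, per the notes above.
SUPPORTED_STRUCTURES = {
    "NAM":    {"WaveNet", "LSTM"},   # no GRU support
    "AIDA-X": {"LSTM", "GRU"},       # no WaveNet support
}

def can_load(player, structure):
    """Return True if the given player can load a model of this structure."""
    return structure in SUPPORTED_STRUCTURES.get(player, set())
```

For example, `can_load("AIDA-X", "WaveNet")` is `False`, while LSTM is the one structure both players share.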

Currently, the lightest NAM models available are the nano-weighted WaveNet models. These models consume 66% of the CPU on a Dwarf device. However, by utilising local training, you can produce an LSTM model that sounds very similar to the nano WaveNet while consuming only 34% of the CPU on the Dwarf.
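As a back-of-envelope illustration of those figures (the percentages are the ones quoted above, not independent measurements), the practical difference is how many instances fit on one device:

```python
# Approximate CPU cost per instance on a Dwarf, using the figures above.
CPU_PER_INSTANCE = {"nano WaveNet": 66, "LSTM (local training)": 34}

def max_instances(model, budget=100):
    """How many instances of a model fit in a CPU budget (in percent)."""
    return budget // CPU_PER_INSTANCE[model]
```

So a nano WaveNet leaves little headroom for anything else, while the equivalent LSTM leaves room for a second amp or a full effects chain.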

By providing ToneHunt with different formats for the same capture, we can offer their community not only the models themselves but also a basis for objective comparisons, ultimately helping to popularize LSTM and GRU models.

In the near future, it is highly likely that both players will support each other’s formats, simplifying the user experience.

In an ideal scenario, we will at one point have a unified training method that combines the best of both AIDA and NAM, allowing users to create heavy or light models according to their preferences.

As the great @harryhaaren used to say, “The future is a big place” :blush:



Wauw, I didn’t expect this to happen so soon!

I can’t wait to get back from vacation to check it all out.
Even today, during my lunch break at work, I was telling colleagues how the MOD Dwarf completely changed my approach and setup, and only a moment later I read this.

Thanks for sharing this, a lot of useful info here already.
Looks like, as soon as my “AIDA-X best practices” article is done, I can write a second chapter :smiley:


I’m trying NAM on the Dwarf now; a simple setup with just that plugin and the Modern Cabsim plugin uses more than 80% of the CPU. I think we should find a way to use fewer resources, or in the future have a “Dwarf on steroids” with a more performant CPU :rofl:

Joking, but not too much: I dream of a big brother of the Dwarf with more ins/outs and more CPU :sweat_smile:

Anyway, good work, guys! :heart:


Thanks Gian, mod team and all the people in the wider FOSS audio community!

More evidence of a wider FOSS community coalescing around the mod platform!

“We be rollin’”!!



On ToneHunt I’ve found this NAM file: HIWATT CUSTOM 100 DR103
I know that it is not directly usable on the Dwarf; is there a way to convert it to the nano-mdl format?


Hi! I’m using the old Mod Duo and it says my system is up to date but I can’t see the NAM folder in my file manager. Does that mean I’m not able to use NAM?

No :frowning:

The person who did the capture needs to re-train it.

What one could do is use the NAM desktop plugin in a DAW and pass the AIDA-X input.wav through it. That gives you something you can use as a target.wav to train an AIDA-X model.

Alright, went on holiday and am finally getting back into a routine where there is time for music (tech).

Got around to giving the NAM plugin a test run and it definitely works. Fired up a Marshall nano model and it sounded like a Marshall alright.

ToneHunt only showed a handful of nano-mdl amps (less than a page), so obviously the AIDA approach is still the preferred way to go in my workflow, especially since I can use a free AIDA VST in my DAW.


Can someone please summarize, in the most basic terms, the difference between the NAM and Aida-X plugins, from just a user perspective?

Not the process of capturing models, but rather just using freely available ones.

Is it just a different format, but the same result? Is one “better” or more desirable than the other? Why would I choose one over the other?

I apologize for the simplistic questions.

If you just want to use models rather than train them, the first thing is to find which of the two plugins has the amp you are looking for. But it is a bit tricky: although NAM has many more models, only a small subset works well on the MOD Dwarf because of the amount of processing required. AIDA-X is much lighter than NAM, so you can run more plugins alongside it.


Yes, different loaders for different file formats.

That’s subjective, but the NAM files take significantly more CPU to run, in theory providing better fidelity (a closer match to the original). I believe the NAM loader is more of an experimental, edge-of-the-possible option right now, and you’d be very limited in using other effects alongside it AFAICT.

I’m not knowledgeable in the specific tradeoffs, but I’d guess the NAM models might cover a wider range of dynamics and playing styles “better” than the AIDA models can. Like one NAM model could work well enough for both lead solos and chunky power chords, but you’d need separate AIDA models to achieve the same. I’ve only briefly tried a handful, but some users seem very pleased with the available AIDA models.


It’s a matter of using the right tools for the job and the AIDA-X is a lot more suitable for MOD devices because of the mentioned reasons.

NAM is popular for desktops but it needs processor power and it only runs the Nano models on the MOD.

Meanwhile the AIDA-X is available as free VST so you can use the same files on MOD devices and desktops which brings your “produced” and “live” sounds so much closer together!

Because of the limited supply of nano models for NAM, the introduction of the NAM plugin on MOD hasn’t been too groundbreaking for me, while the integration of AIDA-X in my workflow was quite… revolutionary, actually! I invested some time in making models and promoting them, and I just finished an article on best practices (still in review by the MOD people).


Anyone have a response to my question?

Duo is no longer officially supported and does not get any more automatic updates.
Updates need to be installed manually from the images at Releases - MOD Wiki. Or send me a PM and I can add an exception so your Duo unit still receives automatic updates.


Does this mean that it’s likely the duo will support NAM for at least a few more releases? I’m comfortable doing the manual updates but I do it so seldom I have to look up the procedure again online.

We don’t limit the plugin builds yet, so, sure, you get NAM for the Duo too.
But the poor Duo can’t even load the feather/nano models, so don’t get your hopes up…

The AIDA-X stuff with LSTM-16 models works fine though.


Thanks! I haven’t gotten back to the MOD world in a while and had the Duo listed on Reverb and eBay, but more and more I’m thinking I’ll just hang on to it, leave it on my desk, and dive into the NAM world, hopefully integrating it into my Dwarf chain. Anything usable I come up with will be pretty simple, so the Duo may be enough to at least dip my toe into the NAM world.


NAM for desktop: WaveNet captures, top quality
NAM plugin for Dwarf: WaveNet nano captures, lower quality, but still overall good to great quality
AIDA-X plugin for Dwarf: LSTM captures, lower quality, overall OK quality