Training for NAM (Neural Amp Modeler)

As we’re introducing NAM to our platform (check this forum post), we want to be able to train AI models to play with it!

For that, we’ll be releasing a video on how to train NAM models using audio captures of your AMP.
The video will only be available after NAM adds the “nano” model architecture to their mainline.

So in the meantime, here are some guidelines that you can follow to start training now:

NAM Training steps

This is the Google Colab notebook the NAM community uses for training: easy_colab.ipynb

  1. Have audio captures of your amp/pedal made with the NAM capture file (provided in the notebook).
    Here’s the MOD guide on how to capture your amp.
    Make sure your captures are 48kHz, 24-bit, mono.
  2. In easy_colab.ipynb, upload your capture files.
  3. When running the “Step 2: Installation” code cell, you need to use the line that installs the very latest version, so the cell code should look like this:
# !pip install neural-amp-modeler
# Hint: use the next line instead for the very latest!
!pip install git+https://github.com/sdatkinson/neural-amp-modeler.git@main
...
  4. When running the “Step 4: Train!” code cell, you need to change the architecture parameter to “nano”, so the cell code should look like this:
%tensorboard --logdir /content/lightning_logs
run(
    epochs=100,
    architecture="nano",  # standard, lite, feather
    fit_cab=False  # Change me to True for full-rig modeling!
)

Otherwise, follow the rest of the instructions in the notebook.
Happy modeling!

Regarding previous AMP/Pedal captures for AIDA-X

Some of us have already captured guitar AMPs to train for AIDA-X. Those capture files will not directly work when you try to use them with the NAM training script.

Therefore, here’s a small guide on how to make your AIDA-X captures work for NAM training. Note that you will need Python installed on your computer.

  1. Install Python. Download Python | Python.org
  2. Install the librosa (Installation instructions — librosa 0.10.0 documentation) and soundfile (SoundFile — PySoundFile 0.10.3post1-1-g0394588 documentation) Python libraries.
  3. Download this Python script: convert2nam.py (548 Bytes)
  4. Put the convert2nam.py file in the same folder as the captures you want to make compatible with the NAM training (include your input.wav file)
  5. Run this command:
    python convert2nam.py
    The script will then create a folder named “NAM compatible” in which you’ll find the new wav files (a rough sketch of what such a conversion does is shown after this list).
  6. Change the capture file names:
    input.wav → v1_1_1.wav
    target.wav → output.wav
  7. Enjoy training!
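
For reference, here is a minimal sketch of what such a conversion could look like. This is not the actual convert2nam.py script; it just uses librosa and soundfile to resample every wav file in the folder to the 48kHz, 24-bit, mono format mentioned above:

import os
import librosa
import soundfile as sf

OUT_DIR = "NAM compatible"
os.makedirs(OUT_DIR, exist_ok=True)

for name in os.listdir("."):
    if not name.lower().endswith(".wav"):
        continue
    # Resample to 48 kHz and downmix to mono
    audio, sr = librosa.load(name, sr=48000, mono=True)
    # Write as 24-bit PCM into the output folder
    sf.write(os.path.join(OUT_DIR, name), audio, 48000, subtype="PCM_24")
    print(f"converted {name} -> {OUT_DIR}/{name}")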

Feel free to ask questions about the process or send us your AIDA-X capture files to help make them usable for NAM training.


@itskais It would also be possible, without too much effort, to convert most Aida models directly to NAM without re-training. As long as the models are single-input and don’t use the “skip” connection, they would be compatible with NAM’s LSTM implementation - you would just have to reformat the JSON and shuffle the weights a bit.


That said, re-training from the source capture as WaveNet “nano” will result in a higher quality model.

Thanks for this huge news! Would it be possible to convert NAM models to AIDA-X?

Directly? Only LSTM models - of which there really aren’t any.


Thanks for the insight @MikeOliphant!
Any ideas on how to convert from AIDA to NAM?

Hi!
I tried making a profile and I get an error saying:
    708     ve_exc = ValueError("%r is not a valid %s" % (value, cls.__qualname__))
    709     if result is None and exc is None:
--> 710         raise ve_exc
    711     elif exc is None:
    712         exc = TypeError(

ValueError: ‘nano’ is not a valid Architecture

I loaded the easy colab.
Thanks

Hi!
This part is what you’re missing:

You need to make a little change to the code of the first cell before running it for it to work


Okay - slightly off topic.

I nominate for “Amp Model Name of the Year”:

“The Grotesque World of Tonal Purgatory”


They are both JSON files - just with a different format. The metadata is pretty straightforward to map. The weights are a bit tougher, as they are stored in a different order. NAM also stores an initial state for the LSTM cells (which Aida doesn’t), but it has no real impact, so you can just use zeros.

I don’t have any code for converting, but I do have code that loads both (it is in C#):
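
(Not that C# loader, but as a very rough Python illustration of the first step any converter would take - the file names here are assumptions, and the actual weight reordering is left as a stub because it depends on the exact gate/layer ordering of both formats:)

import json

# Both model files are plain JSON, as noted above
with open("aida_model.json") as f:
    aida = json.load(f)
with open("model.nam") as f:
    nam = json.load(f)

# First step of any converter: understand both structures
print("AIDA-X top-level keys:", sorted(aida.keys()))
print("NAM top-level keys:   ", sorted(nam.keys()))

def convert_weights(aida_model):
    """Placeholder: reorder the AIDA-X LSTM weights into NAM's flat layout,
    appending zeros for NAM's initial hidden/cell state (which Aida lacks)."""
    raise NotImplementedError("weight reordering depends on the exact formats")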

Oh ok, I had to drop it down a line…
I’m clueless with code…
Thanks!!


@MikeOliphant

They are both json files

Any ideas on how to convert from AIDA to NAM?

It would also be possible without too much effort to convert most Aida models

Would it be possible to convert NAM models to AIDA-X?

Sorry, but why can’t we simply work together on exporting the weights into an RTNeural-compatible format? We started this very same conversation a while ago: New neural lv2 plugin from Aida DSP based extensively on existing NeuralPi, reduced to the bone - #152 by madmaxwell. To me it would just make a lot of sense: RTNeural is more powerful and offers a backend-agnostic structure that will be really useful in the future; for now it supports eigen and xsimd. The xsimd backend performs better, it’s just that at the moment, with the specific Dwarf toolchain (MPB), it produces a plugin that generates crackling noises, so we stick with eigen for now to be safe. But on other platforms, such as Aida DSP OS, the plugin runs without issues when built with the xsimd backend.

In a nutshell, I’m simply asking why we can’t just make a script that exports the output of NAM training to the RTNeural format, so that those models become compatible with AIDA-X.

WaveNet “nano” will result in a higher quality model

This will also use 66% CPU on the Dwarf, correct? At the same time, the current NAM LSTM nano architecture uses MSE as the loss function and a simple high-pass filter as the pre-emphasis filter:

"val_loss": "mse",
"mask_first": 4096,
"pre_emph_weight": 1.0,
"pre_emph_coef": 0.85,         

while for AIDA-X training we’re using ESR and A-weighting plus a low-pass filter, which according to our tests produces models that sound better.
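
As a rough sketch of the difference being discussed (not code from either training script; the 0.85 coefficient is simply the value quoted above): ESR normalises the squared error by the target energy, while MSE does not, and the first-order high-pass pre-emphasis is y[n] = x[n] - coef * x[n-1].

import numpy as np

def pre_emphasis(x, coef=0.85):
    # First-order high-pass pre-emphasis: y[n] = x[n] - coef * x[n-1]
    return np.concatenate(([x[0]], x[1:] - coef * x[:-1]))

def mse(target, pred):
    # Plain mean squared error
    return np.mean((target - pred) ** 2)

def esr(target, pred, eps=1e-10):
    # Error-to-signal ratio: squared error normalised by the target energy
    return np.sum((target - pred) ** 2) / (np.sum(target ** 2) + eps)

# Toy comparison on a noisy copy of a random "target" signal
target = np.random.randn(48000).astype(np.float32)
pred = target + 0.01 * np.random.randn(48000).astype(np.float32)
print("MSE:", mse(pre_emphasis(target), pre_emphasis(pred)))
print("ESR:", esr(pre_emphasis(target), pre_emphasis(pred)))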

I think finally organizing a call to better sync efforts between teams would really help both us and the community, since it’s obvious to me that we’re doing the same thing twice, with pros/cons on either side. Since we’re both fully open source, I really have no clue why we couldn’t do that.


I don’t disagree in principle, but NAM is now an established format with over 6000 models out in the wild. Also, Aida and GuitarML both use RTNeural, but the file formats aren’t compatible.

NAM is an open source project. If you would like to influence the direction it takes, the best place to do that is on the relevant GitHub repositories.


Aida and GuitarML both use RTNeural, but the file formats aren’t compatible

Well, actually GuitarML uses the Automated-GuitarAmpModelling output format as-is, and then reportedly extracts the weights from the file and manipulates them in the plugin source code. I thought it was a better approach to process the weights BEFORE using them with the plugin, to let the RTNeural parser do its job and eventually perform sanity checks on the model file. That’s why it’s already possible, using the converter, to bring models for Proteus / GuitarML into AIDA-X, for example.

If you would like to influence…the relevant GitHub repositories

I would like to have a common format for all those models. I think that this format should be the one used by RTNeural, since it is now the biggest and best-maintained inference engine for real-time audio. So I would like to have:

  • Automated-GuitarAmpModelling to RTNeural
  • NAM to RTNeural

starting the discussion here, maybe in private, and then we can move somewhere else. If it’s you and me who will work on this functionality, we could also stay here and maybe send a PR once finished.


I think that unifying under one training platform

I agree on that. The following are the features of our current Aida DSP / AIDA-X fork of Automated-GuitarAmpModelling that we would need to port to NAM training in order to complete the switch

  • Support for GRU models
  • ESR loss available also for LSTM/GRU models (why does NAM use MSE, which is reportedly less precise?)
  • Pre-emphasis filter to be A-weighting + low pass, with the low-pass coefficients adapted to 48kHz (to give an example, 0.85 is wrong in NAM training)
  • We need to split into train/val/test, not reuse the validation set as the test set like NAM does
  • We would like to inject the split points using CSV files, since using a single dataset is very practical but limits usability for advanced users who train on extended (longer) datasets
  • We need a patience mechanism on validation to stop the training (a minimal sketch of the idea follows this list). I see NAM users were wrongly instructed that 1000 epochs is better than 300 epochs because of the final ESR, when in reality this could simply lead to overfitting of the final model… who told them it was a best practice?
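
A minimal sketch of the patience idea using PyTorch Lightning’s EarlyStopping callback (the NAM trainer appears to be Lightning-based, since the notebook logs to lightning_logs, but this is not an option exposed by easy_colab, so wiring it in would require editing the training code; the parameter values here are just examples):

import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping

# Stop training once the validation loss has not improved for `patience` epochs
early_stop = EarlyStopping(
    monitor="val_loss",  # metric to watch
    patience=20,         # epochs without improvement before stopping
    mode="min",
)

trainer = pl.Trainer(max_epochs=1000, callbacks=[early_stop])
# trainer.fit(model, train_dataloader, val_dataloader)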

And other stuff: for example, I would need to port my ongoing work with Clipper Units, which is promising but needs further testing, and I would need to know more about the NAM dataset, since there is no documentation on how it was generated… All those points require some time, which is why we are still using our training script right now.

I think that NAM is established enough that trying to completely change the file format

I see the pain there. On the other hand, right now AIDA-X has a better UX and a more flexible engine (because, again, it’s based on RTNeural) both on embedded devices and desktop. So, thinking about options, what if we integrate the converter into ToneHunt, so that when users download a model they can choose the output format? Who can I contact to propose that?

PS: I will need help to port all the above stuff to NAM, so your help @MikeOliphant @itskais would be really appreciated… it would really improve things and offer our large community the best of the best!

As I’ve said - any further discussion is better done on GitHub. I am only one of the people working on NAM. As for ToneHunt, I have no involvement there.


any further discussion is better done on GitHub

Alright, I will do my best to pose my questions there.


Restored :slightly_smiling_face:


How to spot LSTM models on tonehunt.org?

I don’t know that there are any yet. NAM has always supported LSTM models, but WaveNet has been the favored (and default) training method.
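
One quick way to check a downloaded model would be to look at the file itself - a sketch, assuming the .nam file is plain JSON with an “architecture” field (the key name may differ between NAM versions):

import json

# Inspect a downloaded NAM model file
with open("model.nam") as f:
    model = json.load(f)

print(model.get("architecture"))  # e.g. "WaveNet" or "LSTM"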
