Training for NAM (Neural Amp Modeler)

Hi!
I tried making a profile and I get an error saying:
  708     ve_exc = ValueError("%r is not a valid %s" % (value, cls.__qualname__))
  709     if result is None and exc is None:
→ 710         raise ve_exc
  711     elif exc is None:
  712         exc = TypeError(

ValueError: 'nano' is not a valid Architecture

I loaded the easy Colab.
Thanks

Hi!
This part is what you’re missing:

You need to make a small change in the code of the first cell before running it for it to work.

1 Like

Okay - slightly off topic.

I nominate for “Amp Model Name of the Year”:

“The Grotesque World of Tonal Purgatory”

2 Likes

They are both JSON files, just with a different format. The metadata is pretty straightforward to map. The weights are a bit tougher, as they are stored in a different order. NAM also stores an initial state for the LSTM cells (which Aida doesn't), but it has no real impact, so you can just use zeros.

I don’t have any code for converting, but I do have code that loads both (it is in C#):
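
For a rough idea of the two layouts (this is not the linked C# loader, just a minimal Python sketch of reading both files; the key names are the commonly seen ones and should be verified against real model files):

    import json

    # NAM .nam file: metadata plus ONE flat list of floats under "weights".
    # (Key names assumed from commonly seen .nam files - verify locally.)
    with open("model.nam") as f:
        nam = json.load(f)
    print(nam["architecture"])        # e.g. "LSTM" or "WaveNet"
    flat_weights = nam["weights"]     # every tensor concatenated in order

    # Aida / Automated-GuitarAmpModelling file: weights kept as named,
    # shaped tensors in a PyTorch-style state_dict - a different order
    # and structure, which is what makes conversion non-trivial.
    with open("model.json") as f:
        aida = json.load(f)
    print(aida["model_data"])         # model type, hidden_size, ...
    state = aida["state_dict"]        # e.g. "rec.weight_ih_l0", "lin.weight"

Note that NAM's flat list also carries the initial LSTM state mentioned above, which the Aida format simply has no slot for.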

Oh OK, I had to drop it down a line…
I'm clueless with code…
Thanks!!

1 Like

@MikeOliphant

They are both JSON files

Any ideas on how to convert from AIDA to NAM?

It would also be possible without too much effort to convert most Aida models

Would it be possible to convert NAM models to AIDA-X?

Sorry, but why can't we simply work together on exporting the weights into an RTNeural-compatible format? We started this very same conversation a while ago: New neural lv2 plugin from Aida DSP based extensively on existing NeuralPi, reduced to the bone - #152 by madmaxwell.

To me it would just make a lot of sense: RTNeural is more powerful and offers a backend-agnostic structure that will be really useful in the future; for now it supports Eigen and xsimd. The xsimd backend performs better; it's just that at the moment, on the specific Dwarf toolchain (MPB), it produces a plugin that generates crackling noises, so we stick with Eigen for now to be safe. But on other platforms, such as Aida DSP OS, the plugin runs without issues when built with the xsimd backend.

In a nutshell, I'm simply asking why we can't just make a script to export the output of NAM training to the RTNeural format; as a consequence, those models would be compatible with AIDA-X.
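
To make that idea concrete, here is a sketch of the hard part for an LSTM model: re-slicing NAM's flat weight list back into shaped tensors. The tensor order inside "weights" (and the config key names) are assumptions here and must be checked against the neural-amp-modeler source before relying on this:

    import json
    import numpy as np

    def split_nam_lstm_weights(nam_path):
        # Re-slice NAM's flat "weights" list into shaped LSTM tensors.
        # ASSUMED order per layer: w_ih, w_hh, bias, then the initial
        # hidden/cell state - verify against the NAM exporter before use.
        with open(nam_path) as f:
            nam = json.load(f)
        cfg = nam["config"]
        flat = np.asarray(nam["weights"], dtype=np.float64)
        hidden, inp = cfg["hidden_size"], cfg["input_size"]

        pos = 0
        def take(*shape):
            nonlocal pos
            n = int(np.prod(shape))
            out = flat[pos:pos + n].reshape(shape)
            pos += n
            return out

        w_ih = take(4 * hidden, inp)     # input -> gates
        w_hh = take(4 * hidden, hidden)  # recurrent
        bias = take(4 * hidden)          # combined gate biases
        h0 = take(hidden)                # initial hidden state (NAM-only;
        c0 = take(hidden)                # RTNeural-side formats drop these)
        # ...a linear output head follows in the flat list; from here the
        # tensors can be re-ordered/renamed into an RTNeural-readable file.
        return w_ih, w_hh, bias, h0, c0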

WaveNet “nano” will result in a higher quality model

This will also use 66% CPU on the Dwarf, correct? At the same time, the current NAM LSTM nano architecture uses MSE as its loss function and a simple high-pass filter for pre-emphasis:

"val_loss": "mse",
"mask_first": 4096,
"pre_emph_weight": 1.0,
"pre_emph_coef": 0.85,         

while in AIDA-X training we're using ESR with A-weighting plus a low-pass filter, which according to our tests produces models that sound better.
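
For reference, the two losses being compared, in a minimal NumPy sketch (the actual training scripts work on PyTorch tensors, and Aida's A-weighting filter is more involved than the one-pole high-pass shown):

    import numpy as np

    def pre_emphasis(x, coef=0.85):
        # One-pole high-pass pre-emphasis: y[n] = x[n] - coef * x[n-1].
        # 0.85 is the NAM coefficient quoted above; the complaint is that
        # it is not adapted to 48 kHz. Expects a NumPy array.
        y = x.copy()
        y[1:] -= coef * x[:-1]
        return y

    def mse(target, pred):
        # Mean squared error - absolute error energy.
        return np.mean((target - pred) ** 2)

    def esr(target, pred, eps=1e-10):
        # Error-to-signal ratio - error energy relative to signal energy,
        # so quiet and loud clips are weighted comparably.
        return np.sum((target - pred) ** 2) / (np.sum(target ** 2) + eps)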

I think that finally organizing a call to better sync efforts between the teams would really help both us and the community, since it's obvious to me that we're doing the same thing twice, with pros and cons on either side. Since we're both fully open source, I really have no clue why we couldn't do that.

6 Likes

I don’t disagree in principle, but NAM is now an established format with over 6000 models out in the wild. Also, Aida and GuitarML both use RTNeural, but the file formats aren’t compatible.

NAM is an open source project. If you would like to influence the direction it takes, the best place to do that is on the relevant GitHub repositories.

1 Like

Aida and GuitarML both use RTNeural, but the file formats aren’t compatible

Well, actually GuitarML uses the Automated-GuitarAmpModelling output format as-is, and then reportedly extracts the weights from the file and manipulates them in the plugin source code. I thought it was a better approach to process the weights BEFORE using them with the plugin, to let the RTNeural parser do its job, and eventually to perform sanity checks on the model file. That's why it's already possible, using the converter, to load Proteus / GuitarML models into AIDA-X, for example.
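
A sanity check of the kind meant here could be as small as this sketch, assuming the Automated-GuitarAmpModelling key names ("model_data", "state_dict", "rec.*"):

    import json

    def sanity_check_lstm_json(path):
        # Check a Proteus/GuitarML-style file BEFORE handing it to the
        # plugin: the declared hidden size must match the stored tensors.
        with open(path) as f:
            model = json.load(f)
        hidden = model["model_data"]["hidden_size"]
        w_ih = model["state_dict"]["rec.weight_ih_l0"]
        # An LSTM's input->hidden weight matrix has 4 * hidden_size rows.
        if len(w_ih) != 4 * hidden:
            raise ValueError("hidden_size does not match stored weights")
        return model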

If you would like to influence…the relevant GitHub repositories

I would like to have a common format for all those models. I think that this format should be the one used by RTNeural, since it is now the biggest and best-maintained inference engine for real-time audio. So I would like to have

  • Automated-GuitarAmpModelling to RTNeural
  • NAM to RTNeural

starting the discussion here, maybe in private, then we can move somewhere else. If it's me and you working on this functionality, we could also stay here and maybe send a PR once finished.

1 Like

I think that unifying under one training platform

I agree on that. The following are the features of our current Aida DSP / AIDA-X fork of Automated-GuitarAmpModelling that we would need to port to NAM training in order to complete the switch:

  • Support for GRU models
  • ESR loss available also for LSTM/GRU models (why does NAM use MSE, which is reportedly less precise?)
  • Pre-emphasis filter to be A-weighting + low-pass, with the low-pass coefficients adapted to 48 kHz (so, to make an example, 0.85 is wrong in NAM training)
  • We need to split into train/validation/test sets, not reuse the validation set for testing like NAM does
  • We would like to inject the split points using CSV files, since using a single dataset is very practical but limits usability for advanced users who train on extended (longer) datasets
  • We need a patience mechanism on validation to stop the training (see the sketch after this list). I see NAM users were wrongly instructed that 1000 epochs is better than 300 epochs because of the final ESR, when in reality this can simply lead to overfitting of the final model. Who told them that was a best practice?
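
The patience mechanism above is just generic early stopping; a minimal self-contained sketch (the helper callables are hypothetical, not NAM's or Aida's actual trainer):

    def train_with_patience(run_epoch, val_loss, max_epochs=1000, patience=20):
        # run_epoch() trains one epoch; val_loss() returns the current
        # validation loss. Both are supplied by the caller.
        best, bad = float("inf"), 0
        for epoch in range(max_epochs):
            run_epoch()
            val = val_loss()
            if val < best:
                best, bad = val, 0       # improvement: reset the counter
            else:
                bad += 1
                if bad >= patience:      # plateau: stop training here
                    return epoch, best   # instead of overfitting to 1000
        return max_epochs, best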

And other stuff: for example, I would need to port my ongoing work on Clipper Units, which is promising but needs further testing, and I would need to know more about the NAM dataset, since there is no documentation on how it was generated. All those points require some time, which is why we are still using our own training script right now.

I think that NAM is established enough that trying to completely change the file format

I see the pain there. On the other hand, right now AIDA-X has a better UX and a more flexible engine (because, again, it's based on RTNeural), both on embedded devices and on desktop. So, thinking about options: what if we integrated the converter into ToneHunt, so that when users download a model they can choose the output format? Who can I contact to propose that?

PS: I will need help to port all the above stuff to NAM, so your help @MikeOliphant @itskais would be really appreciated. It would really improve things and offer our large community the best of the best!

As I've said, any further discussion is better done on GitHub. I am only one of the people working on NAM. As for ToneHunt, I have no involvement there.

1 Like

any further discussion is better done on GitHub

Alright. I will do my best to pose my questions there.

2 Likes

Restored :slightly_smiling_face:

1 Like

How to spot LSTM models on tonehunt.org?

I don’t know that there are any yet. NAM has always supported LSTM models, but WaveNet has been the favored (and default) training method.

1 Like

Hi everyone!
Would it be possible to turn a normal NAM file into a nano one without retraining?
Cheers!

There is not an option for that yet.

From what I understood, you can upload LSTM models and NAM will play them seamlessly, but ToneHunt does not offer filters to find them.

There is a tag system, but users cannot create tags, so you need to rely on their dev team to decide which tags to create. I have enquired about that with them, and they will add tags for the different model types (WaveNet, LSTM, GRU) if those become popular.

They already have tags for the model weight (standard-mdl, feather-mdl, nano-mdl), but usability is still a bit limited, as there is no straightforward way to search for a tag yet.

Let's hope that, with the need to make better distinctions between the models, ToneHunt implements improved filters and also starts fetching metadata from the actual model files.

1 Like

Is there any way of “compressing” a .nam file into a nano one?

Unfortunately not :frowning:

1 Like

I have quite a lot of LSTM NAM models at (http, not https) coginthemachine(dot)ddns(dot)net/mnt/nam, and I continuously try to improve them. Compared to the default settings, my models sound much better and more accurate.

1 Like

One thing I have noticed is that the training parameters for LSTM are extremely sensitive.