No success in retraining NAM captures for AIDA-X. Help needed

Hello,

I want to retrain some NAM models as AIDA-X models, but I have no success. I am getting bad ESR values around 0.9 all the time.
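For reference: ESR, the error-to-signal ratio, is the energy of the difference between the model output and the target divided by the energy of the target, so a value around 0.9 means the model barely tracks the target at all. A quick NumPy check, with placeholder file names:

import numpy as np
import soundfile as sf

pred, _ = sf.read("model_output.wav")  # the trained model's output
target, _ = sf.read("target.wav")      # the reamped reference
n = min(len(pred), len(target))
pred, target = pred[:n], target[:n]

# ESR = sum((target - pred)^2) / sum(target^2)
esr = np.sum((target - pred) ** 2) / np.sum(target ** 2)
print(f"ESR: {esr:.4f}")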

For example, I want to retrain the NAM model “Orange Rockerverb MK3 - Jim-Root - Canov+Arnold.nam” from ToneHunt:

Here are my input and my target wav-files:

https://drive.google.com/drive/folders/1ZidIUJkEoQzgPFRo30xH3zc32e5HQcy3?usp=sharing

I created the target file on my Linux machine in Reaper with the “Neural Amp Modeler” LV2 plugin from Mike Oliphant. I think the input and target files align quite well:
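To check the alignment, a quick FFT-based cross-correlation can estimate the offset between the two files. A rough sketch, assuming mono 48 kHz WAVs named input.wav and target.wav:

import numpy as np
import soundfile as sf

x, sr = sf.read("input.wav")
y, _ = sf.read("target.wav")

# correlating the first ~10 s is enough to find the latency
seg = min(len(x), len(y), sr * 10)
X = np.fft.rfft(x[:seg], 2 * seg)
Y = np.fft.rfft(y[:seg], 2 * seg)
corr = np.fft.irfft(Y * np.conj(X))
lag = int(np.argmax(corr))
if lag > seg:
    lag -= 2 * seg  # wrap-around: the target leads the input
print(f"estimated offset: {lag} samples ({1000 * lag / sr:.2f} ms)")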

I have made several attempts with the Google Colab AIDA-X trainer, but I had no good results.

Can one of the more experienced profilers point me in the right direction and tell me what I am doing wrong? Or is there a problem with my input and target files?

Thank you!

You may give neuralrecord a try; you can find ready-to-use binaries here:

I also gave it a try with the original input.wav and with another one. Both times it did not work.
I’ll give Brummer’s neuralrecord a try tomorrow.

It also did not work with a direct recording through neuralrecord; I got the response that nothing useful was recorded. When I exchanged the NAM plugin for another plugin, it worked though.

Worked here:

True, at least the modeler output must be connected to the MOD output to get it going.

Okay, I checked the files from the Orange Rockerverb MK3 and only got the clean channel to work.
It seems like the signature signal gets so distorted that there is no way to detect the latency.

Okay, I found a way to do it with Ratatouille: load Orange Rockerverb MK3 - Clean on the first channel and one of the high-distortion channels (Orange Rockerverb MK3 - Jim-Root, for example) on the second channel, then start recording with the blend lower than 33%. Once the recording reaches 1%, set the blend to 100%. This way you get useful target files that you can use for the AIDA-X trainer.
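The point of the low blend is that the clean channel dominates while the recorder detects the latency from the signature signal at the start, and only then does the distorted channel take over for the actual capture. Conceptually the blend control is just a linear crossfade; my own illustration, not Ratatouille’s actual code:

import numpy as np

def blend(clean, distorted, amount):
    # amount = 0.0 -> only the clean model, 1.0 -> only the distorted one
    return (1.0 - amount) * clean + amount * distorted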

@CharlyRebell do you mean like that?

or_rockverb_LSTM-16-0.aidax (53.3 KB)

General tip: don’t use the NAM dataset; it does not work with the training. I used a custom 10-minute one.
Just a bunch of riffs from every genre. If you don’t want to play all the riffs yourself, you can look at the Guitar - Fraunhofer IDMT dataset for some material.
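If it helps, stitching such a custom input file together can be as simple as concatenating the riff recordings with short gaps in between. A rough sketch, assuming a folder of 48 kHz riff WAVs (paths and rate are just placeholders):

import glob
import numpy as np
import soundfile as sf

RATE = 48000
chunks = []
for path in sorted(glob.glob("riffs/*.wav")):
    data, sr = sf.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)  # fold stereo to mono
    assert sr == RATE, f"{path}: expected {RATE} Hz, got {sr}"
    chunks.append(data)
    chunks.append(np.zeros(RATE // 2))  # half a second of silence between riffs

sf.write("input.wav", np.concatenate(chunks), RATE)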

Thank you all for your help. I will see if I can get it to work within the next few days.

@spunktsch do you mean that the input.wav file provided with the AIDA-X Colab is not a good choice for training AIDA-X models?

I think the provided input.wav is well suited to capturing real pedals, but not to captures from NAM files.
I don’t know what the current state of the art is. NAM seems to focus heavily on files with clicks, sweeps, white noise and so on, while others only use real guitar samples for their captures.
@spunktsch Do you have any insights into what currently works best?

As @CharlyRebell mentioned, the dataset doesn’t work great with digital stuff, especially high gain (don’t ask me why), but it works for capturing amps and pedals.

Like I said before, a combination of real guitar riffs works best (with the architecture AIDA-X uses). Throw in some bass riffs, low tunings and different pickups and you are good to go. If you have some dynamic riffs to capture, that results in an even better model, so you can dial in your tone with the volume knob of the guitar.

The easiest way to solve this and keep an open-source approach: a few people record some riffs of that nature, we combine them, and we have a new, better dataset for AIDA-X.

5 min in total plus blips and sweeps would be a good starting point. Who is in? @CharlyRebell @Funkeq
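For the blips-and-sweeps part, something along these lines could be a starting point; this is my own sketch of the idea, not an official AIDA-X calibration header:

import numpy as np
import soundfile as sf

RATE = 48000

# two short full-scale blips for latency alignment, one second apart
blip = np.zeros(RATE)
blip[:48] = 1.0  # 1 ms pulse
blips = np.concatenate([blip, blip])

# 5 s logarithmic sine sweep from 20 Hz to 20 kHz
f0, f1, T = 20.0, 20000.0, 5.0
t = np.linspace(0, T, int(T * RATE), endpoint=False)
k = (f1 / f0) ** (1 / T)
phase = 2 * np.pi * f0 * (k ** t - 1) / np.log(k)
sweep = 0.5 * np.sin(phase)

sf.write("header.wav", np.concatenate([blips, sweep]), RATE)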

Sure, I’m in. I can share a few bass riffs; for guitar, others are surely more capable than me.
How would you approach the individual contributions? Should we record a set of short licks of predefined length?

Not sure if it’s relevant here, but I just went through a ton of work to make the training work locally on Linux, thread here:

The packages include a script to convert the trained NAM files to AIDA, but it DOES NOT WORK with many of the possible model and unit types in the training script. After a ton of trial and error, my best results have come from this config file:

{
  "model": "SimpleRNN",
  "hidden_size": 12,
  "unit_type": "GRU",
  "input_size": 1,
  "output_size": 1,
  "skip_con": 1,
  "device": "MOD-DWARF",
  "samplerate": 48000.0,
  "file_name": "OUTPUT",
  "based": "Someamp",
  "author": "Me"
}

This eats up my 32 GB of RAM and takes hours, but it has created a very good replica of my favorite amp, so close that I can’t tell the difference in extensive blind testing. More importantly, the conversion script worked with this setup.
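For anyone wondering what that config actually describes: as far as I understand it, it is a single GRU layer with hidden size 12, a linear output layer, and a skip connection that adds the dry input back to the output. A minimal PyTorch sketch of that topology, not the training script’s actual code:

import torch

class SimpleRNN(torch.nn.Module):
    def __init__(self, input_size=1, hidden_size=12, output_size=1, skip=True):
        super().__init__()
        self.rec = torch.nn.GRU(input_size, hidden_size)
        self.lin = torch.nn.Linear(hidden_size, output_size)
        self.skip = skip

    def forward(self, x, h=None):
        # x: (seq_len, batch, 1) mono audio
        y, h = self.rec(x, h)
        y = self.lin(y)
        if self.skip:
            y = y + x  # skip connection: add the dry input to the output
        return y, h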

I realize this may not be what you are after. Just offering it in case you want to try it another way.

Thanks for the insights! I will record some guitar riffs in the next few days and share my results.

@MODPOCALYPSE I think my computer does not have enough power to do local training in a reasonable time. Apart from that, I would love to see a solution that does not depend on Google services.