I have created the target file on my Linux machine in Reaper with the “Neural Amp Modeler” LV2 plug-in from Mike Oliphant. I think the input and target files align quite well.
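For reference, here is one way to sanity-check the alignment with a quick cross-correlation (a rough sketch; the file names are placeholders, and it assumes mono WAVs at the same sample rate):

```python
# Estimate the lag between input and target by finding the peak of the
# cross-correlation; a well-aligned pair should report a lag near 0.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

inp, sr = sf.read("input.wav")
tgt, sr_tgt = sf.read("target.wav")
assert sr == sr_tgt, "sample rates must match"

# The first 10 seconds are enough for a lag estimate.
n = min(len(inp), len(tgt), sr * 10)
corr = correlate(tgt[:n], inp[:n], mode="full")
lag = int(corr.argmax()) - (n - 1)
print(f"target lags input by {lag} samples ({1000 * lag / sr:.2f} ms)")
```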
I have made several attempts with the Google Colab AIDA-X trainer, but got no good results.
Can one of the more experienced profilers point me in the right direction and tell me what I am doing wrong? Or is there a problem with my input and target files?
A direct recording through neuralrecord did not work either; it reported that nothing useful was recorded. When I swapped NAM for another plugin, it worked though.
Okay, I checked the files from Orange Rockerverb MK3, and only got the clean channel to work.
It seems the signature signal gets so distorted that there is no way to measure the latency.
Okay, I found a way to do it with Ratatouille: load Orange Rockerverb MK3 - Clean on the first channel and one of the high-distortion channels (Orange Rockerverb MK3 - Jim-Root, for example) on the second channel, then start recording with the blend below 33%. Once the recording reaches 1%, set the blend to 100%. This way you get useful target files you can use for the AIDA-X trainer.
General tip: don’t use the NAM dataset; it does not work with this training. I used a custom 10-minute one, just a bunch of riffs from every genre. If you don’t want to play all the riffs yourself, you can look at Guitar - Fraunhofer IDMT for some material.
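If you want to stitch your own riffs together, something along these lines works (a minimal sketch, assuming all clips are WAVs at the same sample rate; the folder and file names are placeholders):

```python
# Concatenate a folder of riff WAVs into a single training file,
# with half a second of silence between clips.
from pathlib import Path
import numpy as np
import soundfile as sf

SR = 48000                    # assumed common sample rate
gap = np.zeros(SR // 2)       # 0.5 s of silence between riffs

clips = []
for path in sorted(Path("riffs").glob("*.wav")):
    data, sr = sf.read(path)
    assert sr == SR, f"{path}: expected {SR} Hz, got {sr} Hz"
    if data.ndim > 1:         # fold stereo down to mono
        data = data.mean(axis=1)
    clips.extend([data, gap])

sf.write("dataset_input.wav", np.concatenate(clips), SR)
```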
I think the provided input.wav is well suited for capturing real pedals, but not for captures from NAM files.
I don’t know what the current state of the art is. NAM seems to focus heavily on files with clicks, sweeps, white noise and so on, while others only use real guitar samples for their captures. @spunktsch Do you have any insight into what currently works best?
As @CharlyRebell mentioned, the dataset doesn’t work well with digital stuff, especially high gain (don’t ask me why), but it works for capturing amps and pedals.
Like I said before, a combination of real guitar riffs works best (with the architecture AIDA-X uses). Throw in some bass riffs, low tunings, and different pickups, and you are good to go. If you capture some dynamic riffs as well, you get an even better model, so you can dial in your tone with the volume knob of the guitar.
The easiest way to solve this and keep an open-source approach: a few people record some riffs of that nature, we combine them, and we have a new, better dataset for AIDA-X.
Five minutes in total plus blips and sweeps would be a good starting point. Who is in? @CharlyRebell @Funkeq
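To be concrete about the blips-and-sweeps part, something like this generates a usable calibration segment (a sketch with assumed parameters; adjust levels and lengths to taste):

```python
# Build a calibration segment: two short blips (for latency alignment)
# followed by a logarithmic sine sweep (for frequency coverage).
import numpy as np
import soundfile as sf
from scipy.signal import chirp

SR = 48000

def blip(freq=1000.0, dur=0.01):
    """A 10 ms Hann-windowed tone burst."""
    t = np.linspace(0, dur, int(SR * dur), endpoint=False)
    return np.sin(2 * np.pi * freq * t) * np.hanning(len(t))

silence = np.zeros(SR // 2)
t = np.linspace(0, 5, SR * 5, endpoint=False)
sweep = 0.5 * chirp(t, f0=20, t1=5, f1=20000, method="logarithmic")

segment = np.concatenate([blip(), silence, blip(), silence, sweep, silence])
sf.write("calibration.wav", segment, SR)
```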
Sure, I’m in. I can share a few bass riffs; for guitar, others are surely more capable than me.
How would you approach the individual contributions? Should we record a set of short licks of predefined length?
Not sure if it’s relevant here, but I just went through a ton of work to make the training work locally on Linux, thread here:
The packages include a script to convert the trained NAM files to AIDA, but it DOES NOT WORK with many of the possible training models and unit types in the training script. After a ton of trial and error, my best results have come from this config file:
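For orientation, an LSTM model config for the NAM full trainer looks roughly like this (the sizes below are illustrative, not my exact values, and key names may vary between trainer versions):

```json
{
  "net": {
    "name": "LSTM",
    "config": {
      "num_layers": 1,
      "hidden_size": 24,
      "train_burn_in": 4096,
      "train_truncate": 512
    }
  },
  "optimizer": { "lr": 0.01 },
  "lr_scheduler": {
    "class": "ExponentialLR",
    "kwargs": { "gamma": 0.995 }
  }
}
```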
This eats up my 32GB of RAM, and it takes hours, but it has created a very good replica of my favorite amp. So close I can’t tell the difference in extensive blind testing. More importantly, the conversion script worked with this setup.
I realize this may not be what you are after. Just offering it in case you want to try it another way.
Thanks for the insights! I will record some guitar riffs in the next few days and share my results.
@MODPOCALYPSE I think my computer does not have enough power to do local training in a reasonable time. Apart from that, I would love to see a solution that does not depend on Google services.