@itskais It would also be possible without too much effort to convert most Aida models directly to NAM without re-training. As long as the models are single-input and don’t use the “skip” connection they would be compatible with NAM’s LSTM implementation - you would just have to reformat the json and shuffle the weights a bit.
I tried making a profile and I get an error saying:
708     ve_exc = ValueError("%r is not a valid %s" % (value, cls.__qualname__))
709     if result is None and exc is None:
→ 710       raise ve_exc
711     elif exc is None:
712       exc = TypeError(
They are both json files - just with a different format. The metadata is pretty straightforward to map. The weights are a bit tougher, as they are stored in a different order. NAM also stores an initial state for the LSTM cells (which Aida doesn't), but it has no real impact, so you can just use zeros.
I don’t have any code for converting, but I do have code that loads both (it is in C#):
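For illustration only, the reshuffle described above might be sketched like this in Python. Every dictionary key here ("layers", "weights", "architecture") is an assumption for the sake of the sketch, not the real NAM or Aida file schema:

```python
import numpy as np

def aida_lstm_to_nam(aida_model, hidden_size):
    """Sketch: flatten per-layer RTNeural-style weights into a single NAM-style
    weight vector and append a zero initial LSTM state (h0, c0).
    All dictionary keys are assumptions, not the real file schemas."""
    weights = []
    for layer in aida_model["layers"]:        # assumed key
        for w in layer["weights"]:            # assumed key
            weights.extend(np.asarray(w, dtype=float).ravel().tolist())
    # NAM stores an initial LSTM state that Aida doesn't; zeros are fine
    weights.extend([0.0] * (2 * hidden_size))  # h0, then c0
    return {"architecture": "LSTM", "weights": weights}
```

A real converter would also have to reorder the gate weights to match the target engine's LSTM layout, which this sketch glosses over.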
Would it be possible to convert NAM models to AIDA-X?
Sorry, but why can't we simply work together on exporting the weights into an RTNeural-compatible format? We started this very same conversation a while ago: New neural lv2 plugin from Aida DSP based extensively on existing NeuralPi, reduced to the bone - #152 by madmaxwell. To me it would just make a lot of sense: RTNeural is more powerful and offers a backend-agnostic structure that will be really useful in the future; for now it supports eigen and xsimd. The xsimd backend performs better; it's just that at the moment, on the specific Dwarf toolchain (MPB), it produces a plugin that generates crackling noises, so we stick with eigen for now to be safe. But on other platforms, such as Aida DSP OS, the plugin runs without issues when built with the xsimd backend.
In a nutshell, I'm simply asking why we can't just make a script to export the output of NAM training to RTNeural format so that, as a consequence, the models will be compatible with AIDA-X.
WaveNet “nano” will result in a higher quality model
This will also use 66% CPU on the Dwarf, correct? At the same time, the current NAM LSTM nano architecture uses MSE as the loss function and a simple high-pass filter as the pre-emphasis filter, while in AIDA-X training we're using ESR with A-weighting plus a low pass, which according to our tests produces models that sound better.
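For readers following along, ESR with a first-order pre-emphasis filter can be sketched in a few lines of NumPy. This is a minimal illustration of the loss itself; the A-weighting plus low-pass chain used in AIDA-X training is more involved than the single coefficient shown here:

```python
import numpy as np

def esr(target, pred, eps=1e-8):
    """Error-to-Signal Ratio: error energy normalised by target energy."""
    err = target - pred
    return float(np.sum(err ** 2) / (np.sum(target ** 2) + eps))

def pre_emphasis(x, coeff=0.85):
    """First-order high-pass pre-emphasis, y[n] = x[n] - coeff * x[n-1].
    The coefficient has to be adapted to the sample rate in use."""
    return x[1:] - coeff * x[:-1]

# compare a target and a prediction after pre-emphasis filtering
t = np.sin(np.linspace(0.0, 2.0 * np.pi, 480))
p = 0.99 * t
loss = esr(pre_emphasis(t), pre_emphasis(p))
```

Because ESR is normalised by the target energy, it is comparable across recordings of different loudness, which plain MSE is not.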
I think finally organizing a call to better sync efforts between the teams would really help both us and the community, since it's obvious to me that we're doing the same thing twice, with pros/cons on either side. Since we're both fully open source, I really have no clue why we couldn't do that.
Aida and GuitarML both use RTNeural, but the file formats aren’t compatible
Well, actually GuitarML is using the Automated-GuitarAmpModelling output format as-is, and is then reportedly extracting the weights from the file and manipulating them in the plugin source code. I thought instead that it was a better approach to process the weights BEFORE using them in the plugin, to let the RTNeural parser do its job, and eventually perform sanity checks on the model file. That's why it's already possible, using the converter, to use models for Proteus / GuitarML in AIDA-X, for example.
If you would like to influence…the relevant GitHub repositories
I would like to have a common format for all those models. I think that this format should be the one used by RTNeural, since it is by now the biggest and best-maintained inference engine for real-time audio. So I would like to have:
Automated-GuitarAmpModelling to RTNeural
NAM to RTNeural
We can start the discussion here, maybe in private, then move somewhere else. If it's me and you working on this functionality, we could also stay here and send a PR once finished.
I agree on that. The following are the features of our current Aida DSP / AIDA-X fork of Automated-GuitarAmpModelling that we would need to port to NAM training in order to complete the switch:
Support for GRU models
ESR loss available also for LSTM/GRU models (why does NAM use MSE, which is reportedly less precise?)
Pre-emphasis filter to be A-weighting + low pass, with the low-pass coefficients adapted to 48 kHz (so, to give an example, 0.85 is wrong in NAM training)
We need to split into train/val/test, not use val also for test like NAM is doing
We would like to inject the split points using CSV files, since using a single dataset is very practical but limits usability for advanced users who use extended (longer) datasets for training
We need a patience mechanism on validation to stop training early. I see NAM users were wrongly instructed that 1000 epochs is better than 300 epochs because of the final ESR, when in reality this could simply lead to overfitting of the final model… who told them it was a best practice?
And other stuff: for example, I would need to port my ongoing work on Clippers Units, which is promising but needs further testing, and I would need to know more about the NAM dataset, since there is no documentation on how it was generated… All those points require some time, which is why we are still using our own training script right now.
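The patience mechanism mentioned above is standard early stopping on the validation loss; a minimal sketch of the idea (illustrative only, not the actual Aida or NAM trainer code):

```python
class EarlyStopping:
    """Stop training when validation loss hasn't improved for `patience`
    epochs. Illustrative sketch, not the actual training code."""

    def __init__(self, patience=20, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # returns True when training should stop
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

With a mechanism like this, training stops once the validation loss plateaus, regardless of whether that happens at epoch 300 or 1000, instead of running a fixed number of epochs and risking overfitting.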
I think that NAM is established enough that trying to completely change the file format
I see the pain there. On the other hand, right now AIDA-X has a better UX and a more flexible engine (because, again, it's based on RTNeural), both on embedded devices and on desktop. So, thinking about options: what if we integrate the converter into Tonehunt, so that when users download a model they can choose the output format? Who can I contact to propose that?
PS: I will need help porting all the above stuff to NAM, so your help @MikeOliphant @itskais would be really appreciated… it would really improve things and offer our large community the best of the best!