Hello, everybody.
As we can see, amp modeling with machine learning is a hot topic. We have NAM, Tone X, AIDA X and others. The community is doing awesome work making their great sounds accessible to everyone.
I noticed that some of our friends are requesting converters from or to NAM or Tone X. As you may have already discovered, this is not technically possible, because they use different "internal structures" (different machine learning models), and Tone X is also closed source.
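To picture why (a simplified illustration, not the real NAM or AIDA file formats): a recurrent model like the LSTMs AIDA uses and a dilated-convolution model like NAM's WaveNet-style network don't even have the same kinds of weights, so there is nothing to "translate" between the files.

```python
# Illustrative only -- not the real NAM or AIDA-X file formats.
# An LSTM (AIDA-style) and a dilated convolution (WaveNet/NAM-style)
# have completely different parameter shapes, so trained weights for
# one architecture cannot be rewritten into the other.
import torch

lstm = torch.nn.LSTM(input_size=1, hidden_size=32)        # recurrent layer
conv = torch.nn.Conv1d(1, 16, kernel_size=3, dilation=2)  # dilated conv layer

print({n: tuple(p.shape) for n, p in lstm.named_parameters()})
# {'weight_ih_l0': (128, 1), 'weight_hh_l0': (128, 32), ...}
print({n: tuple(p.shape) for n, p in conv.named_parameters()})
# {'weight': (16, 1, 3), 'bias': (16,)}
```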
Others are requesting a NAM implementation for the MOD Dwarf. At the current state, this isn't possible: although AIDA is implemented in a generic way that enables loading future improved models (LSTM, which you can see in some .json files as a model type; see the sketch below), NAM files don't use this generic pattern. Also, NAM models are generally too CPU-heavy for the MOD Dwarf.
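For the curious, this is roughly what that "generic pattern" means (a hypothetical sketch; the field names here are assumptions, not the real AIDA .json schema):

```python
import json
import torch

# Hypothetical sketch of a generic loader: the .json declares its own
# model type, so the host can dispatch on it, and new types can be
# added tomorrow without breaking today's files. Field names assumed.
def load_model(path: str) -> torch.nn.Module:
    with open(path) as f:
        spec = json.load(f)
    if spec["type"] == "LSTM":
        return torch.nn.LSTM(input_size=1, hidden_size=spec["hidden_size"])
    raise ValueError(f"unknown model type: {spec['type']}")
```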
I would like you to think about this: is the AIDA DSP the current state of the art? Maybe, probably. But for how long will it be? Do you think someone will improve it, or maybe create an alternative that is incompatible with AIDA? Maybe the next evolution will improve the sound similarity, maybe it will consume less CPU; we don't know. But one thing we do know: the evolution will happen.
So I'm asking you, my profiler friends: would you like to try the future pedal? Of course, right? But would you also like to use your previous captures with that future technology, probably with even better quality?
If you answered yes, then please share the input.wav and target.wav files too. That way, in the future, new AI models could use them to make amps and pedals compatible.
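To show what I mean (a minimal sketch, not the official AIDA or NAM training script; the architecture and hyperparameters are only placeholders): whatever the internal structure of a future model, it can be trained from the same input.wav/target.wav pair.

```python
# Minimal sketch, NOT an official trainer: it only shows that the
# input.wav/target.wav pair is all a future model needs to learn from.
import torch
import torchaudio

x, sr = torchaudio.load("input.wav")   # dry DI signal sent into the amp
y, _ = torchaudio.load("target.wav")   # what the amp answered

# placeholder architecture; tomorrow's better model would go here
model = torch.nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
head = torch.nn.Linear(32, 1)
opt = torch.optim.Adam([*model.parameters(), *head.parameters()], lr=1e-3)

for step in range(1000):
    i = torch.randint(0, x.shape[1] - sr, (1,)).item()  # random 1 s slice
    xi = x[:1, i:i + sr].T.unsqueeze(0)                 # (batch, time, 1)
    yi = y[:1, i:i + sr].T.unsqueeze(0)
    out, _ = model(xi)
    loss = torch.nn.functional.mse_loss(head(out), yi)
    opt.zero_grad(); loss.backward(); opt.step()
```

Swap the two model lines for whatever architecture comes next and the same pair of files still works.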
An analogy: sharing the input.wav and target.wav is just like sharing the source code of a video game. Maybe you made the software to run on the Nintendo Switch, but someone can use it to generate a new "executable" for the Xbox One or PS5, or can improve it and build a better-polished version for the Nintendo Switch too, with anti-aliasing and such.
There are already a lot of NAM model files; if the modelers had shared the training files (input.wav and target.wav), we could use them right now for training and use on AIDA too.
So, for the future improvement of the technology, please also share the input.wav and target.wav files.
Edit: I rewrote this for better clarity and added an analogy.