Is there any chance models can be converted from the .NAM models, such as those shared here: https://tonehunt.org/ ?
File extension is .nam, but the file format is json and looks very similar?
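For what it's worth, a .nam file can be opened as plain JSON, so it's easy to inspect what a model actually contains. Here is a minimal sketch; the keys shown (`version`, `architecture`, `config`, `weights`) are assumptions based on .nam exports I've looked at, not a guaranteed schema:

```python
import json

# Write a toy .nam-style file so the snippet is self-contained;
# a real file downloaded from tonehunt.org would be read the same way.
toy_model = {
    "version": "0.5.2",           # assumed key: exporter version
    "architecture": "WaveNet",    # assumed key: network type
    "config": {"layers": 18},     # assumed key: architecture hyperparameters
    "weights": [0.1, -0.2, 0.3],  # assumed key: flat list of trained weights
}
with open("example.nam", "w") as f:
    json.dump(toy_model, f)

# Load it back as ordinary JSON and inspect the structure.
with open("example.nam") as f:
    model = json.load(f)

print("top-level keys:", sorted(model.keys()))
print("architecture:", model.get("architecture"))
```

So the file format itself is no obstacle; the question is whether the network described inside is one the other tool can load.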
Am I right that NAM used JSON in the beginning, and that there is a converter in the json → nam direction? Then nam → json should be possible too? That would be really fantastic!!!
The .json files look similar, but they describe a different network type. The difference is in the underlying architecture, so to speak.
We talked about converting NAM files, but the only thing that really makes sense is to re-record the amp's input and output and retrain with our training pipeline. Our goal is to provide a training script that outputs models for both NAM and AIDA-X.
Do I understand correctly that in this approach the training target for the AIDA-X model would be generated by the NAM model?
Then it would be like an imprecise copy of an already imprecise copy?
Reminds me of the VHS era of my third-world childhood, when cassettes from copies from copies were blurry and faded, reaching me only after several copy generations.
That’s a very interesting question I have asked myself a lot in the last weeks. Maybe it is not so problematic: could it be that the differences occur mainly because of non-modelable waveform details, which won’t occur when modeling models? ;-)
This is also possible, but as you said, it would be a copy of a copy. I also remember those VHS tapes from friends where you couldn’t understand a thing because the audio was so broken.
But what I mean is to take the input and the direct output of the amp (in that regard, NAM and AIDA-X training are identical) before you feed it into training.