About the Neural Modelling category

Neural modelling of amps and other gear. Usage, training and exchange of neural model files.

Sorry, this question may seem simple, but is this about copying existing Neural DSP amp model plugins (like the Archetypes), and/or about doing something similar to ToneX / Neural DSP amp and pedal captures/profiles?

We are doing something similar to ToneX and Neural DSP, using trained neural models.

I usually play NAM on an M1 in Logic; when I tried it a few weeks ago I sold my QC, so you can imagine how excited I am to try this new plugin from AIDA. Tomorrow I will download the beta and start testing this feature.

I have some questions before:

  1. I imagine this plugin as a “capture loader”; is there also a “trainer”?

  2. Can the plugin read .nam files, or do we need a conversion?

  3. I imagine that this feature could be a real game-changer, and some friends of mine have already asked me for a review, so could you (or AidaDSP) write an F.A.Q. about this plugin?

Nice! So it loads captures/profiles?

If so, that’s a major game-changer and could definitely make the platform much more appealing.

Will there be a way to capture/profile on the device, or will it need to be done on a computer and then converted to work with the MOD devices?

Training AI models is so CPU-intensive that the Mod Dwarf could take days to train a single model. Even a laptop or an ordinary desktop computer takes many hours.
In the NAM community, there is an option to use Google Cloud (specifically Google Colab) for training. I'm not sure how much time it requires, but I think it's something like 3 hours. AidaDSP will probably go in that direction.

But there is one thing that Mod could do: a Capture Plugin.
The idea is to send the expected training input signal to the amp/pedal/etc. and record the processed signal coming back from it. The plugin would save the processed audio in a format that can then be used to train the model in the cloud, probably on Google Colab (see the sketch below).

That way a user would not have to go through steps such as using an external audio interface to record the sounds, editing the audio to sync it with the generated signal, and so on.
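For illustration only, here is a minimal Python sketch of that capture idea, assuming the sounddevice and soundfile packages; the file names and the 48 kHz rate are placeholders, not part of any existing MOD tool.

```python
# Minimal sketch of the capture idea above (assumptions: Python with the
# sounddevice and soundfile packages; file names and 48 kHz rate are placeholders).
import sounddevice as sd
import soundfile as sf

RATE = 48000  # sample rate used for both playback and capture

# Load the known training signal (the dry input the trainer expects).
dry, sr = sf.read("input_train_signal.wav", dtype="float32")
assert sr == RATE, "resample the training signal to the capture rate first"

# Play the dry signal into the amp/pedal and record its output at the same time,
# so both streams stay sample-aligned and no manual syncing is needed.
wet = sd.playrec(dry, samplerate=RATE, channels=1)
sd.wait()  # block until playback and recording are finished

# Save the processed audio; the (dry, wet) pair is what the cloud trainer needs.
sf.write("processed_by_device.wav", wet, RATE)
```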

Indeed, there’s a Colab notebook for AIDA-X as well, in which you upload the audio of the amp/pedal you want to clone, simply run the training script, and export the model to run in real time on your Dwarf ^^
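To give a rough idea of what the training step in such a notebook does conceptually, here is a hedged PyTorch sketch. The actual AIDA-X notebook uses its own training code; the model size, epoch count, loss, and file names below are illustrative assumptions only.

```python
# Conceptual sketch only: a tiny LSTM learning to map the dry signal to the
# device's processed signal. The real AIDA-X training code differs.
import torch
import torch.nn as nn
import soundfile as sf

class AmpModel(nn.Module):
    """Small LSTM -> linear network mapping dry samples to processed samples."""
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        y, _ = self.lstm(x)
        return self.out(y)

# Dry input and the device's processed output (assumed mono, same sample rate).
dry, _ = sf.read("input_train_signal.wav", dtype="float32")
wet, _ = sf.read("processed_by_device.wav", dtype="float32")
n = min(len(dry), len(wet))                     # trim to equal length
x = torch.tensor(dry[:n]).reshape(1, -1, 1)     # (batch, time, features)
y = torch.tensor(wet[:n]).reshape(1, -1, 1)

model = AmpModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(100):                        # real scripts train far longer
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

torch.save(model.state_dict(), "amp_model.pt")  # export step (format illustrative)
```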

Here is my first capture of my bass sound, which I use for my band Halogram. I captured the Neural DSP Darkglass plugin with my personal settings and without a cab. It doesn’t sound exactly like the original source, but I will definitely use this to play live with my Dwarf. If anybody could advise how to get better results, please let me know.
Halogram - Darkglass Distor.json (53.4 KB)
Halogram Darkglass Sound Comparison Real/Capture

Can you identify what sounds different? Does it have less gain than the original, or is it in the frequency response?
Narrowing it down helps us optimize the training script.
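For example (this is just a suggestion, not something defined by the plugin), one rough way to narrow it down is to compare the overall level and the averaged spectrum of the original render and the capture of the same dry input. Below is a hedged numpy/soundfile sketch with placeholder file names.

```python
# Rough comparison sketch (placeholder file names; audio is folded to mono).
import numpy as np
import soundfile as sf

ref, sr = sf.read("original_render.wav", dtype="float32")
cap, _ = sf.read("capture_render.wav", dtype="float32")
if ref.ndim > 1: ref = ref.mean(axis=1)
if cap.ndim > 1: cap = cap.mean(axis=1)
n = min(len(ref), len(cap))
ref, cap = ref[:n], cap[:n]

# Gain difference: compare RMS levels in dB.
rms = lambda x: np.sqrt(np.mean(x ** 2))
print("level difference: %.2f dB" % (20 * np.log10(rms(cap) / rms(ref))))

# Frequency difference: compare magnitude spectra over the whole clip (coarse).
diff_db = 20 * np.log10((np.abs(np.fft.rfft(cap)) + 1e-9) /
                        (np.abs(np.fft.rfft(ref)) + 1e-9))
freqs = np.fft.rfftfreq(n, 1.0 / sr)
print("largest spectral deviation near %.0f Hz" % freqs[np.argmax(np.abs(diff_db))])
```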

Hi @spunktsch, it looks like the capture has less distortion than the original source. Sorry, I am not good with technical sound stuff, so I wouldn't be able to explain exactly what is lacking. I did post a link with the original source and the capture to compare the sound; maybe you will be able to notice the difference :smiley:
I ran the training procedure twice and got the same result both times. Thanks in advance, dude, and congrats on this fantastic plugin!

Hello!!
I found the forum and the AIDA plugin a few days ago and it seems great! I would like to train a few models myself! I trained a few models and noticed that the training stops well before the 500 epochs. Why is that??
Cheers!