About the Neural Modelling category

Neural modelling of amps and other gear. Usage, training and exchange of neural model files.

4 Likes

Sorry if this question seems simple, but is this about copying existing Neural DSP amp model plugins (like the Archetypes), and/or about doing something similar to ToneX/Neural DSP amp and pedal captures/profiles?

We are doing something similar to ToneX and Neural DSP, using trained neural models.

3 Likes

I usually play with NAM on an M1 in Logic. When I tried it a few weeks ago I sold my QC, so you can imagine how excited I am to try this new plugin from AIDA. Tomorrow I will download the beta and start testing this feature.

I have some questions before:

  1. I imagine this plugin as a “capture loader”; is there also a “trainer”?

  2. Can the plugin read .nam files, or do we need a conversion?

  3. I imagine this feature could be a real game-changer, and some friends of mine have already asked me for a review, so could you (or AidaDSP) write an FAQ about this plugin?

2 Likes

Nice! So it loads captures/profiles?

If so, that’s a major game-changer and could definitely make the platform much more appealing.

Will there be a way to capture/profile on the device, or will it need to be done on a computer and then converted to work with MOD devices?

2 Likes

Training AI models is so CPU-intensive that a MOD Dwarf could take days to train a model. Even a laptop or an ordinary desktop computer takes many hours.
In the NAM community there is an option to use Google’s cloud (specifically Google Colab) for training. I’m not sure exactly how much time it takes, but I think it’s something like 3 hours. AidaDSP will probably go in that direction too.

But there is one thing MOD could do: a Capture plugin.
The idea is to send the expected training input signal to the amp/pedal/etc. and record the processed signal coming back from it. The plugin would save that processed audio in a format that can be used to train the model in the cloud, probably on Google Colab.

That way the user would not have to deal with steps like using an external audio interface to record the sounds, editing the audio to sync it with the generated signal, and so on.
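To illustrate the idea, here is a hedged sketch of how the same capture could be done on a regular computer today. This is not an existing MOD or AidaDSP feature; the file names, sample rate and alignment approach are illustrative assumptions only.

```python
# Hedged sketch (NOT an existing MOD/AidaDSP tool): play a known dry test signal
# into the amp/pedal through a reamp box, record the return, and time-align it
# so it can serve as the training target. Mono files and 48 kHz are assumptions.
import numpy as np
import soundfile as sf
import sounddevice as sd
from scipy.signal import correlate

FS = 48000

# Dry test signal that will be sent through the amp/pedal.
dry, fs = sf.read("input_dry.wav", dtype="float32")
assert fs == FS, "resample the dry file to the session sample rate first"

# Play the dry signal out of the audio interface while recording the processed return.
wet = sd.playrec(dry.reshape(-1, 1), samplerate=FS, channels=1, blocking=True)[:, 0]

# Estimate the round-trip latency with cross-correlation and trim it off,
# so dry and wet line up sample by sample for training.
lag = int(np.argmax(correlate(wet, dry, mode="full", method="fft"))) - (len(dry) - 1)
lag = max(lag, 0)
wet_aligned = wet[lag:lag + len(dry)]

sf.write("target_wet.wav", wet_aligned, FS)  # upload this alongside input_dry.wav
```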

3 Likes

Indeed, there’s a Colab notebook for AIDA-X as well, in which you upload the audio of the amp/pedal you want to clone, simply run the training script, and export the model to run in real time on your Dwarf ^^

Here is my first capture of the bass sound I am using for my band Halogram. I captured the Neural DSP Darkglass plugin with my personal settings and without a cab. It doesn’t sound exactly like the original source, but I will definitely use this to play live with my Dwarf. If anybody can advise me on how to get better results, please let me know.
Halogram - Darkglass Distor.json (53.4 KB)
Halogram Darkglass Sound Comparison Real/Capture

1 Like

Can you identify what sounds different? Does it have less gain than the original, or is the difference in the frequency response?
Narrowing it down helps us optimize the training script.

1 Like

Hi @spunktsch, it sounds like the capture has less distortion than the original source. Sorry, I am not good at technical sound stuff, so I wouldn’t be able to explain what is lacking. I did post a link with the original source and the capture to compare the sound. Maybe you will be able to notice the difference :smiley:
I did 2 training runs and got the same result both times. Thanks in advance, dude, and congrats on this fantastic plugin!

1 Like

Hello!!
I found the forum and the AIDA plugin a few days ago and it seems great! I would like to train a few models myself! I trained a few models and noticed that the training stops well before the 500 epochs. Why is that?
Cheers!
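Most likely the script is stopping on purpose: a common pattern in training scripts is early stopping, where training ends once the validation loss has not improved for a set number of epochs (“patience”). The sketch below only shows that generic logic with made-up loss values and an illustrative patience setting; it is not the actual AidaDSP training code.

```python
# Generic early-stopping sketch (illustrative values, not the AidaDSP script):
# stop once the validation loss has not improved for `patience` epochs in a row.
import random

best_loss = float("inf")
patience, bad_epochs = 25, 0          # patience value chosen only for the demo

for epoch in range(500):
    # stand-in for one real training epoch followed by a validation pass
    val_loss = 0.01 + random.random() * 0.005

    if val_loss < best_loss:
        best_loss, bad_epochs = val_loss, 0      # improvement: reset the counter
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"early stop at epoch {epoch}: "
                  f"no validation improvement for {patience} epochs")
            break
```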

Marshall JMP, low-sensitivity input - pedal platform, clean.
Two Notes Captor
Sounds great with Guitarix Gxcabinet 4x12 with treble set at 10 o’clock.
Heavy - ESR 0.003 (on the third attempt; the first two were 0.011 - I didn’t change anything, just kept re-running the cell). See the ESR sketch below.
A/B against the real amp is very, very close; amazed.
Sounds great with a DIY OCD, a Rat and a TS-9.
JMP Low Input.json (81.5 KB)
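For anyone wondering what that ESR figure means: it is the error-to-signal ratio reported by the training notebook, roughly the energy of the difference between the target recording and the model output divided by the energy of the target. A minimal sketch of that standard definition (any pre-emphasis filtering the script might apply is left out):

```python
# ESR (error-to-signal ratio): residual energy relative to the target's energy.
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    err = target - prediction
    return float(np.sum(err ** 2) / np.sum(target ** 2))

# An ESR of 0.003 means the residual carries roughly 0.3% of the target's
# energy, which is why the A/B against the real amp sounds so close.
```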

  Did this sound good to you?
  • :+1:
  • :-1:

0 voters

Really amazing capture! One of my favourites for hi-gain stuff, with the Metal Zone after the amp and before the cab in the signal chain. Laugh if you want, but when you set that Boss up right, it really screams terror. And the EQ on the pedal is amazing, very sensitive (almost too sensitive for the Dwarf’s knobs).
For the clean parts with some delay it’s also great.
Keep up the great work with the captures; ToneX can clean the Dwarf’s shoes.

1 Like

@danimourinho

The AIDA-X team has released AIDA-X Cloud: https://cloud.aida-x.cc/

Would you upload these models there? :pray:

2 Likes

Hi!
I just wanted to say that it is a pity that AIDA is not as well known as NAM, because I think it sounds really nice!
The traffic on the AIDA cloud is really low compared to NAM.
I hope things pick up…
Cheers!!

Hello everyone,

Last weekend I set up a local AIDA-DSP environment based on the GitHub repo AidaDSP/Automated-GuitarAmpModelling and started training my first models locally. Before that I had been doing tests with the online (Colab) notebook. I trained the same models both ways and did some A/B comparisons between the locally trained and the online-trained versions. For the tests I used Genome and loaded the same model (one trained locally, one trained online) on two CODEX instances, one panned left and the other panned right, routed to the same IR in the same IR block. If I am not mistaken, the online model version has its phase fully inverted. Maybe I made a mistake in my training target file, but I don’t usually touch the phase of the recording… Has anybody seen or experienced this before?
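One quick way to confirm a suspected polarity flip, as a hedged sketch (the file names are placeholders, mono renders are assumed, and this is not a Genome or AIDA-X feature): render the same input through both models and look at the normalized correlation at zero lag; a value near -1 means one render is simply inverted.

```python
# Compare two renders of the same input for inverted polarity (mono assumed).
import numpy as np
import soundfile as sf

a, fs_a = sf.read("render_local_model.wav", dtype="float32")    # placeholder names
b, fs_b = sf.read("render_online_model.wav", dtype="float32")
assert fs_a == fs_b, "renders must share a sample rate"

n = min(len(a), len(b))
a, b = a[:n], b[:n]

corr = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
print(f"zero-lag correlation: {corr:+.3f}")
print("polarity inverted" if corr < -0.5 else "same polarity (or just different)")
```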

Hello guys, please disregard my previous post: it looks like a Genome/CODEX issue, or simply normal behaviour, since different models may end up producing different phases…

1 Like

When I open Neural Amp Modeling,
I can’t find anything!
My MOD Dwarf is on OS 1.13 now
:sob:
What can I do?