MOD presents the AIDA-X and dives into Neural Modelling

The NAM files are not compatible (at the moment) with AIDA-X.

Good to know. Is there any repository of profiles for AIDA-X?

Nothing centralized yet as far as I know, but there is a pack of several profiles to start with in this thread: NeuralPi ToobAmp models

Also, forum posts where people share their profiles seem to be tagged with a “Model sharing” tag and can be found here: Model Sharing - MOD Audio Forum (probably the closest we’ve got to a central repository at the moment).

Yeah, all posts for AIDA-X models should go into the Model Sharing - MOD Audio Forum category.

Things are not yet in a good enough state for us to just offer a “good to go” pack.
Some of the models I uploaded (none of my own authorship; I simply adjusted or converted them) do not run on a MOD Dwarf in a reasonable way, as they take too much CPU. They are still useful on desktop.

Once we have better and more performant models, I might delete the current links so people move to the good stuff.

We also need to pay attention to audio levels across these models. For the 2 packs I uploaded I roughly matched their volumes by hand, but only within each pack, not globally.

Still, I think some kind of centralized place to download a good starting set of models would be great to have; we just do not have enough good/performant models yet.

Is there a way on our side to adjust the audio level of our models before sharing them?

I am not sure, but there is a gain value in the json:

{
    "in_shape": [
        null,
        null,
        1
    ],
    "in_skip": 1,
    "out_gain": -12,
    "layers": [

If it actually does what I expect it to do, I think it should be possible to add a step to the colab document which takes the full “predicted” track, calculates how much gain we need to normalize it, and saves this gain value to the model json.

This way, all the profiles created with the colab document would produce more or less the same volume, at least if all the models are made from the same input file.

That would be awesome: a typical problem with presets for all kinds of plugins is that the volume is often different, and it is hard to scroll through presets without sudden loud guitar screams.

Having all the profiles normalized to the same volume… mmmm… sweet.

OK, what I see is that all the downloaded models have this out_gain value, but not the models you get from the AIDA-X Trainer:

{
    "in_shape": [
        null,
        null,
        1
    ],
    "in_skip": 1,
    "layers": [
        {

So, if you add an "out_gain" (e.g. for a clean sound):

    "in_skip": 1,
    "out_gain": 10,
    "layers": [

It works!
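
For anyone who prefers to script this instead of editing the file by hand, here is a minimal sketch (the file path and gain value are placeholders) that loads a trained model json, sets out_gain, and writes it back:

import json

model_path = "model.json"  # placeholder: path to the trained AIDA-X model json
gain_db = 10               # placeholder: desired output gain in dB

with open(model_path) as f:
    model = json.load(f)

# out_gain is applied by AIDA-X to the model output, value in dB
model["out_gain"] = gain_db

with open(model_path, "w") as f:
    json.dump(model, f, indent=4)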

Yeah, the out_gain does what you expect, with its value being in dB.
You can also use in_gain, though I think that is less often needed.

I am trying to do something like that; so far I was able to run the whole dry input data through the model to get the modelled array, and I am now trying to calculate the gain.

I injected this step after the Evaluation step; that is where we get the full_dry variable data to work with.

I do not fully understand the normalization snippet I’ve borrowed, so I am not sure it does what it should for this particular array, but if I am lucky it produces the gain value required to normalize the output to -3 dB RMS. This value can later be used in the json (yet to be implemented).

It would be nice to add something like that to the colab file and put the gain into the produced json, so all the trained models would share the same volume.

P.S. Repeating the step several times makes my CUDA device run out of memory, so it looks like the memory usage of the modeled_full generation stacks up, adding about 0.5 GB on each run.

import numpy as np  # likely already imported in the notebook

rms_level = -3  # target RMS level in dB

# Apply the model to the full dry signal
modeled_full = model(full_dry[:, None, None].to(device)).cpu().flatten().detach().numpy()

# RMS normalization, borrowed from:
# https://superkogito.github.io/blog/2020/04/30/rms_normalization.html
sig = modeled_full

# linear rms level and scaling factor
r = 10**(rms_level / 10.0)
a = np.sqrt((len(sig) * r**2) / np.sum(sig**2))

output_gain_to_normalize = a
print(f"gain to achieve rms_level {rms_level}db is {output_gain_to_normalize}")

gain to achieve rms_level -3db is 2.227024804497091
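
Regarding the RAM build-up mentioned in the p.s.: a likely cause is that each full-signal pass keeps its autograd graph alive. A minimal sketch of a workaround, assuming the same model, full_dry and device variables from the notebook, is to run the pass under torch.no_grad():

import torch

# Running the full-signal pass without gradient tracking avoids keeping the
# autograd graph around between runs, which is the likely cause of the extra
# ~0.5 GB of GPU memory per run.
with torch.no_grad():
    modeled_full = model(full_dry[:, None, None].to(device)).cpu().flatten().numpy()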

P.P.S.
I would also like to add that, if volume normalization of the models makes sense to you guys, then the earlier it is added to the colab document, the fewer models will be generated with random volumes to hurt our ears.
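
One more detail if this ends up in the colab document: the scaling factor computed above is linear, while out_gain in the model json is in dB (as mentioned earlier in the thread), so it would need converting before being written out. A small sketch, reusing the output_gain_to_normalize value from the snippet above:

import numpy as np

# out_gain is expressed in dB, so convert the linear scaling factor first
out_gain_db = 20 * np.log10(output_gain_to_normalize)
# e.g. a linear factor of ~2.227 corresponds to roughly +7 dB

That value could then be written into the model json in the same way as the earlier out_gain example.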

Very exciting news!!!

Is the plugin also able to run on a Duo with acceptable performance?

By itself it runs most of the lower-CPU-usage models, but there is not much CPU left for other things like a cab sim.
A little portal could help, forcing the system to use both Duo cores, but this adds latency…

So yes, it mostly runs. Whether it is useful enough depends on how many plugins you need on a pedalboard.

My current standard pedalboard setup is 1 comp, 1 overdrive, 1 AIDA-X (switching between 2 heavy profiles), portal, 1 cab sim, 1 delay, 1 reverb, 1 x2 gain; latency is about 7.5 ms at 78/82% CPU.
Before, when I used a QC with the same profiles/fx, the latency was more than 9 ms.

Well, better than nothing. I was expecting it to be a no-go. By the way, why would anyone need a cab sim with AIDA-X? When you sample and model an amp, doesn’t the target sound come out of the amp’s speaker and through a mic?

Let’s say that, at the very least, one could turn a Duo into a cab modeler and put it in their effect chain, before or after another Duo, and get some use out of it.

BTW I’m not sure I understand what portal does and how it’s supposed to be used.

Not necessarily, @Tarrasque73.

One can capture anything from just a single element of the sound chain up to the full chain itself.

But you are right. If the model used in AIDA-X includes the cabinet, the extra cabinet plugin can be removed.

Results with separate plugins seem to be better though.

I’ve tried it and it’s simply amazing.

However, I’ve noticed that when adding new models’ json files to the file list, they don’t appear straight away in the list displayed in the plugin’s settings. I had to delete the plugin and add a new one to my pedalboard to see them displayed.

You can simply reload the page.

…or reload the pedalboard…

Hello,

I am facing a Python issue during training at step 2.

Traceback (most recent call last):
  File "/content/Automated-GuitarAmpModelling/dist_model_recnet.py", line 211, in <module>
    val_output, val_loss = network.process_data(dataset.subsets['val'].data['input'][0],
  File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 132, in process_data
    output[(l + 1) * chunk:-1] = self(input_data[(l + 1) * chunk:-1])
UnboundLocalError: local variable 'l' referenced before assignment

What could cause this issue?

Well, it seems it was an indentation error.
In the file CoreAudioML/networks.py, in the function process_data, there is this for loop:

for l in range(int(output.size()[0] / chunk)):
     output[l * chunk:(l + 1) * chunk] = self(input_data[l * chunk:(l + 1) * chunk])
     self.detach_hidden()
# If the data set doesn't divide evenly into the chunk length, process the remainder
if not (output.size()[0] / chunk).is_integer():
     output[(l + 1) * chunk:-1] = self(input_data[(l + 1) * chunk:-1])

The last line uses the variable l after the for loop; if the loop never runs (i.e. the data is shorter than one chunk), l is never assigned, hence the error. I just indented this if condition to be inside the for loop, and it seems to work for me. I was able to finish the training and upload the json file to my MOD Dwarf.
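
For reference, here is a sketch of an alternative fix that keeps the remainder handling outside the loop, so the remainder is processed exactly once and the case where the data is shorter than a single chunk is also covered (untested against the repository, just adapted from the snippet above):

num_chunks = int(output.size()[0] / chunk)
for l in range(num_chunks):
    output[l * chunk:(l + 1) * chunk] = self(input_data[l * chunk:(l + 1) * chunk])
    self.detach_hidden()
# If the data set doesn't divide evenly into the chunk length, process the remainder.
# Using num_chunks instead of l means this also works when the loop never ran.
if not (output.size()[0] / chunk).is_integer():
    output[num_chunks * chunk:-1] = self(input_data[num_chunks * chunk:-1])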

If you are OK with that fix, is there anywhere I can contribute?

That seems like it needs to go into GitHub - MaxPayne86/CoreAudioML at aidadsp_devel.