While training: KeyError: 'test_lossESR_final'

Picking up modeling amps from digital blends again.

I'm getting the KeyError: 'test_lossESR_final' error when I run the training step while creating a model. This isn't the first model I've made, but it's the first time I've gotten stuck at this step. It has been a while since I created my last model, though.
The file sizes of the input and target files check out, and so does the sample rate.

I fail to derive the cause from the code output :confused:
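In case it helps, this is roughly how I sanity-checked that the two files line up (a minimal sketch, not the Colab's own code; "input.wav" and "target.wav" are placeholders for the actual file names):

import wave

def wav_info(path):
    # Read only the WAV header: sample rate, length, channel count, sample width.
    with wave.open(path, "rb") as w:
        return (w.getframerate(), w.getnframes(), w.getnchannels(), w.getsampwidth())

print(wav_info("input.wav"))
print(wav_info("target.wav"))
# The two tuples should match exactly (same rate, same number of frames).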

/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py:28: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(ctx, input, min, max):
/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py:33: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):

args.model = SimpleRNN
args.device = MOD-DWARF
args.file_name = PF-SpacePirate
args.input_size = 1
args.hidden_size = 16
args.unit_type = LSTM
args.loss_fcns = {'ESR': 0.75, 'DC': 0.25}
args.skip_con = 0
args.pre_filt = A-Weighting
existing model file found, loading network.. continuing training..
/usr/local/lib/python3.10/dist-packages/torch/__init__.py:955: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:432.)
  _C._set_default_tensor_type(t)
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py:60: UserWarning: The verbose parameter is deprecated. Please use get_last_lr() to access the learning rate.
  warnings.warn(
  0% 1/540 [00:03<34:53,  3.88s/it]
Traceback (most recent call last):
  File "/content/Automated-GuitarAmpModelling/dist_model_recnet.py", line 238, in <module>
    val_output, val_loss = network.process_data(dataset.subsets['val'].data['input'][0],
  File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 777, in process_data
    output[l * chunk:(l + 1) * chunk] = self(input_data[l * chunk:(l + 1) * chunk])
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 698, in forward
    x, self.hidden = self.rec(x, self.hidden)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/rnn.py", line 917, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-6-6e2df7b97e5b> in <cell line: 46>()
     44 model_dir = f"/content/Automated-GuitarAmpModelling/Results/{file_name}_{config_file}-{skip_con}"
     45 step = max(step, 2)
---> 46 print("Training done!\nESR after training: ", extract_best_esr_model(model_dir)[1])

/content/Automated-GuitarAmpModelling/colab_functions.py in extract_best_esr_model(dirpath)
    161   with open(stats_file) as json_file:
    162     stats_data = json.load(json_file)
--> 163     test_lossESR_final = stats_data['test_lossESR_final']
    164     test_lossESR_best = stats_data['test_lossESR_best']
    165     esr = min(test_lossESR_final, test_lossESR_best)

KeyError: 'test_lossESR_final'
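From the traceback it looks like the KeyError is only a downstream symptom: training aborts during validation at the cuDNN "non-contiguous input" RuntimeError, so the stats JSON that extract_best_esr_model() reads never gets its test_lossESR_final entry. If it really is a contiguity problem, the generic PyTorch workaround (not necessarily what needs to change in CoreAudioML) is to call .contiguous() on the tensor before handing it to the recurrent layer; a minimal illustration:

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
lstm = nn.LSTM(input_size=1, hidden_size=16).to(device)

x = torch.randn(8, 200, 1, device=device).transpose(0, 1)  # transposed view -> non-contiguous
print(x.is_contiguous())             # False
out, hidden = lstm(x.contiguous())   # .contiguous() makes a dense copy that cuDNN accepts
print(out.shape)                     # torch.Size([200, 8, 16])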

The Google Colab page that the MOD Audio/AIDA DSP crew set up for model training has been neglected for a long time. Maybe they'll get around to updating and fixing it some day, or maybe it will continue to be left alone. It all depends on the company's current business situation (which I don't know anything about). Perhaps the team can tell us more.

1 Like

Ah, …

This process is at the core of user-generated content when it comes to profiling tech.

Hope the program still has a future :confused:

Hey @LievenDV,

I wrote about it in this thread. We still intend to fix it, but it will take time.

2 Likes

Hi there,

I’m experiencing the same problem as LievenDV.
Here is the error output:

/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py:28: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(ctx, input, min, max):
/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py:33: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(ctx, grad_output):

args.model = SimpleRNN
args.device = MOD-DWARF
args.file_name = Normal
args.input_size = 1
args.hidden_size = 16
args.unit_type = LSTM
args.loss_fcns = {'ESR': 0.75, 'DC': 0.25}
args.skip_con = 1
args.pre_filt = A-Weighting
no saved model found, creating new network
/usr/local/lib/python3.10/dist-packages/torch/__init__.py:955: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:432.)
  _C._set_default_tensor_type(t)
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py:60: UserWarning: The verbose parameter is deprecated. Please use get_last_lr() to access the learning rate.
  warnings.warn(
  0% 1/200 [00:02<09:38,  2.91s/it]
Traceback (most recent call last):
  File "/content/Automated-GuitarAmpModelling/dist_model_recnet.py", line 238, in <module>
    val_output, val_loss = network.process_data(dataset.subsets['val'].data['input'][0],
  File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 777, in process_data
    output[l * chunk:(l + 1) * chunk] = self(input_data[l * chunk:(l + 1) * chunk])
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 691, in forward
    x, self.hidden = self.rec(x, self.hidden)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/rnn.py", line 917, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-4-c34516351da8> in <cell line: 46>()
     44 model_dir = f"/content/Automated-GuitarAmpModelling/Results/{file_name}_{config_file}-{skip_con}"
     45 step = max(step, 2)
---> 46 print("Training done!\nESR after training: ", extract_best_esr_model(model_dir)[1])

/content/Automated-GuitarAmpModelling/colab_functions.py in extract_best_esr_model(dirpath)
    161   with open(stats_file) as json_file:
    162     stats_data = json.load(json_file)
--> 163     test_lossESR_final = stats_data['test_lossESR_final']
    164     test_lossESR_best = stats_data['test_lossESR_best']
    165     esr = min(test_lossESR_final, test_lossESR_best)

KeyError: 'test_lossESR_final'

I hope the project is not dead, because AIDA is really a nice plugin.

Bye

Some temporary fixes have been made on the aidadsp_devel branch.
When running STEP 0 you'll be asked to restart the session. DON'T!
Edit the code of Step 5.
Find the line: shutil.copyfile(os.path.join(model_dir, 'model_keras.json'), os.path.join('/content', os.path.split(model_dir)[-1]+'.aidax'))

Replace .aidax with .json. Or, if you're not comfortable messing with the code, manually download the model file from the content folder.
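In other words, only the destination extension changes; the edited line should end up roughly like this:

# Before: exports the model with an .aidax extension
shutil.copyfile(os.path.join(model_dir, 'model_keras.json'),
                os.path.join('/content', os.path.split(model_dir)[-1] + '.aidax'))
# After: keeps it as a plain .json file
shutil.copyfile(os.path.join(model_dir, 'model_keras.json'),
                os.path.join('/content', os.path.split(model_dir)[-1] + '.json'))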

Don’t run step 4 as it may crash the whole session.

Optionally, at Step 2, I usually change "norm=True" to "norm=False".
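For context, the norm flag presumably controls whether the audio is normalized before training; turning it off keeps the original recording levels. A generic illustration of peak normalization (not necessarily what the Colab does internally):

import numpy as np

def peak_normalize(audio, target_peak=1.0):
    # Scale the signal so its largest absolute sample reaches target_peak.
    peak = np.max(np.abs(audio))
    return audio if peak == 0 else audio * (target_peak / peak)

print(peak_normalize(np.array([0.1, -0.25, 0.5])))  # [ 0.2 -0.5  1. ]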

1 Like

Thanks for the heads-up, I missed that line.
Should be fixed now for Step 5.

3 Likes

OK, tried again:

I had to "restart session" because of some updates, but the third time it worked like a charm.
I still need to test the model itself, but it's a high-gain amp model that landed at a 0.018 ESR, so that should be promising enough.
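For anyone wondering what that number means: ESR is the error-to-signal ratio the training script reports, i.e. the energy of the difference between model output and target divided by the energy of the target, so 0.018 means the residual carries roughly 1.8% of the target's energy (the script also applies a pre-emphasis filter such as the A-weighting listed in the args, which this sketch ignores):

import numpy as np

def esr(target, prediction):
    # Error-to-signal ratio: residual energy divided by target energy.
    target = np.asarray(target, dtype=np.float64)
    prediction = np.asarray(prediction, dtype=np.float64)
    return np.sum((target - prediction) ** 2) / np.sum(target ** 2)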

Thanks all involved! :heart:

1 Like