Getting back into modeling amps from digital blends again.
I'm getting a KeyError: 'test_lossESR_final' when I run the training step. This isn't the first model I've created, but it's the first time I've gotten stuck at this step. It has been a while since my last model, though.
The file sizes of the input and target check out, and so does the sample rate.
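For reference, something like this is enough to double-check the rates and lengths (a minimal sketch; the file names are placeholders for my actual input/target WAVs):

import soundfile as sf

# quick sanity check of the training pair (paths are placeholders)
for path in ("input.wav", "target.wav"):
    info = sf.info(path)
    print(path, info.samplerate, "Hz,", info.frames, "frames")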
I can't work out the cause from the console output:
/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py:28: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
def forward(ctx, input, min, max):
/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py:33: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
def backward(ctx, grad_output):
args.model = SimpleRNN
args.device = MOD-DWARF
args.file_name = PF-SpacePirate
args.input_size = 1
args.hidden_size = 16
args.unit_type = LSTM
args.loss_fcns = {'ESR': 0.75, 'DC': 0.25}
args.skip_con = 0
args.pre_filt = A-Weighting
existing model file found, loading network.. continuing training..
/usr/local/lib/python3.10/dist-packages/torch/__init__.py:955: UserWarning: torch.set_default_tensor_type() is deprecated as of PyTorch 2.1, please use torch.set_default_dtype() and torch.set_default_device() as alternatives. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:432.)
_C._set_default_tensor_type(t)
/usr/local/lib/python3.10/dist-packages/torch/optim/lr_scheduler.py:60: UserWarning: The verbose parameter is deprecated. Please use get_last_lr() to access the learning rate.
warnings.warn(
0% 1/540 [00:03<34:53, 3.88s/it]
Traceback (most recent call last):
File "/content/Automated-GuitarAmpModelling/dist_model_recnet.py", line 238, in <module>
val_output, val_loss = network.process_data(dataset.subsets['val'].data['input'][0],
File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 777, in process_data
output[l * chunk:(l + 1) * chunk] = self(input_data[l * chunk:(l + 1) * chunk])
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/content/Automated-GuitarAmpModelling/CoreAudioML/networks.py", line 698, in forward
x, self.hidden = self.rec(x, self.hidden)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/rnn.py", line 917, in forward
result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-6-6e2df7b97e5b> in <cell line: 46>()
44 model_dir = f"/content/Automated-GuitarAmpModelling/Results/{file_name}_{config_file}-{skip_con}"
45 step = max(step, 2)
---> 46 print("Training done!\nESR after training: ", extract_best_esr_model(model_dir)[1])
/content/Automated-GuitarAmpModelling/colab_functions.py in extract_best_esr_model(dirpath)
161 with open(stats_file) as json_file:
162 stats_data = json.load(json_file)
--> 163 test_lossESR_final = stats_data['test_lossESR_final']
164 test_lossESR_best = stats_data['test_lossESR_best']
165 esr = min(test_lossESR_final, test_lossESR_best)
KeyError: 'test_lossESR_final'
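From the second traceback, extract_best_esr_model is looking for 'test_lossESR_final' in the stats file; I assume that key never got written because the run aborted on the cuDNN error above, but I don't know why that error happens in the first place. To see what actually ended up in the stats files I ran something like this (a minimal sketch; model_dir is the same Results folder the notebook cell builds, and the *stats*.json glob is just my guess at the file extract_best_esr_model opens):

import glob, json, os

# placeholder; in the notebook this is the run folder built from file_name/config_file/skip_con
model_dir = "/content/Automated-GuitarAmpModelling/Results"

# list the keys present in every stats JSON under the Results folder
for stats_file in glob.glob(os.path.join(model_dir, "**", "*stats*.json"), recursive=True):
    with open(stats_file) as f:
        print(stats_file, sorted(json.load(f).keys()))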