The Google Colab for AIDA-X training is failing again

I’ve been trying to capture a NAM of a Hellwin amplifier for several days, but when I run step 1, called ‘set-up’, I get the following error:

---
Checking GPU availability... GPU available!
Getting the code...
Checking for code updates...
Installing dependencies...
Mounting google drive...
Mounted at /content/drive
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-2-727379d7cf67> in <cell line: 0>()
     50   os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:2"
     51
---> 52   from colab_functions import wav2tensor, extract_best_esr_model, create_csv_aidax
     53   from prep_wav import WavParse
     54   import plotly.graph_objects as go

5 frames
/usr/local/lib/python3.11/dist-packages/torch/_library/fake_impl.py in register(self, func, source)

RuntimeError: operator torchvision::nms does not exist

I’ve tried it from several computers, but the error persists, and I want to continue capturing amps. I waited a few days for someone else to report the error, but I think I’m the only one experiencing this.

It’s one of those “classic” errors.

Did you get the message to “update” and then try again, or was that not the case this time?
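
If you want to confirm what’s going on on your side first: as far as I know, that particular error usually means the torch and torchvision builds in the runtime don’t match each other. A minimal check you can run in a fresh Colab cell (plain PyTorch only, nothing specific to the AIDA notebook):

# Minimal diagnostic for the "operator torchvision::nms does not exist" error.
# It typically shows up when torch and torchvision come from mismatched builds;
# on such an install, simply importing torchvision (as the notebook's own
# imports do indirectly) raises the same RuntimeError as in the traceback.
import torch
import torchvision

print("torch:", torch.__version__, "| CUDA:", torch.version.cuda)
print("torchvision:", torchvision.__version__)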

It’s been a while, but since I work locally via a Docker image, I had no issues.
I’ll need to try it tomorrow, IF I find the time…

I’ve been having issues like that too. Here’s a workaround I’ve found that lets me get through model training.

First, replace the current code at the Deps stage with an older version of the code shown below…

# Check PyTorch and CUDA versions

import torch
import re

pytorch_version = torch.__version__
cuda_version = torch.version.cuda

required_pytorch_version = "2.0.1"
required_cuda_version = "11.7"

def version_higher(version1, version2):
    def extract_numeric_version(version):
        return tuple(map(int, re.findall(r'\d+', version)))
    return extract_numeric_version(version1) > extract_numeric_version(version2)

if version_higher(pytorch_version, required_pytorch_version) or version_higher(cuda_version, required_cuda_version):
    print(f"WARNING: Your environment has PyTorch {pytorch_version} and CUDA {cuda_version}. This environment is not supported.")
    print("Proceeding to install required dependencies...")
    !pip3 uninstall --disable-pip-version-check -y torch torchvision torchaudio tensorflow tensorboard
    !pip3 install --disable-pip-version-check --no-cache-dir torch==2.0.1+cu117 torchvision==0.15.2+cu117 torchaudio==2.0.2+cu117 -f https://download.pytorch.org/whl/torch_stable.html
    #!pip3 install --disable-pip-version-check --no-cache-dir tensorflow==2.12.0 tensorboard==2.12.0

Then run the step 0 (deps) cell, and run step 1 to mount the drive.
You’ll get an error message afterwards, but don’t fret: go back to step 0, swap the old-version code back out for the default code, and run step 0 once again. When that’s done, run step 1 (mount drive) again. It should say that the drive has been mounted. If so, you can move on to step 2 and so on.
Before you start model training, create a new cell before the model training stage and type
!pip install tensorboard
Then run the cell you’ve created. This prevents the tensorboard error that otherwise messes up training.
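
If you’d rather not have to remember that extra cell every time, here’s a small variant of the same idea (just a sketch of mine, not from the official notebook) that only installs tensorboard when it’s actually missing and confirms it imports before training:

# Install tensorboard only if the runtime doesn't already have it,
# then confirm it imports before starting the training step.
import importlib.util
import subprocess
import sys

if importlib.util.find_spec("tensorboard") is None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "tensorboard"])

import tensorboard
print("tensorboard", tensorboard.__version__)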

That’s been my recent experience with the model trainer. Eventually, the trainer will break in new ways and I might not be able to find workarounds for future problems.

The Colab page keeps finding new ways to fail, time and time again, which is really quite embarrassing when you compare it to the likes of ToneZone 3000 and so on.

Well, it didn’t work for me lol. As soon as I make the change in Deps, it throws a syntax error, and no matter how hard I’ve tried to figure it out, I can’t fix it.

I did a successful test using the “next” branch. You might try it: Next

Well, it did work: it gave me a capture, but of very poor quality. It’s a high-gain amplifier and it sounds like the input, without any kind of gain or anything.

Could you share your input/target files?
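
In the meantime, here’s a quick sanity check you can run on the pair yourself (a rough sketch: it assumes the soundfile package is installed and uses placeholder file names, so swap in your own):

# If the target wav is nearly identical to the input wav, the trainer can only
# learn a pass-through, which would explain the "no gain at all" result.
import numpy as np
import soundfile as sf  # assumed installed: pip install soundfile

inp, sr_in = sf.read("input.wav")     # placeholder names
tgt, sr_tgt = sf.read("target.wav")
n = min(len(inp), len(tgt))
inp, tgt = np.asarray(inp)[:n], np.asarray(tgt)[:n]

# Error-to-signal ratio between the two files: a value near 0 means the target
# is basically a copy of the input, i.e. the amp never made it into the recording.
esr = np.sum((tgt - inp) ** 2) / (np.sum(tgt ** 2) + 1e-12)
print("ESR(input vs target):", esr)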

Yes, I can pass you my target and input. Write to me and I’ll gladly send them to you; my email is niideaf3@gmail.com. Perhaps it’s my mistake, as the capture might be of poor quality.

The Google Colabs have been totally abandoned by the AIDA and NAM devs. It’s pretty disappointing, especially since updating them now and then probably wouldn’t take much effort. They really don’t make things easy for the average user. I ended up giving up on AIDAX and just use the ToneHunt website now for my NAM captures—much simpler. I really wanted to support AIDA, and if they ever make the capture process easier, I’d definitely give them another shot.

The last time there was a change and AIDA/MOD needed to update, they were here to help, and that wasn’t very long ago.

I think we need to make some kind of guide for setting up a Docker image and running the profiling on our own end. With some help from @madmaxwell and ChatGPT I could get it to work.

Rough and cutting corners:

On Windows:

  • install Docker Desktop (free)
  • install git
git clone https://github.com/aidadsp/Automated-GuitarAmpModelling.git
cd Automated-GuitarAmpModelling && git checkout next && git submodule update --init --recursive
docker compose up -d

Then open your browser and go to

http://localhost:8080

You should see the Jupyter interface. Note that this requires you to set up Docker, and you need GPU support and NVIDIA drivers in place. How to do that is well documented for several platforms and OSes, since everybody does training these days!
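
A quick way to confirm the container actually sees the GPU before starting a run (just plain PyTorch in a notebook cell, nothing AIDA-specific):

# Verify the Jupyter container has working CUDA before kicking off training.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))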

That sounds like the best option for the MOD community: just a simple step-by-step on how to install the Docker image locally. That way, if it needs updating or support, people can just help each other out in the comments.
I’ll give it a try myself.
By the way, is the Git repo just for NAM, or can it also handle AIDA captures?

Also apologies to @madmaxwell for my earlier comment — I just caught up with the latest forum post and really appreciate all the work you’ve been putting in. I hadn’t checked the forum in a while, so it’s great to see you’re keeping things so well updated. Thanks for that.

Well, at least for those with an NVIDIA GPU and a computer science degree :smiley:

I used it only for AIDA-X; not sure how to use it for NAM.

I have a question: isn’t AIDA DSP supposed to be for audio mods? It’s very disappointing that the Google Colab, a system that works well, is being abandoned; it just needs more work. Personally, I use Guitarix on an RPi 4 and AIDA-X works great for me. Is there no way for the community to at least keep the Google Colab going?

Well, it’s not abandoned, but it needs a fix from time to time. It’s not the ideal solution, and it takes us a little while to fix it.

I could also set up a cloud solution that is just an upload form that does everything automatically, and you get the amp model as a download. But GPU time is expensive, and no one is going to pay 1€ per file.

We’d also like to be able to use ToneZone3K, but if the demand for that is low, the devs will likely not bother adding it.

I could also set up a Google Drive for people to throw their files in and run them through my setup. But this is slow, and you’d have to wait a few hours before you get something back.

Sorry to say, but this is the best we can do at the moment.

And what if, in order to have a greater voice with ToneZone3K, the AIDA-X format started to go viral? From what I’ve seen, the vast majority of pedalboards support NAM, but in their own format, and if a capture isn’t compiled that way they won’t run it. If we could get those pedalboards to also open the way for AIDA-X in their systems, its use would increase, since AIDA-X is much lighter than NAM. That could lead to it being implemented in pedalboards like the GP-200, which is very well known and has NAM but only runs NANO. We could achieve a lot.