If you have a PC with a decent GPU, you can try using Colab's “local runtime” option instead, for free.
Your Google Colab notebook then connects, via your browser, to a locally running Jupyter server and performs all the calculations there.
I took this route because, for some reason, I initially had no free Colab resources available.
However, getting this to work is quite tricky, at least on my Windows box, because the stack of dependencies involved is quite tall, and each part was a pain to configure.
On my Windows machine it required:
- CUDA installed,
- WSL2 installed,
- Docker Desktop installed to run the training image,
- a modified aidadsp/pytorch Docker image with additional packages and the Jupyter extension that enables remote execution from Colab (see the Dockerfile sketch below this list).
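The image modification itself can be small. Here is a minimal Dockerfile sketch, assuming a `latest` base tag and that the image ships pip; `jupyter_http_over_ws` is the extension Colab's local-runtime mode relies on, and beyond that you only add whatever extra packages the notebook needs:

```dockerfile
# Sketch only: the base tag and the need for extra packages are assumptions,
# adjust to whatever your notebook actually imports.
FROM aidadsp/pytorch:latest

# jupyter_http_over_ws is the extension that lets Colab talk to a local Jupyter
RUN pip install --no-cache-dir notebook jupyter_http_over_ws && \
    jupyter serverextension enable --py jupyter_http_over_ws
```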
I run all of this locally; Jupyter prints a link with a token, which I paste into the Colab document via “Connect to a local runtime”, so the notebook executes in that local Docker container on my local GPU.
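The launch looks roughly like this (a sketch: the image name and host path are placeholders for my own, while the `--NotebookApp` flags are the ones Colab's local-runtime documentation calls for):

```bash
# Image name and host path are placeholders; the --NotebookApp flags are what
# Colab requires to accept a connection to a local runtime.
docker run --gpus all -p 8888:8888 \
    -v /path/to/wavs:/workspace/data \
    my-modified-aidadsp-image \
    jupyter notebook \
        --ip=0.0.0.0 --no-browser --allow-root \
        --NotebookApp.allow_origin='https://colab.research.google.com' \
        --NotebookApp.port_retries=0
# Jupyter prints a URL containing ?token=...; use it (as localhost:8888) in
# Colab under Connect -> "Connect to a local runtime".
```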
Also, some Google-specific functions, such as mounting Google Drive or uploading files, do not work with a local runtime, so I had to make small modifications to the notebook in order to put the input/target WAVs directly into the container.
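Concretely, the Drive-mount and upload cells get replaced with plain local paths. A minimal sketch, where the directory and file names are placeholders for my own (the mount point matches the `-v` flag above):

```python
import os

# /workspace/data is the folder mounted into the container with -v above;
# the file names are placeholders for your own recordings.
DATA_DIR = "/workspace/data"
input_wav = os.path.join(DATA_DIR, "input.wav")    # dry DI signal
target_wav = os.path.join(DATA_DIR, "target.wav")  # reamped amp/pedal output

for path in (input_wav, target_wav):
    assert os.path.exists(path), f"missing {path}"
```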
The “upload your own dry guitar files, and listen to the predicted output of the model” step still does not work for me and requires further notebook fixes.
So yes, it is possible and it works for me, but preparing the environment took a lot of effort and frustration, even though I have a systems engineering background.
I can try to share more details if needed.
P.S. Actually, I think training the model locally should not go through this Colab document the way I currently do; that is far too roundabout. It should be as simple as running a Docker container once, without any UI, against a couple of files and parameters.
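Purely to illustrate the shape of what I mean (the script name and flags here are hypothetical, not an existing interface):

```bash
# Entirely hypothetical one-shot run: train.py and its flags are placeholders
# showing the shape of a headless workflow, not something that exists today.
docker run --gpus all \
    -v /path/to/wavs:/workspace/data \
    my-modified-aidadsp-image \
    python train.py \
        --input /workspace/data/input.wav \
        --target /workspace/data/target.wav \
        --epochs 500
```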