Colab workbook ran out of capacity? | "usage limits in Colab"

EDIT: there is a way to continue with CPU alone, but it seems to be a lot slower!

Hi guys,

Wanted to continue my modeling journey.
(Me, into modeling oh behaaave)

Anyway, in the first step, when trying to connect, I get this message:

Unable to connect to GPU backend
You cannot currently connect to a GPU due to usage limits in Colab. More information
If you want more access to GPUs, you can buy Colab compute units with Pay As You Go.

Does this mean I’m done modeling for now?

Short answer: yes, you can disable the GPU and use only the CPU, which has fewer limits. For that, go to Runtime → Change runtime type → Hardware accelerator → None.
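If you want the same notebook cell to keep working after switching to the CPU-only runtime, a small helper like this can pick the device automatically (the helper name is mine, not something from the training notebook):

```python
def pick_device():
    """Return 'cuda' if a GPU is available in this runtime, else 'cpu'.

    Falls back gracefully, so the same cell runs unchanged after
    Runtime -> Change runtime type -> Hardware accelerator -> None.
    """
    try:
        import torch  # preinstalled on Colab; guarded for other environments
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

print(pick_device())  # 'cuda' on a GPU runtime, 'cpu' otherwise
```

Then use `device = pick_device()` wherever the notebook hardcodes `"cuda"`.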

Colab is a product by Google that lets you run Python code on a cloud instance that can even have a GPU. The thing is, it's a limited resource: you can't keep using it indefinitely, and the limits for the free subscription aren't documented anywhere because they can change depending on the traffic Google is seeing.
Here's more info: Google Colab

Last thing: those limits are linked to your Google account, so a different Google account that hasn't used Colab this month would have all its resources available again :slight_smile:

Have fun modeling!


If you have a PC with a decent GPU, you can try using a “local runtime” instead, for free.

Your Google Colab document would connect via your browser to the locally running Jupyter and perform all the calculations there.
I took this route because, for some reason, I initially had no free Colab resources.

However, getting this to work is quite tricky, at least on my Windows box, because the tower of dependencies involved is quite tall, and each part was a pain to configure.

In general, on my Windows machine it required:

  1. CUDA installed,
  2. WSL2 installed,
  3. Docker Desktop installed to run the training image,
  4. a modified aidadsp/pytorch Docker image with additional packages and a Jupyter plugin enabling remote execution from Colab.
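The connection side of the steps above roughly follows Colab's documented “local runtime” flow; here is a sketch of the commands, where the port and the Docker image name are assumptions based on my setup, not something official:

```shell
# Inside the container (or any local Python env with Jupyter):
# install and enable the extension Colab uses to reach a local Jupyter.
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Start Jupyter so that colab.research.google.com is allowed to connect.
# It prints a http://localhost:8888/?token=... URL that you paste into
# Colab's "Connect to a local runtime" dialog.
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0

# If Jupyter runs inside Docker, publish the port and pass the GPU through
# (the image name here is my modified build, an assumption for this sketch):
docker run --gpus all -p 8888:8888 aidadsp/pytorch-modified
```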

I run all of this locally, and Jupyter gives me a link with a token that I can feed into the Colab document, so it executes in that local Docker container, on my local GPU.

Also, some Google-specific functions, like mounting Google Drive or uploading files, do not work this way; I had to make small modifications to the notebook so I could put the input/target WAVs directly into the container.
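The notebook modification amounts to replacing the `drive.mount` / upload cells with plain local paths. A tiny hypothetical helper (the function name and paths are mine, not from the notebook) lets one cell work in both environments:

```python
import os

def resolve_audio(candidates):
    """Return the first path in `candidates` that actually exists,
    so one notebook cell works both on Colab (/content/drive/...)
    and inside a local container (e.g. files copied to /workspace/data)."""
    for p in candidates:
        if os.path.exists(p):
            return p
    raise FileNotFoundError(f"none of these paths exist: {candidates}")

# Hypothetical usage, replacing the Drive-mount cell:
# input_wav = resolve_audio([
#     "/content/drive/MyDrive/input.wav",   # Colab-hosted runtime
#     "/workspace/data/input.wav",          # local container
# ])
```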

The “upload your own dry guitar files, and listen to the predicted output of the model” part still does not work for me and requires additional notebook fixing.


Let’s just say it is possible and it works for me, but preparing the environment took a lot of effort and frustration, even though I have a systems engineering background.

I can try to share more details, if required.

P.S. Actually, I think that training a model locally should not go through this Colab document the way I currently do; that is too long a way around. It should be as simple as running a Docker container once, without any UI, against a couple of files and some params.
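Something like this hypothetical one-shot run is what I mean; the image name, script, and flags are all assumptions for illustration, not a real interface:

```shell
# Hypothetical headless training run: mount the WAVs into the container,
# train once, write the result back to the mounted directory, and exit.
docker run --rm --gpus all \
  -v "$PWD/data:/data" \
  my/nam-trainer \
  python train.py --input /data/input.wav --target /data/target.wav --epochs 100
```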

Thanks for the info @itskais!

That seems like a LOT of work @ignis32
I just let the “CPU only” run go while doing some work around the house, and that was fine.

A smoother workflow for training on my local machine in the future would be nice though :smiley:
