Local training

Has anyone tried to train AIDA-X models locally? Is this documented anywhere? I know a number of people with large collections of captures who would be happy to train AIDA-X models, but don't want to do them one by one in a Colab notebook.

4 Likes

Count me in on the list for this. I've got some high-quality gear from which we created tons of Kemper Profiles (20k+, bass-specific, since those are so hard to find) that I would love to recreate for AIDA-X to put on a Dwarf. But there's no way I'm running all of that through a Colab notebook.

3 Likes

I have found these instructions

It is not user-friendly, and since it requires an NVIDIA CUDA-enabled GPU, which I don't have, I haven't been able to test it.

1 Like

There is, actually:

It's just the first-time setup of everything and enabling the GPU that takes time.
If you need help on Windows or Linux, just ask away.

Quick rundown:

install docker
install pytorch
install git

download repository
run docker image
open notebook website
set everything up
train

joy :slight_smile:
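For reference, here is a minimal sketch of those steps on a Linux shell. The repository URL and image name are deliberately left as placeholders; use whatever the official AIDA-X training guide points to:

```
# one-time installs (distro-specific): docker, git, NVIDIA container toolkit
git clone <training-repository-url>    # placeholder, substitute the real repo
cd <repository-folder>
# run the project's docker image with GPU access and a published notebook port
docker run --rm -it --gpus all -p 8888:8888 <image-name>
# then open the Jupyter URL it prints and work through the notebook cells
```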

2 Likes

I was able to run that notebook in Jupyter inside a local Docker container on my Windows machine with an RTX 3060 video card. Basically, that's possible. Collab workbook ran out of capacity? | "usage limits in Colab" - #3 by ignis32

But it took some time to set up all the dependencies (adding WSL2 in particular was a pain). I used the Dockerfile below to get the necessary Python libraries and Jupyter.

```
# Base image published by the AIDA DSP project, with PyTorch preinstalled
FROM aidadsp/pytorch

USER root

RUN apt update && \
    apt -y install git

RUN mkdir /content && \
    chown aidadsp /content

USER aidadsp

# jupyter_http_over_ws lets the Colab frontend talk to this local Jupyter
RUN pip install jupyter_http_over_ws && \
    jupyter serverextension enable --py jupyter_http_over_ws && \
    pip install librosa plotly

# instead of the google drive input
RUN mkdir /content/drive

#### adding colab functions. bit of a mess.
USER root
RUN apt -y install build-essential python3-dev python3-numpy
# fails in pip for an unknown reason, going with conda; also would not install without root
RUN conda install -y -c conda-forge google-colab

USER aidadsp
WORKDIR /content
ENTRYPOINT ["jupyter", "notebook", "--ip='*'", "--port=8888", "--allow-root", "--NotebookApp.allow_origin='https://colab.research.google.com'", "--NotebookApp.port_retries=0"]
```

The notebook depends on some Google stuff that is not available locally, so I had to modify it, and I also had to put the wav files into the docker container by hand via the Docker Desktop interface, since file upload does not work that way.

Unfortunately I do not have a well-documented step-by-step approach for this, but here are some notes I took for myself in the process; I hope they might help.

Build the image from the Dockerfile with:

```
docker build -t aidax_win_local_colab .
```

STEP 1)
Comment out in the notebook code (we will upload in another way):

```
#from google.colab import files

#from google.colab import drive
#print("Mounting google drive...")
#drive.mount('/content/drive')
```

STEP 1.3)
Use the Docker Desktop files interface and put a folder with input.wav and target.wav into the /content/drive/ folder, e.g.:

```
/content/drive/Elmwood3100-reamp.io
/content/drive/Elmwood3100-reamp.io/target.wav
/content/drive/Elmwood3100-reamp.io/input.wav
```

STEP 3)
Evaluation upload does not work yet. Sad face.

Run the container with:

```
docker run -dp 8888:8888 --rm -it --gpus all aidax_win_local_colab
```
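If you would rather stay on the command line than use the Docker Desktop file browser, `docker cp` does the same copy into a running container (the container name below is whatever `docker ps` reports; the folder follows the example above):

```
# copy a capture folder from the host into the running container
docker cp ./Elmwood3100-reamp.io <container-name>:/content/drive/
```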

P.S.

However, I have a feeling that my attempt to run that notebook locally was excessive and not straightforward.
The notebook looks like it is based on Automated-GuitarAmpModelling, and if I had to set up a local training environment again, I would instead try to figure out how to use the Automated-GuitarAmpModelling code directly, without the notebook and its dependency hell. It seems it was originally created for local execution of the same procedure, and the notebook just provides a GUI and Google Colab processing power, which are not much use locally anyway.
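If you want to try that route, a rough starting point might look like this; the repository URL is an assumption (there are several forks, including an AidaDSP one), and dist_model_recnet.py is the training entry point I remember from that codebase, so check the README before relying on any of it:

```
# fetch the training code directly, skipping the notebook entirely
git clone https://github.com/Alec-Wright/Automated-GuitarAmpModelling
cd Automated-GuitarAmpModelling
# list the actual flags before running a training job
python dist_model_recnet.py --help
```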

2 Likes

Yeah, that's right. You really just need the Python scripts and a PyTorch setup with CUDA enabled.
The notebook is just there for easy management.
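A quick way to confirm that the CUDA-enabled PyTorch actually sees the GPU before starting a long training run (plain PyTorch calls, nothing project-specific):

```
python3 -c "import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"
```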

My idea is still to have an easy frontend talking to the Python scripts, shipped as an all-in-one PyInstaller build (sandboxed from everything else) on all major platforms. But this will likely not happen that fast.

I have a Ryzen 5 laptop with an AMD/Radeon GPU.

I have managed to install PyTorch with ROCm (Radeon Open Compute, AMD's CUDA-like stack).

I have been able to run the scripts from the command line.

The reported elapsed time to train the same dataset (HIWAT model) that took 15 minutes on Colab is about 15 hours.

I know it is a dumb question without giving the HW specs, but do you think that is normal, or is it "too close" to CPU-only PyTorch performance (meaning the ROCm stuff is not working or is useless)?

Edit: it took about 50 minutes, because it halted at the 150th epoch while it was expected to run for 2000, so I will need some more tests to compare my setup with Colab.
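A generic sanity check for whether ROCm is actually doing the work, assuming the ROCm build exposes the GPU through the usual torch.cuda API (it does in current releases): time a big matrix multiply on GPU vs CPU. If the GPU pass is not dramatically faster, training is effectively running on the CPU:

```
python3 - <<'EOF'
import time
import torch

print("GPU visible:", torch.cuda.is_available())  # True on ROCm builds too

x = torch.randn(4096, 4096)
devices = (["cuda"] if torch.cuda.is_available() else []) + ["cpu"]
for dev in devices:
    y = x.to(dev)
    if dev == "cuda":
        torch.cuda.synchronize()  # finish the transfer before timing
    start = time.time()
    for _ in range(10):
        y @ y
    if dev == "cuda":
        torch.cuda.synchronize()  # wait for the kernels to finish
    print(dev, round(time.time() - start, 2), "s for 10 matmuls")
EOF
```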

Unfortunately, AMD is a little behind the NVIDIA CUDA stuff.
Maybe you can try Enable PyTorch with DirectML on Windows | Microsoft Learn
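If you go the DirectML route, the basic smoke test looks something like this (torch-directml is Microsoft's package and Windows-only; whether the training scripts accept a non-CUDA device object is a separate question):

```
pip install torch-directml
python -c "import torch_directml; print(torch_directml.device())"
```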

But I have no experience with this or any other AMD card.
I have to say, even the old 1050 Ti in my notebook works well for training. This might be an option if someone has an old computer; the graphics cards are pretty cheap used here in Germany (around 80€).

Thanks,

I am on Linux, so no DirectML, but it prompted me to look into the Vulkan backend of PyTorch
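For what it's worth, PyTorch can report whether a given build was compiled with Vulkan support at all; the stock desktop wheels usually are not, so expect this to print False unless you build from source:

```
python3 -c "import torch; print(torch.is_vulkan_available())"
```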

Hello there,

I got the Docker container up and running on my Linux machine. But if I run the Set-Up cell I get this error:

```
Checking GPU availability… GPU available!
mkdir: cannot create directory ‘/content/temp’: No such file or directory
touch: cannot touch ‘/content/temp/logs.txt’: No such file or directory
Installing dependencies…
/bin/sh: 1: cannot create /content/temp/logs.txt: Directory nonexistent
Getting the code…
/bin/sh: 1: git: not found
```

Any help?