Local training

Has anyone tried to train AIDA-X models locally? Is this documented anywhere? I know a number of people with large collections of captures who would be happy to train AIDA-X models, but don’t want to have to do them one by one in a Colab notebook.

5 Likes

Count me in on the list for this. I’ve got some high-quality gear from which we created tons of Kemper Profiles (20k+, bass-specific, since those are so hard to find) that I would love to recreate for AIDA-X to put on a Dwarf. But there’s no way I’m running through a Colab notebook for that.

3 Likes

I have found these instructions

They are not user friendly, and since they require an NVIDIA CUDA-enabled GPU, which I don’t have, I haven’t been able to test them

1 Like

there is actually:

It’s just the first-time setup of everything and enabling the GPU that takes time.
If you need help for Windows or Linux, just ask away.

Quick rundown (a rough command sketch follows below):

install docker
install pytorch
install git

download repository
run docker image
open notebook website
set everything up
train
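
A rough sketch of those steps on Linux (assuming Ubuntu-style package names, an NVIDIA GPU with the container toolkit already set up, and the aidadsp/pytorch Docker image; the /workspace mount path is just illustrative, so adjust everything to your own setup):

# install the host-side tools
sudo apt install docker.io git

# grab the training code / notebook
git clone https://github.com/AidaDSP/Automated-GuitarAmpModelling.git

# run a CUDA-enabled PyTorch container with the repo mounted inside it
sudo docker run --rm -it --gpus all -p 8888:8888 \
    -v "$PWD/Automated-GuitarAmpModelling:/workspace" aidadsp/pytorch

# inside the container: start Jupyter (install it first if the image does not ship it),
# open the printed URL in a browser, load the notebook, set the paths, and train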

joy :slight_smile:

3 Likes

I was able to run that notebook in Jupyter inside a local Docker container on my Windows machine with an RTX 3060 video card, so basically it’s possible. Collab workbook ran out of capacity? | "usage limits in Colab" - #3 by ignis32

But it took some time to set up all the dependencies (especially adding WSL2 was a pain). I used the Dockerfile below to get the necessary Python libraries and Jupyter.

FROM aidadsp/pytorch

USER root

RUN apt update && \
    apt -y install git
	
RUN mkdir /content && \
    chown aidadsp /content
	
USER aidadsp

RUN pip install jupyter_http_over_ws && \
    jupyter serverextension enable --py jupyter_http_over_ws && \
    pip install librosa plotly 
 
# local stand-in for the Google Drive mount the notebook expects
RUN mkdir /content/drive

#### adding colab functions. bit of a mess.
USER root
RUN  apt -y install build-essential python3-dev python3-numpy
# fails in pip for an unknown reason, going with conda. Also would not install without root.
RUN  conda install -y -c conda-forge google-colab

USER aidadsp
WORKDIR /content
ENTRYPOINT ["jupyter", "notebook", "--ip='*'", "--port=8888", "--allow-root", "--NotebookApp.allow_origin='https://colab.research.google.com'", "--NotebookApp.port_retries=0"]

The notebook depends on some Google stuff which is not available locally, so I had to modify it, and I also had to put the wav files into the Docker container by hand via the Docker Desktop interface, since file upload does not work that way.

I do not have a good step-by-step documented approach on that unfortunately, but here are some notes I’d taken for myself in the process; hope they might help.

 Build the image from the Dockerfile with:

 docker build -t aidax_win_local_colab .

 STEP 1)
  comment out in the notebook code (we will upload files another way):

   #from google.colab import files

   #from google.colab import drive
   #print("Mounting google drive...")
   #drive.mount('/content/drive')
  
 STEP 1.3)

	Use the Docker Desktop files interface to put a folder with input.wav and target.wav into the /content/drive/ folder, e.g.:

	/content/drive/Elmwood3100-reamp.io
	/content/drive/Elmwood3100-reamp.io/target.wav
	/content/drive/Elmwood3100-reamp.io/input.wav
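
	(If you prefer the command line over the Docker Desktop file browser, docker cp from the host works too; "aidax" below is just a placeholder for your container name:)

	docker cp ./Elmwood3100-reamp.io aidax:/content/drive/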

 STEP 3)

	Evaluation upload does not work yet. Sad face.

 Run the container with:

 docker run -dp 8888:8888 --rm -it --gpus all aidax_win_local_colab

P.S.

However, I have a feeling that my attempt to run that notebook locally was really excessive and not straightforward.
It looks like the notebook is based on Automated-GuitarAmpModelling, and if I had to set up a local training environment again, I would instead try to figure out how to use the Automated-GuitarAmpModelling code directly, without the notebook and its dependency hell. It looks like it was originally created for local execution of the same procedure, and the notebook just provides a GUI and Google Colab processing power, which are not of much use locally anyway.

2 Likes

yeah that’s right. You really just need the Python scripts and a setup of PyTorch with CUDA enabled.
The notebook is just there for easy management.

My idea is still to have an easy frontend talking to the Python scripts, packaged with an all-in-one PyInstaller build (sandboxed from everything else) for all major platforms. But this will likely not happen that fast.

I have a Ryzen 5 laptop with an AMD/Radeon GPU.

I have managed to install PyTorch with ROCm (Radeon Open Compute, AMD’s CUDA-like stack).

I have been able to run the scripts from the command line.

The reported elapsed time to train the same dataset (HIWAT model) that took 15 minutes on Colab is about 15 hours.

I know it is a dumb question without giving the full HW specs, but do you think that is normal, or is it “too close” to “CPU only” PyTorch performance (meaning that the ROCm stuff is not working or is useless)?
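
For reference, a quick sanity check for whether PyTorch sees the GPU at all (the ROCm build exposes it through the torch.cuda API, and on recent builds torch.version.hip reports the HIP version):

python3 -c "import torch; print(torch.cuda.is_available(), torch.version.hip)"

If that prints False, training is falling back to the CPU.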

Edit: it actually took about 50 minutes, because it halted at the 150th epoch while it was expected to run for 2000, so I will need some more tests to compare my setup with Colab
Z

unfortunately AMD is a little behind the NVIDIA CUDA stuff.
Maybe you can try Enable PyTorch with DirectML on Windows | Microsoft Learn

But I have no experience with this or any other AMD card.
I have to say even the old 1050 Ti in my notebook works well for training. This might be an option if someone has an old computer; the graphics cards are pretty cheap (80€) used here in Germany.

Thanks,

I am on Linux so no DirectML, but it prompted me to look into the Vulkan backend of PyTorch

Hello there,

I got the Docker container up and running on my Linux machine, but if I run the Set-Up cell I get this error:

Checking GPU availability… GPU available!
mkdir: cannot create directory '/content/temp': No such file or directory
touch: cannot touch '/content/temp/logs.txt': No such file or directory
Installing dependencies…
/bin/sh: 1: cannot create /content/temp/logs.txt: Directory nonexistent
Getting the code…
/bin/sh: 1: cannot create /content/temp/logs.txt: Directory nonexistent
/bin/sh: 1: git: not found

Any help?

Very old thread, I realize, but I would love it if this process were someday much easier for the average user. I would like to get my amp into an AIDA profile!

I do not have (or want) a Google account for the Colab thing, and I couldn’t get the scripts to work on a Linux/AMD type computer, though I tried for many hours.

It’s been more than a year, and I no longer have that laptop with the AMD CPU.

I can just barely remember that I had to use this implementation of PyTorch.

Hope it helps

Z

2 Likes

I appreciate it, maybe I’ll give it a try. It takes hours to download all the packages just for it to throw some error I have no idea about… just wish it was easier for a Python noob like me.

Do you remember how you got that other ROCm Docker file to run with the aidadsp one? Obviously I have to combine them somehow, but I don’t know how.

Edit: I ran the ROCm one by itself and mounted the folder with the “Automated-GuitarAmpModelling” stuff in it. It seems to do something, but I’m getting this error:

Error: config file doesn’t have file_name defined

Edit: Fixed that by adjusting my JSON config file, which was not intuitive or easy to understand. In particular, I’m not understanding how to get my target.wav file aligned with the blips; there are three separate settings that do this. I just deleted them all and let it run.

It’s now running but only using one core, so it’s painfully slow: 17/2000 epochs after three hours or so. It’s not using my GPU either.

OK, I got it to work. To save people trying this many hours, this was my process. I’m on Linux with AMD/Radeon, and am NOT using Colab.

Install docker
sudo systemctl start docker (or your system equivalent)
sudo docker pull rocm/pytorch:latest (installs many GB of files)

Make a temp folder, I called mine “aida”
cd into the folder

git clone https://github.com/AidaDSP/Automated-GuitarAmpModelling.git (another ~500 MB)
cd Automated-GuitarAmpModelling
git checkout aidadsp_devel
git submodule update --init --recursive

In the Automated-GuitarAmpModelling folder, go to Configs and edit the LSTM-12.json file:

{
    "model": "SimpleRNN",
    "hidden_size": 12,
    "unit_type": "LSTM",
    "loss_fcns": {"ESR": 0.75, "DC": 0.25},
    "pre_filt": "A-Weighting",
    "device": "MOD-DWARF",
    "samplerate": 48000.0,
    "file_name": "OUTPUT",
    "source": "",
    "style": "Clean",
    "based": "Some Amplifier",
    "author": "YOUR NAME",
    "dataset": "",
    "license": ""
}

You can tweak those values.

Put input.wav and target.wav in the temp folder you created (the “aida” folder). THIS POST DOES NOT COVER HOW TO MAKE THOSE. In particular, I have no idea how to set all the “blip” variables; all I did was make sure the files were aligned and the same length, and the script seems to have done the rest.
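
It is worth confirming that both files really have the same sample rate and length before training; soxi from SoX does this, assuming it is installed on the host:

soxi input.wav target.wav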

Then run:

sudo docker run -v /path/to/temp/folder:/aida:rw -w /aida -it --cap-add=SYS_PTRACE --security-opt seccomp=unconfined --device=/dev/kfd --device=/dev/dri/renderD128 --group-add video --cpuset-cpus 1-10 --memory=20g --ipc=host --shm-size 8G rocm/pytorch:latest

The above command will require tweaking:

  • Adjust the path to the temp folder that you made; the -v parameter “mounts” it as /aida
  • The --device might require tweaking; I had to try different ones to get it to use the right GPU.
  • Run ls /dev/dri/render* to list the possible options
  • CPU and RAM usage are tweakable, but if I didn’t cap RAM it went up to 90+ percent and almost crashed the computer, so you may want to limit it. Adjust “1-10” to the number of cores you want to use.

Now your terminal is running inside the Docker environment and you will be in the /aida directory that you mounted. Next we have to install some packages (if you leave the environment you will have to install them again):

pip install auraloss librosa
pip install tensorflow==2.13.0
pip install tensorrt

cd Automated-GuitarAmpModelling
python prep_wav.py -f "/aida/input.wav" "/aida/target.wav" -l LSTM-12.json -n

python dist_model_recnet.py -l LSTM-12.json -slen 24000 --seed 39 -lm 0

This will take hours. Finally:

python modelToKeras.py -lm "/aida/Automated-GuitarAmpModelling/Results/OUTPUT_LSTM-12-1/model_best.json" -l LSTM-12

Once it’s done, you will have a “model_keras.json” file in the Results/OUTPUT_LSTM-12-1 folder. Import this into AIDA-X. Enjoy.

2 Likes

Thanks for doing the heavy lifting here!

Wow. What CPU/RAM specs was this run on?

Ryzen 9 + Radeon + 32G

But it’s not using the GPU at all from what I can tell, and not much of the CPU either, only one core. Not sure how to make it use more.

EDIT: Fixed it by using --cpuset-cpus 1-10

This on a 12 core CPU.

2 Likes

Thanks! How much did using more cores impact training time?

It took about 30 minutes instead of ALL NIGHT

1 Like

Well, that is certainly dramatic! :smile: thanks for the update!

More testing: the models aren’t working. I’ll line the files up better by using tips from

Edit: This helped a ton and lines up the files automatically (so you do not need the “blip” parameters at all). Got a very accurate and usable model from my favorite amp.

Now I’m working on the best way to capture as far as levels and normalizing go. The first script does NOT seem to like wav files that are normalized to peak.
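
If a capture came out peak-normalized, knocking it down a few dB first seems to help; with SoX that would be something like the following (the -6 dB figure and the file names are just examples):

sox target_peak.wav target.wav gain -n -6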

2 Likes