I’m doing something stupid, I just know it.
I get the same failure every time and haven’t gotten a “good” model through yet.
What triggers “validation patience limit reached at epoch XX”?
thx
m
<- RUN CELL (►)
This will check for GPU availability, prepare the code for you, and mount your drive.
Checking GPU availability… GPU unavailable, using CPU instead.
RECOMMENDED: You can enable GPU through “Runtime” → “Change runtime type” → “Hardware accelerator:” GPU → Save
Getting the code…
Checking for code updates…
Mounting google drive… Mounted at /content/drive
Ready! you can now move to step 1: DATA
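For reference, here is a minimal sketch of what a setup cell like this typically does (GPU check plus Drive mount), assuming a PyTorch Colab environment; the notebook’s real setup code is hidden, so the exact calls are my assumption:

```python
import torch
from google.colab import drive

# Check whether a CUDA GPU is visible to this runtime.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("GPU unavailable, using CPU instead. Enable it via "
          "Runtime -> Change runtime type -> Hardware accelerator: GPU")

# Mount Google Drive so later cells can read input.wav / target.wav.
drive.mount("/content/drive")
```

Running on CPU works but is much slower, which is presumably why each epoch in the training log further down takes around 18 s.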
1. The Data (upload + preprocessing)
Step 1.1: Download the capture signal
Download the pre-crafted “capture signal” called input.wav from the provided link.
Step 1.2: Reamp your gear
Use the downloaded capture signal to reamp the gear that you want to model. Record the output and save it as “target.wav”. For a detailed demonstration of how to reamp your gear using the capture signal, refer to this video tutorial starting at 1:10 and ending at 3:44.
<- RUN CELL (►)
Step 1.3: Upload
- In Drive, put the 2 audio files you want to train with in a single folder:
  - input.wav: contains the reference (dry/DI) sound.
  - target.wav: contains the target (amped/with effects) sound.
- Use the file browser in the left panel to find the folder with your audio, right-click “Copy Path”, paste it below, and run the cell.
- ex. /content/drive/My Drive/training-data-folder
DATA_DIR: ""
Input file name: /content/drive/MyDrive/input.wav
Target file name: /content/drive/MyDrive/target.wav
Input rate: 48000 length: 14523000 [samples]
Target rate: 48000 length: 14523000 [samples]
Preprocessing the training data…
Data prepared! you can now move to step 2: TRAINING
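The printed rates and lengths suggest the data cell loads both files and checks that they match. A rough sketch of that kind of check, assuming the soundfile library and the MyDrive path from the output above (both assumptions on my part):

```python
import os
import soundfile as sf  # assumed library; the notebook's own loader is hidden

DATA_DIR = "/content/drive/MyDrive"  # path taken from the output above

# Load the dry (input) and amped (target) recordings.
input_audio, input_rate = sf.read(os.path.join(DATA_DIR, "input.wav"))
target_audio, target_rate = sf.read(os.path.join(DATA_DIR, "target.wav"))

print(f"Input rate: {input_rate}  length: {len(input_audio)} [samples]")
print(f"Target rate: {target_rate} length: {len(target_audio)} [samples]")

# Training assumes the two files are time-aligned: same sample rate and length.
assert input_rate == target_rate, "input.wav and target.wav must share a sample rate"
assert len(input_audio) == len(target_audio), "input.wav and target.wav must have the same length"
```

The main thing to verify is that the two files really do have the same sample rate and length, as in the output above.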
2. Model Training
<- RUN CELL (►)
Training usually takes around 10 minutes, but this can change depending on the duration of the training data that you provided and the model_type you choose.
Note that training doesn’t always lead to the same results. You may want to run it a couple of times and compare the results.
Choose the Model type you want to train:
Generally, the heavier the model, the more accurate it is, but also the more CPU it consumes. Here’s a list of the approximate CPU consumption of each model type on a MOD Dwarf (a rough sketch of the underlying model follows the options below):
- Lightest: 25% CPU
- Light: 30% CPU
- Standard: 37% CPU
- Heavy: 46% CPU
model_type:
Some training hyperparameters (Recommended: ignore and continue with default values):
skip_connection:
epochs:
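To make the options above more concrete: the training log below reports unit_type = LSTM, size = 16, skip_connection = 0, which roughly corresponds to a small recurrent network like the one sketched here. The notebook’s actual model class isn’t shown, so this PyTorch definition is only illustrative:

```python
import torch.nn as nn

class SketchLSTMModel(nn.Module):
    """Illustrative stand-in for the trained model: an LSTM with hidden size 16
    mapping a mono input sample stream to a mono output sample stream."""

    def __init__(self, hidden_size=16, skip_connection=False):
        super().__init__()
        self.skip_connection = skip_connection
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.linear = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x has shape (batch, samples, 1): a chunk of mono audio.
        out, _ = self.lstm(x)
        y = self.linear(out)
        if self.skip_connection:
            y = y + x  # skip connection adds the dry signal back onto the output
        return y

model = SketchLSTMModel(hidden_size=16, skip_connection=False)
```

Presumably the heavier model types mainly use a larger hidden size, which is why accuracy and CPU use on the Dwarf go up together.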
device = MOD-DWARF
file_name = MyDrive
unit_type = LSTM
size = 16
skip_connection = 0
36% 71/200 [21:31<37:59, 17.67s/it]
validation patience limit reached at epoch 72
36% 71/200 [21:52<39:45, 18.49s/it]
done training
testing the final model
testing the best model
finished training: MyDrive_LSTM-16
Training done! ESR after training: 0.801609218120575
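About the question at the top: as far as I can tell, “validation patience limit reached” comes from early stopping. Training keeps a counter of how many validation checks in a row have failed to improve on the best validation loss so far, and stops once that counter hits the patience limit. A runnable sketch of that logic (the patience value and the losses are made up for illustration):

```python
import random

# Hypothetical values: the notebook's real patience setting and check frequency
# are not shown, so the numbers below are made up for illustration.
patience = 20
best_val_loss = float("inf")
epochs_without_improvement = 0

for epoch in range(1, 201):
    # In the real training loop this would be one epoch of training followed by
    # a validation pass; a random number stands in for the validation loss here.
    val_loss = random.random()

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0   # any improvement resets the counter
    else:
        epochs_without_improvement += 1

    if epochs_without_improvement >= patience:
        print(f"validation patience limit reached at epoch {epoch}")
        break
```

If that is the mechanism, the message itself isn’t a crash; it just means the validation loss stopped improving well before epoch 200. The ESR of 0.80 in the log is separately quite high (lower is better), which matches the impression that the model isn’t capturing the target yet.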