Friedman HBE V1 (from axefx3 amp block)

My first capture for you:
Captured from the amp block of an Axe-Fx III; a Marshall Greenbacks IR is recommended for the cab.
Standard training 300 epochs.
ESR: 0.083

friedman hbe v1.json (53.4 KB)

8 Likes

Thanks to @madmaxwell: it works better without the noise gate and cab block!

3 Likes

Can you elaborate on that a little more, please?
What do those things mean, and what are typical values?

What do you mean by typical values?
Amp settings?
This is a capture of an amp block of an Axe-Fx III (not the real amp: I don’t have a reamp box yet).
When the reamp box arrives (there are some problems with strikes here), I will post more captures.

In the future I will upload other captures: plugins, Marshall and Marshall-based amps, and the Triaxis.

1 Like

I think @LievenDV meant the amp settings (how much gain, EQ, etc.).

Although I’m also wondering what is typically used for those, what I really meant was:

300 epochs? Is that a number of training cycles or something? What do more epochs gain you? A more precise profile, but a longer time to train/process?

ESR 0.083? Is that some kind of deviation margin? What are typical ranges of ESR values?

2 Likes

Here are 3 different trainings with the settings as in the picture above.

These are the training results:

  1. standard 300 (completed 300/300)
    device = MOD-DWARF
    file_name = AidaX
    unit_type = LSTM
    size = 16
    skip_connection = 1
    100% 300/300 [11:04<00:00, 2.21s/it]
    done training
    testing the final model
    testing the best model
    finished training: AidaX_LSTM-16
    Training done!
    ESR after training: 0.04983718693256378

  2. light 500 (stops 471/500)
    device = MOD-DWARF
    file_name = AidaX
    unit_type = LSTM
    size = 12
    skip_connection = 0
    existing model file found, loading network… continuing training…
    94% 471/500 [17:14<01:03, 2.17s/it]validation patience limit reached at epoch 472
    94% 471/500 [17:17<01:03, 2.20s/it]
    done training
    testing the final model
    testing the best model
    finished training: AidaX_LSTM-12
    Training done!
    ESR after training: 0.08226656168699265

  3. heavy 300 (completed 300/300)
    device = MOD-DWARF
    file_name = AidaX
    unit_type = LSTM
    size = 20
    skip_connection = 0
    100% 300/300 [11:08<00:00, 2.23s/it]
    done training
    testing the final model
    testing the best model
    finished training: AidaX_LSTM-20
    Training done!
    ESR after training: 0.02749236859381199

fried heavy 300.json (80.9 KB)
fried light 471.json (31.7 KB)
fried standard 300.json (53.3 KB)
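For anyone wondering what the ESR numbers above mean: ESR is the error-to-signal ratio between the target (reamped) audio and the model’s prediction, so lower is better. A minimal NumPy sketch of the basic definition (the actual trainer may also apply pre-emphasis filtering first; `target` and `prediction` here are just hypothetical toy arrays):

```python
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    """Error-to-signal ratio: energy of the error divided by energy of the target."""
    error = target - prediction
    return float(np.sum(error ** 2) / np.sum(target ** 2))

# Toy example: a prediction that is only slightly off from the target.
target = np.sin(np.linspace(0, 2 * np.pi, 48000))
prediction = 0.97 * target          # 3% amplitude error
print(esr(target, prediction))      # 0.0009, i.e. a very close match
```

So an ESR of 0.05 means the residual error carries about 5% of the target signal’s energy.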

2 Likes

I will now try other trainings of the same model, but with a different input.wav and with an IR included.

I think the real issue is finding a way to reduce the Dwarf’s CPU consumption while still getting results good enough for live use.

2 Likes

@Teuvosick Cool! Sounds really good!

1 Like

Yes, I was also thinking of this “bounced” method.
If you always chain the same blocks, you could just as well capture them as one “function”.

I noticed that everybody has a different taste in cab IRs, and you’d better put your reverb in front of your cab; still, having your own usable tone right out of the box at low processor usage is certainly something worth experimenting with!

Love the profile, btw; very curious what the “cab included” version will do.

Always liked the Friedman amps and pedals, but in this day and age I love my small board and this profiling tech :smiley:

1 Like

Same amp settings + IR (I can’t upload the IR, but you can find it online: it’s “LT TV mix 2”, freely provided by Leon Todd; search his YouTube channel or his Discord if you need it for reference).

  1. standard 300 (completed 300/300)
    device = MOD-DWARF
    file_name = AidaX
    unit_type = LSTM
    size = 20
    skip_connection = 1
    100% 300/300 [11:10<00:00, 2.23s/it]
    done training
    testing the final model
    testing the best model
    finished training: AidaX_LSTM-20
    Training done!
    ESR after training: 0.05832277983427048
    fried+LT_TV_mix_2 standard 300.json (53.5 KB)

  2. heavy 300 (completed 300/300)
    device = MOD-DWARF
    file_name = AidaX
    unit_type = LSTM
    size = 20
    skip_connection = 1
    100% 300/300 [11:10<00:00, 2.23s/it]
    done training
    testing the final model
    testing the best model
    finished training: AidaX_LSTM-20
    Training done!
    ESR after training: 0.05832277983427048
    fried+LT_TV_mix_2 heavy 300.json (81.0 KB)

My 2 cents: without the IR included it sounds better, but the Dwarf’s CPU consumption grows. The Portal utility plugin helps, but when I put 2 AIDA-X instances in parallel I hear some kind of phase issue; maybe it introduces a little latency (not measured yet), and, with the same trained model, a little sound degradation (see pics).


Now I will try some other training tests with another “input.wav” file that @madmaxwell gave me.
Once I find the best practice for me, I will start uploading different amps.

1 Like

This one will for sure introduce phasing issues, due to one signal path being delayed while the other one is not. The 2nd path at the bottom would need a delay of 128 samples to compensate for the top one.

Maybe I should add a simple utility plugin that delays a signal by the same amount Portal does? It would always be correct no matter the buffer size, be it the 128 or 256 setting.
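The compensation described above is just a fixed delay line of one buffer on the dry path. A minimal Python sketch to illustrate the idea (purely illustrative: the real thing would be a real-time plugin processing blocks, and the 128-sample figure is the Dwarf’s default buffer size from this thread):

```python
import numpy as np

BUFFER_SIZE = 128  # Dwarf default; 256 with the larger buffer setting

def delay_samples(signal: np.ndarray, n: int = BUFFER_SIZE) -> np.ndarray:
    """Delay a signal by n samples: pad the start with silence, keep the length."""
    return np.concatenate([np.zeros(n), signal])[: len(signal)]

# Summing an undelayed copy with a one-buffer-late copy causes comb filtering;
# delaying the direct path by the same amount re-aligns the two paths.
direct = np.random.randn(48000)
through_portal = delay_samples(direct, BUFFER_SIZE)  # what the Portal round trip adds
compensated = delay_samples(direct, BUFFER_SIZE)     # the proposed utility plugin
assert np.allclose(through_portal, compensated)      # phase-aligned again
```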

6 Likes

A very good idea @falkTX!

To be honest: I like this phasing effect. For me it is a feature, not a bug. But sometimes it should be avoided; in those cases a “delay plugin” would be a fine solution.

1 Like

Thanks for the model!

I like it best with a separate cab IR as well.

Most of the things I have tried for high-gain stuff seem to work just as well on light as on standard.
And when it comes to live shows, that matters even less.

Also, for fun, try the ChowCentaur instead of a TS when trying these models.

1 Like

I have now measured the delay with Portal: 5.0 ms. That is surely not a problem when recording at home or playing by myself, but if you add it to the normal AD/DA conversion time of every digital device you might use on stage (a digital transmitter for the guitar, the Dwarf (or another modeler/profiler), a digital mixing desk, in-ear monitoring), you can easily arrive at 20 ms, which is unacceptable for many players.
So I think I will not use it in a live gig for the moment; maybe the best solution is to use 2 light profiles instead of Portal with 2 standard profiles.


Here is a standard training of 300 epochs (not completed: stopped at 293/300) with another input file provided by @madmaxwell

device = MOD-DWARF
file_name = AidaX
unit_type = LSTM
size = 16
skip_connection = 1
98% 293/300 [1:26:37<02:06, 18.11s/it]validation patience limit reached at epoch 294
98% 293/300 [1:26:56<02:04, 17.81s/it]
done training
testing the final model
testing the best model
finished training: AidaX_LSTM-16
Training done!
ESR after training: 0.05815804377198219
fried standard 300 alt input.json (53.4 KB)

How did you measure this? The latency introduced by a Portal should be a fixed value based on the current buffer size. This is 128 by default on a Dwarf, which results in 2.666 ms of added latency.

Having double that indicates something is wrong.

Last try: heavy training, 600 epochs (not completed: stopped at 340/600), with the last alternative input

device = MOD-DWARF
file_name = AidaX
unit_type = LSTM
size = 20
skip_connection = 1
56% 339/600 [11:51<08:25, 1.94s/it]validation patience limit reached at epoch 340
56% 339/600 [11:54<09:10, 2.11s/it]
done training
testing the final model
testing the best model
finished training: AidaX_LSTM-20
Training done!
ESR after training: 0.04179903119802475
fried heavy 340 alt input.json (80.8 KB)

The ESR is lower with the input.wav file from the notebook, so I will use that for the other models.
I think my tests are finished, so I will post other amp sims; if you want me to test something I forgot, let me know. Cheers!

1 Like

Off-topic, since here he obviously wants to use both at the same time, but for those who would prepare a pedalboard with multiple amps (yet only play through one at a time) there may be a solution with the “NETBYPASS” control. When this switch is asserted, the network calculations are disabled and the amp is in passthrough. When the network is disabled, very little if any CPU is consumed. This is NOT the same as bypass. This way, with an external mapping through Snapshots, you should be able to switch between amp models within the very same song without losing reverb tails.

Regarding multiple amp types, I would ask @Teuvosick to be patient, since the models will be optimized in the future. This proves there is always a need to consume less CPU; I guess if we were able to run 2 in parallel, a user would come and ask for the possibility to run 3 or 4. As I discovered myself, these things pretty much scale: if you mix two amps, it would probably be okay to use slightly less precise models. At the moment we don’t offer “scaled-down” versions of the same amp. Maybe that could be the next thing!

6 Likes