Well… I just commented on it and suggested a comparison of these units with the MOD Dwarf. Wouldn’t that be great?
The man kindly responded saying he would think about it, and mentioned having problems running the AIDA-X profiling procedure on his desktop. So… maybe it’s a good opportunity for one of the AIDA-X experts to reply to that comment with help and perhaps offer further support?
I really would love to see that video happen, and it would be good publicity for the Dwarf too. It would put the Dwarf up there with two very successful units. Even if the AIDA-X tones happened to be slightly worse (which I don’t know), Leo seemed to love the unit and its features in previous videos and would likely talk positively about it.
He’s very kind towards comments in general, and I’m guessing he’s also very busy making other stuff, but I think it’s worth a try?
We talked to Leo after the release of AIDA-X and his videos. Unfortunately his Mac setup prevented the local training from working.
But maybe that has changed, so it’s worth a shot.
NAM is the software that produces the highest-quality models, and it does that by disregarding CPU constraints.
I find it kind of pointless to make the comparisons that Leo makes, as he is comparing three embedded devices against a desktop application.
All of the compared devices - ToneX, Kemper and QC - have their product people asking the same question: “How can we squeeze an acceptable simulation out of a CPU that we can embed in a pedal and that is cheap enough to allow a commercially viable device?”
NAM has the luxury of not asking this question.
If anything, I think the most valuable conclusion is that ToneX deserves a prize for pulling an acceptable sound out of such a dinky microcontroller. Respect.
You are right, I wasn’t quite precise.
But he shows very well that the most expensive solutions are not necessarily the best on the market.
And to be honest, I had the ToneX and it did not impress me very much. I play a lot with NAM and there are some very good models out there, but unfortunately not for AIDA. Don’t get me wrong, I love the Dwarf as an FX unit, but I miss good high-gain sounds. I hope many people train their own models and load them up on AIDA.
Yeah, to be more precise, Leo said to me that he didn’t want to use Colab and would instead perform training on his own setup for convenience. He ultimately has a Windows 11 machine with a CUDA GPU and all the rest. But since we use Docker containers to provide the training environment and he had no idea how to set up or use Docker, he said he would have a look at how to use it, but I think he’s been busy. Leo, like maybe others, thought that since NAM was using a conda env, that was the “standard” way of installing the training dependencies. But that’s simply not true: the Docker solution is much more consistent, our Dockerfile is based directly on the NVIDIA images, and everything is 1:1 with Colab. And Docker on Windows is totally doable, and widely used too.
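To make the Docker route a bit more concrete, here is a purely illustrative sketch of the idea (not the actual AIDA-X Dockerfile; the image tag and package list are assumptions): starting from an official NVIDIA CUDA base image keeps the container environment close to what Colab provides.

```dockerfile
# Hypothetical sketch only - not the real AIDA-X training Dockerfile.
# Starting FROM an official NVIDIA CUDA image gives the same CUDA/cuDNN
# stack inside the container as on Colab.
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /workspace
# training dependencies would be installed here, e.g. torch
RUN pip3 install torch

CMD ["bash"]
```

On Windows 11 with Docker Desktop (WSL2 backend) and the NVIDIA drivers installed, something like this would be built with `docker build -t aida-train .` and started with GPU access via `docker run --gpus all -it -v "%cd%":/workspace aida-train`.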
there are some very good models out there but unfortunately not for AIDA
Saying there are no good models for AIDA-X is wrong; saying NAM has far more models is right. They have a 15k community, so that’s nothing surprising. But it’s also a waste, since they are all training models with very heavy networks, which won’t run on any reasonable embedded device. Also, from what I see, they are all training with a fixed number of epochs (like 1k); this is not a sign of a higher-quality model, on the contrary, it may expose the model to generalization problems. NAM can also produce lower-CPU models, but that option has not been widely adopted simply because they’re not interested in it: their reference setup is a laptop with a USB soundcard, AFAIK.
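To illustrate the fixed-epochs point, here is a hedged, generic sketch (not the NAM or AIDA-X trainer) of validation-based early stopping, where training ends once the validation loss stops improving instead of always running, say, 1k epochs:

```python
import torch

# Generic early-stopping loop: keep the best weights seen on the validation
# set and stop after `patience` epochs without improvement.
def train(model, loss_fn, opt, train_loader, val_loader, max_epochs=1000, patience=20):
    best_val, best_state, stale = float("inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            val = sum(loss_fn(model(x), y).item() for x, y in val_loader) / len(val_loader)

        if val < best_val:
            best_val, stale = val, 0
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
        else:
            stale += 1
            if stale >= patience:  # no improvement for `patience` epochs
                break

    model.load_state_dict(best_state)
    return model
```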
I miss good high-gain sounds
That being said, we’re not 15k here, but we still have plenty of tube amps. Do you have a preference list for those high-gain models? My feeling is that to go really high gain, other modeling techniques are probably better suited at the moment than current ML, simply because you have a lot of noise in the dataset and also a good chance of running into aliasing issues. Techniques to de-noise in the context of ML modeling are one of my hot topics right now (studying).
Would non-linear thresholding of wavelet coefficients be an interesting approach for denoising in that context? I think it can also be viewed as a single layer of a NN, but I might be wrong… Also, I don’t know how well it works for denoising guitar audio signals.
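For reference, a rough sketch of what that could look like with PyWavelets; the wavelet, decomposition level, and universal-threshold noise estimate are illustrative assumptions, not a tuned recipe for guitar signals.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    # decompose into approximation + detail bands
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # noise estimate from the finest detail band (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    # keep the approximation, soft-threshold (non-linear) the detail bands
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```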
This. For me the Dwarf is still the winner if you combine all the aspects: flexibility, the instruments it supports, the price, the tone, the open-source aspect, the ease of crafting my own models, even if they are virtual amps.
This remains a personal quest too. I did try to create some high-gain models, and I even see some of my models appear in other YouTube videos.
It hasn’t been easy to minimize the noise factor that gets ‘baked’ into the model. I tried some captures of VST amps with some EQ tweaks to eliminate sounds that had a high ‘woosh’ in them.
I found that a lot of it comes down to pairing an AIDA-X model with a suitable cab IR.
While some of these models work very well for me, I wonder why they don’t work for others.
For obvious reasons, I don’t model with cabs included, but offering them as an “automatically paired” combo would be something handy for people to try.
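For anyone new to the workflow, the cab pairing mentioned above boils down to convolving the amp model’s output with the IR; a minimal sketch (file names are placeholders, mono audio assumed):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Pair a cab-less amp capture with a cab IR by convolution (mono assumed).
sr, amp_out = wavfile.read("amp_out.wav")   # placeholder: output of the amp model
sr_ir, ir = wavfile.read("cab_ir.wav")      # placeholder: cabinet impulse response
assert sr == sr_ir, "model output and IR should share a sample rate"

wet = fftconvolve(amp_out.astype(np.float64), ir.astype(np.float64))[: len(amp_out)]
wet /= np.max(np.abs(wet)) + 1e-12          # simple peak normalisation
wavfile.write("amp_plus_cab.wav", sr, wet.astype(np.float32))
```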
So, by all means, let me know which of the “LievenDV” models you like the most and the least and which CAB combinations you are using.
If there is one thing I’ve learned from the experience with AIDA-X, it is that we’re at the very beginning of the story. And ML is definitely a science where experience and the ability to make links with pre-existing work can make a huge difference. For example, I answer with another idea: why not put an encoder/decoder block around the network, like they did in the past for tape machines, to reduce the noise the network has to handle? Adding another layer might lead to state-of-the-art performance, but what about CPU consumption? A compressor/expander block, instead, is something we already know well, especially in terms of its CPU consumption profile.
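A hedged sketch of that compressor/expander idea wrapped around a small recurrent model (names and structure are mine, not the AIDA-X or CoreAudioML code): the companding block is just a couple of element-wise ops, so its CPU cost is known and tiny.

```python
import torch
import torch.nn as nn

MU = 255.0  # mu-law style companding constant (illustrative choice)

def compress(x, mu=MU):
    # boosts low-level detail, tames peaks, so the network sees a "companded" signal
    return torch.sign(x) * torch.log1p(mu * torch.abs(x)) / torch.log1p(torch.tensor(mu))

def expand(y, mu=MU):
    # inverse of compress()
    return torch.sign(y) * ((1.0 + mu) ** torch.abs(y) - 1.0) / mu

class CompandedModel(nn.Module):
    def __init__(self, hidden_size=8):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.lin = nn.Linear(hidden_size, 1)

    def forward(self, x):              # x: (batch, time, 1), roughly in [-1, 1]
        y, _ = self.rnn(compress(x))
        # clamp the head output so the expander stays in its valid range
        return expand(torch.clamp(self.lin(y), -1.0, 1.0))
```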
Wavelet thresholding is rather cheap computationally speaking, AFAIK, at least the simple version of it. My doubts are more about its suitability to the matter at hand.
I somewhat disagree. We tend to consider NAM as cheap because we take for granted the computer we already have.
And also, NAM is not “on the market”. There is no commercial product using NAM, AFAIK. There are some people doing portable arrangements using mini-PCs, but that’s it.
High gain sounds are indeed a challenge with small models, as they are very complex and turbulent, requiring a lot of information to replicate them properly.
If you train AIDA models, have you tried heavier ones, like LSTM40 or LSTM80?
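Assuming the numbers in LSTM40 / LSTM80 refer to the hidden size (in line with the Type-HiddenSize-NumLayers naming mentioned further down), here is a quick sketch of how the parameter count, and with it the CPU load, grows:

```python
import torch.nn as nn

# Count parameters of an LSTM-based amp model skeleton (1 channel in/out).
def lstm_params(hidden_size, input_size=1, output_size=1):
    lstm = nn.LSTM(input_size, hidden_size)
    head = nn.Linear(hidden_size, output_size)
    return sum(p.numel() for p in list(lstm.parameters()) + list(head.parameters()))

for h in (16, 40, 80):
    print(f"LSTM{h}: {lstm_params(h)} parameters")
```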
I encourage you and others to send PRs here: GitHub - MaxPayne86/CoreAudioML at next. For example, I’ve implemented a few non-linear clippers taken directly from the DSP audio world and adapted to torch, so that they are now trainable. Are they making a difference? I still need to finish the tests. Unfortunately, ML is also a bit difficult in this direction, since to say that something has improved you need a reference dataset, and a reference dataset does not exist yet. We need a COCO for amp sims.
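As a purely illustrative example of what “a DSP clipper adapted to torch so it becomes trainable” can mean (this is not the CoreAudioML implementation), here is a tanh soft clipper whose drive and offset are learnable parameters:

```python
import torch
import torch.nn as nn

class TrainableSoftClipper(nn.Module):
    """Classic tanh soft clipper with trainable drive and DC offset."""
    def __init__(self, gain=1.0, bias=0.0):
        super().__init__()
        self.gain = nn.Parameter(torch.tensor(float(gain)))
        self.bias = nn.Parameter(torch.tensor(float(bias)))

    def forward(self, x):
        # the optimizer tunes gain/bias along with the rest of the network
        return torch.tanh(self.gain * x + self.bias)
```

Such a block can be placed before or after a recurrent layer and trained end-to-end with the rest of the model.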
Also worth trying the next branch there, where I’ve taken care of implementing multiple layers. It was the idea of a contributor who kindly opened an issue about it. I’ve tested it too, and especially on high-gain it helps a lot; simplifying, we can say that GRU-8-3 is better than GRU-24-1. The new notation for the model structure will simply be Type-HiddenSize-NumLayers. I still need to sync fully with the team on that!
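A small sketch of how the Type-HiddenSize-NumLayers notation could map onto a stacked recurrent block (again, an illustration of the idea, not the code in the next branch):

```python
import torch.nn as nn

# Parse "Type-HiddenSize-NumLayers" (e.g. "GRU-8-3") into a recurrent stack
# plus a linear head (1 channel in/out assumed).
def build_rnn(spec, input_size=1, output_size=1):
    rnn_type, hidden, layers = spec.split("-")
    cls = {"GRU": nn.GRU, "LSTM": nn.LSTM}[rnn_type.upper()]
    rnn = cls(input_size, int(hidden), num_layers=int(layers), batch_first=True)
    return nn.ModuleDict({"rnn": rnn, "head": nn.Linear(int(hidden), output_size)})

stack = build_rnn("GRU-8-3")    # three stacked GRU layers, hidden size 8
wide  = build_rnn("GRU-24-1")   # one wider GRU layer, hidden size 24
```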
I think the point is that guitar players want an objective answer to the question “how accurate is this or that modelling/profiling system compared to others?” when we don’t have access to try them first-hand, or we simply can’t trust our own ears to answer that question. Even if, in the end, most of us will eventually go with whatever sounds better to our ears (if it’s convenient for our pocket and needs). For instance, Kemper has always sounded the best to me, even when compared to the real amp. Idk why.
Also, I like the “quirkiness” of the guy and his approach to things: one of his five pieces of advice for becoming a better guitar player was to read books. But I digress…
I just thought the Dwarf, being such a versatile and powerful unit (that already exists as a pedal), would shine against other solutions even if it’s not the most accurate at modelling.
Also, thanks to all the people working on optimizing the AIDA-X captures, as well as everyone who made the Dwarf such a great tool.
It is difficult to be objective in such evaluations, especially nowadays when so much production is computer-based. The quality requirements are quite different for recording and for live playing.
To be very honest, what I’d really like to see is a video from Leo comparing the Dwarf against the three (ToneX, QC and Kemper).