New neural LV2 plugin from Aida DSP, based extensively on the existing NeuralPi but stripped to the bone

@micahvdm Very cool! How does one get access to that?

1 Like

Discord invitation

2 Likes

@rogeriocouto Thank you! I joined, and I think I see the file you’re referring to @micahvdm , it’s 1:14 long and plays some tones followed by guitar sounds?

1 Like

Yeah, that's the one. It's a good starting point for capturing, since each guitar and set of hardware would ultimately affect the outcome of the capture, so using generated frequencies is a good way to overcome this. Does your neural net algorithm look at the reference vs the outcome and determine the clipping difference to get the profile?

3 Likes

@micahvdm In a way, but there's nothing in the code specific to finding clipping characteristics; it's more generic than that, I think. It looks at the difference between the input and output signals and optimizes the parameters (weights and biases) of the network to behave like the amp or pedal. The training process starts out by trying random weights and biases (literally just numerical values that are either multiplied (weights) or added (biases)) and checks the output audio signal to see if it got closer to the target. If it did get closer, it continues tweaking the numbers in that direction; if not, it tweaks them in the opposite direction, and it keeps going until it gets as close as it can. The technical term for this kind of optimization is "gradient descent".

The LSTM model which NeuralPi uses has a "memory" that remembers the past signal to determine how it should behave in the future. Since the data is at a 44.1 kHz or 48 kHz sample rate, this "memory" operates on the millisecond scale. The exact same LSTM architecture can also be used to predict stock prices, weather, etc.; the data you train it on determines what it predicts.

The numbers in the network (if you open up a json file and look at it) are just matrices of decimal values. They are abstracted representations of different features of the device you are modelling.
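For anyone who wants to see that loop in code, here's a minimal sketch in PyTorch. Everything in it (names, sizes, the random stand-in data) is illustrative, not NeuralPi's actual training code:

```python
import torch
import torch.nn as nn

class AmpModel(nn.Module):
    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.dense = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, samples, 1) audio
        out, _ = self.lstm(x)         # the LSTM state is the short "memory"
        return self.dense(out)        # map hidden state back to one sample

model = AmpModel()                    # weights/biases start as random numbers
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                # how far are we from the target audio?

# stand-in data: dry = clean signal into the device, wet = the device's output
dataloader = [(torch.randn(1, 4096, 1), torch.randn(1, 4096, 1))]

for dry, wet in dataloader:
    pred = model(dry)
    loss = loss_fn(pred, wet)
    optimizer.zero_grad()
    loss.backward()                   # gradient descent: which way to tweak
    optimizer.step()                  # nudge the numbers in that direction
```

A real run adds many epochs, real recordings, and usually an audio-specific loss like ESR rather than plain MSE, but the tweak-and-check cycle is exactly this.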

I probably went into more detail than you wanted, I suppose the short answer to your question is “yes”.

8 Likes

Hi @keyth72
Loved the last part of your answer. :grin:

As a total noob on this subject, I wonder if you and the other guys here could clarify some terms that I found around the internet.

We are talking here about AI, deep learning, machine learning and such.
What is the difference between profiling and modelling an amp or a pedal?
What do "white box" and "black box" mean, and how are these terms related to machine learning?
If the answers are too complex, a reference to some introductory reading would help me a lot.
Thanks

5 Likes

@rogeriocouto You are making my day! Sure thing, and anyone else feel free to correct me or add to my answers:

Profiling is a term that Kemper came up with for their process; I think it's even trademarked when referencing guitar effects. I don't think they use any kind of A.I. or machine learning. The best I can gather from random forums is that they have a handful of reference amp or distortion algorithms that are tweaked based on the information from their profiling process. They send a series of test tones through the device, but I believe that's where the similarities between them and the Quad Cortex end. Modelling is just a generic term to say it's a mathematical approximation of the real thing; there's no specific tech associated with the term in this case. But everything I create, as well as the core tech in the Quad Cortex, is based on A.I.

Sidebar: there are a bunch of terms for A.I. that aren't super clear. Machine Learning, Neural Networks, and Deep Learning are all slightly different flavors under the umbrella of "Artificial Intelligence".

White box is modelling the actual circuits and components: tubes, resistors, capacitors, etc. You know everything going on inside the box. It's based on physics and the math used to represent the specific electronic components.

Neural nets are black box: you know your inputs and what you want out, but don't care how it gets there. The neural net abstracts info from the training data, and the math isn't based on physics. Grey box is another term, where you mix the two: for example, using a neural net for just the tubes and non-linear components, and white box modelling for the capacitors/resistors and linear components.
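To make the distinction concrete, here's a toy white-box model in Python: a passive diode clipper (series resistor into an antiparallel diode pair), written straight from the Shockley diode equation plus Kirchhoff's current law. Component values are made up for illustration; a black-box approach would skip the physics entirely and train a network on recordings of the same circuit instead.

```python
import numpy as np

def diode_clipper(v_in, R=2.2e3, Is=2.5e-9, n=1.75, Vt=0.02585, iters=8):
    # KCL at the output node: (Vin - Vout)/R = 2*Is*sinh(Vout/(n*Vt))
    # (Shockley equation for the diode pair), solved per sample with
    # a few Newton-Raphson steps.
    v_out = np.zeros_like(v_in)
    v = 0.0                           # warm-start each sample from the last one
    for i, vin in enumerate(v_in):
        for _ in range(iters):
            f = (vin - v) / R - 2 * Is * np.sinh(v / (n * Vt))
            df = -1.0 / R - (2 * Is / (n * Vt)) * np.cosh(v / (n * Vt))
            v -= f / df               # Newton-Raphson update
        v_out[i] = v
    return v_out

# e.g. a 110 Hz sine at 48 kHz, hot enough to clip:
y = diode_clipper(0.8 * np.sin(2 * np.pi * 110 * np.arange(4800) / 48000))
```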

Hope that helps!

I have a collection of research papers about audio and A.I. on github for anyone who really wants to go down the rabbit hole on this stuff: https://github.com/GuitarML/mldsp-papers

13 Likes

Sure it does!
Thanks for the clarification.

Loads of things to read now :grimacing:

4 Likes

@keyth72 Thanks for the detailed explanation! Love it. The way I'm doing it is more along the lines of sending specific frequencies (aka that wav file) through the device that you want to profile (or model) and then matching the EQ and storing the clipping characteristics. It seems to get pretty good results, but it's not exactly where I want it yet.
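For the curious, the EQ-matching half of an approach like that could look roughly like the sketch below (hypothetical function names; the clipping characterization would be a separate, nonlinear step, and a real implementation would average the estimate over many frames):

```python
import numpy as np

def estimate_eq(test_signal, captured, n_fft=8192, eps=1e-12):
    # naive single-frame transfer-function estimate: divide the spectrum of
    # the captured output by the spectrum of the known test signal
    ref = np.fft.rfft(test_signal, n_fft)
    out = np.fft.rfft(captured, n_fft)
    h = out / (ref + eps)
    return 20 * np.log10(np.abs(h) + eps)   # magnitude response in dB
```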

5 Likes

… And maybe also the cab on its own, for use with other amp sims? Cab sims abound, but it would make for complete packages.

3 Likes

Cabs in general are well handled by IR loaders. That's because we're usually interested in the linear properties of the cab + mic system, and the impulse response captures them; in other words, the impulse stores all the linear information about the equalization that happens.
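That's also why an IR loader is so cheap and accurate for cabs: applying one is just a convolution. A minimal sketch, assuming SciPy:

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_cab_ir(dry, ir):
    # cab + mic is (approximately) linear and time-invariant, so applying it
    # is just convolution of the dry signal with the measured impulse response
    return fftconvolve(dry, ir)[: len(dry)]   # truncate tail to input length
```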

Let me explain a few things. This plugin uses deep learning to teach a very small network, one that can run in real time, how to process audio so that it sounds as close as possible to the real thing: the guitar amplifier or the distortion effect (stompbox). Generically speaking, this is similar to what Neural DSP is doing on their end.

This is very different from the profiling technique; we have very few, if not only a single, example of an open source audio plugin that does profiling: https://github.com/olegkapitonov/Kapitonov-Plugins-Pack/blob/master/LV2/kpp_tubeamp/kpp_tubeamp.dsp. The key point there is that you have a generic model or algorithm representing an ideal guitar amplifier; some signals are sent through the real device to capture its "main" characteristics, which are then applied back to the original model. Still, some fine tuning of the model needs to be done by ear against the original; in other words, some skill is required to profile. In addition, since the fixed model is a guitar amp and not a distortion device, the algorithm is oversized for emulating just a fuzz, and may eventually fail at doing that. Also note that certain types of guitar amplifiers may not be perfectly represented by the structure defined in the plugin.

For all these reasons, which are also reported in the academic literature on the topic (as you can imagine, we don't have massive publications on that), neural networks are the "next thing" for amp modelling, superior to anything we have listened to so far. When I model a guitar amplifier, I know exactly the resulting ESR, which is the error between the original and the one predicted by the network. And this is oversimplified, since this stuff is PhD grade, guys.
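For reference, the ESR (error-to-signal ratio) mentioned here is the standard metric in those papers: the energy of the error divided by the energy of the target. A minimal version:

```python
import numpy as np

def esr(target, pred, eps=1e-12):
    # error-to-signal ratio: 0 means a perfect match, higher is worse
    return np.sum((target - pred) ** 2) / (np.sum(target ** 2) + eps)
```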

So what's going on with my plugin? I really want to distribute it for Dwarf users, in particular given the current situation. But unfortunately cross compilation of my plugin with the Dwarf's current toolchain (gcc 7.5) results in a plugin that produces a lot of noise. I don't have a Dwarf, so @redcloud, @spunktsch and @dreamer are helping me; this needs to be done by trial and error. On my Mod derivative project (Aida DSP OS) I use a Yocto-generated SDK based on the Dunfell branch and gcc 9.3.0. I've uploaded a new demo here:

I've already moved ahead of what is public in terms of source code for training and other plugins. For example, I have implemented a new pre-emphasis filter for the loss function that uses A-weighting. By doing A/B comparisons I can say there is a huge improvement in the model sound; I need to implement a frequency-dependent ESR measurement in order to quantify this "huge". I'm pushing the new plugin source code and models in the following days, and I would like some feedback.
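To show the idea (this is not my actual training code), a pre-emphasized ESR can be computed by A-weighting both signals before taking the ratio. The sketch below maps the classic analog A-weighting transfer function to digital with the bilinear transform:

```python
import numpy as np
from scipy.signal import bilinear, lfilter

def a_weighting(fs):
    # analog A-weighting transfer function (IEC 61672 pole/zero values),
    # converted to a digital filter for sample rate fs
    f1, f2, f3, f4, A1000 = 20.598997, 107.65265, 737.86223, 12194.217, 1.9997
    num = [(2 * np.pi * f4) ** 2 * 10 ** (A1000 / 20), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]), [1, 2 * np.pi * f2])
    return bilinear(num, den, fs)

def pre_emphasized_esr(target, pred, fs=48000):
    # filter both signals, then compute ESR on the A-weighted versions
    b, a = a_weighting(fs)
    t, p = lfilter(b, a, target), lfilter(b, a, pred)
    return np.sum((t - p) ** 2) / (np.sum(t ** 2) + 1e-12)
```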

:heart:

@keyth72 (amazing to have you here sir)

12 Likes

Impressive!

5 Likes

I do not have a Dwarf either :#

1 Like

Agreed, very impressive and the demo sounds very good!
I have a Dwarf and would love to try this plugin out, but I'm afraid of bricking it if I do something wrong. Compiling, etc., sounds complicated, and it seems like it might also be complex to revert if I botched things up.
I guess I need to start getting into the deep water of experimentation, especially if the parent company is gone now.
It would be nice if there were a way to just side load stuff, like on Android.

3 Likes

I think if there is enough interest in this, setting up a Patreon page or something similar would make a lot of sense. Also, there is a B-stock Dwarf at Thomann for 365€; maybe this is an option for a few people to chip in on.

7 Likes

It’s sounding great!

3 Likes

No need to be afraid. The Dwarf uses Linux at its core, so it's pretty unbrickable. Worst case, you reset the device and load your backups.

Always keep copies of your backups and the firmware on 2 separate drives.

5 Likes

Thought I’d share here for anyone interested, I have a new plugin out called EpochAmp, here’s a video, and you can download it for free from GuitarML.com. I had almost as much fun making the video as I did making the plugin. Enjoy!

12 Likes

I just fired it up in Ableton out of curiosity and… it's pretty good. Granted, I didn't do extensive testing or A/Bing, I just strummed a few chords, realized that mode two effortlessly gives me that 'edge of breakup' thing that I like, slapped an OwnHammer IR on it and was like "sure, I could record with this".

A few points of feedback:
A mix knob - while potentially useful in general music production - is less useful from a strict guitar playing perspective (unless maybe someone wanted to run an acoustic through it, but there are other ways to mix a plugin into your signal inside a DAW). A basic tone knob (even if that would have to be a post-amp EQ, given that this is a captured/profiled/emulated amp) would be infinitely more useful. My main guitar was a touch too dark with this amp; I quickly remedied that with another plugin, but a tone knob would be ideal.

Some basic documentation/description of what each mode resembles/emulates would be helpful. I'm sure there's a philosophical discussion here about expectations affecting the way we perceive tone, but I think most guitar players find basic generalizations ("American black panel amp", "plexi", "Vox-style", whatever) useful. That second mode sounded kinda-sorta like a Marshall, or maybe a non-top-boosted Vox, but I don't really know.

CPU usage is not outrageous by any means, but it's in the neighbourhood of something like Helix Native/Amplitube/Guitar Rig, which for a single amp is on the higher side. Though I guess amps based on machine learning are inherently more resource-heavy than modelling (just guessing, based on experience with other plugins).

This is just a preference, so you can completely disregard it, but I don't care for the skeuomorphic look.

In general, great stuff. This is the kind of amp the Dwarf needed more of: easy to use but great sounding.

6 Likes

:heart_eyes: I have also released a new version (0.93) of my plugin, together with completely re-trained models, and I need some feedback. You can use it on desktop, but only if your host supports LV2. I still need to release binaries; working on it. I will summarize the features of my plugin here:

  • built to be a generic neural model player
  • support for three network types (ideal for: pedals, moderately complex amp tones, highly complex amps)
  • complexity means network depth, which means CPU consumption
  • two conditioning params available for each network type (param1 and param2)
  • the model file is standardized to be RTNeural library compatible
  • plus a skip parameter, which tells you if the training has been made by propagating the input to the output (a sort of dry/wet mix; see the sketch after this list)
  • this parameter, together with the network type, depends on the device you're going to emulate, and you can't know in advance if it will improve the overall result or not
  • I've studied the available papers, which conclude that A-weighting is the best pre-emphasis function among the ones used so far. While I'm still experimenting on this topic (@keyth72 can we set up a thread on this?), by ear I can say that the models trained this way sound better
  • the plugin has a volume input and master output, plus a low pass filter with variable frequency at the input to deal with aliasing problems (still under investigation) and other stuff (a user here told me his Dwarf sometimes adds a 10 kHz tone to the main signal, and this should help fight that too)
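As promised above, here is a hypothetical illustration of what the skip parameter implies at inference time: the network was trained on the difference signal, so the dry input is added back at the output. tiny_net is just a stand-in for the real RTNeural model:

```python
import numpy as np

def tiny_net(x):
    # placeholder nonlinearity, not a real trained model
    return np.tanh(1.5 * x) - 0.1 * x

def process(x, skip=1):
    wet = tiny_net(x)
    # skip=1: the network learned only the "difference", so add the dry
    # signal back in (a residual connection); skip=0: the net is the whole sound
    return wet + x if skip else wet
```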

It's quite an amount of work for me, and I will now drink my beer in peace. If you want to help me with the plugin, the door is open!

You can find details about the currently trained models in my plugin here

As you can see in the document, I have a model called "Edge Of Breakup"

I'm thinking about adding a multiband graphic EQ to my plugin, something that resembles what Neural DSP is doing in its plugins.

PS: I've set up a companion thread for the training how-to here: Training models for amp sims. But we could share a doc here or on the Discord group with the "most favourite amps out there" and train the most wanted ones… just an idea btw!

5 Likes