Long-term feature: optimization wizard

Wouldn’t it be great if there were a button in the GUI that would run a quick analysis and point out the most resource-heavy sections of a pedalboard, maybe offer a few possible alternatives or routing options that might save some CPU?

Right now I’ve turned optimization attempts into some kind of puzzle game in my head… but I’d rather be spending that time making music, and sometimes I just end up giving up on adding that one extra module I want in there. The wiki has been helpful so far.


Here’s my take on it. You activate a recorder mode that captures the audio at both the inputs and the outputs, then sends the set of audio files and the pedalboard to a MOD machine-learning service, which returns a customized plugin that reproduces the sound. MOD could charge per plugin at different price levels: the basic tier would expose no parameters, while higher tiers could offer an increasing number of parameters and/or support for multiple snapshots. In theory this would let you replace sections of pedalboards, or entire pedalboards, with a single, much more efficient plugin, and then build on that with further effects.
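To make the hand-off concrete, here is a minimal sketch of what that "recorder mode" could produce: the captured input and output audio plus the pedalboard description, bundled into one archive a cloud training service could consume. The file names, the JSON pedalboard format, and the whole workflow are made up for illustration; this is not anything MOD actually ships.

```python
# Hypothetical sketch: bundle captured audio + pedalboard for upload.
import io
import json
import math
import struct
import tarfile
import wave

SAMPLE_RATE = 48000

def write_wav(samples):
    """Encode a list of floats in [-1, 1] as 16-bit mono WAV bytes."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))
    return buf.getvalue()

def bundle_session(inp, out, pedalboard, path="session.tar.gz"):
    """Pack input.wav, output.wav and pedalboard.json into one archive."""
    with tarfile.open(path, "w:gz") as tar:
        for name, data in [
            ("input.wav", write_wav(inp)),
            ("output.wav", write_wav(out)),
            ("pedalboard.json", json.dumps(pedalboard).encode()),
        ]:
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    return path

# Toy capture: 0.1 s of a 440 Hz tone standing in for real recordings.
samples = [math.sin(2 * math.pi * 440 * i / SAMPLE_RATE) for i in range(4800)]
bundle_session(samples, samples, {"plugins": ["amp", "reverb"]})
```

The interesting part would of course all be server-side; the client only needs to capture and ship data like this.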


For the larger time consumers, it is unlikely that smashing them into a single plugin will save any measurable amount of time. Work is work and the basic overhead that every pedal expends is tiny compared to the work being done.


Something like Houdini’s Performance Monitor might work.

Maybe the web interface could generate a series of test signals, send them through your pedalboard, and then display the per-plugin results.
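As a rough sketch of that idea, assuming plugins can be timed individually (here they're just Python callables standing in for real LV2 plugins): generate a test sweep, run it through each stage, and rank the stages by processing time. Everything here (the plugin names, the sweep parameters) is illustrative.

```python
# Hypothetical sketch: profile a chain of "plugins" on a test signal.
import math
import time

SAMPLE_RATE = 48000

def make_sine_sweep(duration_s=1.0, f0=20.0, f1=20000.0):
    """Generate a logarithmic sine sweep as a plain list of floats."""
    n = int(duration_s * SAMPLE_RATE)
    k = math.log(f1 / f0)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        phase = 2 * math.pi * f0 * duration_s / k * (math.exp(k * t / duration_s) - 1)
        out.append(math.sin(phase))
    return out

# Toy stand-ins for plugins: each takes and returns a list of samples.
def soft_clip(x):
    return [math.tanh(2.0 * s) for s in x]

def cheap_gain(x):
    return [0.5 * s for s in x]

def profile_chain(chain, signal):
    """Run the signal through each stage, timing each one; heaviest first."""
    timings = []
    for name, plugin in chain:
        start = time.perf_counter()
        signal = plugin(signal)
        timings.append((name, time.perf_counter() - start))
    return sorted(timings, key=lambda t: t[1], reverse=True)

if __name__ == "__main__":
    sweep = make_sine_sweep()
    report = profile_chain([("distortion", soft_clip), ("gain", cheap_gain)], sweep)
    for name, seconds in report:
        print(f"{name}: {seconds * 1000:.2f} ms")
```

The real device already reports overall CPU load, so the missing piece is only the per-plugin breakdown and the presentation.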


I’m not going to die on this hill for an off-the-cuff idea, but I wouldn’t expect the target machine learning implementation to work like that. For example, consider how Stable Diffusion “paints with noise” to get a convincing result. It doesn’t try to recreate every graphical technique in existence and re-use those, it gets to a result with a statistical algorithm that can closely approximate many different kinds of styles. I’d guess that an audio domain generator would derive systems of equations and lookup tables that can re-create the set of outputs for a given set of inputs.

Consider one pedalboard that uses an amp, a distortion, and a reverb, and another that uses 26 flangers, 12 distortions, and a few delays. They might sound very different, and the second would certainly use significantly more processing, but a machine learning algorithm that compares an input signal to an output signal may be able to derive a mapping for both with roughly the same processing required, perhaps with more memory needed in the second case.

Explained a slightly different way: take a hypothetical super-complex pedalboard whose output looks basically like a sine wave. The AI-generated plugin wouldn’t try to inline the logic from each plugin to re-create the output. It would find a relatively simple function that generates the same sine wave given the same inputs, so you’d get the same output (or a very close approximation) with significantly less processing required.
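A toy illustration of that point, with everything (the chain, the fitting method) made up: a deliberately convoluted chain whose net output is still close to a plain sine gets replaced by a cheap one-frequency least-squares fit (projecting the output onto a sin/cos pair at the input's frequency). This obviously isn't how a real learned model would work, it just shows that "same output, far less computation" is possible in principle.

```python
# Hypothetical sketch: replace an expensive chain with a cheap fitted function.
import math

SAMPLE_RATE = 48000
FREQ = 440.0

def complex_chain(x):
    """Stand-in for an expensive pedalboard: many nearly-linear stages."""
    for _ in range(50):
        x = [math.tanh(0.05 * s) / 0.05 for s in x]
    return [0.8 * s for s in x]

def fit_cheap_replacement(out):
    """Fit out ≈ a*sin + b*cos at the input's known frequency (least squares,
    valid here because we use an integer number of cycles)."""
    n = len(out)
    a = 2 / n * sum(o * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)
                    for i, o in enumerate(out))
    b = 2 / n * sum(o * math.cos(2 * math.pi * FREQ * i / SAMPLE_RATE)
                    for i, o in enumerate(out))
    return lambda i: (a * math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)
                      + b * math.cos(2 * math.pi * FREQ * i / SAMPLE_RATE))

n = 4800  # 0.1 s of audio = exactly 44 cycles of 440 Hz
inp = [math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE) for i in range(n)]
out = complex_chain(inp)
cheap = fit_cheap_replacement(out)
err = max(abs(out[i] - cheap(i)) for i in range(n))
print(f"max error of cheap replacement: {err:.4f}")
```

The cheap function evaluates two trig calls per sample instead of fifty stages, yet its output stays within a few percent of the original chain's.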

Here is a group using images of spectrograms for audio generation: Riffusion

Another approach to generating audio from input samples: AudioLM


Interesting thought. I have my doubts that any arbitrary chain can be represented that way, but I can imagine that there are certain chains that can. I’d be interested in seeing the results of such an analysis.


Seems like the AI assistant does just this (or something like it), but it is currently limited to reverbs and amp/cab sims. I wish AI efforts could also be applied to something like optimization for non-tweak-phobic users, but I get that MOD is also trying to sell a lot more Dwarf units at this point, so the AI assistant makes sense.
