Main differences between open-source amp sims and proprietary options

I have been doing a bit of research on amp and pedal simulation and learned a lot from the approaches used in, e.g., dkbuilder in Guitarix, tubeAmp by Oleg Kapitonov, and some research papers/theses. Even though we’re using this really cool science/engineering, there’s still a huge gap (IMO) between the quality of open-source and commercial approaches. Does anyone have an idea where those differences come from? Is it shortcomings in publicly available modelling methods, lack of resources to properly spend time modelling real amps (e.g. I know Oleg Kapitonov used Amplitube to model some amps), or is there some sort of secret sauce that companies use that is not mentioned openly (like not necessarily simulating the analog hardware, but finding combinations of saturation modelling + compression that sound similar)? I know there is a lot of secrecy in the proprietary audio “community”, but I keep wondering if there’s something else we could be doing to model analog circuits better in open-source plugins.


In my opinion, it is not so much about the core DSP quality as the amount of attention and polish that goes into it.
Open-source projects are often made just for fun and put out there; for a commercial vendor to succeed, they need to stand out in some way. There is marketing involved sometimes too, but there is a limit to how much BS you can say… In the end, an overall good-quality package is needed if a vendor wants to remain relevant for the long term.


I suppose it would be easier to focus on amp profiling rather than amp modeling, since it seems to require less effort (how many work hours would it take to maintain code for hundreds of amp models?) and the profiling process makes wide use of deep learning. IMHO, recent cheap devices have achieved great results with amp-profiling technology (look at the Mooer GE300 or Overloud TH-U). Kemper has ruled the profiler market since 2011. To me, the way forward for the open-source world is that of profilers.


That makes sense. I was looking at GuitarML this morning, and they already have a bunch of user-contributed profiles. Unfortunately, it is a VST built with JUCE, so I know it would be a bit of work to port it to MOD devices.

GuitarML’s stuff is mostly VST, but his NeuralPi is currently being ported to LV2. This would open up an insane world for guitarists. It would definitely not run well on the Duo, though; the Duo X and Dwarf should be fine with it.


I contacted Overloud (TH-U). You’d think they would be interested in this platform, or at least in LV2, since they already have VST, AAX, and AudioUnit. But not yet. They would need to be contacted and pressured by many more people than just me to see the potential. The size of the community as a market is just one dimension. Any proprietary vendor has something to protect, and it’s not clear to me how much is hackable if someone like TH-U supported MOD Devices. You’d think the proprietary aspect could still be black-boxed and protected. I assume, though, that the proprietary aspect is just how they combine the changes in the saturation models, the power supply, the speaker, etc., across all playing conditions (levels)… All the interactions are probably captured in samples and then recreated in algorithms or with artificial intelligence.


Hello, which paper did you read? From what I currently know, Guitarix is based on the following approach: the schematic is solved and parametrised with a Python script, so you end up with a linear portion implemented with nth-order digital filters, followed by a static nonlinearity implemented with tables. There is no oversampling, at least in dkbuilder-generated plugins (but please double-check the latest version). It seems like somewhat old technology, but who knows how many commercial effects, or portions of them, are still implemented this way… who knows what’s running behind those fancy UIs?

The current state of the art in virtual analog is wave digital filters, and in the last two years we’ve seen the rise of deep learning approaches. Wave digital filters still require a HUGE amount of circuit analysis, and you can’t automate any of it. I think deep learning is much more viable for open source; however, from what I’ve seen, the current datasets, especially for guitar modelling, are not very high quality and are provided just as a reference. I dream of an open-source dataset for guitar modelling with high-quality recorded guitar parts, played on multiple guitar types and in multiple playing styles, but it’s probably either too soon or it will never happen, and we cannot expect too much from a single PhD student alone in his room who is probably already doing much more than required. Unfortunately, I still can’t find the time to dive into it, but if someone wants to give it a shot, this is where I would start.

Please forget about Kemper/profiling: putting resources into it now would mostly be a waste of time, as it’s a limited technology and already surpassed by other approaches. Cheers.
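For anyone curious, the structure described above (a linear portion from circuit analysis plus a tabulated static nonlinearity) can be sketched in a few lines. Everything here is illustrative, not taken from Guitarix: the filter coefficients are a placeholder first-order stage rather than anything derived from a real schematic, and the tanh table stands in for a measured tube transfer curve.

```python
# Sketch: linear IIR stage followed by a static nonlinearity in a table.
import numpy as np
from scipy.signal import lfilter

# Linear portion: placeholder 1st-order stage (a real DK-method solver
# would derive these coefficients from the schematic's equations).
b, a = [1.0, -1.0], [1.0, -0.995]

# Static nonlinearity: tanh-style waveshaper tabulated over [-4, 4].
xs = np.linspace(-4.0, 4.0, 4096)
table = np.tanh(xs)

def waveshape(x):
    # Table lookup with linear interpolation, clamped at the table edges.
    return np.interp(x, xs, table)

def process(signal):
    return waveshape(lfilter(b, a, signal))

# 10 ms of a 440 Hz sine at 48 kHz through the sketch.
out = process(np.sin(2 * np.pi * 440 * np.arange(480) / 48000.0))
```

Note that there is no oversampling here either, so a real plugin built like this would alias on hot input, which matches the limitation described above.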


I managed to build NeuralPi as LV2 on x86_64 based on this fork: GitHub - Chowdhury-DSP/NeuralPi at lv2.

@madmaxwell I do agree that there are much newer technologies. I myself have a PhD in deep learning applied to audio, although not to amp modelling (and I appreciate your comment on how hard we work!). I totally agree that getting something like the NeuralPi to sound great is mostly a data problem at this point. There’s also the whole “adding the knobs” situation, as most of the neural nets people have been training in the open only model a pedal/amp at one specific setting.
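For readers unfamiliar with the “adding the knobs” problem: the usual fix is to condition the network, feeding the knob position as an extra input channel alongside the audio so one model covers a whole range of settings. A minimal sketch of how such a conditioned input could be assembled (the function name and shapes are my own, not GuitarML’s API):

```python
# Sketch: build a conditioned input tensor for a parametric amp model.
import numpy as np

def make_conditioned_input(audio, gain_knob):
    """Stack audio with a constant conditioning channel.

    audio: (n_samples,) float array
    gain_knob: scalar in [0, 1], the amp's gain setting during capture
    Returns (n_samples, 2): column 0 = audio, column 1 = knob value.
    """
    cond = np.full_like(audio, gain_knob)
    return np.stack([audio, cond], axis=-1)

x = make_conditioned_input(np.random.randn(1024).astype(np.float32), 0.7)
```

Training then needs captures of the same amp at several knob positions, which is exactly why this becomes a data problem.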

I’m really interested in learning more about the WDF approach, and I’m currently reading @chowdsp’s code and paper on the ChowCentaur to learn more :slight_smile:


I’ve contacted Chowdhury myself, since his work is simply outstanding: he managed to run a neural-network-driven audio DSP on a Teensy, so imagine what we could do on a Mod Duo or my platform, Aida DSP OS. We discussed how his work would benefit from community help with dataset building and maintenance, the possibility of building a library of selected amps just like the commercial plugins have, and how we could modify the existing training code so it can be used painlessly on a Linux build server; I think that last issue is still pending. If I remember well, there is a dependency on Python ltspice, which is Windows-only. I’m interested in helping too, but I don’t have much time to dedicate at the moment. Feel free to put me in copy if you dive in, and I’ll do what I can.


Short update on this topic: all the NeuralPi stuff runs at 44.1 kHz, so for 48 kHz support every model needs to be retrained and the input files resampled. There is a performance drop simply from changing 44.1 to 48 kHz, and this drop still needs to be measured. Please follow the discussion here: sample rate converter · Issue #9 · GuitarML/NeuralPi · GitHub
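On the resampling half of that work: 48000/44100 reduces exactly to 160/147, so the input files can be converted with a polyphase resampler at that exact ratio. A minimal sketch (assuming SciPy is available; this is not tied to NeuralPi’s actual scripts):

```python
# Sketch: resample a 44.1 kHz training file to 48 kHz.
import numpy as np
from scipy.signal import resample_poly

sr_in, sr_out = 44100, 48000
x = np.sin(2 * np.pi * 440 * np.arange(sr_in) / sr_in)  # 1 s test tone

# 48000 / 44100 == 160 / 147 exactly, so no ratio approximation needed.
y = resample_poly(x, 160, 147)
```

The retraining and the runtime performance drop are separate questions, of course; this only covers preparing the dataset.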


Another quick update. It seems the author of NeuralPi is looking towards modep, since Elk OS is too much of a pain (their software, sushi, is more a DAW than a virtual pedalboard and currently supports only effects in cascade, i.e. a chain of effects applied to an existing audio track). The real problem is not the lack of time-based effects, since sushi supports LV2 too.

@seaandsailor, LV2 support should have been merged into master by now; can you double check? Then the last obstacle is switching to 48 kHz.

Hello, here’s another update on the topic. First of all, NeuralPi is not compiled as a GUI-less plugin, and modding CMakeLists.txt by adding


to target_compile_definitions has no effect. The plugin compiles for Elk OS because the Elk SDK uses their fork of JUCE, which introduces another define, JUCE_HEADLESS_PLUGIN_CLIENT=1… but I think this works for VST3, not for LV2; I need to inspect their SDK. @falkTX, do you have time to look at it? Just to know how badly we’re dying :slight_smile:

Apart from this, the potential of this plugin is enormous, but I’ve investigated the dataset it uses a bit… GuitarML is currently using the same data across train, validation, and test, which is not coherent with the referenced paper. Even the dataset used in the paper is borrowed from another application, namely onset/note detection for automatic music notation. In other words, I think another dataset needs to be created, one more focused on a particular genre/playing style or technique/amp setting, for example:

  • metal/palm mute/rhythm
  • metal/legato/lead
  • country/chicken picking/clean
  • funky/right_hand_technique/clean
  • and so on…

This would be the best learning material for capturing an amp’s sound for a particular application, and it could be added to the existing dataset, which is more focused on single notes or chords… or at least this is the direction I would try…
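The train/validation/test issue above comes down to keeping the three sets disjoint. A minimal sketch of such a split (the file names and the 80/10/10 ratio are made up for illustration):

```python
# Sketch: disjoint train/validation/test split over a list of clips.
import numpy as np

def split_dataset(samples, train=0.8, val=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))  # shuffle before splitting
    n_train = int(len(samples) * train)
    n_val = int(len(samples) * val)
    return (
        [samples[i] for i in idx[:n_train]],                 # train
        [samples[i] for i in idx[n_train:n_train + n_val]],  # validation
        [samples[i] for i in idx[n_train + n_val:]],         # test
    )

clips = [f"metal_palm_mute_{i:03d}.wav" for i in range(100)]
tr, va, te = split_dataset(clips)
```

Evaluating on audio the model was trained on, as in the current setup, makes the reported accuracy meaningless; any split along these lines fixes that.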


This is a very interesting observation.