I haven’t tried it yet, but Spotify has apparently released an open-source audio-to-MIDI application. I thought it might be of interest, since it comes with accompanying research papers.
It’s a set of Python scripts, so not exactly interesting for us (not suitable for packaging into a plugin, and not particularly performant for real-time applications).
It requires at least Python 3.7 (MOD units ship with Python 3.4) and depends on things like TensorFlow, so it would likely perform terribly on our units.
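Assuming this refers to Spotify’s Basic Pitch, offline usage looks roughly like the sketch below, based on its README (untested here; the file names are placeholders and the exact return values may differ):

```python
# Rough sketch of offline use with Spotify's basic-pitch package
# (untested; "guitar_take.wav" is a placeholder file name).
from basic_pitch.inference import predict

model_output, midi_data, note_events = predict("guitar_take.wav")
midi_data.write("guitar_take.mid")  # midi_data behaves like a PrettyMIDI object
```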
Just wondering how development has been progressing. Any chance it’ll be released as a beta soon?
Hi. Lately I’ve had no time to continue due to my day job, and before that I was having problems with cleanly separating the harmonic overtones. I’m currently researching neural nets for audio separation; the idea is to feed the output of the filterbanks into a net which infers which string is plucked at which fret. So no, I can’t say when this will be available as a beta, but I have no intention of abandoning the project or letting it ‘slide aside’.
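Purely to illustrate the shape of that idea (this is not code from the plugin; the layer sizes, number of filterbank channels and label layout are all invented), such a net could be a small classifier from a filterbank frame to a string/fret class:

```python
# Illustrative sketch only: a toy classifier mapping one filterbank frame to a
# (string, fret) class. All dimensions below are made-up assumptions.
import tensorflow as tf

N_BANDS = 48                         # assumed number of filterbank channels
N_STRINGS, N_FRETS = 6, 25           # 6 strings, frets 0-24
N_CLASSES = N_STRINGS * N_FRETS + 1  # +1 for a "no note" class

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_BANDS,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training would then be model.fit(filterbank_frames, string_fret_labels, ...)
# on labelled guitar recordings.
```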
staying stoked
Hey gals and guys, I have a new release!!
GuitarMidi v1.4!
Please look at the README at GitHub - geraldmwangi/GuitarMidi-LV2: A concept for guitar to midi as an lv2 plugin
I created a Debian/Ubuntu package.
The plugin runs very stably, with no xruns. There are caveats documented in the README:
Currently you must run the host with 256 frames per period at a 48 kHz sample rate. I will change that soon.
Please test the plugin; I am very interested in your experiences.
It took quite some time, but I decided to stop experimenting and release the best I’ve got so far.
It is absolutely not perfect, but quite fun to play with now.
To make it better, I need to deploy machine learning techniques. I could use a lot of help!!
Thank you for testing
Hey this looks nice! Noob question:
This is not compiled for the Dwarf / Duo X yet?
Not yet. I still need time to get that done.
If you run Ubuntu or Debian Linux you can try it out; I have installation packages available at Releases · geraldmwangi/GuitarMidi-LV2 · GitHub.
I can’t give any guarantee, but I hope to have a production version for the MOD out next year!
OK, thanks for the reply. I’m on a Mac, so I can’t really try this…
That sounds quite exciting! I would love to turn my guitar into MIDI for arpeggios and generative sounds; this could really be a game changer!
I’ll do my best
Well, take the time you need. But I wouldn’t be too upset about messing around with a beta version of this.
New Release
Hi. I’ve released a new version of the plugin. It now supports monophonic tracking by default, as well as polyphonic.
Monophonic tracking also works for chords; it just picks the lowest string played.
Get the release at: Releases · geraldmwangi/GuitarMidi-LV2 · GitHub
Have fun
Is there an easy way to get this working on my Mod Duo X?
Currently, no. Polyphonic detection is not quite to my satisfaction, and the plugin is very heavy in terms of CPU resources.
I’m planning on optimizing it for Intel CPUs first, then for the MOD Duo. I only have the older Duo, not the X.
But I would be very happy if you could test it on a Linux PC.
I also need help on the topic of neural nets; I am looking into including an NN to clean up all the false notes the plugin makes. However, since the false notes are overtones, you can play chords in polyphonic mode and it is fun.
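To make the overtone problem concrete, here is a toy illustration (not the plugin’s code, and much cruder than what a trained net could do): drop a detected note whose frequency sits near an integer multiple of a lower, louder detection.

```python
# Toy overtone filter, for illustration only (not taken from GuitarMidi-LV2).
def suppress_overtones(notes, tol=0.03):
    """notes: list of (freq_hz, amplitude) tuples; returns the filtered list."""
    kept = []
    for f, a in sorted(notes):  # walk from the lowest frequency upwards
        is_overtone = any(
            round(f / f0) >= 2 and abs(f / f0 - round(f / f0)) < tol and a < a0
            for f0, a0 in kept
        )
        if not is_overtone:
            kept.append((f, a))
    return kept

print(suppress_overtones([(110.0, 1.0), (220.4, 0.4), (330.9, 0.3)]))
# -> [(110.0, 1.0)]  (the near-2nd and near-3rd harmonics are discarded)
```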
Hello @jimsondrift
I have the MOD DUO and the Dwarf.
When you release the version for the Dwarf or the DUO I’ll be more than happy to try them.
I’ll make an effort to release the current version for the Duo.
Far be it from me to change your plans, but is there a dwarf version in the works as well?
I only own the Duo, so no, no plans for the Dwarf.
I own a Meris Enzo, and while that one also has its quirks, the magic lies in the integration of the tracking and the sound design. So while it makes sense to have MIDI output here, I wonder if it’s possible to provide more information about the tracked signal, like an envelope follower etc. This could be done through polyphonic aftertouch or MPE.
Like you said, even quirks can have musical potential, and this way we could make the sound design more connected to the actual playing. Not sure whether that’s possible, though, since I don’t know your tracking algorithm.
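Conceptually I mean something like this toy sketch (using the mido Python library just to show the message flow; note numbers and values are made up, and the plugin itself would of course emit these natively as LV2 MIDI events):

```python
# Toy sketch: a note-on followed by per-note pressure (polyphonic aftertouch)
# updates driven by pretend envelope-follower readings. Values are made up.
import time
import mido

with mido.open_output() as port:  # opens the default MIDI output port
    port.send(mido.Message('note_on', note=52, velocity=100))
    for env in (90, 70, 45, 20, 5):  # fake envelope follower values (0-127)
        port.send(mido.Message('polytouch', note=52, value=env))
        time.sleep(0.05)
    port.send(mido.Message('note_off', note=52))
```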
As the Dwarf is the only one of these platforms that is still officially supported, it would be great to have a build for it.
I don’t have a Linux computer, only a Dwarf. So maybe the MOD team could help here?