As far as I can remember from my physics class, at least one half-wave of an ideal waveform is needed to determine its frequency using Fourier analysis.
However, we hardly ever find that ideal waveform in the real world.
In practice, at least one whole wave is necessary.
This also sets a lower limit on the latency.
The low E on a guitar is about 82 Hz (roughly 12 ms per period); on a bass it takes twice as long.
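To put rough numbers on it (just a quick back-of-the-envelope check, assuming the analysis window must contain about one full period of the fundamental):

```python
# Minimum analysis-window length (and thus latency floor) for waveform-based
# pitch detection, assuming roughly one full period of the fundamental is needed.
for name, f0 in [("guitar low E (E2)", 82.41), ("bass low E (E1)", 41.20)]:
    period_ms = 1000.0 / f0
    print(f"{name}: {f0:.2f} Hz -> at least {period_ms:.1f} ms of signal")
```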
The ear has a more sophisticated way of detecting frequencies.
It uses tiny hairs in the cochlea, each with a different resonant frequency. When one is stimulated, a corresponding impulse is sent to the brain. This can in principle be faster, but it is not yet technically feasible.
Latency-free detection by analyzing the waveform is therefore not possible.
I don’t remember where this was used: the frets were connected to a circuit board, the resistance from the frets to a defined potential (at the saddle) was measured, and the increase in frequency was used as the trigger point for a note. The note length was determined by analyzing the oscillation amplitude. Well, quite an intricate construction.
I once had a Fishman Triple Play, but it could only be used for flat sounds (pads etc.).
For quite some time now I have been looking for a solution to get synth sounds from the notes of an electric bass. I have had the Future Impact, the Source Audio C4, the EHX Bass Micro Synth and also the Korg G5. Of these, the best have been the EHX and the Korg, at least in the studio. Unfortunately, none of them worked during a live performance.

The big problem has always been dynamics. During a live show, controlling dynamics, especially on the bass, is a must. With every machine I tried, the output level was strongly flattened, with no correspondence to how hard the strings are plucked. This led either to bass volumes that were too low, or (by turning up the amplifier’s volume) to volumes that were too high, with no way to follow the dynamics of the rest of the band. So none of these solutions was, in my opinion, usable in a live context.

The only hope I see is a combination of pedals that modify the sound, the classic chain used when you want to generate a synth bass: 1) octaver, 2) envelope filter, 3) fuzz, 4) chorus. The disadvantage is that you don’t have presets that allow you to instantly change the sound during live performances.
For pitch detection for bass guitar I have the impression that the only feasible approach is a neural net based detector which does pattern matching on sections on 40 hz signals. Its standard in computer vision for pattern matching. At work we train and deploy tiny models for OCR which run at 200ms for images of 100x100 PX. It should be possible to get down to 10-20ms for pattern matching on a 256 sample section of a 40hz signal with tensorflow. The drawback is that these models need labeling of large data sets, 5000 recordings of the notes of a variety of bass guitars
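Very roughly, the model I have in mind would look something like this (a minimal Keras sketch only, untrained and untuned; the 256-sample window comes from above, while the number of note classes is just an example):

```python
import tensorflow as tf

WINDOW = 256      # samples per analysis window, as discussed above
NUM_NOTES = 40    # e.g. E1 up the neck of a 4-string bass; purely illustrative

# Tiny 1-D conv net: small enough that inference on a single window should
# stay well inside a 10-20 ms budget on a modern CPU.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, 1)),
    tf.keras.layers.Conv1D(16, 9, strides=2, activation="relu"),
    tf.keras.layers.Conv1D(32, 9, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_NOTES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training is where the labeled recordings mentioned above come in:
# model.fit(windows, note_labels, epochs=...)
```

The labeling effort is the real work; the network itself is trivial.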
It is not very practical to use a volume pedal for bass lines during a live performance. Using it occasionally to adapt to changes during the evening can be fine, but in my opinion it is impossible to follow the dynamic evolution within a single song.
I think you meant @Giga .
For guitarmidi I’m planning on extracting the envelope of the guitar signal and passing that as velocity/aftertouch to the synth.
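Roughly along these lines (only a sketch of the idea, not the actual guitarmidi code; the attack/release times are placeholder values):

```python
import numpy as np

def envelope_to_midi(block, prev_env, sample_rate=48000,
                     attack_ms=2.0, release_ms=50.0):
    """One-pole peak envelope follower; returns a 0-127 MIDI value per block.

    prev_env is the envelope value carried over from the previous audio block.
    The attack/release constants here are placeholders, not tuned values.
    """
    att = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = prev_env
    for x in np.abs(block):
        coeff = att if x > env else rel
        env = coeff * env + (1.0 - coeff) * x
    # Map the roughly 0..1 envelope onto 0..127 for velocity / aftertouch.
    return int(np.clip(env * 127.0, 0, 127)), env
```

The first value after note onset would go out as the note’s velocity, and the following values as channel aftertouch.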
Which is what the Axon did, and it worked quite well. It’s a good choice – though hard to implement.
Back in the day they had to record those sounds themselves, but there might already be datasets available for that purpose. IIRC, the head of Blue Chip at the time said it took a full year for that task alone. (Axon was guitar and bass, so they had to record guitar with several different picks too.)
Luckily today, processing is much faster and cheaper (though still constrained by physics, as you point out), so the latency from the comparison process is much lower.
I own a Future Impact and you’re right when it comes to the dynamics. It sort of “flattens” out the response, and it’s generally very picky about string plucking (pun not intended). It has to be very clean, which is not always the case when playing live.
And so is the Fishman Triple Play. It is by far the fastest and most accurate pitch to midi system ever built, but it’s a hellish nightmare to get properly set up and you have to pick very cleanly overall.
As far as I’m concerned, it’s perfectly acceptable to play higher up the neck, where tracking is generally good, and I have never had big problems with tracking. The big problem in live use is the heavy compression of dynamics that every system I have tested so far applies.
It’s not hard to implement code-wise; I do this in my day job on a daily basis. The problem is acquiring labeled data, that is, a wide variety of guitars played by an even wider group of guitarists with different styles. If I manage to create a community around guitarmidi-lv2, I would like to organize such an effort.
Thanks. Other tunings are planned. Other instruments not. This project is meant to get the most out of the guitar.
Always happy to provide electric violin-related testing! I don’t know how valuable that information would prove for you, but hey, some guitars have piezos too
What I meant by hard to implement is precisely building that database – hence my comment on the possibility that such a database might already be available and/or in the public domain. (The Axon patent has already expired, but whether or not the datasets are in the open is hard to tell.)
The hurdle in building one is that you’d need all guitar and bass players to follow a very strict set of instructions – playing at multiple pre-determined tempos – and report their setups and gear (if more than one) in very precise terms. And it has to be done for all MIDI notes on the neck, all types of strings (gauges and winding types), all (feasible) types of picks, solid bodies, semi-hollows, hollow bodies, possibly nylon strings, magnetic and piezo pickups, etc.
Unless you get the files in a highly standardised and fixed form, it will be a monstrous task to separate the sounds and – to use your term – label them as needed. So herein lies the difficulty.
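Just to make that concrete, the metadata you’d need per recording would look something like this (an invented example; the field names are arbitrary):

```python
from dataclasses import dataclass

@dataclass
class RecordingLabel:
    """Hypothetical per-file metadata for a crowd-sourced training set."""
    file: str            # e.g. "player07_jazzbass_string4_fret0.wav"
    midi_note: int       # the note actually played (0-127)
    string: int          # 1 = highest string
    fret: int            # 0 = open string
    instrument: str      # "solid-body guitar", "semi-hollow", "electric bass", ...
    pickup: str          # "magnetic" or "piezo"
    strings: str         # gauge and winding, e.g. ".105 roundwound"
    plectrum: str        # "fingers", "medium pick", ...
    tempo_bpm: float     # the pre-determined tempo it was recorded at
    sample_rate: int = 48000
```

Getting thousands of contributors to fill in even that much consistently is exactly the monstrous part.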
I am tackling some problems with the overtones. One of my ideas has led to a monophonic pitch detector which goes down to bass frequencies at reasonable latency, so I’m considering releasing a monophonic version first.
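The core principle is nothing exotic; a plain autocorrelation search over one analysis window already illustrates it (this is only an illustration, not the actual detector, and it leaves out windowing, peak interpolation and octave-error handling):

```python
import numpy as np

def detect_pitch(window, sample_rate=48000, fmin=40.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate for a monophonic signal."""
    window = window - np.mean(window)
    ac = np.correlate(window, window, mode="full")[len(window) - 1:]
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), len(ac) - 1)
    if lag_max <= lag_min:
        return None          # window too short for the requested fmin
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    return sample_rate / lag if ac[lag] > 0 else None
```

The window still has to be long enough to cover one period of the lowest note, which is where the latency trade-off discussed earlier comes back in.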
I just discovered your plugin, I find it very useful and it works fine for my needs.
I am testing on my computer with Linux Mint.
Are you still planning a release?
The biggest limitation for my needs is the recognition of high notes. Do you maybe have a feature branch for such a feature?
I would be happy to test it and report back.
I just updated the master branch so that it now works somewhat well on high notes up to the 12th fret. You have to set JACK to 256 frames for it to work, and you also have to crank up the gain.