Finding a MIDI Note From the Period of a Waveform or Finding it from a String/Fret Grid, That Is the Question!
Sometimes the internet is wonderful and the search engines just provide what is asked for, a veritable (if not always verifiable) cargo cult of bounty. Sometimes it isn't. Take this formula, for instance: it calculates the note number for a Musical Instrument Digital Interface (MIDI) Note On or Note Off command from the pitch of a note.
MIDI note number = 12 * log2(frequency / 440) + 69

Deriving the nearest MIDI note number from a tone of a given pitch, where frequency is in Hertz. (Source: Google Search.)
This is all well and good if you're talking about theoretical pitches, or working with a high-end, high-speed microcontroller that can measure a frequency quickly and spit out a MIDI command, but I like my microcontrollers low cost, like the venerable ATmega328P. You can measure pitch with one of these, but you have to convert the wave's period to pitch, because the processor isn't really up to having a built-in frequency counter. The way to do this is to measure the wave's period directly with the pulseIn() command, like so...
byte pin = 2;
uint32_t timeout = 13811;  // maximum time (in microseconds) to wait for each half of the input wave before giving up

void setup(){
  // setup stuff here
  pinMode(pin, INPUT);
}

void loop(){
  uint32_t period = pulseIn(pin, HIGH, timeout) + pulseIn(pin, LOW, timeout);  // measure wave period (µs)
  if (period == 0) return;                        // both reads timed out - no signal, so try again
  uint32_t frequency = 1000000 / period;          // convert period (µs) to frequency (Hz)
}
So, we can have a reasonably reliable frequency measurement, albeit a little latent on the lower end of the low E string, by measuring the high side, then the low side of an incoming waveform, preferably a square one for accuracy. The problem is, we have three commands to take us to a frequency, then a bunch of maths to find our MIDI pitch. Just getting to the frequency takes more than 30,000 µs on an open low E on a guitar. That's a bit laggy. It does get better on higher notes. The last thing we need is an extra step in there. So, why not plug the high and low pulse-width values directly into the formula? Indeed, so I googled "formula to calculate MIDI note from wave period". Crickets... a tumbleweed. All of Google's roads, and Bing's and DuckDuckGo's too, lead back to the pitch-to-MIDI formula. Surely I'm not the only one working on this kind of algorithm?
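To put a rough number on that lag: the open low E is about 82.4 Hz, so one full period is roughly 1,000,000 / 82.4 ≈ 12,100 µs, and because each pulseIn() call can spend up to a whole period waiting for its edge to arrive before it starts timing, a single reading can plausibly eat two to three periods, which lands right in that 30,000 µs territory. (That's my back-of-envelope reading of the timing, not a measured figure.)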
It turns out that it's actually not that hard to transpose the formula for period. I scratched my chin, considered the fact that period is the reciprocal of frequency (see the last line of code, converting microseconds to Hertz) and realised that, if I put the period of the reference pitch (A440 reciprocated, which is 2272 µs) on the top line of the fraction inside the brackets and the measured period of the incoming guitar note on the bottom line of that fraction, I get the desired integer MIDI note out of the total equation. I've run a spreadsheet that proves it...
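Written out, the substitution is just the reciprocal trick applied inside the brackets (with both periods in microseconds):

12 * log2(frequency / 440) + 69
= 12 * log2((1 / period) / (1 / 2272)) + 69
= 12 * log2(2272 / period) + 69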
The spreadsheet is in Apple Numbers format presently; I'll add a link to an Excel-compatible file shortly. However, I'm sure the image gives you the gist.
So, the final formula is...
MIDI note number = 12 * log2(2272/period) + 69
...where period is the timed HIGH pulse plus the timed LOW pulse in microseconds, 2272 is the period, in microseconds, of the A440 reference pitch (1,000,000 / 440) and 69 is the MIDI note number assigned to that reference pitch.
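Here's a minimal sketch of that conversion for the ATmega328P, picking up from the pulseIn() code above; the periodToMidi() name is my own, and I've used avr-libc's log() rather than assuming a log2() function is available.

#include <math.h>

const float REF_PERIOD_US = 2272.0;               // period of the A440 reference, in microseconds

// Convert a measured period (µs) to the nearest MIDI note number.
// Returns -1 if the measurement timed out (period of zero).
int periodToMidi(uint32_t periodMicros) {
  if (periodMicros == 0) return -1;
  float note = 12.0 * log(REF_PERIOD_US / periodMicros) / log(2.0) + 69.0;
  return (int)(note + 0.5);                       // round to the nearest whole note
}

// Quick check: the open A string at 110 Hz has a period of about 9091 µs,
// and 12 * log2(2272 / 9091) + 69 ≈ 45, which is A2 (110 Hz) in MIDI.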
Writing a routine in C to do this is going to be noticeably latent below A 110 Hz regardless, because the math.h library needed for the period-to-MIDI conversion will suck up a few processor cycles, as will the actual time measurement of lower notes. There's no real way around this, although starting the note-on trigger logic with the previous note value, then switching in the new note value once it has been read, might fool the ears, especially if the sound module a circuit like this would be driving uses a patch set up with a quick(ish) glissando on attack. The first note would be late; subsequent notes would rapidly glide down or up to the newest note.
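And a hedged sketch of that masking trick, assuming MIDI goes out over the hardware serial port at the standard 31250 baud and the trigger edge arrives on a digital pin; the pin numbers and the fixed velocity of 100 are placeholders rather than the final design.

#include <math.h>

const byte wavePin = 2;                    // square-wave input from the divided pickup (assumed wiring)
const byte trigPin = 3;                    // trigger line from the pickup's edge detector (assumed wiring)
const uint32_t timeoutMicros = 13811;
int lastNote = 40;                         // seed with something sensible (open low E, MIDI 40)

int periodToMidi(uint32_t periodMicros) {  // as in the earlier sketch, with rounding folded in
  if (periodMicros == 0) return -1;
  return (int)(12.0 * log(2272.0 / periodMicros) / log(2.0) + 69.5);
}

void sendNoteOn(byte note, byte velocity) {
  Serial.write(0x90); Serial.write(note); Serial.write(velocity);
}

void sendNoteOff(byte note) {
  Serial.write(0x80); Serial.write(note); Serial.write((byte)0);
}

void setup() {
  pinMode(wavePin, INPUT);
  pinMode(trigPin, INPUT);
  Serial.begin(31250);                     // standard MIDI baud rate
}

void loop() {
  static bool wasHigh = false;
  bool isHigh = (digitalRead(trigPin) == HIGH);
  if (!isHigh) { wasHigh = false; return; }
  if (wasHigh) return;                     // only act on the rising edge of the trigger
  wasHigh = true;

  sendNoteOn(lastNote, 100);               // fire the previous note immediately to hide the lag

  // Now take the slow measurement and switch to the real note once it's known;
  // a patch with glissando on attack glides from the old note to the new one.
  uint32_t period = pulseIn(wavePin, HIGH, timeoutMicros)
                  + pulseIn(wavePin, LOW, timeoutMicros);
  int newNote = periodToMidi(period);
  if (newNote >= 0 && newNote != lastNote) {
    sendNoteOn(newNote, 100);
    sendNoteOff(lastNote);
    lastNote = newNote;
  }
}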
Above the open A string, playability comes into its own as the latency falls below the magical 20,000 µs and the instrument will sound tight on lead lines. Generating basses then becomes a matter of transposing down an octave or two, by subtracting 12 or 24 from the measured note number.
What I'm envisaging is a guitar with a divided pickup that is buffered and split two ways: a trigger signal, derived from the same rising edge that starts the period measurement, and an envelope follower for updating note velocity until a note-off signal level is reached. The envelope follower would be an optional feature, selectable via a switch. Transpose would also be selectable via a switch: note for note, down one octave or down two octaves. The guitar would also have a more conventional pickup, fed to a normal guitar amp via normal pedals.
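For the envelope side, here's a rough sketch of the kind of logic I mean, assuming the follower's output is wired to analogue pin A0; the thresholds and the hard-coded note number are placeholders that would need tuning against real hardware.

const byte envPin = A0;            // envelope-follower output (assumed wiring)
const int noteOnLevel = 200;       // ADC counts; placeholder thresholds
const int noteOffLevel = 40;
bool noteActive = false;

void setup() {
  Serial.begin(31250);             // MIDI over the hardware serial port
}

void loop() {
  int level = analogRead(envPin);

  if (!noteActive && level > noteOnLevel) {
    // Scale the envelope level into the 1..127 MIDI velocity range.
    byte velocity = constrain(map(level, noteOnLevel, 1023, 1, 127), 1, 127);
    Serial.write(0x90); Serial.write((byte)45); Serial.write(velocity);  // note number would come from the period or fret scan
    noteActive = true;
  } else if (noteActive && level < noteOffLevel) {
    Serial.write(0x80); Serial.write((byte)45); Serial.write((byte)0);   // note off once the string has decayed
    noteActive = false;
  }
}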
Regarding latencies: there is another, somewhat more complicated way to derive note data. A divided bridge connects each string to a 328P output pin, say D2 to D7, and each fret is wired down the neck to an I2C input expander, giving 16 frets with a single board, or 24 frets and an 8-string array with two boards (the expander's address pins allow codes from 0 to 7). This system would still use the divided pickup for trigger and velocity data, but would rapidly cycle a logic high through each string in turn, checking which fret, scanning from 24 down to 1, is the first to read high; if none do, that string is playing its open note.
This kind of bridge has the benefit of making possible either sequential string/fret sensing, or triggered-string, sequential-fret sensing.
Sensing a trigger from the divided pickup could drive that string's pin to logic high, so that its frets are scanned very quickly. This system could read a note for each string in 50 to 100 microseconds per string: six noteOn commands in under a millisecond, by the time you add (and update) velocity! Picking or strumming, of course, triggers notes individually a few ms apart, making the reading of each note less latent than if all were plucked at once.
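To make the scanning concrete, here's a rough sketch assuming an MCP23017 I/O expander at I2C address 0x20 reading 16 frets, strings driven from D2 to D7 and pull-down resistors on every fret line; those part and pin choices are mine for illustration, not a final parts list.

#include <Wire.h>

const byte stringPins[6] = {2, 3, 4, 5, 6, 7};   // one output pin per string at the bridge
const byte MCP_ADDR = 0x20;                      // MCP23017 with all address pins low

// Read the 16 fret inputs as one 16-bit word (GPIOA in the low byte, GPIOB in the high byte).
uint16_t readFrets() {
  Wire.beginTransmission(MCP_ADDR);
  Wire.write(0x12);                              // GPIOA register (default BANK = 0 register map)
  Wire.endTransmission();
  Wire.requestFrom(MCP_ADDR, (byte)2);
  if (Wire.available() < 2) return 0;
  uint16_t frets = Wire.read();
  frets |= (uint16_t)Wire.read() << 8;
  return frets;
}

void setup() {
  Wire.begin();
  for (byte s = 0; s < 6; s++) {
    pinMode(stringPins[s], OUTPUT);
    digitalWrite(stringPins[s], LOW);
  }
  // The MCP23017 powers up with both ports as inputs, so no direction setup is
  // strictly needed; external pull-downs hold untouched fret lines LOW.
}

void loop() {
  for (byte s = 0; s < 6; s++) {
    digitalWrite(stringPins[s], HIGH);           // energise one string at a time
    delayMicroseconds(10);                       // let the lines settle
    uint16_t frets = readFrets();
    digitalWrite(stringPins[s], LOW);

    // The highest set bit is the highest fret in contact, i.e. the sounding fret;
    // zero means the string is playing its open note.
    byte fret = 0;
    for (int f = 15; f >= 0; f--) {
      if (frets & (1u << f)) { fret = f + 1; break; }
    }
    // fret plus the string's open-note number (e.g. open low E = MIDI 40) gives the note to send.
  }
}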
The downside of this method is increased build complexity. Up to 24 wires, 12 on each side of the truss rod, one for each fret, would need to be run through a channel beneath the fingerboard. Each wire would need to make a reliable connection to its fret. Necks are difficult and delicate luthiery at the best of times. It's not impossible, but it does push some design boundaries.
However, a prototype need not have an audio pickup, only a divided trigger/envelope-sensing pickup system. It perhaps needs only 12 frets, six strings and a 3D-printed neck and body, where the fret bus wires run down a metal tube that doubles as the truss rod. This would allow the fret slots to be designed into the neck, with a hole at each for threading the wire into the slot. At the bridge end, there is potential for thermal-insert nuts to be set into the body, allowing the guitar to have bridge tuning.
The guitar controller does need to be tunable but, perhaps later, it might even sense the difference between the pitch played on the string and the nearest MIDI note, and use MIDI Pitch Bend messages to bend the note by the degree the string is detuned, allowing note bends to be passed through to the receiving synth, keeping the synth line in tune with the guitar sounds. However, this feature is not for a prototype.
Many of these design considerations have been filling my days lately, but I felt I needed to write them down, because this is a more complex instrument than adapting a real drum kit to be played by a drum machine via MIDI. The latter is simple power control: the notes are pre-programmed and the MIDI note map is predictable. A MIDI guitar needs the expression of a real guitar: six-note polyphony, velocity sensitivity, aftertouch, low latency like a keyboard, but expressiveness like an acoustic instrument. I think writing this down has helped consolidate some of the things I need to design algorithms for.