littleBits for audio mods?

Here are a few experiments testing littleBits audio post-processing. In the first few cases, audio is produced by a Yamaha SHS-500 synthesizer fed into the LINE IN of a littleBits Microphone module. Outgoing audio is sent through a littleBits Speaker module connected to an external amplified speaker.

I did not draw the littleBits Power module into every example circuit. If you’re experimenting at home, hey, “One, Two, you know what to do…”

The first circuit filters incoming audio:

            PowerSnap
                |
                V
            Envelope <-- Button <-- PowerSnap
                |
                V
    Mic --> Filter --> Speaker

The Filter modulation input is driven by a littleBits Envelope module. The (audio) input of the Envelope is connected to a littleBits PowerSnap, which supplies a constant +5 Volts. A littleBits Button module is connected to the Envelope’s trigger input. (The second PowerSnap assures a full 5 Volt ON signal through the Button.) The Envelope sweeps from 0 to 5 Volts when the Button is pressed. Of course, the Envelope is shaped by its attack and release settings.

The first circuit operates successfully. The audio is filtered according to the Filter’s cut-off and resonance settings. The Filter quacks (a very scientific term!) when the Button is pushed.

The second circuit replaces the Button with a littleBits Pulse module:

            PowerSnap
                |
                V
            Envelope <-- Pulse <-- PowerSnap
                |
                V
    Mic --> Filter --> Speaker

The Pulse module repeatedly sends a trigger signal to the Envelope module. The triggers cause the Filter to quack correctly. However, there is an audible click when the Pulse module fires — even if no audio is playing. This noise is unacceptable and I don’t know why it is occurring. Power glitches perhaps?

At this point, I began experimenting with the littleBits Threshold module. The (third) simple test circuit below:

    Power --> Dimmer --> Threshold --> Number

demonstrated that my intuition about the Threshold behavior is correct: when the voltage into the Threshold exceeds the threshold setting, the Threshold turns ON and outputs +5 Volts. When the input voltage falls below the threshold setting, the Threshold output turns OFF (0 Volts).
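For the record, here is a tiny Arduino-style sketch that models the behavior I observed. It is only an illustration of the comparator-like logic, not the Threshold module’s actual circuitry, and the pin numbers and raw threshold value are my own assumptions:

    const int inPin = A0;          // voltage under test (0 to 5 Volts)
    const int outPin = 9;          // "gate" output
    const int thresholdRaw = 512;  // roughly 2.5 Volts on the UNO's 10-bit ADC

    void setup() {
      pinMode(outPin, OUTPUT);
    }

    void loop() {
      // Above the threshold: output ON (+5 Volts). At or below: output OFF (0 Volts).
      digitalWrite(outPin, (analogRead(inPin) > thresholdRaw) ? HIGH : LOW);
    }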

Testing tip: The Number module has a “Voltage” setting in which Number displays the incoming input voltage. You can use a Number module as an in-circuit volt meter.

Given that, I couldn’t determine why the Threshold was not acting like a gate generator when driven by a littleBits audio signal, i.e., driven by the Microphone module in its “Sound” setting. Turns out, the littleBits Microphone module converts the incoming LINE IN signal into its own notion of audio — a signal centered around 2.5 Volts. I connected a Bargraph (or Number) module to the output of Microphone, and indeed, the Microphone sends 2.5 Volts when the audio is silent.

Arg! Once again bitten by the lack of signal documentation! When the Microphone is in its “Other” setting, it converts the input signal to swing from 0 to 5 Volts. Bad news, however. The Speaker module expects audio in the 2.5 Volt centered, littleBits convention and it distorts like a bandit when driven with the “Other” setting.

The 2.5 Volt convention also explains why some folks have observed only a 2.5 Volt sweep in the Envelope output. All of this has serious implications when mixing audio and control signals in littleBits. I need to think about this for a while…

The fourth test circuit demonstrates filtering of regular line level audio:

                               Powered Speaker
                                   LINE IN
                                      |
    Power --> Proto --> Filter --> Proto
                 |
            Synthesizer
             LINE OUT

This circuit filters incoming audio. Fortunately, the 2.5 Volt convention does not preclude a simplified signal chain, that is, a chain omitting the littleBits Microphone and Speaker modules. A filter is a filter is a filter, I guess.

Although the Filter module operates on a “regular” audio signal, the Delay module does not. Substituting the Delay module into the fourth test circuit produces nasty noise and a whine. It will process the audio (you can hear repeats, etc.), but the noise/whine is horrible. Screams like a banshee. Bummer.

Bottom line: the littleBits Filter module has potential as an add-in for a PSS-A50 mod (or any other mod) without Microphone and Speaker modules. The littleBits Delay is simply too noisy by itself; one needs the Microphone and Speaker to perform signal conversion. As to the Filter, I need to explore alternatives for modulation. Experiments with using the Oscillator module as an LFO were underwhelming. So far, I haven’t successfully cobbled together an envelope follower or audio-triggered envelope. Stay tuned.

Interested in littleBits synth control signals?

Copyright © 2021 Paul J. Drongowski

PSS modding: A few ideas

I’m still thinking about Yamaha PSS mods, most notably, the PSS-A50. Open box A50s are coming on the market and I get the itch to modify an A50. I don’t want to buy a brand new unit since I will immediately tear into it with a screwdriver, drill, and worse! 🙂 Here’s a few more thoughts.

After looking at the PSS-E30 Remie teardown, that speaker has got to go. Even without the speaker, I don’t think there is enough room for the Korg NTS-1 as I first planned.

littleBits filter module

Second-besties, I’m considering a littleBits solution. Lots of folks mod the Korg Monotron to get access to its filter, but oddly, they don’t consider the littleBits filter module. I did a few preliminary experiments with the filter and delay modules using the Yamaha SHS-500 Sonogenic as a stand-in for the PSS-A50 sound generator. The filter and delay sound great although I need to add an envelope generator to make the filter quack and bark.

My main concerns at this point are:

  • Driving littleBits audio without the Microphone module and the Speaker module. Both modules would take up unnecessary space. I just don’t know (yet) if regular headphone levels are strong enough for the littleBits 0 to +5 Volt signaling convention.
  • Physically and electrically securing the littleBits modules to themselves and the A50 chassis.
  • Finding +5 Volt power in the A50 in order to supply the littleBits modules.

Of course, there’s the problem of mounting the littleBits modules so that the controls (potentiometers) poke through the A50 speaker grill.

I investigated the PSR-F50 audio and digital electronics. The PSS-A50 audio amp is most likely different from the F50’s. So, I need to get the A50 service manual. The service manual should help me find the +5 Volt rail, too.

I took another look at the Yamaha YMW830-V processor pin-out. The YMW830-V is also known as the “SWLL” processor. It is a system-on-a-chip (SOC) containing the CPU, memory, and tone generator. The SWLL has five pins (TRST, TDI, TMS, TCK, and TDO) for serial input/output — these are JTAG-style test/debug signals, not a spare MIDI UART. This doesn’t bode well for people who want to add 5-pin MIDI to the A50 (or other SWLL-based keyboards).

Reface YC key scan matrix

The PSS series, the Reface series and the SHS-500 share the same 37-key keybed. The key switch matrices are similar. They all break the key range into groups of six keys. Each keybed is a 6 group by 6 key matrix with a dedicated group to scan the fourth C key. The PSS and Reface/SHS differ in the number of key contacts as the Reface/SHS are velocity sensitive and the PSS is not. The Reface/SHS have two contacts per key and the PSS has one contact per key. The Reface/SHS have a total of twelve sense lines (2 lines per key) while the PSS has only six sense lines.
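To make the 6×6 idea concrete, here is a rough Arduino-style sketch of how a six group by six key matrix could be scanned. The pin assignments, active-low polarity and lack of diode isolation are my own assumptions for illustration — this is not Yamaha’s actual wiring:

    const uint8_t groupPins[6] = {2, 3, 4, 5, 6, 7};     // one drive line per group
    const uint8_t sensePins[6] = {8, 9, 10, 11, 12, A0}; // one sense line per key position

    void setup() {
      for (int g = 0; g < 6; g++) {
        pinMode(groupPins[g], OUTPUT);
        digitalWrite(groupPins[g], HIGH);   // all groups deselected
      }
      for (int s = 0; s < 6; s++) {
        pinMode(sensePins[s], INPUT_PULLUP);
      }
      Serial.begin(115200);
    }

    void loop() {
      for (int g = 0; g < 6; g++) {
        digitalWrite(groupPins[g], LOW);            // select one group at a time
        for (int s = 0; s < 6; s++) {
          if (digitalRead(sensePins[s]) == LOW) {   // a closed key pulls its sense line low
            Serial.print("key down: ");
            Serial.println(g * 6 + s);
          }
        }
        digitalWrite(groupPins[g], HIGH);           // deselect before the next group
      }
    }

A velocity-sensing keybed like the Reface/SHS would need a second sense line per key position and code to time the interval between the two contact closures.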

The 6×6 arrangement must minimize ribbon cable width or something, because Yamaha subdivides 61-key keybeds into upper and lower banks in order to keep six keys per group and at most six groups per bank. You’ll see this practice in the synth product line, too. Just sayin’.

The Yamaha SHS-500 and Reface series use the same MIDI I/O dongle. I came across this rather nice diagram (below) of the SHS’s MIDI port. It should help you to whip up a custom cable or two. [Click image to enlarge.]

Yamaha SHS-500 MIDI circuit and connector pin-out

Hope these observations help someone out.

Copyright © 2021 Paul J. Drongowski

Combo organ: Top octave emulation

Given the scarcity of combo organ top octave generator ICs, what’s a hack supposed to do? Emulate!

I posed a “bar bet” against myself — can I emulate a top octave generator chip with an Arduino? The Arduino is a bit slow and I wasn’t sure if it would be fast enough for the task. Good thing I didn’t bet against it…

If you browse the Web, you’ll find other solutions. I chose Arduino UNO out of laziness — the IDE is already set-up on my PC and the hardware and software are easy to use. Plus, I have UNOs to spare. Ultimately, one can always cobble together a barebones solution consisting of an ATMEGA328P, a 16MHz crystal and a few discrete components, if small size is an issue.

A simple passive volume control

There’s not much ancillary hardware required. A few jumper wires bring out ground and audio signals from the UNO. I passed the audio through a trim pot volume circuit in order to knock the 5 Volt signal down to something more acceptable for a line level input. The trim pot feeds a Sparkfun 3.5mm phone break-out board which is connected to the LINE IN of a powered speaker.
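If you’re curious about the attenuation, the trim pot is just a voltage divider. Here is a quick back-of-the-envelope helper — the resistor split in the example is a made-up value, not my actual trim pot setting:

    // Voltage divider: Vout = Vin * Rlower / (Rupper + Rlower)
    float dividerOut(float vin, float rUpper, float rLower) {
      return vin * rLower / (rUpper + rLower);
    }

    // Example (hypothetical wiper position): 5.0 V in, 8k over 2k -> 1.0 V out,
    // which is in the right ballpark for a line level input.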

That’s it for the test rig. The rest is software.

I assigned a “root” pitch to Arduino digital pins D2 to D13:

#define CnatPin 13 
#define BnatPin 12
#define AshpPin 11
#define AnatPin 10
#define GshpPin 9
#define GnatPin 8
#define FshpPin 7
#define FnatPin 6
#define EnatPin 5
#define DshpPin 4
#define DnatPin 3
#define CshpPin 2

Thankfully, the Arduino has just enough available pins to do the job while avoiding pins D1 and D0. D1 (TX) and D0 (RX) carry the serial port signals and it’s best to let them do that job alone.

My basic idea, algorithm-wise, was to implement 12 divide-down counters (one per root pitch) that decrement during each trip through a non-terminating loop. Each counter is (pre-)loaded with the unique divisor which produces its assigned root pitch. Whenever a counter hits zero, the code flips the corresponding digital output pin. If the loop is fast enough, we should hear an audio frequency square wave at the corresponding digital output. This approach is (probably) similar to the actual guts of the Mostek MK50240 top octave generator chip, except that the MK50240 counters operate in parallel.

Each root pitch needs:

  • A digital output pin
  • A note count variable
  • A divisor
  • A state variable to remember if the output is currently 0 or 1

For the highest pitch, C natural, we need declarations:

    #define CnatPin 13      // Digital output pin
    byte CnatCount ;        // Count down variable
    #define CNAT (123)      // Divisor (counter reload value)
    byte CnatState ;        // Current output state (0 or 1)

and count down code to be placed within the loop body:

    if (--CnatCount == 0) {
        digitalWrite(CnatPin, (CnatState ^= 0x01)) ;   // Flip the output pin
        CnatCount = CNAT ;                             // Reload the divisor
    }

These are the basic elements of the solution. The rest of the pitches follow the same pattern.

Now, for the fun — making the loop fast enough to be practical. This was a bit of a journey!

First off, I tried the MK50240 divisor values which require at least 9 bits for representation. Using INT (16-bit) counter variables, everything worked, but the final note frequencies were too low — not much “top” in top octave. I cut the divisor values in half, switched to BYTE (8-bit) counter variables, and doubled the output frequencies. Yes, AVR (Arduino) BYTE arithmetic is roughly twice as fast as INT arithmetic. That was the first lesson learned.

The next lesson had to do with how the counters were stored (register vs. memory). If I were writing the code in assembler language, I would have stored all of the counters in AVR CPU registers. (AVR has 32 CPU registers, after all.) Register storage would provide the fastest counter access and arithmetic. However, this is where C language and the Arduino setup()/loop() structure fight us.

Ultimately, I put all code into setup() and ditched loop(). I declared all twelve counters as register BYTE variables in setup():

    register byte CnatCount ;
    register byte BnatCount ;
    register byte AshpCount ;
    register byte AnatCount ;
    register byte GshpCount ;
    register byte GnatCount ;
    register byte FshpCount ;
    register byte FnatCount ;
    register byte EnatCount ;
    register byte DshpCount ;
    register byte DnatCount ;
    register byte CshpCount ;

The compiler allocated the counter variables to AVR CPU registers. This enhancement doubled the output frequencies, again. Now we’re into top octave territory!

The third and final lesson was tuning. The Mostek MK50240 is driven by a crystal-controlled 2000.240 kHz master clock. The emulated “master clock” is determined by the speed of the non-terminating loop (cycling at the so-called “loop frequency”):

    for (;;) {
        if (--CnatCount == 0) {
            digitalWrite(CnatPin, (CnatState ^= 0x01)) ;
            CnatCount = CNAT ;
        }

        ...

        delaySum = delaySum + 1 ;
    }

My original plan was to tune all twelve pitches by changing the speed of the non-terminating loop. I discovered that such timing was too sensitive to code generation to be controllable and reliable. The biggest delay that I could add to the non-terminating loop was “delaySum = delaySum + 1 ;“. In the end, I manually tuned the individual note divisors.

A fine point: I chose the divisors to make the most of the resolution available in 8 bits. Eight bits is “close enough for rock and roll,” but not really enough for accurate tuning.
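For reference, here is the relationship I leaned on while hand-tuning. The output pin toggles once per count-down, so a full square-wave period takes two count-downs. The loop rate in the example is a made-up, illustrative number — the real rate depends entirely on the generated code:

    // f_out = f_loop / (2 * divisor)   =>   divisor = f_loop / (2 * f_out)
    byte divisorFor(double loopHz, double pitchHz) {
      return (byte) lround(loopHz / (2.0 * pitchHz));
    }

    // Example: a hypothetical 500 kHz loop rate and a top C of 2093 Hz give
    // 500000 / (2 * 2093), or about 119 -- comfortably inside 8 bits.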

As usual, the path to the solution was zig-zaggy and not straight. Here is a ZIP file with all of the code and my working notes. I included source code for the intermediate experiments so you can re-trace my steps. Have fun!

Copyright © 2021 Paul J. Drongowski

Review: zplane deCoda

A recent thread on the Keyboard Forum (“What was the first song you figured out by ear?”) brought up memories of high school and combo organs. One of the most popular tunes of the time was “Space Rock Part 2” by The Baskerville Hounds. Air play on Ghoulardi was a big boost to its popularity and it really brought people onto the dancefloor. “Space Rock” is pretty much a rip of The Stones’ “2120 South Michigan Avenue,” albeit way more up-tempo than the Stones’ version. Sly Stone knocked out his own version titled “Buttermilk.” If you want a modern update, listen to the 2015 Jerry Cortez cover.

The “Space Rock” organ solo was the solo to know as a teen. At the time, I was paying off my Farfisa and didn’t have any money for records, so the old “drop the needle until you got it” method wasn’t for me. My bandmates were always on my back about it and they didn’t accept my “Guys, they’re just jammin'” excuse. I tried to cover the head and then improvise. Oh, well.

I’m pulling together a backing track for “Space Rock/2120” and it seemed like the time to transcribe the solo. Enter zplane deCoda. In a nutshell, zplane have deployed their time/pitch stretching and DSP expertise to the problem of picking out tunes from audio. (See the Sound On Sound review for more information.) It’s not an audio-to-MIDI converter, so you still need to use your ears and eyes.

First, ya drag (or open) an audio file in deCoda. deCoda gives you two views: a standard (amplitude) waveform view of the audio and a spectrographic plot. The spectrographic plot is like a DAW piano roll. Instead of notes, however, it displays the sonic energy present at each pitch. You can (kind of) see the notes in the song — pitch and duration. I spent most of my time in the spectrographic display.

zplane deCoda in action

Optionally, you can turn on an XY panel (at right in the image above) that lets you focus playback and analysis on a particular “region” of the stereo field and audio frequency spectrum. Thanks to the XY panel, you can eliminate the bass and high-end sibilance. Blobs light up in the XY panel during playback as notes come and go.

deCoda offers a number of playback controls. You can play back at 1/4, 1/2, 3/4 and full speed. You can transpose up and down. You can change the tempo. My recommendation is to get the best mix of key, tempo and XY region before diving deep into transcription.

Like Yamaha’s Chord Tracker, deCoda discovers key, tempo, chords and song sections. As mentioned, you can modify deCoda’s decisions. Compared against Chord Tracker, I would give Chord Tracker the edge. (Yamaha have invested a pile ‘o’ cash into music analysis.) Both tools handle simple chords OK, but forget jazz chords (no 11th and 13th chords) or gospel voicings.

After quickly munching “Space Rock”, deCoda had the key right, but half the actual tempo. The Hounds played “Space Rock” at a blistering 156 BPM. deCoda says 78 BPM. That’s OK, but…

One of the coolest deCoda features is the ability to draw notes on top of the spectrographic display. The notes play back through a simple synthesizer (think Casiotone). A mixer controls the relative level of song audio and synthesized tones. Once I got a little more skilled with deCoda, I found myself changing the relative levels quite often in order to A/B the original audio and the drawn notes. I wish deCoda offered a few different synth voices as the simple tones blended with the organ notes making it difficult to sort out the sounds by ear.

With a little practice, you can begin to pick out the notes by sight as well as ear. 1/4 playback is good for checking note start and duration. “Space Rock” is a mono mix and a mess of frequencies with that danged 60’s ring-y reverb. Thus, there are many false positives — places where you think there is an organ note, but it’s actually the frapping guitar. I wish deCoda could color notes according to timbre. Man, that would be quite the time-saver.

To compensate, I found myself playing the notes on keyboard, mainly to check fingering. “Space Rock” is one of those lazy blues tunes where the keyboardist just rocks his or her fingers around in one basic hand position. It’s difficult to read piano roll notes in real-time; I’d love to have even a simple staff viewer.

Now for the “but…”. deCoda can export the notes as a Standard MIDI File (SMF). Very good. deCoda produces MIDI notes that follow the identified tempo. When the SMF is imported into a DAW or notation tool, it arrives with the identified (or tweaked) tempo. Note playback sounds right, but if you change to the correct tempo in the tool, note starts and note durations are off (i.e., half of what they should be). I had to fix the note starts and durations in Sonar, save another SMF and import the modified SMF into Sibelius. Bummer.

I discovered the tempo and note export gotcha downstream. I also found that I wanted to work and play in G Major, not F# Major, AKA “that frapping guitar key.” I had already started a draft in Sibelius and it became a question of how much work I wanted to throw away. It’s better to get these considerations right from the start before export and downstream work.

Speaking of Sibelius, I really longed to have the identified notes in notation form, not piano roll. Under Windows 10, I couldn’t work in Sibelius and deCoda simultaneously. They did not work together — some kind of MIDI or audio system conflict. One or the other tool wouldn’t play when they were both open at the same time. In a few cases, I had to resort to notation paper and pencil to transfer identified notes into Sibelius, exit Sibelius, then re-start deCoda. Painful.

When all was done, I arrived at a decent transcription in Sibelius. (See image below; click to enlarge.) After seeing and playing the solo against a rough backing track, I think deCoda is about an eighth to a quarter note ahead of the beat. I’m not sweating it too much as playing “Space Rock” comes down to feel. However, it might be a factor when an accurate score is needed for chart-driven players.

Space Rock Part 2 (solo)

zplane deCoda is a worthwhile tool. It won’t automatically transcribe, but it is a decent assistant. The note editor and MIDI export are a real boon as I keep a book of charts for reference. The XY plot is also a worthy mix visualization tool and has been repurposed in zplane peel. I hope that zplane continues to invest in deCoda as I would love to have timbre coloring. That’s a tough technical problem, but cracking that nut would put zplane way out in front competitively.

Copyright © 2021 Paul J. Drongowski