Yamaha VKB-100 redux

OK, sooner or later, you knew I would circle back to the Yamaha VKB-100 VOCALOID keyboard. This little gem is a vocal keytar that lets you play pre-loaded lyrics using an installed Vocaloid library sound. Up to five libraries may be installed, including the VY1 library that ships with the VKB-100.

The VKB-100 can be had for the relatively low price of roughly $400 USD, depending on shipping cost from Japan. At the current time, the VKB-100 is available only in Japan.

Here’s a quick summary of the specs (translated from Japanese):

Number of keys: 37
Keyboard type: HQ (High Quality) MINI keyboard
Maximum polyphony: Vocaloid (mono), Instruments (48)
Number of voices:
    Vocaloid: Up to 5 libraries, Preset: 1 library (VY-1)
    Instrument: Preset: 13
Effects: Reverb, distortion, chorus, tremolo
Equalizer: Flat, Boost, Bright, Mild
Lyric operation: Loop, phrase return/forward, head search, recover
Skill
    Vocaloid: 2 assignable skills
    Instrument: Skill 1: Sustain, Skill 2: Portamento
Memory slots: Vocaloid: 20, PCM sound source: 20
Main controls: Pitch Bend Wheel, Expression Wheel, Effect Knob, 
    Select Knob, Selector Slider, Transpose button, Phrase Button, 
    Memory Button, Skill Button, Octave Button, Loop Button, 
    Master Volume Knob
USB: USB to HOST, USB to DEVICE
Audio connections
    Headphone out
    AUX in
    Line out
Amplifier output: 0.7W
Internal speaker size: 3.6cm
Power adapter: PA-150B
Battery power: 6 x AA alkaline or rechargeable NiMH batteries
Battery life: About 7 hours when using alkaline batteries
Width x depth x height: 821 mm x 121 mm x 65 mm (32.3 in x 4.8 in x 2.6 in)
Weight: 1.5 kg
Accessory soft case: SC-KB350 (5,500 Yen)

The keybed is probably the Reface keybed. (Sturdy, slightly clack-y.) What’s that? Instruments? Hmmm…

I don’t want to run through the operational particulars again. Please see the following pages for background information:

It’s cheap, it straps on, it has a keyboard and wheels. Can it be used as a MIDI controller? As an instrument in its own right?

The build quality looks pretty decent and the ergonomics are nice. There are two wheels on the neck: pitch bend and expression. And here we hit the first bump in the road — no modulation. A quick look at the MIDI chart and we find that the VKB-100 sends:

  • Pitch bend
  • Portamento time (CC# 5)
  • Expression (CC# 11)
  • Portamento ON/OFF (CC# 65)
  • Release time (CC# 72)

That’s it. Now, I suppose one could remap CC# 11 to modulation (CC# 1), but it would be better to have modulation generated natively.
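
If you do go the remapping route, a small software MIDI filter will do it. Here is a minimal sketch, assuming the Python mido library with the python-rtmidi backend (virtual ports are not available on Windows); the port names are placeholders, so check mido.get_input_names() on your system.

    # Minimal CC remapping sketch: pass the VKB-100's MIDI through and
    # turn CC# 11 (expression) into CC# 1 (modulation) for the receiving synth.
    # Assumes the mido library with the python-rtmidi backend; port names
    # are placeholders -- list the real ones with mido.get_input_names().
    import mido

    with mido.open_input('VKB-100') as inport, \
         mido.open_output('Remapped VKB-100', virtual=True) as outport:
        for msg in inport:                        # blocks until a message arrives
            if msg.type == 'control_change' and msg.control == 11:
                msg = msg.copy(control=1)         # expression -> modulation
            outport.send(msg)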

The other gotcha. The VKB-100 has two modes: normal and keyboard mode. The VKB-100 only sends MIDI controller/key messages in keyboard mode, not normal mode. The user must select keyboard mode through a menu and this setting is not retained across power off. That means changing to keyboard mode after every shutdown.

The VKB-100 doesn’t receive much of anything over MIDI except some undocumented SysEx. Thus, forget about sequencing.

The final gotcha is MIDI over USB only. If you want to drive an old school 5-pin DIN MIDI module or keyboard, you will need to bridge USB to 5-pin.
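
The bridge itself can be as simple as a Raspberry Pi (or any small computer) forwarding messages between ports. A rough sketch, again assuming Python and mido, with a generic USB-to-DIN interface providing the 5-pin side; both port names are placeholders:

    # USB-to-DIN bridge sketch: copy everything the VKB-100 sends over USB
    # to the output port of a USB-to-5-pin MIDI interface.
    # Assumes mido with the python-rtmidi backend; port names are placeholders.
    import mido

    usb_in  = mido.open_input('VKB-100')              # keytar's USB MIDI port
    din_out = mido.open_output('USB MIDI Interface')  # interface feeding the DIN jack

    for msg in usb_in:        # blocks until a message arrives
        din_out.send(msg)     # forward verbatim to the 5-pin side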

So, what about those preset instruments? Here’s a list:

  1. Synth 1
  2. Synth 2
  3. Synth 3
  4. Synth 4
  5. E. Guitar
  6. Harmonica
  7. Tenor Sax
  8. Piano
  9. E. Piano
  10. Synth Bass
  11. Slap Bass
  12. Air Choir
  13. Applause

Quality is comparable to an entry-level PSR keyboard. The Tenor Sax voice, for example, sounds like the Sweet! Tenor Sax in a PSR-E443. At least you have four effect types (reverb, distortion, chorus, tremolo) with four effect depth levels each. Kinda basic.

After a few insipid J-Pop YouTube demos, you want to blow your brains out. I watched them, so you don’t have to. Here are a few demos to check out.

Hey, Blake! You got to go to Superbooth? I’m jealous!

Bottom line, I was intrigued until I dove into the (Japanese) manuals and found the VKB-100’s limitations as a controller and stand-alone instrument. Ordering from Japan is no big deal, but the VKB-100 would have to offer some really cool features and sounds to compensate for the labor of translating Japanese to English and dealing with display messages in Japanese.

If you want to get your feet wet with Vocaloid, I recommend the Gakken NSX-39 Pocket Miku module. No keytar action, but you get a multi-timbral MIDI module that does Miku Vocaloid at a very modest price (less than $50 USD).

Copyright © 2018 Paul J. Drongowski

Pocket Miku (Thanks, David!)

I usually unwind with a book or Keyboard Magazine before turning out the light for a good night’s rest. Some of you know Keyboard Magazine as Electronic Musician. 🙂

Imagine my surprise when I read David Battino’s “Adventures in DIY” and it’s about Gakken’s Pocket Miku. And further, David gives a shout-out to yours truly and this blog (sandsoftwaresound.net).

Thank you, David! “Adventures in DIY” is one of the main reasons that I keep subscribing to Keyboard Magazine. David has a playfulness in his projects and approach that I really like. Plus, anyone who likes Japanese monsters and toys would fit right into our family.

David continues a long tradition of DIY writing that goes back to Polyphony Magazine, where I really got the bug to create. (There are still a few treasured issues of Polyphony in our basement.)

So, if you came looking for Gakken Pocket Miku, NSX-39 or Yamaha’s NSX-1 integrated circuit, here’s a quick list of pages related to those topics:

While you’re here, please browse around. This site is my mental storage unit and you never know what you might find. Lately, I’ve been diving into the new Yamaha Genos™. Maybe you need some content like scat vocal samples, converted DJXII patterns, or Motif performances converted to PSR/Tyros styles? Maybe you’re interested in taking a tour inside Montage, PSR/Tyros, or Kronos? Or maybe you want to use soft synths on Linux and a Raspberry Pi to bridge 5-pin MIDI and USB.

And then there are reviews of products that I’ve tried or eventually purchased: Yamaha Montage, Genos, Reface CP, Reface YC, Korg Triton Taktile, Roland GO:KEYS, Nord Stage 2ex, etc.

There are several Arduino-based projects to browse (with downloadable code). Heck, there are even notes about data structures, computer architecture and VLSI design from back in the day.

Have fun!

Book-wise, I’m currently reading David Weigel’s “The Show That Never Ends: The Rise and Fall of Prog Rock.” Fun stuff.

Vocaloid keyboard announced

At long last, Yamaha have announced their Vocaloid™ keyboard, the VKB-100. The VKB-100 is a keytar design similar to the prototype shown at the “Two Yamahas, One Passion” exhibition at Roppongi Hills, Tokyo, July 3-5, 2015.

More details will be released in December 2017. However, this much is known:

  • Lyrics are entered using a dedicated application for smart phones and tablets via Bluetooth.
  • VY1 is the built-in default singing voice.
  • Up to 4 Vocaloid singers can be added using the application.
  • Four Vocaloid voices will be available: Hatsune Miku, Megpoid (GUMI), Aria on the Planets (IA), and Yuzuki Yukari.
  • Melody is played by the right hand while the left hand adds expression and navigates through the lyrics.
  • A speaker is built in, making the VKB-100 a self-contained instrument.

The VKB-100 was demonstrated at the Yamaha exhibition booth at the “Magical Mirai” conference held at the Makuhari Messe, September 1-3, 2017. Price is TBD.

VY1 is a female Japanese voice developed by Yamaha for its own products. VY1 does not have an avatar or character like other Vocaloid singers. This makes sense for Yamaha as they can freely incorporate VY1 in products without paying royalties or dealing with other intellectual property (IP) concerns.

The Vocaloid keyboard has had a long evolution, going through five iterations. The first three models did not use preloaded lyrics. Instead, the musician entered katakana with the left hand while playing the melody with the right hand. This proved to be too awkward and Yamaha moved to preloaded lyrics. The left hand controls on the neck add expression using pitch and mod wheels. The left hand also navigates through the lyrics as the musician “sings” via the instrument. The current lyrics are shown in a display just to the left of the keyboard where the musician can see them.

Yamaha will release more information on the Vocaloid keyboard site.

If you want to get started with Vocaloid and don’t want to spend a lot of Yen (or dollars), check out the Gakken NSX-39 Pocket Miku. Pocket Miku is a stylophone that plays preloaded Japanese lyrics. The NSX-39 also functions as a USB MIDI module with a General MIDI sound set within a Yamaha XG voice and effects architecture.

Be sure to read my Pocket Miku review and browse the resource links available at the bottom of the review page.

Copyright © 2017 Paul J. Drongowski

Pocket Miku software resources

This page is a collection of resources for using and programming Gakken Pocket Miku, also known as the “NSX-39”. It starts out with a cheat sheet for using Pocket Miku, moves on to Web-based applications, and finishes with customization and MIDI System Exclusive (SysEx) messages.

Be sure to read the Pocket Miku user’s guide before starting. The material below is not a hand-holding tutorial!

Pocket Miku cheat sheet

Stylus area

The lower part of the stylus area is a chromatic keyboard which plays notes. The upper part of the stylus area is a ribbon controller. Touch the stylus to either area to make music.

This is a classic resistive keyboard/ribbon controller. Stylus actions are converted to MIDI note ON, MIDI note OFF and pitch bend messages. The MIDI note is fixed: F#. MIDI pitch bend messages determine the final pitch that is heard.
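
To see how a single fixed note plus pitch bend can cover the whole playing surface, here is a back-of-the-envelope sketch in Python. Only the standard 14-bit pitch bend encoding (center value 8192) is a given; the wide bend range used below is an assumption for illustration, not a documented NSX-39 setting.

    # Compute the 14-bit pitch bend value needed to shift the fixed F#
    # by a given number of semitones. The +/-24 semitone bend range is an
    # assumption for illustration; the NSX-39's actual range is not stated here.
    def bend_value(offset_semitones: float, bend_range: float = 24.0) -> int:
        center = 8192                                    # no bend
        value = center + round(offset_semitones / bend_range * 8192)
        return max(0, min(16383, value))                 # clamp to 14 bits

    print(bend_value(0))    # 8192: the fixed F# itself
    print(bend_value(3))    # 9216: three semitones up (A)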

Operating modes

Pocket Miku has two major operating modes:

  1. Normal mode
  2. NSX-1 compatibility mode

Pocket Miku boots into normal mode. In this mode, the NSX-39 recognizes and responds to stylus actions, button presses, etc.

Pocket Miku has three submodes in the normal operating mode:

  1. Do-Re-Mi mode with scales (default)
  2. A-I-U-E-O mode with vowels (press one of the A-I-U-E-O buttons)
  3. Preset lyric mode with 5 lyrics (SHIFT + one of the AEIOU buttons)

The default phrases in preset lyric mode are:

    SHIFT + A    Konnichiwa Arigato (Hello, thank you)
    SHIFT + I    Butterfly song (choucho)
    SHIFT + U    Cherry blossom song (Sakura)
    SHIFT + E    Auld Lang Syne (Hotaru no hikari)
    SHIFT + O    Irohanihoheto

The magic key combination U + VOLUME UP + VOLUME DOWN switches between normal mode and NSX-1 compatibility mode. Pocket Miku plays a high hat hit when changing modes (not a “beep”). The Yamaha Web applications use NSX-1 compatibility mode. NSX-1 compatibility mode is also good for DAW-based sequencing since it decreases latency by disabling the interpretation of MIDI System Exclusive messages that are meaningful only to the NSX-39 microcontroller.

Buttons

Pocket Miku responds to single button presses and combinations:

    A-I-U-E-O    Selects one of the vowel phonemes
    VIBRATO      Adds vibrato to the sound
    SHIFT        Selects additional functions and modes
    VOLUME UP    Increase volume
    VOLUME DOWN  Decrease volume

    SHIFT + A, SHIFT+I, ...   Select one of the preset lyrics
    SHIFT + VIBRATO           Select Do-Re-Mi mode
    SHIFT + VOLUME UP         Octave up
    SHIFT + VOLUME DOWN       Octave down
    VIBRATO + VOLUME UP       Pitch bend up (one semitone)
    VIBRATO + VOLUME DOWN     Pitch bend down (one semitone)

    A + VOLUME UP + VOLUME DOWN        Panic reset
    U + VOLUME UP + VOLUME DOWN        Select NSX-1 mode
    O + VOLUME UP + VOLUME DOWN        Retune Pocket Miku
    SHIFT + VOLUME UP + VOLUME DOWN    Initialize (factory reset)

Web-based applications

Gakken NSX-39 applications

Gakken provide three applications specifically for the NSX-39 (in normal mode). The applications are at http://otonanokagaku.net/nsx39/app.html.

Google Chrome version 33 or later is required because the Gakken applications use the Web MIDI API.

Connect NSX-39 to your computer with a USB cable and set the power switch of the NSX-39 to “USB”. If you do not connect the NSX-39 before you start Google Chrome, the NSX-39 will not be recognized by the application.

The Web MIDI API must be enabled in Google Chrome. After starting Chrome, enter:

    chrome://flags/#enable-web-midi

in the address bar as shown on the first “Browser Settings” screen. Then, enable the “Web MIDI API” entry, click the appropriate button (e.g., “Use Windows Runtime MIDI API”), and restart Google Chrome.

Launch the desired application from here:

Once you agree to the End User License Agreement (EULA), you can connect to the NSX-39 (Pocket Miku’s model number).

If this procedure does not work, please restart the computer and proceed from the first step.

Application: Input lyrics

You can edit the lyrics by pressing the “E” button in a lyric input slot. Only hiragana can be input. After inputting the lyrics, press “Enter” on the keyboard and the app sends the lyric data to Pocket Miku.

After the lyric data has been sent, Pocket Miku sings according to those lyrics when you play it.

Each slot holds up to 64 letters of lyrics. There are 15 slots, selected with [A] - [O], [SHIFT] + [A] - [O], and [VIBRATO] + [A] - [O].

Press [SHIFT] + [VIBRATO] during editing to switch to Do-Re-Mi mode.

Application: Play in realtime

This is an application where you can input and play lyrics in realtime. Hover over a lettered tile on the screen to make Pocket Miku pronounce that character.

Tile sets can be selected from: the basic 50-sound kana table, voiced and semi-voiced sounds, small letters (1) (2), and jiyuu (free arrangement) mode.

Jiyuu is a mode that allows you to place characters freely using the “frog” menu:

  • Add … add up to 50 letter panels.
  • Move … drag a panel to the desired position.
  • Delete … click a panel to delete it.
  • Load … read a saved character panel setting file.
  • Save … save the character panel settings as an external file.

Google Translate didn’t do so well with these instructions! Sorry.

Change configuration

Config is an abbreviation for configuration and means “setting.” With this application, you can change the settings of Pocket Miku and add new functions. The following four operations are supported:

  • Startup sound for function addition pack
  • SHIFT button Character heading / character advance
  • Effect ON / OFF
  • Harmony

Press the “Install” button, read the displayed message, and if there is no problem, press the “Send” button. When all the functions are installed, a voice saying “Owarai” plays and writing the settings is complete.

If you want to restore the original settings, click the “Uninstall” button, read the explanation carefully, and press the “Send” button if there is no problem.

Yamaha NSX-1 applications

Yamaha provide open source sample apps (Japanese language) at http://yamaha-webmusic.github.io/. The Yamaha applications use the Web MIDI API. See the directions above in order to set up Google Chrome.

In order to use these applications, you must change Pocket Miku to NSX-1 compatibility mode by pushing U + VOLUME UP + VOLUME DOWN simultaneously.

Aides Technology application

Aides Technology is the developer of the Switch Science NSX-1 Arduino shield.

They have one very handy Web application for MIDI sequencing. The application translates romaji (romanized kana) lyrics to an NSX-1 System Exclusive (SysEx) message. You can copy the HEX SysEx message from the page and paste it into your DAW. On Windows, the application will put the SysEx message on the Windows clipboard automatically.

You may also need this ASCII to Hex text converter when debugging your SysEx messages.

I’m a long time SysEx HEX warrior. Trust me, this is the way to go!

Customization and MIDI System Exclusive messages

Customization is the most difficult topic due to its complexity and the general lack of English language resources. Customization is performed through MIDI System Exclusive messages instead of simple textual commands. This approach enables use of the Web MIDI API, but makes it darned difficult to compose messages by hand.

I’m told that the Gakken Official Guide Book (Gakken Mook) contains a short section about customization via SysEx. However, one cannot cram a paper magazine through Google Translate. 🙂

The next best thing is the Pocket Miku Customization Guide (PDF) by Uda Denshi (polymoog). This guide and Google Translate will only take you so far.

The absolute best English language resource is the series of blogs written by CHH01:

Please note that Pocket Miku has two major subsystems: a microcontroller and the Yamaha NSX-1 integrated circuit. Each subsystem has its own SysEx messages. See the Yamaha NSX-1 MIDI implementation reference manual for information about its SysEx messages. Messages interpreted by the microcontroller are described in the Pocket Miku Customization Guide. These messages are turned OFF when Pocket Miku is in NSX-1 compatibility mode.

The NSX-39 SysEx implementation is very powerful. You can change the lyrics which are stored in flash memory (15 lyric slots), change the way the NSX-39 responds to button presses (120 command slots), read switch states, and much more. Here is a list of the main customization message types (thanks to CHH01):

F0 43 79 09 11 d0 d1 d2 d3 d4 d5 ... F7

Request Version Data          d0=0x01 d1=N/A
Version Data Reply            d0=0x11 d1=NSX-39 version data
Lyrics Entry                  d0=0x0A d1=lyrics slot number   d2=character data
Request Command Slot Details  d0=0x0B d1=command slot number
Command Slot Reply            d0=0x1B d1=command
Change Command Slot           d0=0x0C d1=command slot number  d2=command
Command Direct Entry          d0=0x0D d1=command
Lyric Number Data Request     d0=0x0E d1=N/A
Lyric Number Data Reply       d0=0x1E d1=Slot number          d2=Slot data
Lyric Details Request         d0=0x0F d1=Slot number
Lyric Details Reply           d0=0x1F d1=character count      d2=character 1, etc.
Switch State                  d0=0x20 d1=000000ih             d2=0gfedcba
NSX-39 Status                 d0=0x21 d1=Status
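
If you want to experiment, the messages above are easy to build in software. A minimal sketch using the Python mido library (mido adds the F0/F7 framing bytes itself): the “Request Version Data” bytes come straight from the table, while the lyric entry below uses a placeholder payload because the character encoding for lyric data is not spelled out here.

    # Build NSX-39 customization SysEx messages from the header shown above:
    # F0 43 79 09 11 d0 d1 d2 ... F7. Assumes the Python mido library.
    import mido

    NSX39_HEADER = [0x43, 0x79, 0x09, 0x11]      # mido adds F0 ... F7 itself

    def nsx39_sysex(data_bytes):
        """Wrap NSX-39 data bytes (d0, d1, ...) in a SysEx message."""
        return mido.Message('sysex', data=NSX39_HEADER + list(data_bytes))

    request_version = nsx39_sysex([0x01])              # d0=0x01, no further data
    lyrics_entry    = nsx39_sysex([0x0A, 0x00, 0x00])  # d0=0x0A, slot 0, placeholder data

    # with mido.open_output('NSX-39') as port:          # port name is a guess
    #     port.send(request_version)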

Good luck with your investigations and experiments!

Copyright © 2017 Paul J. Drongowski

Pocket Miku pictures

Thanks very much to our friends at japan24net on eBay! They did a superb job of packing and Pocket Miku arrived at our house in record time. どうもありがとうございました (thank you very much)!

Now, the obligatory pictures! Please click on the images for higher resolution. Front:

The back:

With the rear cover off:

And finally, the money shot:

That looks like a 12.000 MHz crystal. Sorry, I didn’t have time to work through the data sheet and compute the CPU clock frequency. (96MHz maximum)

Copyright © 2017 Paul J. Drongowski

Pocket Miku hardware resources

Pocket Miku, also known as “NSX-39,” has three major integrated circuit components:

  • Generalplus GPEL3101A multimedia processor (SOC)
  • Yamaha NSX-1 eVocaloid processor
  • Macronix MX25L1635E serial flash memory

Here is the Pocket Miku NSX-39 circuit schematic.

The Generalplus GPEL3101A is a system on a chip (SOC) advanced multimedia processor. The GPEL3101A is an ARM7TDMI processor with integrated RAM and many peripheral interfaces including:

  • 136KByte SRAM
  • Universal Serial Bus (USB) 2.0 interface
  • 8 channel sound processing unit (SPU)
  • SPI (master/slave) interface
  • Programmable general I/O ports (GPIO)
  • 6-channel, 12-bit analog to digital converter (ADC)
  • 16-bit stereo (2-channel) audio digital to analog converter
  • 0.5W class AB mono audio amplifier

Here is the Generalplus GP31P1003A product brief. The NSX-39 schematic does not specify the clock crystal frequency, but the GP31P1003A can operate up to 96MHz.

The Yamaha NSX-1 eVocaloid processor communicates with the GPEL3101A via SPI. MIDI messages, commands, and initialization data are communicated serially. The GPEL3101A control software converts MIDI over USB to MIDI messages sent to the NSX-1 via the SPI connection.

The GPEL3101A senses the keyboard and stylus inputs through its 6-channel, 12-bit ADC.

The NSX-1 generates a digital audio stream which is sent to the GPEL3101A digital audio auxiliary input. The GPEL3101A converts the digital audio to analog audio using its DAC. (This is a neat solution — no discrete DAC component!) The GPEL3101A sends analog audio to the external PHONE OUT and amplified audio is driven into the NSX-39’s speaker.

The Macronix MX25L1635E is a 16Mbit CMOS serial flash memory. It communicates with the GPEL3101A via SPI (4xI/O mode). The memory can retain 2MBytes of data. The MX25L1635E holds the NSX-39 control program and (probably) the initial eVocaloid database. The eVocaloid database must be loaded into an internal RAM memory within the NSX-1 eVocaloid processor.

We can infer that the eVocaloid database cannot be larger than 2MBytes. The NSX-1 typically sets aside 2MBytes for the database within its large capacity internal RAM memory. Because this memory is volatile RAM, it must be initialized with the eVocaloid database at start-up. It would be a sweet hack to replace the eVocaloid database with an English language database or Real Acoustic Sound (RAS) waveforms.

The NSX-39 software keeps the lyric slots and the command slots in the Macronix flash memory. This arrangement retains lyrics and commands across power-down.

Copyright (c) 2017 Paul J. Drongowski

Yamaha NSX-1 resources

Here are some of the Yamaha NSX-1 resources that I’ve found on-line. It took a lot of browsing to find English language resources! I apologize for writing a rather terse blog post — just the facts, documents and links!

Please check out my own posts on this site:

I hope these resources help your exploration of the NSX-1, eVocaloid and Pocket Miku!

Sound source specifications

Sound source methods  eVocaloid, Real Acoustic Sound, Wavetable 
                      method (General MIDI)
Maximum polyphony     64
Multi-timbral         Sound source 16 parts, A/D input part × 2
Waveform memory       Equivalent to 4 Mbytes
Number of voices      eVocaloid (eVY1 (Japanese)) / Real Acoustic 
                      Sound × 30 types, General MIDI × 128 voices
Number of drum kits   1 (General MIDI)
Effects               Reverb × 29, Chorus × 24, Insertion × 181,
                      Master EQ (5 Bands)

Hardware specifications

Host Interface        SPI / 8 bit parallel / 16 bit parallel
Audio interface       Input × 2, output × 2
Power supply          1.65 V - 3.6 V [Core] 1.02 V - 1.20 V
Power consumption     [Standby] 10 µA [Operating] 12 mA to 22 mA
Package               80-pin LQFP (0.5 mm pitch, 12 mm × 12 mm),
                      76-ball FBGA (0.5 mm pitch, 4.9 mm × 4.9 mm)

Software specifications

Serial Comm Interface      Bit length     8
                           Start bit      1
                           Stop bit       1
                           Parity bit     none
                           Transfer rate  31250 bps or 38400 bps
Program change             CH.1    eVocaloid only (eVY1)
                                   Does not receive program change messages
                                   Monophonic
                           CH.2 - CH.16   General MIDI voices
System exclusive message   GM ON, XG parameter, lyrics data, etc.
                           Messages without the Yamaha ID are not received
                           Some Yamaha ID messages are also not received
                           (such as instrument-specific ones)
Other MIDI messages        Channel message
                           NRPN, RPN
Lyrics data                Transfer by System Exclusive or NRPN messages
Continuous operating time  8 hours (eVocaloid specification)
                           If exceeded, requires power off, reset,
                           and NSX-1 reboot, etc.

Real Acoustic Sound

As mentioned in my earlier post, the Yamaha NSX-1 integrated circuit implements three sound sources: a General MIDI engine based on the XG voice architecture, eVocaloid and Real Acoustic Sound (RAS). RAS is based on Articulation Element Modeling (AEM) and I now believe that eVocaloid is also a form of AEM. eVocaloid uses AEM to join or “blend” phonemes. The more well-known “conventional” Vocaloid uses computationally intensive mathematics for blending which is why conventional Vocaloid remains a computer-only application.

Vocaloid uses a method called Frequency-domain Singing Articulation Splicing and Shaping. It performs frequency domain smoothing. (That’s the short story.)

AEM underlies Tyros Super Articulation 2 (S.Art2) voices. Players really dig S.Art2 voices because they are so intuitively expressive and authentic. Synthesizer folk hoped that Montage would implement S.Art2 voices — a hope not yet realized.

Conceptually, S.Art2 has two major subsystems: a controller and a synthesis engine. The controller (which is really software running on an embedded microcomputer) senses the playing gestures made by the musician and translates those gestures into synthesis actions. Gestures include striking a key, releasing a key, pressing an articulation button, and moving the pitch bend or modulation wheel. Vibrato is the most commonly applied modulation type. The controller takes all of this input and figures out the musician’s intent. The controller then translates that intent into commands which it sends to the synthesis engine.

AEM breaks synthesis into five phases: head, body, joint, tail and shot. The head phase is what we usually call “attack.” The body phase forms the main part of a tone. The tail phase is what we usually call “release.” The joint phase connects two bodies, replacing the head phase leading into the second body. A shot is a short waveform like a detached staccato note or a percussive hit. A flowing legato string passage sounds much different than pizzicato, so it makes sense to treat shots separately.

Heads, bodies and tails are stored in a database of waveform fragments (i.e., samples). Based on gestures — or MIDI data in the case of the NSX-1 — the controller selects fragments from the database. It then modifies and joins the fragments according to the intent to produce the final digital audio waveform. For example, the synthesis engine computes joint fragments to blend two legato notes. The synthesis engine may also apply vibrato across the entire waveform (including the computed joint) if requested.

Whew! Now let’s apply these concepts to the human voice. eVocaloid is driven by a stream of phonemes. The phonemes are represented as an ASCII string of phonetic symbols. The eVocaloid controller recognizes each phoneme and breaks it down into head, body and tail fragments. It figures out when to play these fragments and when bodies must be joined. The eVocaloid controller issues internal commands to the synthesis engine to make the vocal intent happen. As in the case of musical passages, vibrato and pitch bend may be requested and are applied. The NSX-1 MIDI implementation has three Non-Registered Parameter Number (NRPN) messages to control vibrato characteristics:

  • Vibrato Type
  • Vibrato Rate
  • Vibrato Delay
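
For the curious, NRPN mechanics are plain channel messages: CC# 99 (NRPN MSB), CC# 98 (NRPN LSB) and CC# 6 (data entry MSB). Here is a generic sender sketch in Python/mido; the actual NRPN numbers for Vibrato Type, Rate and Delay must be looked up in the NSX-1 MIDI implementation reference, so none are supplied here.

    # Generic NRPN sender (standard MIDI mechanics only). The specific
    # msb/lsb numbers for the NSX-1's vibrato NRPNs are not given here;
    # take them from the NSX-1 MIDI implementation reference manual.
    import mido

    def send_nrpn(port, msb, lsb, value, channel=0):
        """Send one NRPN parameter change on a mido output port."""
        port.send(mido.Message('control_change', channel=channel, control=99, value=msb))
        port.send(mido.Message('control_change', channel=channel, control=98, value=lsb))
        port.send(mido.Message('control_change', channel=channel, control=6,  value=value))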

I suspect that a phoneme like “ka” must be two fragments: an attack fragment “k” and a body fragment “a”. If “ka” is followed immediately by another phoneme, then the controller requests a joint. Otherwise, “ka” is regarded as the end of a detached word (or phrase) and the appropriate tail fragment is synthesized.

Whether it’s music or voice, timing is critical. MIDI note on and note off events cue the controller as to when to begin synthesis and when to end synthesis. The relationship between two notes is also critical as two overlapping notes indicate legato intent and articulation. The Yamaha AEM patents devote a lot of space to timing and to mitigation of latency effects. The NSX-1 MIDI implementation has two NRPN messages to control timing:

  • Portamento Timing
  • Phoneme Unit Connect Type

The Phoneme Unit Connect Type has three settings: fixed 50 msec mode, minimum mode and velocity mode in which the velocity value changes the phoneme’s duration.

As I mentioned earlier, eVocaloid operates on a stream of phonetic symbols. Software sends phonetic symbols to the NSX-1 using either of two methods:

  1. System Exclusive (SysEx) messages
  2. NRPN messages

A complete string of phonetic symbols can be sent in a single SysEx message. Up to 128 phonetic symbols may be sent in the message. The size of the internal buffer for symbols is not stated, but I suspect that it’s 128 symbols. The phoneme delimiter is ASCII space and the syllable delimiter is ASCII comma. A NULL character must appear at the end of the list.
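
Packing such a message in software is straightforward. Below is a sketch in Python/mido that builds the payload exactly as described (space between phonemes, comma between syllables, NULL terminator). The SysEx header bytes are the ones commonly quoted in eVY1 shield sample code; treat them as an assumption and verify them against the NSX-1 MIDI implementation reference.

    # Pack a phonetic symbol string for the single-SysEx method described above.
    # Header bytes are an assumption taken from eVY1 shield sample code;
    # check the NSX-1 MIDI implementation reference before relying on them.
    import mido

    NSX1_PHONETIC_HEADER = [0x43, 0x79, 0x09, 0x00, 0x50, 0x10]  # assumed; verify

    def phonetic_sysex(syllables):
        """syllables: a list of phoneme lists, e.g. [['k', 'a'], ['a']] for 'ka a'."""
        text = ','.join(' '.join(phonemes) for phonemes in syllables)
        payload = list(text.encode('ascii')) + [0x00]     # NULL terminator
        return mido.Message('sysex', data=NSX1_PHONETIC_HEADER + payload)

    msg = phonetic_sysex([['k', 'a'], ['k', 'i'], ['k', 'u']])   # "ka ki ku"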

The NRPN method uses three NRPN message types:

  • Start of Phonetic Symbols
  • Phonetic Symbol
  • End of Phonetic Symbols

In order to send a string of phonetic symbols, software sends a start NRPN message, one or more phonetic symbol NRPN messages and, finally, an end of phonetic symbols NRPN message.

Phonetic symbols are stored in a (128 byte?) buffer. The buffer lets software send a phrase before it is played (sung) by the NSX-1. Each MIDI note ON message advances a pointer through the buffer selecting the next phoneme to be sung. The SEEK NRPN message lets software jump around inside the buffer. If software wants to start at the beginning of the buffer, it sends a “SEEK 0” NRPN message. This capability is really handy, potentially letting a musician start at the beginning of a phrase again if they have lost their place in the lyrics.

When I translated the Yamaha NSX-1 brochure, I encountered the statement: “eVocaloid and Real Acoustic Sound cannot be used at the same time. You need to choose which one to pre-install at the ordering stage.” This recommendation is not surprising. RAS and eVocaloid must each have their own unique database; RAS has instrument samples and eVocaloid has human vocal samples. I don’t think, therefore, that Pocket Miku has any RAS (AEM) musical instrument samples. (Bummer.)

Speaking of databases, conventional Vocaloid databases are quite large: hundreds of megabytes. eVocaloid is intended for embedded applications and eVocaloid databases are much smaller. I’ll find out how big once I take apart Pocket Miku. Sorry, Miku. 🙂

I hope this article has given you more insight into Yamaha Real Acoustic Sound and eVocaloid.

Copyright © 2017 Paul J. Drongowski

LSI, LSI

More fun with large scale integration (LSI).

I went mad with desire when I heard about the Switch Science eVocaloid eVY1 shield for Arduino. The bad news is that Switch Science is out of stock and no longer makes the board.

I started to deep dive the Yamaha NSX-1 eVocaloid IC at the heart of the eVY1 shield and eventually found some specs. The NSX-1 responds to sixteen MIDI channels. Channel 1 is dedicated to eVocaloid — a monophonic singing voice. Channels 2 through 16 are assigned to the polyphonic, multi-timbral MIDI synthesizer. The MIDI synthesizer conforms to the XG voice and effects architecture. Unfortunately, the wave memory is about 2MBytes, putting it at the same level as an old school QY-70. (Got one of those already.)

I uploaded Yamaha’s NSX-1 brochure. Take a peek. Please note the waveform diagram on page 2 (i.e., head, body, joint, tail); eVocaloid and Articulation Element Modeling (AEM) are definitely siblings. “Conventional” Vocaloid uses computationally heavy mathematics to blend phonemes. eVocaloid and conventional Vocaloid are more like cousins.

Assessing the MIDI implementation, software needs to pump abbreviations for eVY1 phonemes into the NSX-1 to make it sing. A string of abbreviated phonemes is sent via SysEx message. Looks like the developers got burned by the long SysEx message problem in Windows XP as they recommend using Windows Vista or later.

The vocal database (consisting of samples and more) is stored in a surface mount IC beneath the board. It isn’t possible to replace the vocal database with instrument samples in order to take advantage of the NSX-1’s Real Acoustic Sound (RAS) synthesis. eVocaloid mode and RAS mode are exclusive and cannot be used at the same time. Doesn’t look like we can get Super Articulation 2 voices on the cheap. (Bummer.)

Given these limitations, my ardour cooled rather quickly! However, leave it to Katsunori UJIIE to lift my spirits. Check out UJIIE’s demonstration of the Gakken NSX-39, Pocket Miku.

Meanwhile, my quest for a light-weight, self-contained, battery-powered rehearsal keyboard goes on. Recently, while I waited for the GC associate to process my returned Roland GO:KEYS, I plinked away on a Yamaha NP-12. The NP-12 is certainly cheap enough ($170 USD) and light enough (just shy of 10 pounds). Although it has only ten voices, I could MIDI the NP-12 to the MidiPlus miniEngine USB sound module for non-piano voices. A quick experiment with the miniEngine and the PSR-S950 proved feasibility.

I became curious about the level of tech inside the Yamaha Piaggero products and scrounged the Web for service manuals. I couldn’t find anything on the NP-12, but did find service manuals for the NP-30 (32 voice polyphony, 2007) and the current NP-32 (64 voice polyphony, 2016).

As I suspected, the upgrade in polyphony signaled an upgrade in the internal processor. The NP-30 is based on the SWL01T (YMW767-VTZ) workhorse that is part of many entry-level, battery-powered Yamaha products. The NP-32 is based on the SWX03. I haven’t seen the SWX03 before and I think the SWX03 is a new version of the SWX02 (which appears in the PSR-650 and MOX, for example). The SWL01T fetches sample data from the CPU’s system memory while the SWX02 fetches samples through a dedicated memory channel. Thus, the SWX02 processors have higher memory bandwidth and can support higher polyphony.

Physical wave memory is 8MBytes (64Mbits): 4M x 16-bit words. Uncompressed sample size is approximately 16MBytes. It is a testament to Yamaha’s sound design prowess that they can synthesize a decent sounding acoustic piano with such little memory. Sure, the NP-12 is the absolute bottom of the line, but it does sound decent given its modest street price.

And your keytar can sing

A day with excessive heat and humidity can strand you indoors as effectively as a New England snow storm. Time for a virtual quest into parts unknown.

I stumbled onto this beautiful web page on the Japanese Yamaha web site. Lo and behold, a Vocaloid™ keyboard in the shape of a keytar. I strongly suggest visiting this page as the commercial photography is quite stunning in itself.

The Vocaloid keyboard is a prototype that was shown at the “Two Yamahas, One Passion” exhibition at Roppongi Hills, Tokyo, July 3-5, 2015. Some form of Vocaloid keyboard has been in the works for several years and this prototype is the latest example.

The overarching idea is to liberate Vocaloid from the personal computer and to create an untethered performance instrument. The Vocaloid engine is built into the keyboard. The keyboard also has a built-in speaker along with the usual goes-outtas. The industrial design — by Kazuki Kashiwase — tries to create the impression of a wind instrument such as a saxophone.

The performer must preload the lyrics into the instrument before performing. This lets the performer concentrate on the melody when performing, not linguistics. The keyboard adjusts the pitch and timing of the vocalization. The left-hand neck buttons navigate through the lyrics: back one note, advance phrase, go to the end, etc. The ribbon controller raises and lowers the pitch. Control knobs select vibrato, portamento, brightness, breath and gender. Other knobs set the volume and select lyrics. Up to five lyrics can be saved.

The prototype synthesizes the “VY1” Japanese female voice developed by Yamaha for Vocaloid version 2. Somewhat confusingly, “VY1” stands for “Vocaloid Yamaha 1.” The voice has the codename “Mizki.”

The Vocaloid engine is based on the Yamaha Vocaloid Board, not eVocaloid which is built into the NSX-1 integrated circuit (LSI). Yamaha sell the Vocaloid Board to OEMs, eventually intending to incorporate the board into entertainment, karaoke and musical instrument products of its own. The Vocaloid Board has MIDI IN/OUT, by the way, and reads the vocal database from an SD card.

Many of these details are taken from the article by Matsuo Koya (ITmedia). Please see the article for close-up photographs of the Vocaloid keyboard prototype.

The NSX-1 IC (YMW 820) mentioned above is a very interesting device itself. The NSX-1 is a single chip solution designed for embedded (“eVocaloid”) applications. It uses a smaller sized voice database, “eVY1”.

The NSX-1 has a General MIDI level 1 engine. Plus, the NSX-1 has a separate engine to reproduce high quality acoustic instrument sounds thanks to “Real Acoustic Sound” technology. This technology is based on Articulation Element Modeling (AEM) which forms the technical basis of Tyros 5 Super Articulation 2 (S.Art2) voices. Real Acoustic Sound and eVocaloid cannot be used simultaneously.

Holy smokes! I conjectured that AEM and Vocaloid are DSP cousins. This is further evidence in support of that conjecture.

NSX-1 can be controlled using a JavaScript library conforming to the Web MIDI API. Wanna make your browser sing? Check out the Yamaha WebMusic page on GitHub.

The company Switch Science sells an eVY1 SHIELD for Arduino. Kit-maker Gakken Educational has developed a stylus gadget based on eVocaloid and the NSX-1 — Pocket MIKU. And, of course, here is the Pocket Miku video.

Only 13 more days until Summer NAMM 2017.

Copyright © 2017 Paul J. Drongowski