Welcome CS teachers and students!

[Be sure to visit Living Computers in Seattle. SIGCSE 2017 attendees are admitted free during the conference. I visited the museum today and it was a lot of fun! K-12 teachers will enjoy the hands-on exhibits.]

The annual ACM Special Interest Group on Computer Science Education (SIGCSE 2017) Technical Symposium is next week (March 8 – 11) in Seattle, Washington. The symposium brings together educators at all levels (K-12 and higher ed) to exchange and discuss the latest methods, practices and results in computer science education.

I don’t often advertise it, but the Sand, Software, Sound site has many resources for educators and students alike. You can browse these resources by clicking on one of the WordPress topic buttons (Raspberry Pi, PERF, Courseware, etc.) above. You can also search for a topic or choose from one of the categories listed in the right sidebar.

Here are a few highlights.

I taught many computer-related subjects during my career and have posted course notes, slides and old projects. The four main sections are:

  • CS2 data structures: Undergraduate data structures course suitable for advanced placement students.
  • Computer design: Undergraduate computer architecture and design which uses a multi-level modeling approach.
  • VLSI systems: Graduate course on VLSI architecture, design and circuits which is suitable for undergraduate seniors.
  • Topics in computer architecture: Material for a special topics seminar about computer architecture (somewhat historical).

Please feel free to dig through these materials and make use of them.

Software and hardware performance analysis formed a major thread throughout my professional life. I recommend reading my series of tutorials on the Linux PERF tool set for software performance analysis:

The ARM11 microarchitecture summary is background material for the PERF tutorial. Program profiling is a good way to bring computer architecture to life and to teach students how to analyze and assess the execution speed of their programs.

There are two additional tutorials and getting started guides for teachers and students working on Raspberry Pi:

Music technology and computer-based music-making have been two of my chief interests over the years. The Arduino section of the site has several of my past projects using the Arduino for music-making. You should also check out my recent blog posts about the littleBits synth modules and littleBits Arduino. Please click on the tags and links at the bottom of each post in order to chase down material.

You might also enjoy my tutorial on software synthesizers for Linux and Raspberry Pi. The tutorial is a getting started guide for musicians of all stripes — music teachers and students are certainly welcome, too!

5-pin MIDI IN/OUT for Arduino

I hope you enjoyed the last post about a simple tone-based sequencer for littleBits Arduino. My next goal is to make the littleBits Arduino fluent in MIDI. Then we can turn the littleBits Arduino into the heart of MIDI-based tools like real time controllers and synthesizers.

At the time of this writing, littleBits does not offer a 5-pin MIDI input module or a 5-pin MIDI output module. That shouldn’t stop us. With a little know-how and some soldering, it’s easy to whip up 5-pin MIDI IN and MIDI OUT circuits. I will show you how. Even though this discussion is in the context of littleBits Arduino, the circuits below will work with any Arduino. The circuits will even work with Raspberry Pi or BeagleBone, for that matter! Once I get a couple of littleBits proto modules, I’ll show you how to connect the MIDI interface circuits to the littleBits Arduino.

5-pin MIDI is a mature standard and is one of the most successful, long-running standards in personal computing. Most musicians are familiar with MIDI cables and MIDI connections. MIDI cables have 5-pin DIN connectors at either end. Wiring is symmetric. Unlike USB, there isn’t an A side and a B side. Connect a MIDI OUT to a MIDI IN and you’re good to go.

Even though a connector has five pins (and associated wires), only three pins are really involved in MIDI data communication. One of the three pins — “the one in the middle” — carries electrical ground. The other two pins form a current loop from the sender to the receiver and back to the sender. “Current loop” means that we are communicating 0’s and 1’s using the presence or absence of electrical current.

Everyday digital logic like CMOS or TTL uses voltage levels to represent logical zero and logical one. Low voltage (nominally 0 Volts) represents logical zero and high voltage (nominally 5 Volts in a 5 Volt system) represents logical one. Digital circuits actually switch through a transition zone between 0 and 5 Volts, and logical 0 and 1 are defined by threshold voltages, but now we’re getting too far afield! You get the idea — the representations and electrical mode of operation are different.

Let’s start with the receiver (MIDI IN) because that’s where all of the interesting action takes place. Here is the schematic for a very basic MIDI IN. (Click on images to get full resolution.)

[Image: MIDI IN schematic]

The incoming current flows through a 220 ohm resistor into the optical side of a 6N138 optoisolator. That may sound scary, but Arduino folks already know how to blink an LED on and off. That’s what the current loop does. It blinks an LED in the optoisolator. The LED shines on a photodiode that controls two transistor switches. The transistors switch the output (pin 6 of the optoisolator) between logical 0 and logical 1 (in voltage-ese). Pin 6 is connected to the Arduino serial receive port (pin D0, also known as “RX”). That’s all there is to it!

The optoisolator isolates the sender and receiver electrically. This is a good thing in stage environments and any place rife with grounding problems, connection mistakes, etc. The resistor before the LED limits the current through the loop and into the LED. This resistor plus the 1N4148 diode provide input protection.
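On the software side, reading MIDI IN doesn’t take much code. Here is a minimal sketch of the idea (just the gist, not the full test sketches I’ll describe next time). It assumes a standard UNO-style Arduino where the hardware serial port appears on D0/D1, and it merely lights the on-board LED when a MIDI status byte arrives:

    // Minimal MIDI IN check (illustrative sketch, assumes an UNO-style board).
    // The optoisolator output from the MIDI IN circuit drives D0 (RX).

    const int LED_PIN = 13;            // on-board LED

    void setup() {
      pinMode(LED_PIN, OUTPUT);
      Serial.begin(31250);             // MIDI runs at 31,250 baud
    }

    void loop() {
      if (Serial.available() > 0) {
        byte b = Serial.read();
        // MIDI status bytes have the high bit set (0x80 and above)
        digitalWrite(LED_PIN, (b & 0x80) ? HIGH : LOW);
      }
    }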

Here is the schematic for a basic MIDI OUT circuit.

[Image: MIDI OUT schematic]

All the sender needs to do is to drive or remove an electrical current through the loop. When the loop is driven, the LED at the other end of the loop shines. When the current is removed, the LED turns off. The current loop is controlled by the Arduino send port (pin D1, also known as “TX”). The 220 ohm resistors are current limiting resistors that put a limit on the amount of current driven into the loop.

This MIDI OUT circuit gets the job done, but it is a little basic. Most practical commercial circuits use a driver (such as a CMOS 74HC125 buffer/driver IC) or a transistor switch. The driver provides a little more electrical assurance and protection on the sender’s side. Better to blow up an inexpensive driver IC than your Arduino!
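The sending side needs even less code. Here is an equally minimal sketch (again, just the gist; see the sequencer sketch linked in the update below for a fuller test). It drives the MIDI OUT circuit from the hardware serial TX pin (D1) on an UNO-style board, playing middle C once per second:

    // Minimal MIDI OUT check (illustrative sketch, assumes an UNO-style board).
    // The TX pin (D1) drives the MIDI OUT current loop.

    void noteOn(byte channel, byte note, byte velocity) {
      Serial.write(0x90 | (channel & 0x0F));     // note ON status
      Serial.write(note & 0x7F);
      Serial.write(velocity & 0x7F);
    }

    void noteOff(byte channel, byte note) {
      Serial.write(0x80 | (channel & 0x0F));     // note OFF status
      Serial.write(note & 0x7F);
      Serial.write((byte)0);                     // zero velocity
    }

    void setup() {
      Serial.begin(31250);                       // MIDI baud rate
    }

    void loop() {
      noteOn(0, 60, 100);                        // middle C, MIDI channel 1
      delay(500);
      noteOff(0, 60);
      delay(500);
    }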

I built both the MIDI IN and MIDI OUT circuits on an Adafruit Perma-Proto quarter-sized breadboard PCB. I like these boards: they have nice through-holes for soldering and the same layout as a quarter-sized solderless breadboard. In this case, you solder connections instead of inserting jumper wires and component leads into solderless breadboard holes. Please note: if you want to use the circuits above but are reluctant to solder, then by all means use a solderless breadboard!

The following image shows the final result looking at the MIDI IN connector. Click the image for full resolution.

[Image: prototype board, MIDI IN connector side]

The jumper wires sprouting from the board are not intended to make the board look like a court jester. They are the connections to be made to the Arduino:

  • Red: +5 Volts
  • Black: Ground
  • Yellow: Connect to D0 / RX
  • Blue: Connect to D1 / TX

My construction style uses 2×1 and 2×2 headers to make external connections. The header pins mate up neatly with either Female/Female or Female/Male jumper wires. I used F/M jumpers in order to plug into the signal headers on a standard Arduino UNO for testing.

The next image shows the final result looking at the MIDI OUT connector.

[Image: prototype board, MIDI OUT connector side]

If you don’t mind soldering, but don’t want to go free-style on a prototyping board, then I recommend the Sparkfun MIDI Shield (DEV-12898). The latest revision of the MIDI Shield has good input protection and output drivers. It also has a RUN/PROG switch that is handy when uploading a sketch to the Arduino. MIDI and PC communications share the same serial port and conflicts must be avoided. (More about this issue in another post.) With the prototyping board, I just pull the yellow jumper wire when I upload a sketch.

The Sparkfun MIDI Shield has two knobs and three switches. This is a bonus if you are working with a standard Arduino. The knobs and switches go unused if you are working with a littleBits Arduino. In either case, the Sparkfun MIDI Shield is a viable alternative to “roll your own.”

Next time, I’ll describe the sketches that I wrote in order to test the MIDI IN and MIDI OUT.

Update: Use this simple MIDI sequencer sketch to test the MIDI OUT portion of the 5-pin interface.

We need “code-able” MIDI controllers!

All MIDI controllers for sale are rubbish!

Eh?

OK, here comes a rant. I’ve been working on two Arduino-based MIDI controllers in order to try out a few ideas for real time control. I’m using homebrew microcontrollers because I need the flexibility offered by code in order to prototype these ideas.

None of the commercially available MIDI controllers from Novation, Korg, AKAI, Alesis and the rest of the usual suspects support user coding or true executable scripts. Nada. I would love it if one of these vendors made a MIDI controller with an Arduino-compatible development interface. Connect the MIDI controller to a Mac or PC running the Arduino IDE, write your code, download it, and use it in real time control heaven! Fatal coding mistakes are inevitable, so provide an “Oops” button that automatically resets program memory and returns the unit to its factory-fresh state.

Commercial MIDI controllers have a few substantial advantages over home-brew. Commercial controllers are nicely packaged, are physically robust and do a good job of integrating keyboard, knob, slider, LED, display, etc. hardware resources into a compact space. Do I need to mention that they look good? Your average punter (like me) stinks at hole drilling and chassis building.

Commercial controllers, on the other hand, stink at flexibility and extensibility. Sure, the current crop of controllers support easy assignment of standard MIDI messages — usually control change (CC), program change (PC), and note ON/OFF. Maybe (non-)registered parameter number messages (RPN or NRPN messages) are supported. System exclusive (SysEx) most certainly is not supported other than maybe a fixed string of HEX — if you’re incredibly fortunate to have it.

The old JL Cooper FaderMaster knew how to insert control values into simple SysEx messages. This is now a lost art.

Here are a few use cases for a fully user-programmable MIDI controller.

The first use case is drawbar control. Most tone-wheel clones use MIDI CC messages for drawbar control, but not all. The Yamaha Tyros/PSR “Organ Flutes” are controlled by a single SysEx message. That SysEx message sets everything at once: all the drawbar levels, percussion parameters and vibrato. Drawbar control requires sensing and sending all of the controller’s knob and switch settings in one fell swoop. None of the commercially available MIDI controllers can handle this.

If you’re interested in this project, check out these links: Dangershield Drawbars, design and code.

The second use case is to fix what shouldn’t have been broken in the first place. The Korg Triton Taktile is a good MIDI controller. I like it and enjoy playing it. However, it’s brain-damaged in crazy ways. The function buttons cannot send program change messages! Even worse, the Taktile cannot send a full program change: bank select MSB followed by bank select LSB followed by program change. This makes the Taktile useless as a stage instrument in control of a modern, multi-bank synthesizer or tone module. If the Taktile allowed user scripting, I would have fixed this nonsense in a minute.

The third use case is sending a pre-determined sequence of pitch bend messages to a tone generator. Yes, for example, you can twiddle a controller’s pitch bender wheel (or whatever) to send pitch bend. However, you cannot hit a button and send a long sequence of pitch bend messages to automatically bend a virtual guitar string or to play a convincing guitar vibrato. Punters (like me) have trouble playing good guitar articulations, but we do know how to hit buttons at the right time. Why not store and send decent sounding pitch bend and controller values in real time as the result of a simple button press?
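Here is a hedged sketch of what that user code could look like on an Arduino-class controller: press a button and a canned ramp of pitch bend messages goes out automatically. The pin number and the bend curve are invented for the example:

    // Hypothetical "bend button": send a pre-determined pitch bend ramp.
    // Assumes a momentary button on pin 2 and MIDI on the hardware serial port.

    const int BUTTON_PIN = 2;

    void sendPitchBend(byte channel, int bend) {
      // bend is 0..16383; 8192 is center (no bend)
      Serial.write(0xE0 | (channel & 0x0F));     // pitch bend status
      Serial.write(bend & 0x7F);                 // LSB
      Serial.write((bend >> 7) & 0x7F);          // MSB
    }

    void setup() {
      pinMode(BUTTON_PIN, INPUT_PULLUP);
      Serial.begin(31250);
    }

    void loop() {
      if (digitalRead(BUTTON_PIN) == LOW) {
        // Ramp the bend upward, then snap back to center
        for (int bend = 8192; bend <= 16383; bend += 512) {
          sendPitchBend(0, bend);
          delay(5);
        }
        sendPitchBend(0, 8192);
        delay(250);                              // crude debounce
      }
    }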

The fourth use case is an example of the “heavy lifting” potential of user code. Many sample players and libraries (like the Vienna Symphonic Library) assign a range of keys to articulations or other methods of dynamically altering the sound of notes played elsewhere on the keyboard (i.e., the actual melody or chord). I claim that it’s a more natural gesture to control articulations through the keyboard than to reach for a special function button on the front panel. User coding would allow the redefinition of key presses to articulations — possibly playing a different sample or sending a sequence of controller messages.

Let me give you a more specific example, which is an experiment that I have in progress. Yamaha instruments have Megavoices. A Megavoice is selected as a single patch. However, different samples are mapped to different velocity ranges and different key ranges. As such, Megavoices are nearly impossible to play through the keyboard. Nobody can be that precise consistently in their playing.

I’m prototyping a MIDI controller that implements articulation keys to control the mapping of melody notes to the individual Megavoice samples. This involves mapping MIDI notes and velocities according to a somewhat complicated set of rules. Code and scripting is made for this kind of work!

Finally, the Yamaha Montage demonstrates how today’s MIDI controllers are functionally limited. Yamaha have created excitement promoting the “Superknob” macro control. Basically, the Superknob is a single knob that — among other things — spins the parameters which have been assigned to individual small knobs. Please note “parameters” is plural in that last sentence.

Today’s MIDI controllers and their limited configuration paradigm typically allow only one MIDI message to be assigned to a knob at a time. The target VST or whatever must route that incoming MIDI value to one or more parameters. (The controllers’ engineers have shifted the mapping problem to the software developers at the other end.) Wouldn’t it be cool if you could configure a controller knob to send multiple MIDI messages at once from the source? Then, wouldn’t it be cool if you could yoke two or more knobs together into a single macro knob?
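To make that concrete, here is a hedged sketch of a user-coded macro knob: one physical knob fans out to several CC messages, each with its own scaling. The CC numbers and scalings are arbitrary examples:

    // Hypothetical macro knob: one pot sends several CC messages at once.
    // Assumes a potentiometer on A0 and MIDI on the hardware serial port.

    void sendCC(byte channel, byte cc, byte value) {
      Serial.write(0xB0 | (channel & 0x0F));     // control change status
      Serial.write(cc & 0x7F);
      Serial.write(value & 0x7F);
    }

    int lastValue = -1;

    void setup() {
      Serial.begin(31250);
    }

    void loop() {
      int value = analogRead(A0) >> 3;           // 0..1023 -> 0..127
      if (value != lastValue) {
        lastValue = value;
        sendCC(0, 74, value);                    // brightness, full range
        sendCC(0, 71, value / 2);                // resonance, half range
        sendCC(0, 91, 127 - value);              // reverb send, inverted
      }
      delay(10);
    }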

If you had user coding, you would be there already.

All site content Copyright © Paul J. Drongowski unless otherwise indicated

Tutorial: Soft synths on Linux and Raspberry Pi

Stepping back a little bit, I realized that my recent series of articles adds up to a “Getting started with soft synths on Linux” tutorial. Here are the links:

I hope these articles help you, too. They are a great memory refresher for me.

Eventually, I want to turn the Raspberry Pi into a low cost, stomp box-sized, stand-alone soft synth host — kind of a cheap MIDI-driven tone module that does virtual analog synthesis. I want to run a headless Raspberry Pi — no monitor, no QWERTY keyboard, no mouse. With some clever scripting, I think it should be possible to start up the JACK audio server and a soft synth like amsynth at boot time. The soft synth would listen to a MIDI IN connected to the RPi through a standard USB MIDI interface. One possible option is to add a small touch panel (e.g., Adafruit PiTFT Plus 320×240) for simple user interaction, including system shutdown.
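Roughly, the startup script might look something like this. This is an untested sketch of the idea: the audio device name, the sleep delays and the ALSA client numbers are placeholders that would need adjusting, and a truly headless amsynth may need extra options or a synth with a server mode (FluidSynth, for example).

    #!/bin/sh
    # Hypothetical headless synth start-up (untested sketch).
    # Assumes a USB audio interface named "CODEC" and a USB MIDI keyboard.

    # Start the JACK server on the USB audio interface.
    /usr/bin/jackd -dalsa -dhw:CODEC -r44100 -p256 -n3 &
    sleep 5                        # give JACK time to come up

    # Start the soft synth; it should find and connect to JACK.
    amsynth &
    sleep 2

    # Patch the MIDI keyboard to the synth with the ALSA patch-bay.
    # Client numbers vary; check them with "aconnect -i" and "aconnect -o".
    aconnect 20:0 128:0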

Qsynth and FluidSynth on Raspberry Pi: The basics

The first four articles in this series are a quick guide to getting started with audio and MIDI on Raspberry Pi 2:

  1. Get started with Raspbian Jessie and Raspberry Pi 2
  2. Get started: Linux ALSA and JACK
  3. Raspberry Pi soft synthesizer: Get started
  4. USB audio for Raspberry Pi

Although the articles address Raspbian JESSIE, the HOW-TOs should be able to get you started with pretty much any version of Linux.

I showed how to use a simple monophonic soft synthesizer (amsynth) in part 3. Now, it’s time to move on to a multi-timbral synth: FluidSynth. FluidSynth has a graphical front-end, Qsynth, and I’ll demonstrate Qsynth, too. This tutorial assumes that JACK (and/or ALSA) is properly configured. The second and third articles will help you with configuration.

The Web sites for FluidSynth and Qsynth are:

Please visit these sites to learn about the advanced capabilities that are offered by these programs. You can always consult manual pages while you are working:

    man fluidsynth
    man qsynth
    man qjackctl
    man aplay

or you can request help directly, e.g., fluidsynth --help.

Installation

Installation is a breeze:

    sudo apt-get install fluidsynth
    sudo apt-get install qsynth

These commands should automatically download and install the General MIDI SoundFont. The path name for the GM SoundFont is:

    /usr/share/sounds/sf2/FluidR3_GM.sf2

If you did not get the GM SoundFont by installing Qsynth or FluidSynth, then enter the command:

    sudo apt-get install fluid-soundfont-gm

to install it. If you want a Roland GS-compatible SoundFont, install it with the command:

    sudo apt-get install fluid-soundfont-gs

The General MIDI SoundFont file is about 140MBytes and the GS-compatible SoundFont file is about 32MBytes in size.

FluidSynth

Although you’re most likely to use FluidSynth via Qsynth, it’s worth discussing FluidSynth’s unique capabilities first. Some things can be done quite handily from the command line. The number of FluidSynth’s command line options can be overwhelming, so if you skip to Qsynth, that’s understandable.

FluidSynth is a multi-timbral software synthesizer based on SoundFont 2 specifications. It is a command line application program that accepts MIDI input from either a MIDI controller keyboard or a software MIDI sequencer. FluidSynth needs a SoundFont file containing instrument definitions and samples. It plays the incoming notes using the selected SoundFont instruments. FluidSynth supports sixteen MIDI channels (default). It provides chorus and reverb effects.

There are many SoundFonts available for download from the Web. Two of the best known and widely used SoundFonts are:

  • FluidR3_GM.sf2: A General MIDI sound set
  • FluidR3_GS.sf2: A Roland GS-compatible sound set

The General MIDI sound set is pretty good; don’t let the “General MIDI” label drive you away!

FluidSynth has three main usage modes:

  1. Interactive command mode.
  2. One-liner mode. “One-liner” is my name for this mode of operation.
  3. Server mode.

If you just type fluidsynth on the command line, FluidSynth launches into its interactive mode, i.e., FluidSynth accepts and interprets commands of its own. I won’t go into interactive mode here, but suffice it to say that you can set parameters, load SoundFont files, etc. using FluidSynth commands. Enter help when you are in interactive mode in order to get information about commands and parameters. Interactive mode is a good way to explore FluidSynth configuration before writing out complicated combinations of FluidSynth command line options.
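Just to give the flavor, a short interactive session might look something like this (the SoundFont ID and the exact responses depend on your setup):

    > load "/usr/share/sounds/sf2/FluidR3_GM.sf2"
    > fonts
    > select 0 1 0 48
    > noteon 0 60 100
    > noteoff 0 60
    > quit

Here, load pulls in a SoundFont, fonts lists the loaded SoundFonts with their ID numbers, select assigns font 1, bank 0, program 48 (Strings) to MIDI channel 0, and noteon/noteoff play middle C at velocity 100.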

“One-liner mode” (option -i) launches FluidSynth without dropping into its interactive mode. You’re most likely to use this mode when launching FluidSynth from a shell script or if you just have a simple job to do from the command line.

One-liner mode means that you need to dive into FluidSynth’s command line options. There are many command line options including:

  • -C, --chorus: Turn chorus ON or OFF
  • -R, --reverb: Turn reverb ON or OFF
  • -K, --midi-channels: Set the number of MIDI channels
  • -j, --connect-jack-outputs: Connect JACK outputs
  • -F, --fast-render: Render MIDI to an audio file
  • -O, --audio-file-format: Audio file format for fast rendering
  • -r, --sample-rate: Set the sample rate
  • -T, --audio-file-type: Audio file type for fast rendering
  • -i, --no-shell: Don’t run in interactive mode
  • -s, --server: Start FluidSynth as a server process

A full list of command line parameters is given in the FluidSynth User Manual.

One-liner mode handles two everyday tasks without a lot of GUI hoopla:

  1. Play back MIDI given a list of MIDI files on the command line.
  2. Render a MIDI file to an audio file (fast render).

FluidSynth looks for command line options, followed by a SoundFont file, followed by a list of MIDI files. Enter the following command to play back a MIDI file (“EvilWays.mid” in these examples) through the ALSA audio port such as the 3.5mm stereo jack on the Raspberry Pi 2:

fluidsynth -a alsa -n -i /usr/share/sounds/sf2/FluidR3_GM.sf2 EvilWays.mid

The -a option selects the ALSA audio device, -n suppresses MIDI input, and -i suppresses interactive mode. ALSA should be configured to use the 3.5mm audio jack. (See the second article in this series about ALSA and JACK.)

If you prefer to use JACK instead of vanilla ALSA, start the JACK server running via qjackctl. (See the third article in this series about using JACK with a soft synth.) Then, enter the following command:

fluidsynth -a jack -j -n -i /usr/share/sounds/sf2/FluidR3_GM.sf2 EvilWays.mid

The -a option selects JACK and the -j option tells JACK to connect the audio output of FluidSynth to the system audio output. If you leave out the -j option, JACK will not make the audio connection and you will be left wondering why there isn’t any sound coming from your speakers! You can also make this connection in the qjackctl Connections or Patchbay windows. In practice, if you aren’t getting audio output or MIDI, check your connections in JACK — audio or MIDI connections may be missing.
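You can also script JACK connections with the small command line utilities that come with JACK (jack_lsp and jack_connect). The exact port names vary from system to system, so list them first; something along these lines should work, but treat the port names below as placeholders:

    jack_lsp                       # list the available JACK ports
    jack_connect fluidsynth:left  system:playback_1
    jack_connect fluidsynth:right system:playback_2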

The image below shows the audio connection from FluidSynth to JACK. (Click on the image to enlarge it to full resolution.) This is a snapshot of the qjackctl Connections window while FluidSynth is playing a MIDI file. The audio connection is broken when FluidSynth is done with playback (i.e., when FluidSynth exits).

[Image: QjackCtl Connections window showing FluidSynth connected to the system playback ports]

FluidSynth provides a way to fast render a MIDI file to a digital audio file. “Fast” is a relative term. Perhaps “non-realtime render” is a more accurate description. The following command:

fluidsynth -T wav -F EvilWays.wav /usr/share/sounds/sf2/FluidR3_GM.sf2 EvilWays.mid

converts a MIDI file (“EvilWays.mid”) to a WAV format audio file (“EvilWays.wav”). The -T option specifies the file format and the -F option specifies the name of the output file. The rendering process grinds on for a little while, so please be patient. Once you have the audio file, play it back using the ALSA aplay program:

    aplay -D hw:CODEC,0 EvilWays.wav

This example command sends digital audio to the CODEC audio device. Of course, you may use the built-in audio port or some other device. (See part 2 of this series for more examples. These tutorial articles build on each other!)

The way to get a list of audio types (-T) and audio file formats (-O) is confusing. You need to pass “help” to the appropriate command line option. (Grrrrrr.) The command:

    fluidsynth -O help

produces the following output on Raspbian JESSIE:

-O options (audio file format):
   'double','float','s16','s24','s32','s8','u8'

s8, s16, s24, s32: Signed PCM audio of the given number of bits
float, double: 32 bit and 64 bit floating point audio

The command:

    fluidsynth -T help

produces the output:

-T options (audio file type):
  'aiff','au','auto','avr','caf','flac','htk','iff','mat','mpc','oga',
  'paf','pvf','raw','rf64','sd2','sds','sf','voc','w64','wav','wve','xi'

auto: Determine type from file name extension, defaults to "wav"

Finally, server mode is needed when you want to run FluidSynth as a stand-alone server process. Qsynth is more convenient, so I won’t discuss server mode here just to keep things short.

I have to warn you, working with FluidSynth in either interactive mode or one-liner mode is not always smooth. Feedback is limited and you often have to work through rather cryptic error messages. Qsynth makes life much easier and more interesting.

Qsynth

Qsynth is a graphical user interface (GUI) for FluidSynth. Qsynth is based on the Qt framework and toolset for user interface design and implementation.

Qsynth is the way to go if you want to use FluidSynth as a soft synth with a MIDI controller or sequencer. It pairs up rather nicely with QJackCtl, too.

We intend to demonstrate Qsynth using an M-Audio Keystation Mini 32 controller. If you’re working along with me, plug a MIDI keyboard controller into an available Raspberry Pi 2 USB port. Launch qjackctl:

    qjackctl &

and start the JACK server by clicking the Start button in the QJackCtl control panel. JACK routes the audio to the selected audio output port. Then, launch qsynth:

    qsynth

Qsynth automatically searches for the JACK server and connects audio to it. Qsynth displays a control panel which resembles an old school MIDI module. The panel knobs control master gain and the reverb and chorus effects. There are also buttons to Restart FluidSynth, to stop stuck notes (Panic), to Reset settings and to view/edit MIDI channel settings (Channels).

[Image: Qsynth front panel]

At this point, you need a MIDI connection from the Keystation (or other MIDI controller) to Qsynth. In the demo, I clicked the Connect button on the QJackCtl panel and made the MIDI connection using the Connections window. (See the image below. Click on the image for full resolution.)

[Image: QjackCtl Connections window, Keystation connected to FluidSynth]

Select the Keystation entry on the left and select the FluidSynth entry on the right. Click the Connect button to make the MIDI connection. “FluidSynth” appears as a destination in the right hand column instead of “Qsynth.” Remember, Qsynth is a graphical front-end for a FluidSynth running in the background. The MIDI controller needs to communicate with the soft synth.

Play a few notes on the MIDI controller to make sure that audio and MIDI are working. Then, click the Setup button on the Qsynth front panel. Qsynth displays its Setup window which has four tabs: MIDI, Audio, Soundfonts and Settings. Click SoundFonts to go to the Soundfonts tab.

[Image: Qsynth Setup window, Soundfonts tab]

The SoundFonts tab displays the SoundFont files that are currently loaded into Qsynth (FluidSynth). Click on the Open button to load a SoundFont file like:

    /usr/share/sounds/sf2/FluidR3_GS.sf2

Use the Remove button to unload a SoundFont. Click the OK button when you are finished making changes.

If you start Qsynth with the General MIDI SoundFont and play notes on MIDI channel 1, you hear a grand piano voice. Click the Channels button on the front panel in order to change voices. With the Channels window open, double click on a row in the MIDI channel table. Should you prefer contextual menus instead, right click on a row and select Edit in the pop-up menu. This action gets you to the same place: the channel edit window (below).

[Image: Qsynth channel edit window]

The channel edit window displays a list of available SoundFont voices. Voices are organized and selected in the conventional way, namely, banks and individual programs (voices). Choose a different voice like Strings (General MIDI bank 0, program 48). Qsynth does not change the voice until you click the OK button to confirm the change. If you would like to browse and try voices, check the Preview box. When Preview is enabled, Qsynth temporarily changes the voice, letting you plink away on the controller and hear the voice before changing it (or perhaps just leaving things alone by cancelling).

Click the Quit button on the Qsynth front panel when you’re finished. Then, stop the JACK server using the QJackCtl control panel.

That’s all there is to it!

Copyright © 2016 Paul J. Drongowski

USB audio for Raspberry Pi

In the first few articles of this series:

Get started with Raspbian Jessie and RPi2
Get started: Linux ALSA and JACK
Raspberry Pi soft synthesizer: Get started

we used the built-in, 3.5mm audio output from the Raspberry Pi 2 (RPi2) to produce sound through powered monitors. If you tried this with your own RPi2, you realize that the sound quality is good enough for initial experiments, but not good enough for production — unless you’re into lo-fi.

This article starts with background information about the built-in audio circuit and why it is lo-fi. Then, I briefly mention a few alternative approaches for high quality audio output and audio input. Finally, I describe my experience bringing up the Behringer UCA-202 USB audio interface on RPi2 and Raspbian JESSIE.

Built-in audio

The Raspberry Pi Foundation has not yet published a schematic for the Raspberry Pi 2. However, Adafruit (and others) claim that the audio circuit is the same as the earlier, first generation Raspberry Pi. Let’s take a look at that.

The Raspberry Pi drives a pulse width modulated (PWM) signal into a passive low pass audio filter. (See the schematic below. Click on images to enlarge and get full resolution.)

[Image: Raspberry Pi built-in audio output schematic]

The PWM technique produces OK audio, but not good, clean audio. The software performs RPDF dithering and noise shaping to improve quality. Later RPi models (like the B+ and generation 2) have better power regulation and produce less digital noise at the audio output. There is much on-line debate about further improvements, but the PWM technique seems to be limited by the 11-bit quantization. (This latter point alone is subject to debate!)

JACK seems to modify the audio sample stream as well. I can hear a loud hiss from my speakers when JACK is running and sending audio through the built-in DAC circuit. Ideally, the speaker should be completely silent.

Raspberry Pi 2 does not have an audio input. Thud!

Alternatives to built-in audio

If you want better audio quality or need to record an external audio signal, there are two approaches:

  1. Buy and install an audio board.
  2. Buy and install a USB audio interface.

With respect to the first approach, I briefly explored two of the available Raspberry Pi add-on audio boards:

  1. Cirrus Logic Audio Card
  2. HiFiBerry DAC+ Pro

The Cirrus Logic board is well-specified with a WM5102 audio hub, WM8804 S/PDIF transceiver, and two WM7220 digital microphone integrated circuits. Those in the know will recognize these parts as Wolfson designs. The HiFiBerry DAC+ Pro is output only and uses an equally well-respected Burr Brown digital-to-audio converter (DAC).

Potential users are advised to be careful and to check compatibility with their particular model of Raspberry Pi. Adafruit cautions that the Cirrus Logic board may not be compatible with Raspberry Pi 2.

Both boards have drivers. However, both vendors eschew device configuration and prefer to distribute full OS images that include the requisite drivers. This approach puts existing users at a disadvantage. Now that I have Raspbian JESSIE installed and running, I would like to build and install the driver by itself, not write another micro SD card and go through the bring-up process again.

With these issues in mind, I decided to go the USB audio interface route. It’s also the lowest cost option for me because I already have a Behringer USB audio interface in hand.

Behringer UCA-202 audio interface

The Behringer UCA-202 is an inexpensive ($30 USD) USB audio input/output interface. Analog signals are transferred on RCA connectors (left/right IN and left/right OUT). The UCA-202 also has a headphone output and an S/PDIF optical output. The UCA-202 is bus-powered and class-compliant. Conversion is 16-bit at 32kHz, 44.1kHz or 48kHz. The UCA-202 has a sister, the UCA-222, with the same spec.

I have used the UCA-202 as a plug-and-play audio interface with both Windows and Mac OS X. Now, I can claim success with Raspbian JESSIE Linux, too. This thing is the “pocket knife” of low-cost USB audio interfaces.

Even though I’m using a Behringer UCA-202, the directions below should also apply to other class-compliant USB audio interfaces. It never hurts to search the Web for directions, problems and tips for your particular audio interface. Just sayin’.

Before plugging in the UCA-202, run aplay -l and aplay -L to see a list of the sound cards (-l) and PCMs (-L) that are installed on your machine.

Next, plug the UCA-202 into one of the USB ports. Run the aplay commands, again, and look for a new audio device. On my machine, a new sound card appears in the aplay -l output:

    card 1: CODEC [USB Audio CODEC], device 0: USB Audio [USB Audio]
      Subdevices: 1/1
      Subdevice #0: subdevice #0

The new sound card is named “CODEC”, it is ALSA card number 1, and it has one subdevice (number 0). The aplay -L output lists a whole slew of new PCMs:

    sysdefault:CARD=CODEC
        USB Audio CODEC, USB Audio
        Default Audio Device
    front:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        Front speakers
    surround21:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        2.1 Surround output to Front and Subwoofer speakers
    surround40:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        4.0 Surround output to Front and Rear speakers
    surround41:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        4.1 Surround output to Front, Rear and Subwoofer speakers
    surround50:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        5.0 Surround output to Front, Center and Rear speakers
    surround51:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        5.1 Surround output to Front, Center, Rear and Subwoofer speakers
    surround71:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
    iec958:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        IEC958 (S/PDIF) Digital Audio Output
    dmix:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        Direct sample mixing device
    dsnoop:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        Direct sample snooping device
    hw:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        Direct hardware device without any conversions
    plughw:CARD=CODEC,DEV=0
        USB Audio CODEC, USB Audio
        Hardware device with all software conversions

Not all of these PCMs are defined and configured by the way. Take note of the PCM named “hw:CARD=CODEC,DEV=0”. This is essentially the raw interface to the UCA-202. This PCM, at the very least, is defined.

Connect the audio outputs of the UCA-202 to powered monitors. Test the audio output interface by playing an audio (WAV) file:

    aplay -D hw:1,0 HoldingBackTheYearsDb.wav

or:

    aplay -D hw:CARD=CODEC,DEV=0 HoldingBackTheYearsDb.wav

Please note that you need to pass in the entire PCM name “hw:CARD=CODEC,DEV=0”.

Connect an audio source to the inputs of the UCA-202. Test the audio input interface by recording to an audio (WAV) file:

    arecord -D hw:CARD=CODEC,DEV=0 -f cd test.wav

I had trouble with the duration (-d) option. YMMV. Type Control-C to stop recording. Then, play back the test audio file through the UCA-202.

That’s all there is to it! The UCA-202 is truly plug and play.

Configure JACK and other applications

You need to tell the JACK audio server to use the UCA-202 instead of the RPi’s built-in audio device. Run qjackctl and click the Settings button. Select “hw:CODEC” as the Input Device and Output Device. (See the image below.) Click OK to return to the main control panel and start the JACK server. The server routes digital audio to and from the UCA-202 and JACK clients. Launch amsynth and click its Audition button. You should hear sound from the powered monitors that are connected to the UCA-202.

[Image: QjackCtl Settings with hw:CODEC selected as the Input and Output Device]

ALSA’s aplay and arecord commands are OK for testing, but are clunky for practical use. Let’s install Audacity:

    sudo apt-get install audacity

Audacity is the well-known cross-platform, open source, audio editing tool.

Edit Audacity’s preferences to set the audio interface. (See the following image.) If you want to use ALSA directly, set the interface Host to ALSA. Then set the Playback and Recording Devices to “USB Audio CODEC”. Audacity should now be able to play and record through the UCA-202.

[Image: Audacity device preferences using the ALSA host]

If you prefer to use JACK instead, once again edit Audacity’s preferences. (See the following image.) Set the interface Host to “JACK Audio Connection Kit”. Set the Playback and Recording Device to “system”. Make sure the JACK audio server is running. You may need to restart Audacity at this point. Play back an audio file or try recording a new file. JACK should serve the UCA-202 audio to/from Audacity.

[Image: Audacity device preferences using the JACK host]

Raspberry Pi soft synthesizer: Get started

Now let’s make some noise!

This article shows how to install, configure and play a simple software synthesizer (amsynth) on Raspberry Pi 2. The first part in this series is a quick installation and configuration guide for Raspbian Jessie Linux. The second part is an introduction to the Linux audio infrastructure (ALSA and JACK). Please consult these articles for background information. I assume that you know a little about JACK and ALSA aconnect in this article.

amsynth

amsynth is a basic virtual analog (i.e., analog modeling) synthesizer for Linux. It is polyphonic (16 voices max). Each voice has two oscillators, a 12 or 24dB per octave resonant filter and dual ADSR envelope generators. All can be modulated using a low frequency oscillator (LFO). The synth also has distortion and reverb effects. Read more about amsynth at the amsynth web site.

amsynth is a good starting point for exploration since it is easy to set up and use. It can operate standalone (JACK, ALSA or OSS) or as a plug-in (DSSI, LV2, VST). When amsynth launches, it automatically searches for a JACK audio server. If it cannot find a JACK server, it switches to ALSA audio.

Run the following command to install amsynth:

    sudo apt-get install amsynth

The package manager fetches amsynth and the libraries, etc. that amsynth needs.

I’m going to show amsynth running on ALSA and JACK in this tutorial. I had the most success running on JACK and I recommend that approach for practical work. My goal is to play amsynth from an external MIDI keyboard — an M-Audio Keystation Mini 32 in this demonstration.

amsynth running on ALSA

ALSA seemed like the fastest way to test amsynth. Indeed, it came right up and I was able to play amsynth using the Keystation once I connected the ALSA MIDI ports for amsynth and the Keystation.

To repeat my initial experiment, start two terminal windows on the desktop. In the first window, run amsynth:

    amsynth

Simple, huh? No command line arguments to mess with. You should see the amsynth front panel as shown in the image below. Notice the status at the bottom of the amsynth front panel. The synth expects to use ALSA for both MIDI and audio.

[Image: amsynth front panel running on ALSA]

With the Keystation plugged in, run aconnect in the second window to identify the available ALSA MIDI ports:

    > aconnect -i
    client 0: 'System' [type=kernel]
        0 'Timer           '
        1 'Announce        '
    client 14: 'Midi Through' [type=kernel]
        0 'Midi Through Port-0'
    client 20: 'Keystation Mini 32' [type=kernel]
        0 'Keystation Mini 32 MIDI 1'
    client 128: 'amsynth' [type=user]
        1 'MIDI OUT        '
    > aconnect -o
    client 14: 'Midi Through' [type=kernel]
        0 'Midi Through Port-0'
    client 20: 'Keystation Mini 32' [type=kernel]
        0 'Keystation Mini 32 MIDI 1'
    client 128: 'amsynth' [type=user]
        0 'MIDI IN         '

The aconnect -i command displays ALSA MIDI sender ports including the MIDI coming in from the Keystation. The aconnect -o command displays the ALSA MIDI receiver ports that accept MIDI data including the MIDI IN port belonging to amsynth.

Use aconnect, again, to patch the Keystation to amsynth:

    aconnect 20:0 128:0

ALSA ports are identified by client and client-specific port number. The first port in the command line above is the sender port and the second port is the receiver port.

Enter aconnect -l to display port and connection status. Here is what I saw after connecting the Keystation to amsynth:

    client 0: 'System' [type=kernel]
        0 'Timer           '
        1 'Announce        '
    client 14: 'Midi Through' [type=kernel]
        0 'Midi Through Port-0'
    client 20: 'Keystation Mini 32' [type=kernel]
        0 'Keystation Mini 32 MIDI 1'
            Connecting To: 128:0
    client 128: 'amsynth' [type=user]
        0 'MIDI IN         '
            Connected From: 20:0
        1 'MIDI OUT        '

Click the Audition button on the front panel. amsynth plays a sound. Hit the keys on the Keystation and amsynth plays the notes.

Now that you’re in business, here are a few things to do:

  • Try different presets.
  • Turn the virtual knobs while holding a note.
  • Twist MIDI controller knobs and watch amsynth track the changes.
  • Explore amsynth’s menus.

You probably noticed a few greyed out items in the Utils menu:

  • MIDI (ALSA) connections
  • Audio (JACK) connections

These items refer to utility programs that make MIDI and audio connections (kaconnect, alsa-patch-bay, qjackconnect). I couldn’t locate pre-built versions of these programs for Raspbian. This isn’t a big deal, since we’re going with JACK anyway.

If you followed these directions and played amsynth with a MIDI keyboard of your own, you probably noticed the latency (lag) between striking a key and hearing a sound. The lag under ALSA alone is unacceptable — another reason to go with JACK.

Should you need a virtual keyboard, here are two Linux applications for ya:

    vkeybd         Virtual MIDI Keyboard
    vmpk           Virtual MIDI Piano Keyboard

Install these with the sudo apt-get install command.

amsynth running on JACK

Let’s run amsynth alongside JACK for audio.

JACK is a server that runs as a separate Linux process. A process running a system service like JACK is called a “daemon” in Linux terminology. (Just in case you see this term when reading supplementary articles on the Web.) We need to start JACK running before amsynth so that amsynth can discover the JACK server and connect to it.

Here is the general flow of things when getting down to work:

  1. Plug in your MIDI controller.
  2. Launch qjackctl.
  3. Change JACK settings, if necessary.
  4. Start the JACK server.
  5. Launch amsynth or other JACK-aware application.
  6. Make connections in qjackctl or ALSA.

Full disclosure, I first started JACK from the command line using a variety of suggested options and had only limited success. I got a few runtime errors along the way and the latency was unacceptably long.

These first experiments produced one useful tip: Add yourself to the Linux audio group. The notion of a group in Linux is similar to the different classes of users that you find on other operating systems, e.g., the group of Administrator users on Windows. Users belonging to the audio group have special rights which improve the performance of realtime applications like a soft synthesizer. These rights include the ability to reserve and lock down memory and to run time-critical operations at a higher priority.
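For reference, on Debian-style systems (including Raspbian) these rights are usually granted through PAM limits. The jackd package typically installs a file along the lines of the example below; the exact path and values may differ on your system.

    # /etc/security/limits.d/audio.conf (typical contents; verify on your system)
    @audio   -  rtprio     95
    @audio   -  memlock    unlimited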

The Raspbian Jessie image comes equipped with the audio group. The following command checks to see if the audio group is already defined (just in case you’re working on a different version of Linux):

    grep audio /etc/group

If this command doesn’t display anything, then you need to create the audio group yourself. The command:

    sudo groupadd audio

adds the audio group. You will need to define the rights and privileges for the audio group — an expert task that I will not explain here. See the references at the bottom of this page for more details.

Run the following command to display your group membership:

    groups

If “audio” is not listed in the output, then you need to add yourself to the audio group:

    sudo usermod -a -G audio XXX

where XXX is your user name. The next step is vital to your sanity. Log out. Log all the way out. If you logged in from the text shell and started the X Windows system, then leave X Windows and log out from the text shell. Then, log back in. Run groups and the system should now show you as a member of the audio group. Group membership is established and inherited when you log in.

Finally, it’s time to start JACK. Fortunately, JACK has a graphical control panel called qjackctl. The control panel uses the cross-platform Qt graphical user interface (GUI) package which supplies all of the buttons, drop-down lists and so forth. Start the control panel with the following command:

    qjackctl &

The ampersand at the end of the command line is not accidental. It tells the Linux shell to run qjackctl and detach the control panel from the terminal window. This leaves the terminal window live and ready to accept new commands.

The qjackctl control panel is shown in the following image.

[Image: QjackCtl control panel]

Click the Setup button in order to make a few small changes. Change the Sample Rate parameter to 44100Hz, which is the rate preferred by amsynth. Set the Periods/Buffer parameter to 4. If the number of periods is less than 4, you will probably hear noisy, glitchy audio. JACK and amsynth work just fine when the Output Device is set to “(default)”. I decided to set the Output Device parameter by hand to “hw:ALSA,0” as a way of testing the ALSA settings. Please see the settings that I used in the following image. (Click images to get full resolution.)

[Image: QjackCtl Setup window]

Now launch amsynth:

    amsynth

The soft synth will search for the JACK audio server and should connect to it.

You could follow the procedure in the ALSA section (above) to connect the Keystation to the MIDI IN belonging to amsynth. However, qjackctl has two convenient ways to make MIDI connections:

  1. Connections
  2. Patchbay

These features reside behind the Connect and Patchbay buttons. They each have similar capabilities and allow you to make connections between MIDI and audio ports. The main difference is persistence or lack thereof. Connections are temporary and are broken when a client is terminated. Connections are forgotten when the JACK server is terminated, too. The Patchbay lets you define, save and load port-to-port connections in a file. JACK is also pretty good about restoring the active patchbay even if you haven’t started applications, soft synths, etc. in an orderly way. (JACK needs to be running first, of course.)

I made connections using both techniques just for fun. The image below is a snapshot of the Connections dialog box. There are three tabs — one for each type of connection (port). I made MIDI connections using the ALSA tab because the Keystation MIDI ports were not registered with JACK. (They did not appear on the MIDI tab even though the MIDI tab did show amsynth‘s MIDI ports.) To make a connection, just select a sender in the left column and a receiver in the right column. Then click the “Connect” button. If you terminate amsynth or JACK, the connection is lost and forgotten.

[Image: QjackCtl Connections window, ALSA tab]

The Connections dialog is a good place to experiment while you’re getting your virtual, in-the-box studio together. When you have a set-up that you like, it’s time to capture the set-up in the Patchbay. First, click the “Patchbay” button on the qjackctl control panel. Click the New button. Use the appropriate Add button to add output sockets to the left column or to add input sockets to the right column. Then, choose two ports and click the Connect button. After making connections, save the set-up to a file. The interface is intuitive. You can save and load as many different set-ups as you would like (as long as there is free drive space!)

[Image: QjackCtl Patchbay window]

When you quit JACK, it remembers the last active Patchbay set-up. JACK recalls this set-up when you launch JACK, again. In case you’re wondering, qjackctl saves its configuration (settings) in:

    /home/XXX/.config/rncbc.org/QjackCtl.conf

where “XXX” is your Linux user name. The “.” character at the beginning of “.config” hides the “.config” file. Use ls -a to show all files in a directory including the hidden ones. The JACK daemon saves its configuration in:

    /home/XXX/.jackdrc

where “XXX” is your linux user name. This, by the way, is your home directory. Linux applications typically store configurations in hidden files within your home directory. The “.jackdrc” file contains the command that was last used to launch JACK, e.g.,

    /usr/bin/jackd -dalsa -dhw:0 -r44100 -p1024 -n4 -D -Phw:ALSA,0

This is good to know when you want to find out the initial launch conditions for the JACK daemon.

The one aspect that qjackctl does not handle well is the deletion of Patchbay set-up files. qjackctl stores a Patchbay set-up in an XML file. If you delete or move the XML file, then you will get a warning message like:

    Could not load active patchbay definition. Disabled.

You will need to delete the reference to the missing file from the “QjackCtl.conf” file.

At this point, you should be able to play amsynth from an external MIDI controller with acceptable latency. Have fun!

Finally, I found three well-written guides to JACK, qjackctl, and the JACK patchbay. Here are the links. If you read my introduction to ALSA and JACK and this article, then you have sufficient background to dive into the finer points.

Demystifying JACK – A Beginner’s Guide to Getting Started with JACK
HOW-TO QjackCtl Connections
QjackCtl and the Patchbay

If you enjoyed this article, then be sure to check out:

Qsynth and FluidSynth on Raspberry Pi: The basics

Copyright © 2016 Paul J. Drongowski

Get started: Linux ALSA and JACK

Before we dive into specific music applications, I need to provide a little background information about audio and MIDI support on Linux.

If you’re coming from Mac OS X or Windows, you may not have heard very much about the Linux way of doing audio and MIDI. Seems like the “mainstream media” don’t want to have much to do with Linux. Linux has a very well-developed infrastructure for audio and MIDI. Linux audio is a “stack” (a layer cake) with audio/MIDI applications on top:

  • Audio applications
  • JACK (Jack Audio Connection Kit)
  • ALSA (Advanced Linux Sound Architecture)
  • Linux kernel

You probably haven’t heard about JACK and ALSA before, so a little explaining is in order.

The Advanced Linux Sound Architecture (ALSA) uses the kernel to implement low-level — but extremely powerful — audio and MIDI features. ALSA provides several useful applications, but I like to think of ALSA as a tool to build higher level tools. ALSA is the layer that supports “soundcards,” which is the Linux catch-all term for hardware audio interfaces, MIDI interfaces, and more. Go to the ALSA project homepage to get more information from the developer’s perspective.

You are far more likely to interact with the Jack Audio Connection Kit (JACK) than ALSA. JACK is an audio/MIDI server that provides audio and MIDI services to JACK-based applications (i.e., applications using the JACK API). The list of JACK-enabled applications is impressive. In fact, this list is a rather good summary of the audio and MIDI applications that are available on Linux! Check out the JACK project page to get more information from the developer’s point of view. End-users (us normal people) should read the JACK FAQ which covers some of the finer points about JACK.

ALSA utils

The ALSA utility applications are collectively known as “ALSA utils.” Use the apt-get command to download and install the ALSA utils:

    sudo apt-get install alsa-utils

Here is a list of the ALSA utility applications:

    alsactl    Change and save settings for an audio device
    amixer     Adjust volume and sound controls from the command line
    alsamixer  Adjust volume and sound controls (ncurses visual mixer)
    aconnect   Make MIDI connections
    aseqview   Display ALSA sequencer events (e.g., note ON, note OFF)
    aplay      Play back an audio file from the command line
    arecord    Record an audio file from the command line

Let’s take a look at a few of these applications in action.

Test speaker output

Although not strictly part of ALSA utils, speaker-test is a quick way to make sure that the built-in Raspberry Pi audio output is properly connected and configured.

First, connect the RPi2 audio output to your powered monitors using a 3.5mm to whatever patch cable. The Raspberry Pi built-in audio can be routed to either the 3.5mm audio jack (“analog”) or to the HDMI port. Enter the command:

    amixer cset numid=3 N

to route the built-in audio. Replace “N” with one of the following choices:

    0: auto   1: analog   2: HDMI

In this case, use N=1 to route the audio to the 3.5mm audio jack. Then, run the command:

    speaker-test -t sine -f 440 -c 2

to send a 440Hz tone to the audio output. You should hear a test tone from your speakers.

If you don’t hear a test tone, double check your connections. You may need to add the current user to the audio group: sudo adduser XXX audio, where “XXX” is the user’s name. (I don’t believe this is strictly necessary.)

Play an audio file

Once speaker output is working, why not play an audio file? The aplay program plays an audio file. It supports just a handful of audio formats: voc, wav, raw or au. The default format is WAV.

    aplay -c 2 HoldingBackTheYearsDb.wav

The -c option specifies two channels. (The default is one channel of audio.)

If you listen carefully, you’ll notice that the built-in audio is a little bit noisy. I’ll get into the issue of audio quality in a future blog entry.

The command aplay -l displays a list of all sound cards and digital audio devices.

ALSA mixing

There are two ALSA utility mixer applications: amixer and alsamixer. amixer is a command line tool that controls one or more soundcards. The command (which does not have any command line arguments):

    amixer

displays the current mixer settings for the default soundcard and device as shown below:

    Simple mixer control 'PCM',0
      Capabilities: pvolume pvolume-joined pswitch pswitch-joined
      Playback channels: Mono
      Limits: Playback -10239 - 400
      Mono: Playback -2000 [77%] [-20.00dB] [on]

The output shows a list of the simple mixer controls at your disposal.

The alsamixer application is a bit more visual. alsamixer turns the terminal window into a visual mixer. Try:

    alsamixer

and see. Start alsamixer in one window and play an audio file in different window. Use the UP and DOWN arrows to control the playback gain (volume). Use the escape key (ESC) to exit alsamixer.

MIDI patch-bay

ALSA provides a virtual MIDI patch-bay that lets you interconnect MIDI senders and receivers. MIDI data is communicated from sender ports to receiver ports. A port may belong to either a MIDI hardware interface or a software application. The virtual patch-bay allows for very flexible, powerful MIDI data routing.

The aconnect utility application both displays the status of the virtual patch-bay and makes connections. First off, we need to know the available sender and receiver ports. The command:

    aconnect -i

displays a list of the sender ports including external MIDI input ports. External MIDI input ports (-i) are ALSA sender ports because they send MIDI data to ALSA receiver ports. I connected a Roland UM-2ex MIDI interface to one of the RPi’s USB ports and got the following output with aconnect -i:

client 0: 'System' [type=kernel]
    0 'Timer           '
    1 'Announce        '
client 14: 'Midi Through' [type=kernel]
    0 'Midi Through Port-0'
client 20: 'UM-2' [type=kernel]
    0 'UM-2 MIDI 1     '

The UM-2ex has one 5-pin MIDI IN (client 20, port 0).

Likewise, the command:

    aconnect -o

displays a list of the receiver ports including external MIDI output ports. External MIDI output ports (-o) are ALSA receiver ports because they receive MIDI data from ALSA sender ports. Here is the output when the UM-2ex is connected:

client 14: 'Midi Through' [type=kernel]
    0 'Midi Through Port-0'
client 20: 'UM-2' [type=kernel]
    0 'UM-2 MIDI 1     '
    1 'UM-2 MIDI 2     '

The UM-2ex has two 5-pin MIDI OUTs (client 20, port 0 and port 1).

The notions of sender and receiver may seem a little confusing, especially in the context of external MIDI INs and OUTs. Please keep in mind that “send” and “receive” are defined with respect to ALSA itself (and ALSA objects).

Now, I want to really blow your mind. Let’s connect both the Roland UM-2ex and an M-Audio Keystation Mini 32 to the RPi2. Here is the output generated by aconnect -i:

client 0: 'System' [type=kernel]
    0 'Timer           '
    1 'Announce        '
client 14: 'Midi Through' [type=kernel]
    0 'Midi Through Port-0'
client 20: 'UM-2' [type=kernel]
    0 'UM-2 MIDI 1     '
client 24: 'Keystation Mini 32' [type=kernel]
    0 'Keystation Mini 32 MIDI 1'

We can see the MIDI IN for the UM-2 and the Keystation.

Here is the output generated by aconnect -o:

client 14: 'Midi Through' [type=kernel]
    0 'Midi Through Port-0'
client 20: 'UM-2' [type=kernel]
    0 'UM-2 MIDI 1     '
    1 'UM-2 MIDI 2     '
client 24: 'Keystation Mini 32' [type=kernel]
    0 'Keystation Mini 32 MIDI 1'

We see the MIDI OUTs for the UM-2 and the Keystation.

Let’s patch the Keystation (client 24, port 0) to the MIDI OUT of the UM-2ex (client 20, port 0):

    aconnect 24:0 20:0

The sender port is (24:0) and the receiver port is (20:0). MIDI messages are sent from the Keystation to the MIDI OUT of the UM-2ex. If you physically connect the MIDI IN of a tone module or synthesizer to the UM-2’s MIDI OUT, you can now play the tone module or synth using the Keystation. Guess what we just built? A USB MIDI to 5-pin MIDI bridge. Ever need to control an old school 5-pin MIDI synth using a new school USB-only MIDI controller? Now you can with Raspberry Pi and ALSA!
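
By the way, the client numbers (20, 24 and so on) are not fixed; they can change when devices are plugged into different USB ports or enumerated in a different order. Here is a minimal sketch of a shell script that looks up the client numbers by name before patching. The names (“Keystation” and “UM-2”) match my gear; substitute whatever aconnect reports for yours.

    #!/bin/sh
    # Look up the ALSA client numbers by name, then patch sender to receiver.
    SRC=$(aconnect -i | grep '^client' | grep 'Keystation' | awk '{print $2}' | tr -d :)
    DST=$(aconnect -o | grep '^client' | grep 'UM-2' | awk '{print $2}' | tr -d :)
    aconnect "$SRC":0 "$DST":0
    aconnect -l    # display the new connection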

Run aconnect -l to display the connection status. Here is the output for the virtual patch bay:

client 0: 'System' [type=kernel]
    0 'Timer           '
    1 'Announce        '
client 14: 'Midi Through' [type=kernel]
    0 'Midi Through Port-0'
client 20: 'UM-2' [type=kernel]
    0 'UM-2 MIDI 1     '
	Connected From: 24:0
    1 'UM-2 MIDI 2     '
client 24: 'Keystation Mini 32' [type=kernel]
    0 'Keystation Mini 32 MIDI 1'
	Connecting To: 20:0

The output shows the connection from the Keystation to the UM-2ex.

To break the connection, run the command:

    aconnect -d 24:0 20:0

Run aconnect -l, again, and you’ll see that the connection has been removed.
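
aconnect also has a -x option that removes all connections in one shot, which is handy when you’ve lost track of your patches:

    aconnect -x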

More resources

If you’re a long-time reader of my site, you know that I blogged about the USB to 5-pin MIDI bridge technique before. If you have a Raspberry Pi and know how to run aconnect, you have a bridge!

The Ardour folks have two good articles about JACK on Linux (here and here).

New to Linux (Raspbian Jessie) on Raspberry Pi? Then be sure to check out my article about getting started with Raspbian Jessie and Raspberry Pi.

Get started with Raspbian Jessie and RPi2

The Raspberry Pi 2 Model B (RPi2) is a computational gem. For $40 USD, you get a 900MHz quad-core ARM processor, a built-in graphics core, 1GByte of RAM, 4 USB ports, HDMI, an Ethernet port, audio output and a Micro SD card slot. (The RPi2 does not have a built-in audio input.) This platform can handle most of the everyday tasks that people sling at it and could easily replace platforms costing 10 times as much. Add in the cost of a keyboard/mouse, display and Micro SD card, and the total package price tips the scale at a little over $100.

When the RPi2 was introduced in February 2015, the Raspberry Pi Foundation released Raspbian Wheezy, the first Linux distribution supporting the RPi2. I installed the first release last February, and it felt, well, kind of wheezy.

I just installed the latest release, Raspbian Jessie (February 2016) and all I can say is “Wow.” This release feels finished. If you tried Raspbian on RPi2 before and were disappointed, it’s time to come back into the fold. (Download Raspbian.) This is the release that should have been there at the RPi2 launch! (See a quick introduction to the Jessie desktop.)

With a foot of snow on the ground, it seems like an opportune time to see what RPi2 and Jessie can do for musicians. I intend to try the RPi2 as a synthesizer and will post my experiences here. In the meantime, here are some tips for getting started with RPi2 and Jessie.

Linux requires a little more work to get started than Mac OS X or Windows. However, if you put in 10 or 20 minutes, you can have a quad-core music making machine for cheap. Shucks, even OS X and Windows 7 need to know your account name, etc., and Jessie doesn’t require much more information than that. So, what follows is my personal checklist for getting started with Raspberry Pi 2.

Hardware

Of course, you need the hardware. I bought a Canakit Raspberry Pi 2 kit last year. The Canakit includes most of the accessories that you need to get started. I imagine that future Canakits will include the latest Raspbian release (Jessie) pre-installed on a Micro SD card.

My RPi2 lives in a cheap plastic case. That’s good enough — no fans, no heatsinks. I use an old HP monitor with an HDMI input and an even older Logitech wireless keyboard and mouse. The Logitech wireless interface takes up a single USB port, leaving three open USB ports. I connect the RPi2 Ethernet port directly to our router since I like to have the network up and running right from the start. The Canakit package included a USB Wi-Fi interface, but I never felt motivated to bring it up. Cables work well, too.

Once you have everything connected, it’s time to move on to software.

Download and install

Since our house is littered with computers, I first downloaded Raspbian Jessie to a Windows 7 PC. I followed the installation guide for Windows and wrote the disk image to an 8GByte Micro SD card. I do not recommend using anything smaller than an 8GByte card since you will need room for all the applications, samples and stuff for your projects.

The installation guide recommends the Win32DiskImager utility from Sourceforge. This utility works like a champ. Just be sure that you write to the correct destination disk!

If you’re installing from Linux or Mac OS X, there are installation guides for you, too. I do not recommend upgrading an old Wheezy system to Jessie. I read through the process and it is far easier to do a clean install.
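
For reference, here is a minimal sketch of the usual Linux approach. The image file name is just a placeholder, and /dev/sdX must be replaced with your card’s actual device name; check it with lsblk first, because dd will cheerfully overwrite the wrong disk.

    lsblk                                              # identify the Micro SD card (e.g., /dev/sdX)
    sudo dd if=raspbian-jessie.img of=/dev/sdX bs=4M   # write the image (placeholder file name)
    sync                                               # flush all writes before removing the card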

First and second boot

Plug the Micro SD card into the RPi2 and apply power (i.e., plug in the power supply). It takes a few moments for the RPi2 to boot the kernel (AKA “the OS”) and start the X Window System. The stock RPi2 boots into the desktop. The default user name is “pi” and the default password is “raspberry”.

At this point, it’s important to get a few housekeeping tasks out of the way. These tasks are similar to the ones you need to perform after installing OS X or Windows. These tasks use the raspi-config utility.

First, launch a terminal window by clicking on the terminal window icon in the task bar at the top of the screen. This action brings up a textual “shell” where you enter commands. Enter:

    sudo raspi-config

to launch the raspi-config utility. The sudo tells Linux that you want to use administrator (“super user”) privileges when running raspi-config. If Linux prompts for a password, enter “raspberry”, the default password for the default user, “pi”.

raspi-config displays a rather 70s-ish interface with several options. Use the arrow keys to move between items. Use the ENTER key to select an item.

The disk image which you wrote to the Micro SD card probably didn’t use all of the available space on the card. So, your first job is to extend the Linux file system to use the entire Micro SD card. Use the arrow keys to move to the “Expand filesystem” item. Hit ENTER. When prompted to reboot, choose OK. You need to reboot to get access to the full capacity of the Micro SD card.
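
After the reboot, you can confirm that the file system now spans the whole card:

    df -h /    # the size of the root file system should match the card's capacity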

After reboot, you should be back in the desktop again. Launch a terminal window to get the shell. Enter sudo raspi-config to perform a few more housekeeping steps related to internationalisation. Use the arrow keys to move to the “Internationalisation options” item and hit ENTER.

It’s called “Internationalisation,” but it’s really configuring your RPi2 to your local country or region. raspi-config displays a short list of options. Choose the “Change timezone” option, follow the on-screen directions, and set your local timezone.

Next, choose your locale. The locale controls language and formatting of date, time, currency, etc. The default locale is set for Great Britain. If you’re in the United States, for example, select one or more locales for the US, e.g., “en_US.UTF-8 UTF-8”. The text interface is a little weird here — use the SPACE key to mark one or more locales and hit ENTER when finished.

Then, change the keyboard layout. Follow the on-screen directions to find a close match for the keyboard that is connected to your RPi2. When you see questions about “compose key,” etc., fake your way through the menus. You probably won’t be doing this stuff anyway.

Finally, reboot. Rebooting the system at this point makes sure that the locale and other internationalization changes kick in.

Explore and browse

After reboot, the RPi2 should again return to the desktop. Now it’s time to explore the desktop a little bit. I recommend taking a tour through the start menu in the upper left corner of the desktop. When you find a menu item for the browser, try it out! If all is good with your network connection, then you should be able to access the Web. This is my lifeline to helpful information about Raspbian Linux and the many tutorials and HOW-TOs on the Web.

Also, check out the File Manager. This is a graphical way to browse through your files. Linux uses a hierarchical file system where absolute path names begin from the root, which is symbolized by the slash (“/”) character. More about this in a minute.

Just a few more things

I recommend creating a new user for yourself and keeping the default user “pi” around for emergencies or administration. The Raspberry Pi folks have a nice introduction to user management. It’s a short read and now that you have the browser running, why not?

If you’re too lazy to read the guide, then use the following command to create a new user:

    sudo adduser XXX

where “XXX” is the name of the new user. The system prompts for the new user’s password. This part is up to you! You can remove the password for a user by entering the command:

    sudo passwd -d XXX

where “XXX” is the name of the user. The passwd command can be used to change your own password, too. If you want to remove a user, then run the command:

    sudo userdel -r XXX

where “XXX” is the name of the user to be removed.

The guide to user management describes “sudoers” and how to grant permission to a user to execute the sudo command. This process changes an internal user privileges file, so you must be careful. Enter the command:

    sudo visudo

and find the line:

    root	ALL=(ALL:ALL) ALL

Create a new line using this line as a model, except replace “root” with the name of the user who is to be a new sudoer, e.g.,

    XXX		ALL=(ALL:ALL) ALL

Save the changes and exit the editor. Oh, the editor here is nano, which is one of the pre-installed applications.
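
On Raspbian (and Debian in general), the default sudoers file already grants sudo rights to members of the “sudo” group, so an alternative to editing the file is to simply add the user to that group:

    sudo adduser XXX sudo    # takes effect at the user's next login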

Users have their homes in the directory “/home”. If the user’s name is “XXX”, then their home directory is named “/home/XXX”. Here are a few commands that you can use to navigate through the file system via the shell.

    ls          List the contents of a directory
    cd          Change the working directory
    pwd         Display the current working directory
    mkdir       Make a new directory
    rmdir       Remove a directory
    cp          Copy a file (or directory)
    mv          Move or rename a file (or directory)
    rm          Remove a file
    cat         Display file contents
    more        Display file contents one page at a time
    nano        Edit a text file
    date        Display the current date and time

If you need help, you can always enter the desired command with the “--help” option. Or, you can display the manual page for the command, e.g., “man ls”. All of these commands have many options, making them quite flexible and powerful.
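
For example, here is a short session that exercises a few of these commands (the “projects” directory name is just an example):

    cd /home/XXX      # go to your home directory (plain "cd" works, too)
    mkdir projects    # make a new directory
    cd projects       # step into it
    pwd               # prints /home/XXX/projects
    ls -l             # long-format listing of the (currently empty) directory
    cd ..             # move back up one level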

You can find more basic information about using Linux on this page.

Install applications

Speaking of new applications, you can install a new application from the command line if you know the package name. This is the most straightforward way to install a package (application). For example, I like the emacs text editor. The following command installs emacs:

    sudo apt-get install emacs

The apt-get command searches for the package on-line, downloads it and installs it. The command also installs any other packages which the target package needs (e.g., libraries). In Linux terms, it resolves dependencies. Installation is sometimes slow, so please be patient.
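
If apt-get reports that it can’t find a package, refresh the package lists first. You can also search the available packages by keyword:

    sudo apt-get update       # refresh the package lists from the repositories
    apt-cache search editor   # search the available packages by keyword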

There is also a package manager with a nice user interface. Find the package manager in the desktop start menu and browse through the available applications. I’ll revisit this subject in future posts when we discuss specific music-oriented applications.

USB flash drives

The need for a USB flash drive is sure to come up. I recommend this guide to adding and using USB drives. Here are a few quick commands for reference. Create a mount point:

    sudo mkdir /mnt/usb_flash

Mount the USB flash drive (after inserting the drive):

    sudo mount -o uid=XXX,gid=XXX /dev/sda1 /mnt/usb_flash

where XXX is your user name. Unmount the flash drive when you’re finished using it:

    sudo umount /mnt/usb_flash

You can display the currently mounted file systems with the command:

    df -h

This command also shows the amount of used and available space in the various file systems and drives.
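
By the way, if you’re not sure which /dev name your flash drive received, list the block devices before mounting:

    lsblk    # USB flash drives usually show up as sda1, sdb1, etc.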

Raspbian Jessie is smart enough to recognize when a USB drive is inserted. It displays the File Manager automatically. If you are a File Manager type of person, definitely go this route. You must eject (unmount) the drive before removing it. The EJECT button appears in the upper right hand corner of the desktop.

Boot to a login shell

Raspbian Jessie boots into the desktop as the default user “pi”. You probably want to boot into your own account instead. At the moment, you need to dig into some system files to make the change and I simply don’t recommend diving into that, especially if you are new to Linux.

Instead, you can easily change the boot behavior using raspi-config. Launch raspi-config and choose the “Enable Boot To Desktop” option. Then, choose to boot to the command line. The next time you boot, the system will display a login prompt where you can enter your user name and password. Once your identity is validated, Linux puts you directly into a command line shell. If you want to, you can enter any Linux command into the shell and do some work. When you want to start the X Window System and the desktop, just type:

    startx

Read the on-line documentation about raspi-config for more information.

Shutting down

All operating systems like to shut down in an orderly way. OSes often keep data in temporary buffers that need to be flushed to disk or flash memory. An orderly shutdown helps keep data in a consistent, correct state.

You can shut down the system through the desktop start menu. (Yeah, that sounds oxymoronic.) You can also shut down the system via the command line shell. Just execute the command:

    sudo shutdown -h now

The -h option asks Linux to halt the processor after shutting down. The shutdown command has other options for rebooting and so forth:

    sudo shutdown -r now
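
The time argument doesn’t have to be now, either. You can schedule a shutdown a few minutes out or cancel one that is pending:

    sudo shutdown -h +10    # halt in ten minutes
    sudo shutdown -c        # cancel a pending shutdown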

Here’s another way to force a reboot. Just enter:

    sudo reboot

on the command line.

If you enjoyed this introduction, you might want to check out the Raspberry Pi tips and tricks page that I wrote for the first generation Pi.

Well, that wasn’t so bad, was it? Good luck and have fun with Raspberry Pi 2!

All site content is Copyright © Paul J. Drongowski unless otherwise indicated.

PERF tutorial part 3 is now on-line

Just wrapped up Part 3 of the Linux-tools PERF tutorial.

The tutorial now consists of three parts. Part 1 covers the most basic PERF commands and shows how to find program hot-spots using software performance events. Part 2 discusses hardware performance events and performance counters, and demonstrates how to measure hardware performance events using PERF counting mode. Part 2 introduces several derived performance metrics like instructions per cycle (IPC) and applies these metrics to the sample application programs.

Part 3 is the newest addition to the tutorial series. It builds on parts 1 and 2, showing how to use hardware performance events and counter sampling to profile an application program. Part 3 discusses sampling period and frequency, the sampling process, overhead, statistical accuracy/confidence and other practical concerns.

I hope you find the PERF tutorial to be useful in your work! Although I produced the example data on the ARM-based Raspberry Pi, the commands and techniques will also work on x86.