Programming languages as musical instruments

Alan Blackwell and Nick Collins (2005). The Programming Language as a Musical Instrument. In P. Romero, J. Good, E. Acosta Chaparro & S. Bryant (Eds.), Proc. PPIG 17, pp. 120–130.

Any musician who wants to be competent with digital production tools has to take on qualities of a programmer. Music notation is itself a “programming language” for human musicians, complete with loops and subroutines. Electronic music collapses composition, performance and recording into the same act.

How do you differentiate a “live” electronic performance from playing back canned sequences? One way to make the presentation into an actual performance is to include improvisation, or at least the possibility of it. Morton Subotnick is a good example. He considers his compositions to consist of his synthesizer patches and sequences. His performances, on the other hand, are mostly improvisational, deploying his preset elements as he sees fit in the moment. This is similar to the methods of jazz musicians, spontaneously recombining and hybridizing pre-learned riffs and patterns.

Subotnick schools me in Buchla-lore

Performance demands a close relationship between gesture and result. Which tools and interfaces are best suited to live computer music? Blackwell and Collins approach the question as scholars of the psychology of human-computer interfaces. Their motivation for studying laptop musicians lies in the way that such non-traditional “end-user” programmers can grant valuable insight into software design generally:

We believe that the study of unusual programming contexts such as Laptop music may lead to more general benefits for programming research. This is because significant advances in programming language design have often arisen by considering completely new classes of user who might engage in programming activity… [E]nd-user programmers should not be regarded as ‘deficient’ computer programmers, but recognised as experts in their own right and in their own domain of work.

An admirable sentiment.

Blackwell and Collins survey the landscape of laptop performance tools. At one end of the spectrum, they place Ableton Live and Reason, which are (comparatively) user-friendly but inflexible. In the middle are less user-friendly but more flexible graphical programming environments like Max/MSP or Pd. The far end of the spectrum is occupied by the least user-friendly but most open-ended tools: text-based languages like SuperCollider or ChucK. It came as quite a surprise to me to learn that people perform live with textual programming languages. I had seen ChucK and the like used for composition and sound design, but never on the fly in front of an audience. I was even more surprised by the authors’ mention of Alex McLean, who plays live using a customized version of Perl.

Blackwell and Collins devote a good part of their discussion to a comparison of Ableton Live and ChucK as live performance tools, particularly with regard to onstage improvisation. The authors appear not to think very highly of Ableton. They view its hardware metaphors and strong orientation toward dance music as a priori constraints on the user’s creativity. They grant, however, that Ableton’s narrowness of focus suits its intended use case well. While they are more excited by the limitless possibilities of ChucK, they are frank about its shortcomings: there is significant lag between the performer’s action (typing a line of code) and the feedback (hearing the resulting sound), and the debug cycle has to be accomplished on stage, in mid-flight. Ableton is tolerant of user mistakes and unintentional moves in ways that ChucK profoundly is not.

Mission control

Any serious electronic instrument should approach the instantaneous tactile and auditory feedback that acoustic instruments have given us for tens of thousands of years. Ableton passes this test in some of its functionality, and fails it in others. ChucK fails the test completely. Blackwell and Collins recognize this difficulty.

The reader may therefore wonder why any live performer would choose such a challenge as ChucK when set against the comfortable ride offered by Ableton.

Certainly, this reader does.

An aesthetic response would be to embrace the challenge of live coding; the virtuosity of the required cognitive load, the error-proneness, the diffuseness, all of these play-up the live coder as a modern concerto artist.

This is the point where I depart philosophically from the authors. Few people know or care how difficult a piece of music is to perform. Musicians should only be concerned with the emotions they evoke in the listener. If the only emotion being evoked is “wow, that must be hard,” that turns music into an athletic competition and drains it of its meaning.

The authors are concerned by “the representational paucity of programs like Ableton, which are biased towards fixed audio products in established stylistic modes, rather than experimental algorithmic music which requires the exploratory design possibilities of full programming languages.” Fair enough. But when I introduce musicians to Ableton, they tend to be boggled by the possibilities. This is as true for avant-gardists as it is for pop and dance artists. Morton Subotnick uses Ableton extensively, and if he has not exhausted its creative possibilities, it is hard to imagine anyone doing so.

The most valuable piece of musical insight given by Blackwell and Collins is this:

It is an interesting question whether some software structures (recursion, conditional branches) may be adopted in future as part of the conventional listening repertoire for live programming audiences. If this were to happen, then musical notations might evolve to support them.

Here is where the vocabulary of programming has the most to offer musicians. The musical instruction “repeat until cue” has a strong analogy to while and for loops. Conditional loops have been crucial to the work of John Coltrane (as in “My Favorite Things”) and James Brown (in too many songs to list). Other sophisticated improvisers already use nested recursive loops and self-reference. I could easily imagine exciting improvisation-oriented compositions based on conditional branching, where the network diagram is mapped out ahead of time, but the particular path through it varies with every performance. I join the authors in hoping for continued convergence between the thought processes of programmers and improvising musicians.
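To make the analogy concrete, here is a minimal Python sketch of such a branching composition. It is purely my own illustration, not anything from the paper: the section names, the transition table, and the cue_received() stub are all hypothetical, and a real performance would take its cues from the players rather than from a random number generator.

    import random

    def cue_received():
        # Stand-in for a live cue from the bandleader; here it simply fires at random.
        return random.random() < 0.3

    def play(section):
        print("playing:", section)

    # The composition as a network: each section lists the sections allowed to follow it.
    branches = {
        "intro":  ["vamp"],
        "vamp":   ["vamp", "theme"],   # "repeat until cue": the vamp may loop on itself
        "theme":  ["solo A", "solo B"],
        "solo A": ["theme", "coda"],
        "solo B": ["theme", "coda"],
        "coda":   [],
    }

    section = "intro"
    while section:
        play(section)
        options = branches[section]
        if not options:
            break                      # the coda ends the piece
        if section in options and not cue_received():
            continue                   # no cue yet: repeat the current section
        # Cue received (or no self-loop): branch to one of the allowed next sections.
        section = random.choice([s for s in options if s != section] or options)

Every run prints a different path through the same pre-mapped network, which is exactly the property that makes this kind of structure appealing to improvisers.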

9 replies on “Programming languages as musical instruments”

  1. This paper was published in 2005; Max for Live was released four years later, in 2009.

    Much has happened in the past seven years.

    1. This is true. If anyone reading is a Max for Live expert, would you care to weigh in on the state of the field?

  2. You may be interested in the work of Dr. Dan Lloyd, who is mapping brain waves to music. http://www.kickstarter.com/projects/1313940325/music-of-the-hemispheres (that project has received its initial funding)

  3. Ableton Live is hardly “(comparatively) inflexible”, given that one can use Max4Live and CSound4Live to embed the graphical programming of Max/MSP and the textual programming of csound into a single composition/performance. 

    1. Absolutely right. I thought about bringing this up, but the authors were treating Live and Max/MSP as separate entities, so I figured I’d follow their lead. Also, I don’t know enough about Max For Live to be able to talk intelligently about it. Do people live code with it the way they improvise with regular Ableton? Is there some YouTube video I can check out?

      1. I don’t play out with Live so I can’t speak to that, but I would assume the answer is yes. The beauty of the whole arrangement is that you can embed Max patchers and csound scores just as you would any other clips, and trigger them as part of Live’s timeline or on the fly. Going back to the programming metaphor, it’s like taking advantage of the ease of use of Perl while still being able to have subroutines in Java or C, or even to shell out to system calls. And with Max4Live (which is the glue that enables all three to work together) you can go much deeper than that and have patchers that access and modify the Live environment itself in response to algorithms or user actions.

        I love being able to stick “randomizing” elements into mixes without having to code the whole thing in Max, and while I’ve only just begun to play with CSound4Live, it’s appealing to have access to csound instrument creation without having to do an entire score in code.

      2. Intro to Max4Live

        MaxForLiveInC

        Max For Live Melodic Step Sequencer (and controller interface)

        Using Max4Live to modulate parameters

        CSound4Live Grainulator under OSC control
