Writing assignment for my History of Science and Technology class with Myles Jackson. See a more informal introduction to the vocoder here.
Casual music listeners know the vocoder best as the robotic voice effect popular in disco and early hip-hop. Anyone who has heard pop music of the last two decades has heard Auto-Tune. The two effects are frequently mistaken for one another, and for good reason—they share the same mathematical and technological basis. Auto-Tune has become ubiquitous in recording studios, in two very different incarnations. There is its intended use, as an expedient way to correct out-of-tune notes, replacing various tedious and labor-intensive manual methods. Pop, hip-hop and electronic dance music producers have also found an unintended use for Auto-Tune, as a special effect that quantizes pitches to a conspicuously excessive degree, giving the voice a synthetic, otherworldly quality. In this paper, I discuss the history of the vocoder and Auto-Tune, in the context of broader efforts to use science and technology to mathematically analyze and standardize music. I also explore how such technologies problematize our ideas of virtuosity.
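To make “quantizes pitches” concrete: at its core, pitch correction snaps a detected frequency to the nearest equal-tempered note. Here’s a toy sketch of that one step in Python (not Antares’ actual algorithm, which also has to detect the pitch and manage the retune speed):

```python
import math

A4 = 440.0  # reference tuning, in Hz

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered pitch."""
    # Distance from A4 in semitones: pitch is logarithmic in frequency,
    # with twelve semitones per doubling (octave).
    semitones = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones) / 12)

# A singer lands on 452 Hz, slightly sharp of A4. Hard quantization
# yanks it instantly to 440 Hz. Applied with zero smoothing, that
# instant, total correction is the conspicuous robotic effect.
print(snap_to_semitone(452.0))  # 440.0
```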
This post documents a presentation I’m giving in my History of Science and Technology class with Myles Jackson. See also a more formal history of the vocoder.
The vocoder is one of those mysterious technologies that’s far more widely used than understood. Here I explain what it is, how it works, and why you should care. A casual music listener knows the vocoder best as a way to make that robot voice effect that Daft Punk uses all the time.
Here’s Huston Singletary demonstrating the vocoder in Ableton Live.
This is a nifty effect, but why should you care? For one thing, you use this technology every time you talk on your cell phone. For another, this effect gave rise to Auto-Tune, which, love it or hate it, is the defining sound of contemporary popular music. Let’s dive in!
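As a preview, here’s a rough sketch of the classic channel vocoder idea in Python, assuming NumPy and SciPy (the function names and band choices are my own, and real implementations are far more refined): split the voice and the synth into matching frequency bands, track the voice’s loudness contour in each band, and impose those contours on the synth.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def bandpass(signal, low, high, sr):
    """Isolate one frequency band of a signal."""
    sos = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
    return sosfilt(sos, signal)

def envelope(signal, sr, smooth_hz=50.0):
    """Track a band's loudness contour: rectify, then low-pass."""
    sos = butter(2, smooth_hz, btype="lowpass", fs=sr, output="sos")
    return sosfilt(sos, np.abs(signal))

def channel_vocoder(voice, synth, sr, n_bands=16):
    """Impose the voice's band-by-band loudness contours on the synth.

    voice and synth are float arrays at the same sample rate sr
    (at least 16 kHz, since the top band here reaches 8 kHz).
    """
    edges = np.geomspace(100, 8000, n_bands + 1)  # log-spaced band edges
    out = np.zeros_like(synth)
    for low, high in zip(edges[:-1], edges[1:]):
        voice_band = bandpass(voice, low, high, sr)
        synth_band = bandpass(synth, low, high, sr)
        out += synth_band * envelope(voice_band, sr)
    return out / np.max(np.abs(out))  # normalize to prevent clipping
```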
We created the Groove Pizza to make it easier to both see and hear rhythms. The next step is to create learning experiences around it. In this post, I’ll use the Pizza to explain the structure of some quintessential funk and hip-hop beats. You can open each one in the Groove Pizza, where you can customize or alter it as you see fit. I’ve also included Noteflight transcriptions of the beats.
View in Noteflight
This simple pattern is the basis of just about all rock and roll: kicks on beats one and three (north and south), and snares on beats two and four (east and west). It’s boring, but it’s a solid foundation that you can build more musical-sounding grooves on top of.
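If you number the sixteen slices clockwise from north starting at zero, the pattern is easy to write down; here’s a quick sketch in Python (the grid representation is my own, not the Groove Pizza’s internal format):

```python
SLICES = 16  # one sixteenth note per slice of the pizza

backbeat_cross = {
    "kick":  [0, 8],   # beats 1 and 3 (north and south)
    "snare": [4, 12],  # beats 2 and 4 (east and west)
}

def draw(groove):
    """Print each drum as a ring of slices: x = hit, . = rest."""
    for drum, hits in groove.items():
        print(f"{drum:>5}: " + "".join("x" if i in hits else "." for i in range(SLICES)))

draw(backbeat_cross)
#  kick: x.......x.......
# snare: ....x.......x...
```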
View in Noteflight
This Billy Squier classic is number nine on WhoSampled’s list of Top Ten Most Sampled Breakbeats. There are only two embellishments to the backbeat cross: the snare drum hit to the east is anticipated by a kick a sixteenth note (one slice) earlier, and the kick drum to the south is anticipated by another kick an eighth note (two slices) earlier. It isn’t much, but together with some light swing, it’s enough to make for a compelling rhythm. Interestingly, the groove is close to symmetrical on the right side of the circle, with an antisymmetry on the kick-free left side. That balance between symmetry and asymmetry is part of what makes for satisfying music.
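Using the same zero-based slice numbering and draw() helper as above, here’s my reading of the groove (minus the swing):

```python
# "The Big Beat": the backbeat cross plus two anticipating kicks.
big_beat = {
    "kick":  [0, 3, 6, 8],  # slice 3 anticipates the east snare by a
                            # sixteenth; slice 6 anticipates the south
                            # kick by an eighth (two slices)
    "snare": [4, 12],
}

draw(big_beat)
#  kick: x..x..x.x.......
# snare: ....x.......x...
```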
I don’t know a lot about Afro-Caribbean rhythms, beyond the fact that they cause me intense joy whenever I hear them. My formal music education has focused almost exclusively on harmony, and I’ve had to learn about rhythm mostly on my own. That’s why it was so exciting for me to discover the work of Godfried Toussaint. He introduced me to a startlingly useful pedagogical tool: the rhythm necklace.
A rhythm necklace is a circular notation for rhythm. Let’s say your rhythm is in 12/8 time. That means that each cycle of the rhythm has twelve slots where sounds can go, and each slot is an eighth note long (which is not very long). A 12/8 rhythm necklace is like a circular ice cube tray that holds twelve ice cubes.
What’s so great about writing rhythms this way? Rhythms are relationships between events that are non-adjacent in time. When you write your rhythms from left to right, as is conventional, it’s hard to make out the relationships. On the circle, the symmetries and patterns jump right out at you. I recommend the Toussaint-inspired Rhythm Necklace app to get these concepts under your fingers and into your ears.
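If you like to think in code, a rhythm necklace is just a list read in a circle; here’s a minimal Python sketch of the idea (my own toy representation, not the Rhythm Necklace app’s format):

```python
# A 12/8 necklace: twelve slots, 1 = a sound, 0 = an empty slot.
# Four evenly spaced hits: obvious symmetry on the circle, much less
# obvious in this left-to-right spelling.
necklace = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]

def slot(necklace, step):
    """Index around the circle: step 12 wraps back to slot 0."""
    return necklace[step % len(necklace)]

def rotate(necklace, n):
    """The same necklace, read starting n slots later."""
    return necklace[n:] + necklace[:n]
```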
You can’t look into Afro-Caribbean beats without coming across a bell pattern called Bembé, also known as “the standard pattern” or the “short bell pattern.” Here’s how it sounds:
I was probably first exposed to Bembé by Santana’s “Incident at Neshabur.”
Bembé’s meter is ambiguous. You can represent it as duple (4/4) or triple (6/8 or 12/8). Practitioners urge you not to think of the bell pattern as being in one meter or the other. Instead, you’re supposed to hold both of them in your head at the same time. The ambiguity is the point.
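You can see the two-sidedness in a twelve-slot grid: group the same pattern into four beats of three slots (the triple hearing) or six beats of two slots (a duple hearing). Here’s a Python sketch, with onset positions following Toussaint’s rendering of the standard pattern; note that the true 4/4 version actually warps the pattern onto sixteen slots, which I’m glossing over here.

```python
bembe = [1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]  # x.x.xx.x.x.x

def grouped(pattern, beat_len):
    """Spell out the pattern with a bar between beats of beat_len slots."""
    beats = [pattern[i:i + beat_len] for i in range(0, len(pattern), beat_len)]
    return " | ".join("".join("x" if s else "." for s in beat) for beat in beats)

print(grouped(bembe, 3))  # triple hearing: x.x | .xx | .x. | x.x
print(grouped(bembe, 2))  # duple hearing:  x. | x. | xx | .x | .x | .x
```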
Continuing my series of posts on the ways that science might explain why we like the music we like. See also my posts on the science of rock harmony, harmony generally, and Afro-Cuban rhythms.
Quora user Marc Ettlinger recently sent me a paper by Sherri Novis-Livengood, Richard White, and Patrick CM Wong entitled Fractal complexity (1/f power law) determines the stability of music perception, emotion, and memory in a repeated exposure paradigm. (The paper isn’t on the open web, but here’s a poster-length version.) The authors think that fractals explain our music preferences. Specifically, they find that note durations, pitch intervals, phrase lengths, and other quantifiable musical parameters tend to follow a power-law distribution. Power-law distributions have the nifty property of scale invariance, meaning that patterns in such entities resemble themselves at different scales. Music is full of fractals, and the authors argue that the more fractal-filled it is, the more we like it.
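Scale invariance is easy to check numerically: if p(x) is proportional to x^(-α), then rescaling x only multiplies p by a constant, so the distribution has the same shape at every zoom level. A toy demonstration in Python (illustrative only, not the paper’s method):

```python
import numpy as np

alpha = 1.0  # the 1/f case
x = np.geomspace(1.0, 1000.0, 500)
p = x ** -alpha

# Zoom out by a factor of ten: the rescaled curve is the original
# curve times a constant, so its shape is unchanged. On a log-log
# plot, both are the same straight line, just shifted.
p_zoomed = (10 * x) ** -alpha
ratio = p_zoomed / p
print(ratio.min(), ratio.max())  # both ~0.1: a constant, at every scale
```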
I’ve undergone some evolution in my thinking about the intended audience for my thesis app. My original idea was to aim it at the general public. But the general public is maybe not quite so obsessed with breakbeats as I am. Then I started working with Alex Ruthmann, and he got me thinking about the education market. There are certainly a lot of kids in the schools with iPads, so that’s an attractive idea. But hip-hop and techno are a tough sell for traditionally-minded music teachers. I realized that I’d find a much more receptive audience among math teachers. I’ve been thinking about the relationship between music and math for a long time, and it would be cool to put some of those ideas into practice.
The design I’ve been using for the Drum Loop UI poses some problems for math use. Since early on, I’ve had the centers of the cells line up with the cardinal angles. However, if you’re going to measure angles, the grid lines really need to fall on the cardinal angles instead. Here’s the math-friendly design:
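In numbers, the change is just a half-slice rotation. Here’s a sketch for a sixteen-slice loop, using my own convention of measuring angles clockwise from north:

```python
SLICES = 16
STEP = 360 / SLICES  # 22.5 degrees per slice

# Original design: cell centers sit on the cardinal angles, so the
# grid lines fall in between, at 11.25, 33.75, 56.25, ... degrees.
original_grid_lines = [(i + 0.5) * STEP for i in range(SLICES)]

# Math-friendly design: rotate everything half a slice so the grid
# lines land on 0, 22.5, 45, ... degrees, with north, east, south,
# and west among them, ready for a protractor.
math_grid_lines = [i * STEP for i in range(SLICES)]
```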
A musical pitch is a blend of many different frequencies besides the fundamental. Here’s a visualization of the different vibrational modes of an ideal string. The string’s motion is the sum of all of these modes vibrating simultaneously.
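You can hear this by building a tone the same way the string builds it. Here’s a sketch that sums the first twelve modes with the 1/n amplitude rolloff of an ideally plucked string (assuming NumPy; write the result out with any audio library to listen):

```python
import numpy as np

SR = 44100           # sample rate in Hz
fundamental = 220.0  # A3
t = np.linspace(0.0, 2.0, int(SR * 2.0), endpoint=False)

# Mode n is a sine wave at n times the fundamental frequency; an
# ideally plucked string weights mode n by 1/n. Summing the modes
# gives the composite tone, just as the string sums its motions.
tone = sum((1.0 / n) * np.sin(2 * np.pi * n * fundamental * t)
           for n in range(1, 13))
tone /= np.max(np.abs(tone))  # normalize to avoid clipping
```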
The Quora question that prompted this post asks:
Why has music been historically the most abstract art form?
We can see highly developed musical forms in renaissance polyphony and baroque counterpoint. The secular forms of this music are often non-programmatic or “absolute music.” In contrast, the paintings and sculpture of those times are often representational. Did music start as representational but move to abstraction sooner than the other art forms? Does it lend itself to this sort of abstraction more easily?
I had an art professor in college who argued that all “representational” art is abstract, and all “abstract” art is representational. Any art has to refer back to sensory impressions of the world, internal or external, because that’s the only raw material we have to work with. Meanwhile, you’re unlikely to ever mistake a work of representational art for the object it represents. You don’t mistake photographs (or photorealistic paintings) for their subjects, and even the most “realistic” special effects in movies require willing suspension of disbelief.
Gödel, Escher, Bach by Douglas Hofstadter defines the concept of recursion and explores its applications in computer science, consciousness, art, music, biology, and various other fields.
Recursion is crucial to writing computer programs in a compact, elegant way, but it also opens the door to infinite loops and irreconcilable logical contradictions.
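The classic small example is the factorial function, which defines itself in terms of a smaller version of itself:

```python
def factorial(n):
    """n! defined in terms of a smaller version of itself."""
    if n <= 1:
        # Base case. Delete these two lines and the function calls
        # itself forever: the infinite loop Hofstadter warns about.
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120, i.e. 5 * 4 * 3 * 2 * 1
```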
Update: check out my master’s thesis, a radial drum machine. Specifically, see the section on visualizing rhythm. See also a more scholarly review of the literature on visualization and music education. And here’s a post on the value of video games in music education.
Computer-based music production and composition involves the eyes as much as the ears. The representations in audio editors like Pro Tools and Ableton Live are purely informational: waveforms, grids, and linear graphs. Some visualization systems are purely decorative, like the psychedelic semi-random graphics produced by iTunes. Others lie in between. I see rich potential in these graphical systems for better understanding of how music works, and for new compositional methods. Here’s a sampling of the most interesting music visualization systems I’ve come across.
Western music notation is a venerable method of visualizing music. It’s a very neat and compact system, unambiguous and digital, and not too difficult to learn. Programs like Sibelius can effortlessly translate notation to and from MIDI data, too.
But Western notation has some limitations, especially for contemporary music. It doesn’t handle microtones well. It has limited ability to convey performative nuance — after a hundred years of jazz, there’s no good way to notate swing other than to write the word “swing” at the top of the score. The key signature system works fine for major keys, but it’s less helpful for minor keys and modal music, and it’s pretty much worthless for the blues.
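Swing is a perfectly precise transformation even though staff notation can’t express it. Here’s a simplified model in Python (my own toy version, which just delays every offbeat eighth note):

```python
def swing(onsets_in_beats, delay=1/6):
    """Delay every offbeat eighth note (the "and" of each beat).
    A delay of 1/6 beat pushes 0.5 out to 2/3: classic triplet swing."""
    return [o + delay if o % 1.0 == 0.5 else o for o in onsets_in_beats]

straight = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5]
print(swing(straight))  # [0, 0.667, 1, 1.667, ...] (values rounded)
```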
Here’s a suggestion for how notation could improve in the future. It’s a visualization by Jon Snydal of John Coltrane’s solo in Miles Davis’ “All Blues” (I edited it a little to be easier on the eyes).
Snydal’s visualization is more analog than digital — it shows the exact nuances of Coltrane’s performance, with subtle shadings of pitch, timing and dynamics.