This post documents a presentation I’m giving in my History of Science and Technology class with Myles Jackson.
The vocoder is one of those mysterious technologies that’s far more widely used than understood. Here I explain what it is, how it works, and why you should care. Casual music listeners know the vocoder best as the source of the robot-voice effect that Daft Punk uses all the time.
Here’s Huston Singletary demonstrating the vocoder in Ableton Live.
This is a nifty effect, but why should you care? For one thing, you use this technology every time you talk on your cell phone. For another, this effect gave rise to Auto-Tune, which, love it or hate it, is the defining sound of contemporary popular music. Let’s dive in!
QWERTYBeats is a proposed accessible, beginner-friendly rhythm performance tool with a basic built-in sampler. By simply holding down different combinations of keys on a standard computer keyboard, users can play complex syncopations and polyrhythms. If the app is synced to the tempo of a DAW or other music playback system, the user can easily perform good-sounding rhythms over any song.
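As a sketch of the core idea, here is one way key combinations could yield polyrhythms. The key-to-subdivision mapping below is a hypothetical illustration, not the app’s actual layout: each held key repeats at a different subdivision of the bar, so holding two keys whose subdivisions don’t divide evenly into each other produces a polyrhythm.

```python
# Hypothetical sketch of the QWERTYBeats idea: each held key repeats at its
# own subdivision of the bar. The mapping below is an illustrative assumption.

BAR_SECONDS = 2.0  # one 4/4 bar at 120 bpm

KEY_SUBDIVISIONS = {"a": 3, "s": 4}  # 'a' plays triplets, 's' plays quarters

def onset_times(held_keys, bar_seconds=BAR_SECONDS):
    """Return the sorted onset times (in seconds) for all held keys."""
    onsets = set()
    for key in held_keys:
        n = KEY_SUBDIVISIONS[key]
        onsets.update(i * bar_seconds / n for i in range(n))
    return sorted(onsets)

# Holding 'a' and 's' together produces a three-against-four polyrhythm:
# the triplet and quarter-note grids interleave into six onsets per bar.
print(onset_times({"a", "s"}))
```

Syncing those onset times to a DAW’s clock would then just be a matter of scaling `BAR_SECONDS` to the host tempo.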
This project is part of Design For The Real World, an NYU ITP course. We are collaborating with the BEAT Rockers, the Lavelle School for the Blind, and the NYU Music Experience Design Lab. Read some background research here.
I have a whole lot of explanatory writing about rhythm in the pipeline, and thought it would be good to have a place to link the word “syncopation” to every time it arises. So here we go. Syncopation is to rhythm what dissonance is to harmony. A syncopated rhythm has accents on unexpected beats. In Western classical music, syncopation is usually temporary and eventually “resolves” to simpler rhythms. In the music of the African diaspora, syncopation is a constant, in the same way that unresolved tritones are constant in the blues.
Syncopation is not just a subjective quality of music; you can define it mathematically. Before we do, it helps to visualize a measure of 4/4 time, the amount of time it takes to count “one, two, three, four.”
The more times you have to subdivide the measure to get to a given beat, the weaker that beat is. When you accent weak beats, you get syncopation.
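That rule is easy to compute. Here’s a minimal sketch in Python: treating a 4/4 measure as sixteen sixteenth-note steps, a step’s weakness is the number of times you must halve the measure to land on it.

```python
def subdivision_depth(step, steps_per_bar=16):
    """How many times the bar must be halved to land on this step.
    Step 0 (the downbeat) needs no subdivision; offbeat sixteenths need four."""
    if step == 0:
        return 0
    depth = 0
    span = steps_per_bar
    while step % span != 0:
        span //= 2
        depth += 1
    return depth

# Lower depth = stronger beat. Accenting high-depth steps gives syncopation.
for step in range(16):
    print(step, subdivision_depth(step))
```

Beat one comes back with depth 0, beat three with depth 1, beats two and four with depth 2, the offbeat eighths with depth 3, and the offbeat sixteenths with depth 4, which matches the intuitive ranking of strong and weak beats.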
While I was doing some examination of rhythm necklaces and scale necklaces, I noticed a symmetry among the major scale modes: Lydian mode and Locrian mode are mirror images of each other, both on the chromatic circle and the circle of fifths. Here’s Lydian above and Locrian below:
Does this geometric relationship mean anything musically? Turns out that it does.
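You can verify the mirror relationship directly: write each mode as its pattern of whole and half steps in semitones, and Locrian is Lydian read backwards.

```python
# Lydian and Locrian as step patterns in semitones (2 = whole, 1 = half).
# Reversing Lydian's pattern yields Locrian's pattern exactly.

LYDIAN  = [2, 2, 2, 1, 2, 2, 1]  # W W W H W W H
LOCRIAN = [1, 2, 2, 1, 2, 2, 2]  # H W W H W W W

print(list(reversed(LYDIAN)) == LOCRIAN)  # True
```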
Robert Davidson’s first-ever tweet is a remarkable one:
Rob’s tweet raises three profound questions in my mind.
Continuing my series of posts on the ways that science might explain why we like the music we like. See also my posts on the science of rock harmony, harmony generally, and Afro-Cuban rhythms.
Quora user Marc Ettlinger recently sent me a paper by Sherri Novis-Livengood, Richard White, and Patrick CM Wong entitled Fractal complexity (1/f power law) determines the stability of music perception, emotion, and memory in a repeated exposure paradigm. (The paper isn’t on the open web, but here’s a poster-length version.) The authors think that fractals explain our music preferences. Specifically, they find that note durations, pitch intervals, phrase lengths and other quantifiable musical parameters tend to follow a power law distribution. Power-law distributions have the nifty property of scale invariance, meaning that patterns in such entities resemble themselves at different scales. Music is full of fractals, and the more fractal-filled it is, the more we like it.
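Scale invariance is easy to see in a quick numerical sketch: for a power law p(x) = x^(-k), rescaling x by any factor c just multiplies p by the constant c^(-k), so the distribution has the same shape at every zoom level.

```python
# Scale invariance of a power law p(x) = x ** -k: rescaling x by a factor c
# multiplies p by the constant c ** -k, leaving the curve's shape unchanged.
k = 2.0

def p(x):
    return x ** -k

c = 10.0
ratios = [p(c * x) / p(x) for x in (1.0, 3.0, 7.0, 50.0)]
print(ratios)  # every ratio is c ** -k == 0.01, regardless of x
```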
One of the best discoveries I made while researching my thesis is the mathematician Godfried Toussaint. While the bookshelves groan with mathematical analyses of western harmony, Toussaint is the rare scholar who uses the same tools to understand Afro-Cuban rhythms. He’s especially interested in the rhythm known to Latin musicians as 3-2 son clave, to Ghanaians as the kpanlogo bell pattern, and to rock musicians as the Bo Diddley beat. Toussaint calls it “The Rhythm that Conquered the World” in his paper of the same name. Here it is as programmed by me on a drum machine:
The image behind the SoundCloud player is my preferred circular notation for son clave. Here are eight different, more conventional representations as rendered by Toussaint:
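If you want to play with the pattern yourself, here it is as a sixteen-step binary onset list, one slot per sixteenth note, which prints as box notation:

```python
# 3-2 son clave as a 16-step binary onset pattern: five hits on steps
# 0, 3, 6, 10, and 12. Printing it gives the familiar box notation.
SON_CLAVE = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]

print("".join("x" if hit else "." for hit in SON_CLAVE))
# x..x..x...x.x...
```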
My last post discussed how we should derive music theory empirically, using ethnomusicology to observe what people actually like. Another good strategy would be to derive music theory from observation of what’s going on between our ears. Daniel Shawcross Wilkerson has attempted just that in his essay, Harmony Explained: Progress Towards A Scientific Theory of Music. The essay has an endearingly old-timey subtitle:
The Major Scale, The Standard Chord Dictionary, and The Difference of Feeling Between The Major and Minor Triads Explained from the First Principles of Physics and Computation; The Theory of Helmholtz Shown To Be Incomplete and The Theory of Terhardt and Some Others Considered
Wilkerson begins with the observation that music theory books read like medical texts from the middle ages: “they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases.” We can do better.
Wilkerson proposes that we derive a theory of harmony from first principles drawn from our understanding of how the brain processes audio signals. We evolved to be able to detect sounds with natural harmonics, because those usually come from significant sources, like the throats of other animals. Musical harmony is our way of gratifying our harmonic-series detectors.
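The “natural harmonics” in question are just integer multiples of a fundamental frequency. A quick illustration:

```python
# The harmonic series above a 110 Hz fundamental: integer multiples of the
# fundamental frequency. Wilkerson's argument is that the auditory system
# evolved to detect spectra shaped like this.
fundamental = 110.0  # Hz, the A two octaves below concert A
harmonics = [n * fundamental for n in range(1, 9)]
print(harmonics)  # [110.0, 220.0, 330.0, 440.0, 550.0, 660.0, 770.0, 880.0]
```

Notice that the second, fourth, and eighth harmonics are all octaves of the fundamental, which is part of why harmonic sounds feel so unified to us.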
I’ve undergone some evolution in my thinking about the intended audience for my thesis app. My original idea was to aim it at the general public. But the general public is maybe not quite so obsessed with breakbeats as I am. Then I started working with Alex Ruthmann, and he got me thinking about the education market. There are certainly a lot of kids in schools with iPads, so that’s an attractive idea. But hip-hop and techno are a tough sell for traditionally minded music teachers. I realized that I’d find a much more receptive audience in math teachers. I’ve been thinking about the relationship between music and math for a long time, and it would be cool to put some of those ideas into practice.
The design I’ve been using for the Drum Loop UI poses some problems for use in math class. Since early on, I’ve had it so that the centers of the cells line up with the cardinal angles. However, if you’re going to measure angles, the grid lines really need to fall on the cardinal angles instead. Here’s the math-friendly design:
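The geometry works out neatly. Assuming a sixteen-cell loop, the grid lines fall at multiples of 360/16 = 22.5 degrees, so every fourth boundary lands exactly on a cardinal angle:

```python
# Math-friendly layout sketch: with 16 cells, cell boundaries (grid lines)
# sit at multiples of 360/16 = 22.5 degrees, and every fourth boundary
# lands exactly on a cardinal angle (0, 90, 180, 270).
CELLS = 16
boundaries = [i * 360 / CELLS for i in range(CELLS)]
cardinals = [a for a in boundaries if a % 90 == 0]
print(cardinals)  # [0.0, 90.0, 180.0, 270.0]
```

In the old design, with cell centers on the cardinal angles, every boundary would be offset by half a cell (11.25 degrees), which makes protractor measurements needlessly awkward.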
Octaves are notes that you hear as being “the same” in spite of being higher or lower in actual pitch. (Technically, notes separated by an octave are in the same pitch class.) Play middle C on the piano. Then go up the C major scale (the white keys), and the eighth note you play will be another C, an octave higher. The “oct” part of the word refers to this eight-step distance up the scale.
From a science perspective, octaves are pitch intervals related by factors of two. When a tuning fork plays standard concert A, it vibrates at 440 Hz. The A an octave higher is 880 Hz, and the A an octave lower is 220 Hz. Any note with the frequency 2^n * 440 will be an A. It’s a central mystery of human cognition why we hear pitches related by powers of two as being “the same” note. The ability to detect octave equivalency is probably built in to our brains, and it isn’t limited to humans. Rhesus monkeys have been shown to be able to detect octaves too, as have some other mammals.
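The powers-of-two relationship is easy to tabulate:

```python
# Every A on the piano is 440 Hz times a power of two. Doubling the
# frequency raises the pitch an octave; halving it lowers the pitch an octave.
A4 = 440.0  # concert A, in Hz
octaves_of_a = [A4 * 2 ** n for n in range(-2, 3)]
print(octaves_of_a)  # [110.0, 220.0, 440.0, 880.0, 1760.0]
```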
Original post on Quora