I don’t know a lot about Afro-Caribbean rhythms, beyond the fact that they cause me intense joy whenever I hear them. My formal music education has focused almost exclusively on harmony, and I’ve had to learn about rhythm mostly on my own. That’s why it was so exciting for me to discover the work of Godfried Toussaint. He introduced me to a startlingly useful pedagogical tool: the rhythm necklace.
A rhythm necklace is a circular notation for rhythm. Let’s say your rhythm is in 12/8 time. That means that each cycle of the rhythm has twelve slots where sounds can go, and each slot is an eighth note long (which is not very long). A 12/8 rhythm necklace is like a circular ice cube tray that holds twelve ice cubes.
What’s so great about writing rhythms this way? Rhythms are relationships between events that are non-adjacent in time. When you write your rhythms from left to right, as is conventional, it’s hard to make out the relationships. On the circle, the symmetries and patterns jump right out at you. I recommend the Toussaint-inspired Rhythm Necklace app to get these concepts under your fingers and into your ears.
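As a sketch of the underlying idea, a necklace can be modeled as a fixed-length list of onsets, where two rhythms that are rotations of each other count as the same necklace. The function names here are my own illustration, not anything from the Rhythm Necklace app:

```python
# A rhythm necklace as a fixed-length list of booleans (True = onset).
# Function names here are my own illustration.

def rotate(necklace, steps):
    """Rotate a necklace by `steps` slots (positive = later in time)."""
    n = len(necklace)
    return [necklace[(i - steps) % n] for i in range(n)]

def same_necklace(a, b):
    """Two rhythms are the same necklace if one is a rotation of the other."""
    return len(a) == len(b) and any(rotate(a, k) == b for k in range(len(a)))

# A 12-slot (12/8) necklace with an onset every three slots:
pulse = [i % 3 == 0 for i in range(12)]
print(same_necklace(pulse, rotate(pulse, 5)))  # True: same circle, new starting point
```

This is exactly the equivalence the circular notation makes visible: on the page, rotations look like different rhythms; on the circle, they’re obviously the same shape.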
You can’t look into Afro-Caribbean beats without coming across a bell pattern called Bembé, also known as “the standard pattern” or the “short bell pattern.” Here’s how it sounds:
I was probably first exposed to Bembé by Santana’s “Incident at Neshabur.”
Bembé’s meter is ambiguous. You can represent it as duple (4/4) or triple (6/8 or 12/8). Practitioners urge you not to think of the bell pattern as being in one meter or the other. Instead, you’re supposed to hold both of them in your head at the same time. The ambiguity is the point.
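Concretely, the onset pattern usually cited for the standard pattern (following Toussaint’s writing, not something stated above) is pulses 0, 2, 4, 5, 7, 9 and 11 out of 12, the interval chain 2-2-1-2-2-2-1. A small sketch, with helper names of my own, that renders it grouped in threes and in twos — one way to see the two hearings side by side:

```python
# Onsets of the standard pattern / Bembé bell as commonly cited in
# Toussaint's work: pulses 0, 2, 4, 5, 7, 9, 11 out of 12.
BEMBE = {0, 2, 4, 5, 7, 9, 11}

def render(onsets, pulses=12, group=None):
    """Render onsets as x/. text, optionally split into equal groups."""
    s = "".join("x" if i in onsets else "." for i in range(pulses))
    if group:
        s = " | ".join(s[i:i + group] for i in range(0, pulses, group))
    return s

print(render(BEMBE, group=3))  # grouped in threes: x.x | .xx | .x. | x.x
print(render(BEMBE, group=2))  # grouped in twos:   x. | x. | xx | .x | .x | .x
```

The same twelve slots, carved up two ways: neither grouping fits the onsets perfectly, which is a fair picture of why the pattern refuses to settle into one meter.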
This post continues my series on the ways that science might explain why we like the music we like. See also my posts on the science of rock harmony, harmony generally, and Afro-Cuban rhythms.
Quora user Marc Ettlinger recently sent me a paper by Sherri Novis-Livengood, Richard White, and Patrick CM Wong entitled Fractal complexity (1/f power law) determines the stability of music perception, emotion, and memory in a repeated exposure paradigm. (The paper isn’t on the open web, but here’s a poster-length version.) The authors think that fractals explain our music preferences. Specifically, they find that note durations, pitch intervals, phrase lengths and other quantifiable musical parameters tend to follow a power-law distribution. Power-law distributions have the nifty property of scale invariance, meaning that patterns in such entities resemble themselves at different scales. In the authors’ account, music is full of fractals, and the more fractal-filled it is, the more we like it.
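The scale-invariance property is easy to check numerically: for a power law p(x) = C·x^(−α), the ratio p(kx)/p(x) = k^(−α) is the same no matter what scale x you start from. A quick sketch:

```python
# Scale invariance of a power law: p(k*x) / p(x) depends only on k, not on x.
def power_law(x, c=1.0, alpha=2.0):
    return c * x ** (-alpha)

k = 3.0
ratios = [power_law(k * x) / power_law(x) for x in (1.0, 10.0, 100.0)]
print(ratios)  # every ratio is k**(-alpha) = 1/9, regardless of scale
```

Compare that to, say, an exponential decay, where the same ratio changes as x grows: that difference is what makes power-law patterns look the same when you zoom in or out.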
I’ve undergone some evolution in my thinking about the intended audience for my thesis app. My original idea was to aim it at the general public. But the general public is maybe not quite so obsessed with breakbeats as I am. Then I started working with Alex Ruthmann, and he got me thinking about the education market. There are certainly a lot of kids in schools with iPads, so that’s an attractive idea. But hip-hop and techno are a tough sell for traditionally-minded music teachers. I realized that I’d find a much more receptive audience in math teachers. I’ve been thinking about the relationship between music and math for a long time, and it would be cool to put some of those ideas into practice.
The design I’ve been using for the Drum Loop UI poses some problems for math teaching. Since early on, I’ve had it so that the centers of the cells line up with the cardinal angles. However, if you’re going to measure angles and things, the grid lines really need to be on the cardinal angles instead. Here’s the math-friendly design:
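To make the difference concrete, here’s a sketch of the two layouts in degrees, using a hypothetical sixteen-cell grid (the cell count and variable names are mine, for illustration):

```python
# Two ways to lay out an N-cell radial drum grid, in degrees.
N = 16
cell = 360 / N  # each sixteenth-note cell spans 22.5 degrees

# Original design: cell CENTERS sit on multiples of 360/N,
# so the cardinal angles (0, 90, 180, 270) land mid-cell.
centers_original = [i * cell for i in range(N)]

# Math-friendly design: grid LINES sit on multiples of 360/N,
# which shifts every cell center by half a cell.
centers_math = [i * cell + cell / 2 for i in range(N)]

print(centers_original[4], centers_math[4])  # 90.0 101.25
```

The entire change is a half-cell offset, but it’s the difference between a protractor lining up with the grid and fighting it.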
A musical pitch is a blend of many different frequencies besides the fundamental. Here’s a visualization of the different vibrational modes of an ideal string. The string’s movements are the sum of all these different modes simultaneously.
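That sum-of-modes idea fits in a few lines of code. This is a toy sketch of an ideal string, with 1/n amplitudes as an arbitrary example weighting, not a claim about any real instrument:

```python
import math

def string_displacement(x, t, amps, f0=1.0, length=1.0):
    """Displacement of an ideal string at position x and time t, summed
    over modes: mode n has shape sin(n*pi*x/L) and frequency n*f0."""
    return sum(a * math.sin(n * math.pi * x / length)
                 * math.cos(2 * math.pi * n * f0 * t)
               for n, a in enumerate(amps, start=1))

# Toy weighting: the first three modes with 1/n amplitudes.
amps = [1 / n for n in (1, 2, 3)]
print(string_displacement(0.5, 0.0, amps))  # ≈ 0.667 at the string's midpoint
```

At the midpoint, the second mode contributes nothing at all (it has a node there), which is the kind of fact the mode picture makes obvious.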
The Quora question that prompted this post asks:
Why has music been historically the most abstract art form?
We can see highly developed musical forms in renaissance polyphony and baroque counterpoint. The secular forms of this music are often non-programmatic or “absolute music.” In contrast to this, the paintings and sculpture of those times are often representational. Did music start as representational but merely move to a more abstract art form than other types of arts sooner? Does it lend itself to this sort of abstraction more easily?
I had an art professor in college who argued that all “representational” art is abstract, and all “abstract” art is representational. Any art has to refer back to sensory impressions of the world, internal or external, because that’s the only raw material we have to work with. Meanwhile, you’re unlikely to ever mistake a work of representational art for the object it represents. You don’t mistake photographs (or photorealistic paintings) for their subjects, and even the most “realistic” special effects in movies require willing suspension of disbelief.
Gödel, Escher, Bach by Douglas Hofstadter defines the concept of recursion and discusses its applications in computer science, consciousness, art, music, biology and various other fields.
Recursion is crucial to writing computer programs in a compact, elegant way, but it also opens the door to infinite loops and irreconcilable logical contradictions.
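The classic illustration, sketched in Python: a recursive function needs a base case, and removing it produces exactly the runaway self-reference the book plays with:

```python
def factorial(n):
    """Classic recursion: a base case plus a self-call on a smaller input."""
    if n <= 1:  # the base case; without it, the recursion never bottoms out
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120

# Delete the base case and Python cuts the "infinite loop" short by
# raising RecursionError once the call stack gets too deep.
```

The compact elegance is that five lines describe a computation of any size; the danger is that nothing but the base case stands between you and the bottomless version.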
Update: check out my masters thesis, a radial drum machine. Specifically, see the section on visualizing rhythm. See also a more scholarly review of the literature on visualization and music education. And here’s a post on the value of video games in music education.
Computer-based music production and composition involves the eyes as much as the ears. The representations in audio editors like Pro Tools and Ableton Live are purely informational, waveforms and grids and linear graphs. Some visualization systems are purely decorative, like the psychedelic semi-random graphics produced by iTunes. Some systems lie in between. I see rich potential in these graphical systems for better understanding of how music works, and for new compositional methods. Here’s a sampling of the most interesting music visualization systems I’ve come across.
Western music notation is a venerable method of visualizing music. It’s a very neat and compact system, unambiguous and digital, and not too difficult to learn. Programs like Sibelius can effortlessly translate notation to and from MIDI data, too.
But western notation has some limitations, especially for contemporary music. It doesn’t handle microtones well. It has limited ability to convey performative nuance — after a hundred years of jazz, there’s no good way to notate swing other than to just write the word “swing” at the top of the score. The key signature system works fine for major keys, but is less helpful for minor keys and modal music and is pretty much worthless for the blues.
Here’s a suggestion for how notation could improve in the future. It’s a visualization by Jon Snydal of John Coltrane’s solo in Miles Davis’ “All Blues” (I edited it a little to be easier on the eyes).
Snydal’s visualization is more analog than digital — it shows the exact nuances of Coltrane’s performance, with subtle shadings of pitch, timing and dynamics.
In high school science class, you probably saw a picture of an atom that looked like this:
The picture shows a stylized nucleus with red protons and blue neutrons, surrounded by three grey electrons. It’s an attractive and iconic image. It makes a nice logo. Unfortunately, it’s also totally wrong. There’s an extent to which subatomic particles are like little marbles, but it’s a limited extent. Electrons do move around the nucleus, but they don’t do it in elliptical paths as if they’re little moons orbiting a planet. The true nature of electrons in atoms is way weirder and cooler.
Pictures are a terrible way to understand the nature of quantum particles. Music theory is much better.
I always enjoy when hip-hop artists sample themselves. It makes the music recursive, and for me, “recursive” is synonymous with “good.” You can hear self-sampling in “Nas Is Like” by Nas, “The Score” by the Fugees and many songs by Eric B and Rakim. The most recent self-sampling track to cross my radar is “Unbelievable” by Biggie Smalls, from his album Ready To Die. Here’s the instrumental.
[iframe_loader width="480" height="360" src="http://www.youtube.com/embed/IdL2e1MrTgY" frameborder="0" allowfullscreen]
Music is richly mathematical, and an understanding of one subject can be a great help in understanding the other.
Geometry and angles
My masters thesis is devoted in part to a method for teaching math concepts using a drum machine organized on a radial grid. Placing rhythms on a circle gives a good multisensory window into ratios and angles.
The brain turns out to be adept at decomposing complex sounds into their component sinusoids. We can’t necessarily consciously compare the partials of a sound, but we certainly do it unconsciously — that’s how we’re able to distinguish different timbres, and it’s probably the basis for our sense of consonance and dissonance. If two pitches share a lot of overtones, we tend to hear them as consonant, at least here in the western world. There’s a strong case to be made that overlapping overtone series are the basis of all of western music theory.
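The shared-overtone idea can be made concrete with exact rational arithmetic. In this sketch (interval choices are my illustration), a perfect fifth at 3:2 shares several partials with its lower note within the first sixteen harmonics, while a tritone tuned as 45:32 — one common just-intonation choice — shares none that low:

```python
from fractions import Fraction

def shared_partials(ratio, n_harmonics=16):
    """Count partials of two tones, a frequency ratio `ratio` apart, that
    coincide exactly within the first n_harmonics of each. Exact rationals
    avoid any floating-point fuzziness in the comparison."""
    low = {Fraction(n) for n in range(1, n_harmonics + 1)}
    high = {Fraction(n) * ratio for n in range(1, n_harmonics + 1)}
    return len(low & high)

print(shared_partials(Fraction(3, 2)))    # perfect fifth: 5 shared partials
print(shared_partials(Fraction(45, 32)))  # tritone (one tuning): 0
```

More shared partials for the consonant interval, none for the famously dissonant one, which is the correlation the overtone theory of consonance rests on.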
The concept of orbitals in quantum mechanics made zero sense to me until I finally found out that they’re essentially standing waves: harmonics of the electron’s wavefunction. I wasn’t at all surprised to learn that Einstein conceptualized wave mechanics in musical terms as well.
Octave equivalency is really just your brain’s ability to detect frequencies related by powers of two. The relationship between absolute pitches and pitch classes is an excellent doorway into logarithms generally. You also need logarithms to understand decibels and loudness perception.
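Both ideas boil down to one logarithm each. Here’s a minimal sketch (the helper names are mine, not from any library): pitch class comes from log base 2, and decibels from log base 10:

```python
import math

def pitch_class(freq, ref=440.0):
    """Map a frequency to a pitch class 0-11, with 0 = the reference (A).
    Doubling a frequency adds exactly 12 semitones, which vanishes mod 12,
    so octave equivalence falls straight out of the log."""
    semitones = 12 * math.log2(freq / ref)
    return round(semitones) % 12

def gain_db(amplitude_ratio):
    """Decibels are logarithmic too: 20 * log10 of an amplitude ratio."""
    return 20 * math.log10(amplitude_ratio)

print(pitch_class(220.0), pitch_class(440.0), pitch_class(880.0))  # 0 0 0
print(pitch_class(261.63))  # middle C: 3 semitones above A, mod 12
print(gain_db(2.0))         # doubling amplitude adds about 6 dB
```

Every A on the piano lands on pitch class 0, which is the octave-equivalence claim in executable form.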
Music is really just a way of applying symmetry to events in time. See this delightful paper by Vi Hart about symmetry and transformations in the musical plane.
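A sketch of that framing, with a toy melody and function names of my own: translation in pitch is transposition, reflection in time is retrograde, and reflection in pitch is inversion:

```python
# A toy melody as MIDI pitch numbers (the notes are my example).
melody = [60, 62, 64, 67]

def transpose(notes, interval):  # translation along the pitch axis
    return [n + interval for n in notes]

def retrograde(notes):           # reflection across a vertical (time) axis
    return notes[::-1]

def invert(notes, axis):         # reflection across a horizontal (pitch) axis
    return [2 * axis - n for n in notes]

print(transpose(melody, 12))  # [72, 74, 76, 79] -- the tune an octave up
print(retrograde(melody))     # [67, 64, 62, 60]
print(invert(melody, 60))     # [60, 58, 56, 53]
```

Each operation is a rigid motion of the notes in the time-pitch plane, which is exactly the sense in which composing with motifs is applied symmetry.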