Music theory is hard. But we make it harder by holding on to naming and notational conventions that are hundreds of years old, and that were designed to describe music very different from what we’re playing now. Here are some fantasies for how note naming might be improved.
Right now, the “default setting” for western diatonic harmony is the C major scale. It’s the One True Scale, from which all else is derived by adding sharps and flats. Why do we use the C major scale for this purpose? Why not the A major scale? Wouldn’t it make more sense if ground zero for our whole harmonic system was the sequence ABCDEFG? I know there are historical reasons why the unmodified first seven letters of the alphabet denote the natural minor scale, but so what? How is a person supposed to make sense of the fact that scale degree one falls on the third letter of the alphabet?
Furthermore, I question whether the major scale really is the one we should consider to be the most basic. I’d prefer that we use mixolydian instead. The crucial pitches in mixo are close to the natural overtone series, for one thing. For another, Americans hear flat seven as being as “natural” as natural seven, if not more so. While the leading tone is common inside chords, it’s rare to hear it in a popular melody. Flat seven is ubiquitous in the music most of us listen to, and in plenty of other world cultures besides.
As we continue to flesh out the video content for Play With Your Music, I put together this series on rhythm.
Update: I now have a functioning prototype of my app. If you’d like to try it, get in touch.
My NYU masters thesis is a drum programming tutorial system for beginner musicians. It uses a novel circular interface for displaying the drum patterns. This presentation explains the project’s goals, motivations and scholarly background.
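To give a flavor of how a circular display works, here’s a minimal sketch in Python. Everything here is an assumption for illustration, not code from the actual thesis app: a pattern is a list of 0/1 hits on a 16-step grid, and each step is mapped to a point on a circle so that one revolution equals one bar.

```python
import math

def circular_layout(pattern, radius=1.0):
    """Map each step of a drum pattern to a point on a circle.

    `pattern` is a list of 0/1 hits. Step 0 sits at 12 o'clock and
    time advances around the circle, so one full revolution is one bar.
    Returns a list of (x, y, hit) tuples, one per step.
    """
    steps = len(pattern)
    points = []
    for i, hit in enumerate(pattern):
        # Offset by -pi/2 so step 0 lands at the top of the circle
        angle = 2 * math.pi * i / steps - math.pi / 2
        x = radius * math.cos(angle)
        y = radius * math.sin(angle)
        points.append((x, y, bool(hit)))
    return points

# A hypothetical kick pattern on a 16-step grid
kick = [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0]
layout = circular_layout(kick)
```

The appeal of the circular view is that the loop’s end visibly connects back to its beginning, which a left-to-right grid can’t show.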
If you prefer, see it on Slideshare.
Matthew D. Thibeault. Wisdom for Music Education From the Recording Studio. General Music Today, 20 October 2011.
Stuart Wise, Janinka Greenwood and Niki Davis. Teachers’ Use of Digital Technology in Secondary Music Education: Illustrations of Changing Classrooms. British Journal of Music Education, Volume 28, Issue 2, July 2011, pp. 117–134.
Digital recording studios in schools are becoming more common as the price of the required hardware and software falls. Matthew Thibeault urges music teachers to think of the studio not just as a collection of gear that can be used to document the “real” performance, but as a musical instrument in its own right, carrying with it an entire philosophy of music-making. Digital studio techniques have collapsed composition, recording and editing into a single act. Since most of the music we encounter in the world is recorded, and most of that digitally, any music program needs to include the recording, sequencing and editing process as part of the core curriculum.
For my grad school thesis, I’m designing an intro-level music education app. I’m operating within the techno/hip-hop paradigm, with an Afrocentric rhythm-oriented approach. Electronic dance music production software has brought me much joy over the years, joy that I’m eager to spread to more people. I firmly believe that everyone is a potential musician, and that the right interface can draw beginners in and motivate them. So as I ponder this project, I’m naturally giving a lot of thought to electronic music interfaces, both software and hardware. And because all interfaces on a screen necessarily involve some music visualization, I’ve been exploring that too. For example, here’s a particularly attractive music interface/visualization, the pitch correction program Melodyne:
Earlier this summer I took Advanced Computer Music Composition, which included a lot of history of the twentieth century avant-gardists. While these people have had a lot of not-so-wonderful ideas about music, they have done a lot of interesting experiments with novel interfaces.
Update: check out my masters thesis, a radial drum machine. Specifically, see the section on visualizing rhythm. See also a more scholarly review of the literature on visualization and music education. And here’s a post on the value of video games in music education.
Computer-based music production and composition involves the eyes as much as the ears. The representations in audio editors like Pro Tools and Ableton Live are purely informational, waveforms and grids and linear graphs. Some visualization systems are purely decorative, like the psychedelic semi-random graphics produced by iTunes. Some systems lie in between. I see rich potential in these graphical systems for better understanding of how music works, and for new compositional methods. Here’s a sampling of the most interesting music visualization systems I’ve come across.
Western music notation is a venerable method of visualizing music. It’s a very neat and compact system, unambiguous and digital, and not too difficult to learn. Programs like Sibelius can effortlessly translate notation to and from MIDI data, too.
But western notation has some limitations, especially for contemporary music. It doesn’t handle microtones well. It has limited ability to convey performative nuance — after a hundred years of jazz, there’s no good way to notate swing other than to just write the word “swing” at the top of the score. The key signature system works fine for major keys, but is less helpful for minor keys and modal music and is pretty much worthless for the blues.
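Part of why swing resists notation is that it’s a timing transformation, not a different set of note values. A minimal Python sketch makes this concrete; the function and the 2/3 ratio are my illustrative assumptions, not a standard from any notation system:

```python
def apply_swing(onsets, ratio=2/3):
    """Shift every off-beat eighth note later to create a swing feel.

    `onsets` are note start times in beats, assumed quantized to
    straight eighth notes. A ratio of 2/3 approximates classic
    triplet swing; 1/2 would leave the timing straight.
    """
    swung = []
    for t in onsets:
        beat = int(t // 1)   # which quarter note we're in
        frac = t - beat      # position within the beat
        if abs(frac - 0.5) < 1e-9:   # an off-beat eighth note
            swung.append(beat + ratio)
        else:
            swung.append(t)
    return swung

straight = [0.0, 0.5, 1.0, 1.5]   # four straight eighth notes
swung = apply_swing(straight)     # off-beats land later in each beat
```

On the page, `straight` and `swung` would be written identically; the difference lives entirely in the performance.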
Here’s a suggestion for how notation could improve in the future. It’s a visualization by Jon Snydal of John Coltrane’s solo in Miles Davis’ “All Blues” (I edited it a little to be easier on the eyes.)
Snydal’s visualization is more analog than digital — it shows the exact nuances of Coltrane’s performance, with subtle shadings of pitch, timing and dynamics.
Turntablists use record players to play records in ways they weren’t meant to be played. By speeding up, slowing down and reversing the record under the needle, they open up a whole universe of new sounds. The record player as musical instrument is still in its early stages of development. DJs already invented the instrumental sound of hip-hop. I wonder what else they have coming.
It’s no accident that music and games share the verb “to play.” Both music and games are semi-structured forms of social learning. As far as I’m concerned, the most exciting thing happening in the video game world is the explosion of music-based games like Dance Dance Revolution.
Writing a song is a lot like writing a computer program. They both require clever management of loops and control flow.
The simplest sheet music reads as a straightforward top-to-bottom list of instructions. You start on measure one and read through to the end sequentially. That’s fine unless the music is very repetitive, which most popular music is. The loop is the basic compositional unit of nearly every song you could dance to. The problem is that writing loops out sequentially is very tedious.
Rather than writing the same passage over and over, you can save yourself a lot of laborious writing by using repeat markers. They’re like the GOTO instruction in BASIC. Here are the first four bars of “Chameleon” by Herbie Hancock. This four-bar phrase repeats hundreds of times over the course of the song. You wouldn’t want to write them all out. With repeat markers, you don’t have to.
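The song-as-program analogy can be made literal with a small Python sketch. The bar labels and repeat count here are placeholders, not a transcription of the actual tune: a score stores the four-bar phrase once plus a repeat count, and "performing" it unrolls the loop.

```python
# A four-bar vamp, stored once
phrase = ["bar 1", "bar 2", "bar 3", "bar 4"]

# Without repeat markers, you'd unroll the loop by hand:
unrolled = phrase * 100   # 400 bars written out longhand

# A repeat marker stores the phrase once plus a count,
# like a loop in code:
score = [("repeat", phrase, 100)]

def perform(score):
    """Expand repeat instructions into the literal sequence of bars."""
    bars = []
    for instruction, body, times in score:
        if instruction == "repeat":
            bars.extend(body * times)
    return bars

assert perform(score) == unrolled   # same music, far less writing
```

The repeat marker, like any loop, trades length for a little bookkeeping: the reader has to remember where the repeat started and how many passes remain.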
In my laptop band Revival Revival, we use Reason for all of our instrumental sounds and sample playback. The newest version has a handy color-coding feature in the sequencer, which makes it easy for me to keep track of which part of which song happens in which order. Having all the tunes under my eyes all the time has revealed new wisdom to my ears about symmetry and asymmetry, and isn’t that what music is all about?
The color-coding system started as a simple information-management technique, but it ended up improving my ears. Spending so much time looking at these colorfully abstracted representations of so many songs, I couldn’t help but notice some patterns. I’ve done enough tracks now that I can lay something out in the sequencer and know that it’ll basically work without having to listen to it first. Classical and jazz musicians get to the point where by glancing over a score, they can hear it quite clearly in the mind’s ear. The Reason sequencer has a much shorter path into the brain’s deep sense-data processing centers because it’s dynamic, animated, and responsive to my thoughts in real time.