Anna posed this question, and I think it’s an excellent one: What is up with “Let It Go” and little girls? Why is this song such a blockbuster among the pre-K set? How did it jump the gap from presentational to participatory music? Is it the movie, or the song itself? In case you never interact with pop culture or little kids, this is the tune in question:
I posted the question on Facebook, and my friends have so many good responses that I’m going to just paste them all in more or less verbatim below.
Last week I put together a new set of music theory videos.
These videos are aimed at participants in Play With Your Music who may want to start producing their own music or remixes and have no idea where to start. I’m presuming that the viewer has no formal background, no piano skills and no reading ability. This would seem to be an unpromising place to start making music, but there’s a surprising amount you can do just by fumbling around on a MIDI keyboard. Playing the white keys only gives you the seven modes of the C major scale, with seven very different emotional qualities. Playing the black keys only gives you the G♭ major and E♭ minor pentatonic scales. From there, you can effortlessly transpose your MIDI data into any key you want.
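The white-key idea is easy to see in MIDI note numbers: the seven modes are just rotations of the same seven-note collection, and the black keys form one pentatonic collection that reads as G♭ major or E♭ minor depending on which note you treat as home. A minimal sketch (the scale-degree math here is standard; the function names are my own):

```python
# The white keys, as semitones above C: C D E F G A B.
WHITE_KEYS = [0, 2, 4, 5, 7, 9, 11]
MODE_NAMES = ["Ionian", "Dorian", "Phrygian", "Lydian",
              "Mixolydian", "Aeolian", "Locrian"]

def mode(degree):
    """Interval pattern of the white-key mode starting on a given scale degree.

    Rotating the collection and re-zeroing it on the new root gives the
    mode's pattern of semitones above its root.
    """
    rotated = WHITE_KEYS[degree:] + [n + 12 for n in WHITE_KEYS[:degree]]
    root = rotated[0]
    return [n - root for n in rotated]

for name, degree in zip(MODE_NAMES, range(7)):
    print(f"{name:11} {mode(degree)}")

# The black keys alone, as semitones above C: Db Eb Gb Ab Bb.
# Read from Gb you get Gb major pentatonic; read from Eb, Eb minor pentatonic.
BLACK_KEYS = [1, 3, 6, 8, 10]

# Transposing MIDI data is just adding a constant offset to every note number,
# e.g. up a whole step:
def transpose(notes, semitones):
    return [n + semitones for n in notes]
```

Ionian comes out as `[0, 2, 4, 5, 7, 9, 11]` (the major scale) and Aeolian as `[0, 2, 3, 5, 7, 8, 10]` (natural minor), which is exactly the "same notes, different home base" point the videos make.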
My last post discussed how we should be deriving music theory from empirical, ethnomusicological observation of what people actually like. Another good strategy would be to derive music theory from observation of what’s going on between our ears. Daniel Shawcross Wilkerson has attempted just that in his essay, Harmony Explained: Progress Towards A Scientific Theory of Music. The essay has an endearingly old-timey subtitle:
The Major Scale, The Standard Chord Dictionary, and The Difference of Feeling Between The Major and Minor Triads Explained from the First Principles of Physics and Computation; The Theory of Helmholtz Shown To Be Incomplete and The Theory of Terhardt and Some Others Considered
Wilkerson begins with the observation that music theory books read like medical texts from the middle ages: “they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases.” We can do better.
Wilkerson proposes that we derive a theory of harmony from first principles drawn from our understanding of how the brain processes audio signals. We evolved to be able to detect sounds with natural harmonics, because those usually come from significant sources, like the throats of other animals. Musical harmony is our way of gratifying our harmonic-series detectors.
Update: a version of this post appeared on Slate.com.
I seem to have touched a nerve with my rant about the conventional teaching of music theory and how poorly it serves practicing musicians. I thought it would be a good idea to follow that up with some ideas for how to make music theory more useful and relevant. The goal of music theory should be to explain common practice music. I don’t mean “common practice” in its present pedagogical sense. I mean the musical practices that are most prevalent in a given time and place, like America in 2013. Rather than trying to identify a canonical body of works and a bounded set of rules defined by that canon, we should take an ethnomusicological approach. We should be asking: what is it that musicians are doing that sounds good? What patterns can we detect in the broad mass of music being made and enjoyed out there in the world?
I have my own set of ideas about what constitutes common practice music in America in 2013, but I also come with my set of biases and preferences. It would be better to have some hard data on what we all collectively think makes for valid music. Trevor de Clerq and David Temperley have bravely attempted to build just such a data set, at least within one specific area: the harmonic practices used in rock, as defined by Rolling Stone magazine’s list of the 500 Greatest Songs of All Time. Temperley and de Clerq transcribed the top 20 songs from each decade between 1950 and 2000. You can see the results in their paper, “A corpus analysis of rock harmony.” They also have a web site where you can download their raw data and analyze it yourself. The whole project is a masterpiece of descriptivist music theory, as opposed to the bad prescriptivist kind.
I am mercifully finished with music theory in grad school and couldn’t be happier about it. You may find this surprising. My blog is full of music theory. How could a guy who enjoys thinking about music in analytical terms as much as I do have such a wretched time in my graduate music theory classes? It wasn’t the work; I mostly breezed through that. No, it was the grinding Eurocentrism. Common-practice period classical music theory is fine and good, but in the hands of the music academy, it’s dry, tedious, and worst of all, largely useless. The strict rules of eighteenth-century European art music are distantly removed from the knowledge a person needs to do anything in the present-day music world (except, I guess, to be a professor of common-practice tonal theory).
The title of this post is a reference to the Susan Sontag essay, “Against Interpretation.” She argues that by ignoring the subjective sensual pleasures of art and instead looking for rigorously logical theories of its inner workings, academics are missing the point. She calls scholarly interpretation “the intellect’s revenge upon art.” I’m with her. Music theory as practiced at NYU and elsewhere is the intellectual’s revenge on music. Sontag’s punchline is right on: “[I]n place of a hermeneutics we need an erotics of art.” Speak it, sister!
The word is from Greek, “poly” meaning many and “phony” meaning voice. This is as opposed to monophony — one voice. Originally, polyphony literally meant multiple people singing together. Over the course of musical history, the term has become more abstracted, referring to multiple “voices” played on any instrument. And usually, polyphony means that the different voices are all playing/singing independent lines.
I’m currently working on a book chapter about the use of video games in music education. While doing my research, I came across a paper by Kylie Peppler, Michael Downton, Eric Lindsay, and Kenneth Hay, “The Nirvana Effect: Tapping Video Games to Mediate Music Learning and Interest.” It’s a study of the effectiveness of Rock Band in teaching traditional music skills. The most interesting part of the paper comes in its enthusiastic endorsement of Rock Band’s notation system.
The authors think that Rock Band and games like it do indeed have significant educational value, that there’s a “Nirvana effect” analogous to the so-called Mozart effect:
We argue that rhythmic videogames like Rock Band bear a good deal of resemblance to the ‘real thing’ and may even be more well-suited for encouraging novices to practice difficult passages, as well as learn musical material that is challenging to comprehend using more traditional means of instruction.
The most fun Music Technology class I’m taking this semester is Advanced Audio Production with Paul Geluso. A major component of the class is learning how to listen analytically, and to that end, we were assigned to pick a song and do an exhaustive study of its sonic qualities. We used methods from William Moylan’s book The Art of Recording: Understanding and Crafting the Mix. I chose “Tightrope” by Janelle Monáe featuring Big Boi.
I love music grad school and am finding it extremely valuable, except for one part: the music theory requirement. In order to get my degree, I have to attain mastery of Western tonal harmony of the common practice era. I am not happy about it. This requirement demands mastery of a lot of skills that are irrelevant to my life as a working musician, and leaves out many skills that I consider essential. Something needs to change.
Don’t get me wrong: I love studying music theory. I spent years studying it for my own gratification before ever even considering grad school. I’ve written a ton of blog posts about it, taught it for money, and talked about it to anyone who would listen. But the way that music theory is taught at NYU, and in most schools, is counterproductive.
Octaves are notes that you hear as being “the same” in spite of their being higher or lower in actual pitch. (Technically, notes separated by an octave are in the same pitch class.) Play middle C on the piano. Then go up the C major scale (the white keys) and the eighth note you play will be another C an octave higher. The “oct” part of the word refers to this eight step distance up the scale.
From a science perspective, octaves are pitch intervals related by factors of two. When a tuning fork plays standard concert A, it vibrates at 440 Hz. The A an octave higher is 880 Hz, and the A an octave lower is 220 Hz. Any note with a frequency of 2^n × 440 Hz (for an integer n) will be an A. It’s a central mystery of human cognition why we hear pitches related by powers of two as being “the same” note. The ability to detect octave equivalency is probably built into our brains, and it isn’t limited to humans. Rhesus monkeys have been shown to be able to detect octaves too, as have some other mammals.
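The powers-of-two relationship is simple enough to verify in a few lines. A quick sketch (the 440 Hz reference is the standard concert A; the function name is my own):

```python
A4 = 440.0  # concert A, in Hz

def a_frequency(octaves_from_a4):
    """Frequency of the A that is n octaves above (negative n: below) A4."""
    return A4 * 2 ** octaves_from_a4

# Every one of these is heard as "an A", despite spanning four octaves:
for n in range(-2, 3):
    print(f"A{4 + n}: {a_frequency(n):7.1f} Hz")
```

Running it prints 110, 220, 440, 880, and 1760 Hz: each step doubles the frequency, yet our ears file all five under the same pitch class.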
Original post on Quora