In this post, I’ll be doing some public-facing note-taking on Music As Social Life: The Politics Of Participation by Thomas Turino. I’m especially interested in chapter two: Participatory and Presentational Performance. We in America tend to place a high value on presentational music created by professionals, and a low value on participatory music made by amateurs. It’s useful to know that there are people in the world who take a different view.
Turino divides music into four big categories:
- Participatory music. Everyone present is actively doing something: playing an instrument, singing or chanting, and/or dancing. For example: a bluegrass jam, campfire singing, a hip-hop cypher.
- Presentational music. There’s a clear divide between the performers and the audience. Audience members might dance or sing along, but they are not the focus. For example: a classical, rock or jazz concert.
- High-fidelity recording. A document of a live performance (or a convincing illusion of one). For example: a classical or jazz album.
- Studio sound art. A recording that was constructed in the studio using techniques other than (or in addition to) people performing in real time. For example: a late Beatles album, or any pop song since 1980.
Turino devotes much of his attention to several examples of participatory music cultures, including Shona ceremonies in Zimbabwe and contra dancing in America.
The contra dancers might strike you as the odd ones out, but Turino sees more commonality between the musical experience of American contra dancers and participants in Shona rituals than he does between the contra dancers and the audience at, say, a bluegrass concert.
Last week I put together a new set of music theory videos.
These videos are aimed at participants in Play With Your Music, who may want to start producing their own music or remixes and have no idea where to start. I’m presuming that the viewer has no formal background, no piano skills and no ability to read music. This would seem to be an unpromising starting point for making music, but there’s a surprising amount you can do just by fumbling around on a MIDI keyboard. Playing only the white keys gives you the seven modes of the C major scale, with seven very different emotional qualities. Playing only the black keys gives you the G♭ major and E♭ minor pentatonic scales. From there, you can effortlessly transpose your MIDI data into any key you want.
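The white-key and black-key tricks are easy to sanity-check programmatically. Here’s a minimal Python sketch (the pitch numbers are standard MIDI note values; the function names are mine):

```python
# White keys of one octave starting at middle C, as MIDI note numbers.
WHITE_KEYS = [60, 62, 64, 65, 67, 69, 71]  # C D E F G A B

def mode(degree):
    """Interval pattern (semitones above the root) of the mode starting
    on the given white-key scale degree (0 = Ionian, 5 = Aeolian)."""
    notes = WHITE_KEYS[degree:] + [n + 12 for n in WHITE_KEYS[:degree]]
    root = notes[0]
    return [n - root for n in notes]

def transpose(notes, semitones):
    """Transposing MIDI data is just adding a constant number of semitones."""
    return [n + semitones for n in notes]

print(mode(0))  # Ionian (major): [0, 2, 4, 5, 7, 9, 11]
print(mode(5))  # Aeolian (natural minor): [0, 2, 3, 5, 7, 8, 10]
print(transpose(WHITE_KEYS, 2))  # the same scale shifted up a whole step, to D major
```

Starting the same seven notes on each successive degree yields the seven modes in turn, and the transposition trick works because MIDI represents pitch as plain integers.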
I have a strongly held belief about musical talent: there is no such thing. Every neurotypical human is born with the ability to learn music, the same way the vast majority of us are born with the ability to learn to walk and talk. We still have to do the learning, though; otherwise the capacity doesn’t develop itself. When we talk about “musical talent,” we’re really talking about the means, motive and opportunity to activate innate musicality. When we talk about “non-musicians,” we’re rarely talking about the Oliver Sacks cases with congenital amusia; usually we mean people who for whatever reason never had the chance to develop musically.
So what if almost everyone is a potential musician? Why should you care? Because participation in music, particularly in groups, is an essential emotional vitamin. We here in America are sorely deficient in this vitamin, and it shows in our stunted emotional growth. Steve Dillon calls music a “powerful weapon against depression.” We need to be nurturing musicality wherever it occurs as a matter of public health.
Quora user Marc Ettlinger recently sent me a paper by Sherri Novis-Livengood, Richard White, and Patrick C. M. Wong entitled Fractal complexity (1/f power law) determines the stability of music perception, emotion, and memory in a repeated exposure paradigm. (The paper isn’t on the open web, but here’s a poster-length version.) The authors think that fractals explain our music preferences. Specifically, they find that note durations, pitch intervals, phrase lengths and other quantifiable musical parameters tend to follow a power-law distribution. Power-law distributions have the nifty property of scale invariance, meaning that patterns in such entities resemble themselves at different scales. Music is full of fractals, and the more fractal-filled it is, the more we like it.
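Scale invariance is easy to see in the math. For a power-law density p(x) ∝ x^(−α), rescaling x by any factor c multiplies the density by the constant c^(−α), independent of x, so the distribution looks the same at every zoom level. A quick numerical illustration of that textbook property (α = 1 is the “1/f” case; this is not the paper’s actual analysis):

```python
def power_law(x, alpha=1.0):
    """Unnormalized power-law density p(x) ~ x**(-alpha)."""
    return x ** -alpha

# Rescaling x by c changes the density by the constant factor c**-alpha,
# no matter where on the x axis you look -- that constancy is scale invariance.
alpha, c = 1.0, 2.0
ratios = [power_law(c * x, alpha) / power_law(x, alpha) for x in (1.0, 10.0, 100.0)]
print(ratios)  # every ratio is c**-alpha = 0.5
```

Compare an exponential density, where the same rescaling gives a ratio that depends on x: exponentials have a characteristic scale, power laws don’t.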
My last post discussed how we should derive music theory from empirical, ethnomusicological observation of what people actually play and enjoy. Another good strategy would be to derive music theory from observation of what’s going on between our ears. Daniel Shawcross Wilkerson has attempted just that in his essay, Harmony Explained: Progress Towards A Scientific Theory of Music. The essay has an endearingly old-timey subtitle:
The Major Scale, The Standard Chord Dictionary, and The Difference of Feeling Between The Major and Minor Triads Explained from the First Principles of Physics and Computation; The Theory of Helmholtz Shown To Be Incomplete and The Theory of Terhardt and Some Others Considered
Wilkerson begins with the observation that music theory books read like medical texts from the middle ages: “they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases.” We can do better.
Wilkerson proposes that we derive a theory of harmony from first principles drawn from our understanding of how the brain processes audio signals. We evolved to be able to detect sounds with natural harmonics, because those usually come from significant sources, like the throats of other animals. Musical harmony is our way of gratifying our harmonic-series detectors.
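The physics behind this argument fits in a few lines of arithmetic: the major triad’s frequency ratios sit low in the harmonic series of a single fundamental, which (on Wilkerson’s account) is why the chord registers to our ears as one coherent sound source. A minimal sketch, assuming nothing beyond the definition of the harmonic series:

```python
def harmonics(fundamental, n=6):
    """First n harmonics of a fundamental frequency, in Hz."""
    return [fundamental * k for k in range(1, n + 1)]

# The 4th, 5th, and 6th harmonics of any fundamental form a major triad
# in the ratio 4:5:6 -- for example, above 110 Hz (the A below middle C):
h = harmonics(110.0)
print(h[3], h[4], h[5])  # 440.0 550.0 660.0 -- A, C#, E: an A major triad, in just intonation
```

Every pitched sound with natural harmonics already “contains” a major triad, which is the kind of first-principles grounding Wilkerson wants in place of the funny symbols and Latin phrases.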
Update: a version of this post appeared on Slate.com.
I seem to have touched a nerve with my rant about the conventional teaching of music theory and how poorly it serves practicing musicians. I thought it would be a good idea to follow that up with some ideas for how to make music theory more useful and relevant. The goal of music theory should be to explain common practice music. I don’t mean “common practice” in its present pedagogical sense. I mean the musical practices that are most prevalent in a given time and place, like America in 2013. Rather than trying to identify a canonical body of works and a bounded set of rules defined by that canon, we should take an ethnomusicological approach. We should be asking: what is it that musicians are doing that sounds good? What patterns can we detect in the broad mass of music being made and enjoyed out there in the world?
I have my own set of ideas about what constitutes common practice music in America in 2013, but I also come with my set of biases and preferences. It would be better to have some hard data on what we all collectively think makes for valid music. Trevor de Clercq and David Temperley have bravely attempted to build just such a data set, at least within one specific area: the harmonic practices used in rock, as defined by Rolling Stone magazine’s list of the 500 Greatest Songs of All Time. Temperley and de Clercq transcribed the top 20 songs from each decade between 1950 and 2000. You can see the results in their paper, “A corpus analysis of rock harmony.” They also have a web site where you can download their raw data and analyze it yourself. The whole project is a masterpiece of descriptivist music theory, as opposed to the bad prescriptivist kind.
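The spirit of the project — tally what musicians actually do rather than prescribe what they should — fits in a few lines. A toy sketch (the mini-corpus here is invented for illustration; the real data set uses a much richer Roman-numeral notation):

```python
from collections import Counter

# Toy "corpus": Roman-numeral chord transcriptions of a few imaginary songs.
corpus = [
    ["I", "IV", "V", "I"],
    ["I", "bVII", "IV", "I"],
    ["I", "V", "vi", "IV"],
]

# Descriptive theory starts with a count: which chords actually occur,
# and how often, across the whole corpus?
counts = Counter(chord for song in corpus for chord in song)
print(counts.most_common(3))  # [('I', 5), ('IV', 3), ('V', 2)]
```

Run over five hundred real transcriptions instead of three toy ones, and counts like these become empirical claims about what rock harmony is, not opinions about what it should be.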
I’ve undergone some evolution in my thinking about the intended audience for my thesis app. My original idea was to aim it at the general public. But the general public is maybe not quite so obsessed with breakbeats as I am. Then I started working with Alex Ruthmann, and he got me thinking about the education market. There are certainly a lot of kids in the schools with iPads, so that’s an attractive idea. But hip-hop and techno are a tough sell for traditionally-minded music teachers. I realized that I’d find a much more receptive audience in math teachers. I’ve been thinking about the relationship between music and math for a long time, and it would be cool to put some of those ideas into practice.
The design I’ve been using for the Drum Loop UI poses some problems for math usage. Since early on, I’ve had it so that the centers of the cells line up with the cardinal angles. However, if you’re going to measure angles and things, the grid lines really need to be on the cardinal angles instead. Here’s the math-friendly design:
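The difference between the two layouts is a half-cell angular offset. Assuming a 16-step loop (the step count and function names here are mine, for illustration), each cell spans 360/16 = 22.5°; in the original design the cell centers fall on the cardinal angles (0°, 90°, 180°, 270°), while in the math-friendly design the grid lines do:

```python
STEPS = 16
CELL = 360 / STEPS  # 22.5 degrees per cell

def boundary_grid_on_cardinals(step):
    """Math-friendly design: cell boundaries land on the cardinal angles."""
    return step * CELL

def boundary_centers_on_cardinals(step):
    """Original design: boundaries shifted back half a cell, so cell
    centers (not grid lines) land on the cardinal angles."""
    return step * CELL - CELL / 2

# In the math-friendly design, every fourth grid line is a cardinal angle:
print([boundary_grid_on_cardinals(s) for s in (0, 4, 8, 12)])  # [0.0, 90.0, 180.0, 270.0]
```

With grid lines on the cardinals, a protractor-style angle measurement reads directly off the sequencer, which is the whole point for the math classroom.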
Brennan, K. (2013). Best of Both Worlds: Issues of Structure and Agency in Computational Creation, In and Out of School. Doctoral Dissertation, Massachusetts Institute of Technology.
I had the very good fortune to attend a fancy elementary school run on solid constructivist principles. In sixth grade I got to experience the “hard fun” of Sprite Logo. Similarly fortunate kids today are learning Logo’s great-grandchild, Scratch.
Karen Brennan’s doctoral dissertation looks at the ways people teach and learn Scratch, and asks how the study of programming can help or hinder kids’ agency in their own learning. Agency, in this sense, refers to your ability to define and pursue learning goals, so you can play a part in your self-development, adaptation, and self-renewal. This is interesting to me, because every single argument Brennan makes about the teaching of programming applies equally well to the teaching of music.