Reflections on the MOOC

This week marks the conclusion of the first iteration of Play With Your Music, the music production MOOC I’ve been contributing to this past semester.

Play With Your Music

Creating and running the MOOC has been a learning experience for everybody involved. It certainly has been for me. I do most of my music teaching one-on-one, and it’s been weird creating materials for a couple of thousand students I never see at all. (Though I guess that’s sort of what I’m doing on this blog.) My colleagues have been keeping close tabs on the community of participants, but my personal interaction has been limited because the course coincided with crunch time for my thesis. So this post will be less about the students, and more about the teachers.

Continue reading “Reflections on the MOOC”

Thesis presentation

Here’s the presentation I’ll be giving of my master’s thesis next week. Enjoy.

My master’s thesis is done

Yaaaaay

You can read it here on the blog or as a PDF.

Can science make a better music theory?

My last post discussed how we should derive music theory from empirical observation of what people actually like, using the tools of ethnomusicology. Another good strategy would be to derive music theory from observation of what’s going on between our ears. Daniel Shawcross Wilkerson has attempted just that in his essay, Harmony Explained: Progress Towards A Scientific Theory of Music. The essay has an endearingly old-timey subtitle:

The Major Scale, The Standard Chord Dictionary, and The Difference of Feeling Between The Major and Minor Triads Explained from the First Principles of Physics and Computation; The Theory of Helmholtz Shown To Be Incomplete and The Theory of Terhardt and Some Others Considered

Wilkerson begins with the observation that music theory books read like medical texts from the Middle Ages: “they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases.” We can do better.

Standing waves on a string

Wilkerson proposes that we derive a theory of harmony from first principles drawn from our understanding of how the brain processes audio signals. We evolved to be able to detect sounds with natural harmonics, because those usually come from significant sources, like the throats of other animals. Musical harmony is our way of gratifying our harmonic-series detectors.
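To make that concrete, here’s a rough sketch of my own (an illustration, not code from Wilkerson’s essay): it lists the first eight harmonics of a 110 Hz fundamental along with the nearest equal-tempered pitch names. The notes of the major triad show up almost immediately, which is exactly the kind of observation the essay builds its account of harmony on.

# First eight harmonics of A = 110 Hz and the nearest equal-tempered pitches.
# (My own illustrative sketch, not code from the essay.)
import math

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_pitch(freq, ref=110.0):
    """Name the equal-tempered pitch closest to freq, measured from A = 110 Hz."""
    semitones = 12 * math.log2(freq / ref)
    return NOTE_NAMES[round(semitones) % 12]

for n in range(1, 9):
    harmonic = n * 110.0
    print(f"harmonic {n}: {harmonic:6.1f} Hz ~ {nearest_pitch(harmonic)}")

The printout reads A, A, E, A, C-sharp, E, G, A: an A major triad (plus the flat seventh) hiding inside a single vibrating string.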

Continue reading “Can science make a better music theory?”

Toward a better music theory

Update: a version of this post appeared on Slate.com.

I seem to have touched a nerve with my rant about the conventional teaching of music theory and how poorly it serves practicing musicians. I thought it would be a good idea to follow that up with some ideas for how to make music theory more useful and relevant.

The goal of music theory should be to explain common practice music. I don’t mean “common practice” in its present pedagogical sense. I mean the musical practices that are most prevalent in a given time and place, like America in 2013. Rather than trying to identify a canonical body of works and a bounded set of rules defined by that canon, we should take an ethnomusicological approach. We should be asking: what is it that musicians are doing that sounds good? What patterns can we detect in the broad mass of music being made and enjoyed out there in the world?

I have my own set of ideas about what constitutes common practice music in America in 2013, but I also come with my own set of biases and preferences. It would be better to have some hard data on what we all collectively think makes for valid music. Trevor de Clercq and David Temperley have bravely attempted to build just such a data set, at least within one specific area: the harmonic practices used in rock, as defined by Rolling Stone magazine’s list of the 500 Greatest Songs of All Time. Temperley and de Clercq transcribed the top 20 songs from each decade between 1950 and 2000. You can see the results in their paper, “A corpus analysis of rock harmony.” They also have a web site where you can download their raw data and analyze it yourself. The whole project is a masterpiece of descriptivist music theory, as opposed to the bad prescriptivist kind.
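If you want a feel for what this kind of descriptivist, data-driven theory looks like in practice, here’s a toy sketch. The chord notation and the mini-corpus below are made up for illustration; the real de Clercq/Temperley files use their own encoding, documented on their site.

# Tally chord frequencies across a (made-up) corpus of Roman-numeral analyses.
from collections import Counter

corpus = {
    "song_a": ["I", "V", "vi", "IV", "I", "V", "vi", "IV"],
    "song_b": ["I", "bVII", "IV", "I", "I", "bVII", "IV", "I"],
    "song_c": ["i", "bVII", "bVI", "bVII", "i", "bVII", "bVI", "V"],
}

counts = Counter(chord for song in corpus.values() for chord in song)
total = sum(counts.values())
for chord, n in counts.most_common():
    print(f"{chord:>5}: {n:2d} ({n / total:.0%})")

Run the same tally over the real transcriptions instead of three fake songs and you have the beginnings of an empirical harmony textbook.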

Jimi Hendrix, common practice musician

Continue reading “Toward a better music theory”

My collection of transcribed rhythm patterns

For my thesis, I’ve been gathering good drum machine patterns: classic breakbeats, genre templates, and Afro-Cuban rhythms. Here they are; enjoy.
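If you’d rather poke at these patterns in code than in a drum machine, a pattern maps neatly onto a sixteen-step grid. The one below is a generic rock backbeat for illustration, not one of the transcriptions in my collection.

# A drum pattern as a 16-step grid: one list per instrument, 1 = hit, 0 = rest.
# (Generic rock backbeat, for illustration only.)
PATTERN = {
    "hi-hat": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],  # eighth notes
    "snare":  [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0],  # beats two and four
    "kick":   [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0],  # one, three, and the "and" of three
}

def print_grid(pattern):
    """Print the pattern as an x/. grid, the way drum machine manuals do."""
    for name, steps in pattern.items():
        row = "".join("x" if hit else "." for hit in steps)
        print(f"{name:>7} |{row}|")

print_grid(PATTERN)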

Classic breakbeats

Analyzing the musical structure of “Sledgehammer” by Peter Gabriel

We’re asking participants in Play With Your Music to create musical structure graphs of their favorite songs. These are diagrams showing the different sections of the song and where its component sounds enter and exit. In order to create these graphs, you have to listen to the song deeply and analytically, probably many times. It’s excellent ear training for the aspiring producer or songwriter. This post will talk you through a structure graph of “Sledgehammer” by Peter Gabriel, co-produced by Gabriel and Daniel Lanois.
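If it helps to think of a structure graph as data rather than a picture, here’s a minimal sketch. The sections and sound names below are a simplified stand-in, not my full “Sledgehammer” graph; the point is that once you list which sounds are active in each section, the entrances and exits fall right out.

# A structure graph as data: for each section, the set of sounds that are playing.
# (Simplified stand-in, not the full graph of the song.)
structure = [
    ("intro",    {"flute"}),
    ("verse 1",  {"drums", "bass", "keys", "lead vocal"}),
    ("chorus 1", {"drums", "bass", "keys", "lead vocal", "horns"}),
    ("verse 2",  {"drums", "bass", "keys", "lead vocal", "horns"}),
]

previous = set()
for name, sounds in structure:
    entering = sorted(sounds - previous)
    exiting = sorted(previous - sounds)
    print(f"{name:>9}: enters {entering or '-'}, exits {exiting or '-'}")
    previous = sounds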

Here is the video version of my analysis:

Below is the musical structure graph. Click the image below to see it bigger, and with popup comments.

"Sledgehammer" structure graph

Here’s the perceived space graph:

"Sledgehammer" perceived space

And here’s a chart of the chord progression.

Continue reading “Analyzing the musical structure of ‘Sledgehammer’ by Peter Gabriel”

Teaching audio and MIDI editing in the MOOC

This is the fifth in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first, second, third and fourth posts.

Soundation uses the same basic interface paradigm as other audio recording and editing programs like Pro Tools and Logic. Your song consists of a list of tracks, each of which can contain a particular sound. The tracks all play back at the same time, so you can use them to blend together sounds as you see fit. You can either record your own sounds, or use the loops included in Soundation, or both. The image below shows six tracks. The first two contain loops of audio; the other four contain MIDI, which I’ll explain later in the post.

Audio and MIDI tracks in Soundation
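In case you’re wondering what “the tracks all play back at the same time” means under the hood: blending tracks ultimately comes down to adding their sample values together, scaled by each track’s level. Here’s a toy sketch of that idea, obviously not Soundation’s actual implementation.

# Mixing as summing: add each track's samples, scaled by its gain.
def mix(tracks, gains):
    """Sum several lists of audio samples, each scaled by a gain (0.0-1.0)."""
    length = max(len(t) for t in tracks)
    out = [0.0] * length
    for track, gain in zip(tracks, gains):
        for i, sample in enumerate(track):
            out[i] += gain * sample
    return out

drums = [0.9, -0.8, 0.7, -0.6]   # a few samples of a "drum" track
bass  = [0.3,  0.3, 0.2,  0.2]   # a few samples of a "bass" track
print(mix([drums, bass], gains=[0.8, 0.5]))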

Continue reading “Teaching audio and MIDI editing in the MOOC”

Teaching expressive use of audio effects in the MOOC

This is the fourth in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first, second and third posts.

After PWYM participants have tried mixing using just levels and panning, the next step is to bring in audio effects for further sound manipulation. As a painless introduction, you can load any track from SoundCloud into our own miniature web-based effects unit, #PWYM Live Effects. Then it’s time to open up some dry stems in Soundation. In addition to setting levels and panning, you can now apply Soundation’s effects creatively. These include:

Filter

Both low-pass and high-pass filters are available, which block high and low frequencies, respectively. Why would you want to do such a thing? There are practical and expressive reasons. The practical one is to keep sounds from fighting each other in the mix. So, for example, my electric guitar has a very bass-heavy sound. If there’s a bassist on the track along with me, together we’re going to sound like mud. By applying a high-pass filter to my guitar, I can stay out of the bassist’s way and still get across most of the information in my sound. Similarly, I’d want to low-pass the bass so it stays out of my guitar’s range.

A low-pass filter
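If you’re curious what a filter actually does to the signal, here’s a textbook-style sketch of one-pole low-pass and high-pass filters. It has nothing to do with Soundation’s particular filter design; it just shows how “blocking” the highs or the lows works at the level of individual samples.

# Minimal one-pole filters. cutoff is in Hz, sample_rate in samples per second.
import math

def low_pass(samples, cutoff, sample_rate):
    """Smooth out fast changes, keeping the low frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

def high_pass(samples, cutoff, sample_rate):
    """Strip away slow changes, keeping the high frequencies."""
    rc = 1.0 / (2 * math.pi * cutoff)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, y, prev_x = [], 0.0, 0.0
    for x in samples:
        y = alpha * (y + x - prev_x)
        prev_x = x
        out.append(y)
    return out

# High-passing my bass-heavy guitar at around 120 Hz would look something like:
# guitar_cleaned = high_pass(guitar_samples, cutoff=120, sample_rate=44100)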

Continue reading “Teaching expressive use of audio effects in the MOOC”

Teaching mixing in a MOOC

This is the third in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first and second posts.

So, you’ve learned how to listen closely and analytically. The next step is to get your hands on some multitrack stems and do mixes of your own. Participants in PWYM do a “convergent mix” — you’re given a set of separated instrumental and vocal tracks, and you need to mix them so they match the given finished product. PWYM folks work with stems of “Air Traffic Control” by Clara Berry, using our cool in-browser mixing board. The beauty of the browser mixer is that the fader settings get automatically inserted into the URL, so once you’re done, anyone else can hear your mix by opening that URL in their own browser.
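The mix-in-a-URL idea is simple enough to sketch in a few lines of code. The query-string format below is made up for illustration; it isn’t the actual scheme the PWYM mixer uses.

# Encode fader settings into a shareable URL and read them back out.
# (Hypothetical URL format, not the real PWYM mixer's scheme.)
from urllib.parse import urlencode, parse_qs, urlparse

def mix_to_url(base_url, faders):
    """Turn a dict of track name -> fader level (0.0-1.0) into a URL."""
    return base_url + "?" + urlencode({name: f"{level:.2f}" for name, level in faders.items()})

def url_to_mix(url):
    """Recover the fader settings from a mix URL."""
    query = parse_qs(urlparse(url).query)
    return {name: float(values[0]) for name, values in query.items()}

url = mix_to_url("https://example.com/mixer", {"drums": 0.8, "bass": 0.7, "vocal": 1.0})
print(url)              # https://example.com/mixer?drums=0.80&bass=0.70&vocal=1.00
print(url_to_mix(url))  # {'drums': 0.8, 'bass': 0.7, 'vocal': 1.0}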

Mixing desk

Continue reading “Teaching mixing in a MOOC”