Teaching mixing in a MOOC

This is the third in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first and second posts.

So, you’ve learned how to listen closely and analytically. The next step is to get your hands on some multitrack stems and do mixes of your own. Participants in PWYM do a “convergent mix” — you’re given a set of separated instrumental and vocal tracks, and you need to mix them so they match the given finished product. PWYM folks work with stems of “Air Traffic Control” by Clara Berry, using our cool in-browser mixing board. The beauty of the browser mixer is that the fader settings get automatically inserted into the URL, so once you’re done, anyone else can hear your mix by opening that URL in their own browser.
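The idea of stuffing mix state into a shareable URL is simple enough to sketch. Here's a minimal illustration of how fader levels could be round-tripped through a query string — the parameter names and base URL are hypothetical, not PWYM's actual scheme:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Hypothetical fader levels (0.0 to 1.0) for each stem; the names are
# illustrative, not the mixer's real parameters.
levels = {"vocals": 0.8, "drums": 0.65, "bass": 0.7, "guitar": 0.5}

def mix_to_url(base, levels):
    """Serialize fader settings into a shareable URL query string."""
    return base + "?" + urlencode(levels)

def url_to_mix(url):
    """Recover fader settings from a shared URL."""
    qs = parse_qs(urlparse(url).query)
    return {name: float(vals[0]) for name, vals in qs.items()}

url = mix_to_url("https://example.com/mixer", levels)
```

Anyone who opens that URL gets the same fader positions, so the mix itself never needs to be uploaded — only the settings travel.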

Once you’ve done a convergent mix, the next step is to try something more creative: a divergent mix, one that’s different from the original. This time the point is to set levels in a way that suits your own interpretation of how the track should sound. At this stage, PWYM folks also begin working with panning and effects like reverb, EQ and compression. The goal is to see how much you can alter the sonic and emotional qualities of the song just by changing levels and surface timbres, without changing the musical content at all. For this exercise, participants use the amazing in-browser DAW Soundation.

What difference does mixing make? A lot of American pop and rock genres are distinguished more by their mixing styles than their harmonic and rhythmic content. On paper, one rock or pop song is much like another. But they come out of the speakers sounding quite diverse. Vocals might be up front (singer-songwriters) or buried under the guitars (most hard rock). The snare might be the dominant percussive element (metal) or the kick might be (funk, disco and hip-hop). Keyboards might be a faintly perceptible background to the guitars (hard rock again) or vice versa (pop and new wave). The sounds might be crisp and precise (mainstream rock, pop, hip-hop and EDM) or fuzzy and lo-fi (punk, indie rock, underground hip-hop). The bass might be strong and assertive (pop, hip-hop) or inaudible (metal). The main spatial effect might be reverb (before 1980) or delay (after 1980). These distinctions mostly act on you unconsciously, but quite powerfully. In high school, my friends argued endlessly about the merits of different rock styles that seemed to differ solely on the basis of the guitar sounds.

Basic mixing entails setting levels, panning and effects, and leaving them. In more advanced mixing, these settings might change over the course of the song. A prosaic example is to have a track’s level come up or down during different sections of the song. (Nearly all pop songs use this technique to keep the parts sounding even.) A guitar part might be clean behind the vocals and then distorted during the solo. Many songs begin by fading in and end by fading out. Particular voices or instruments can fade in and out independently as well.
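Under the hood, a fade is nothing more than a gain envelope multiplied against the signal. A minimal sketch, assuming mono audio as a list of float samples:

```python
def apply_fades(samples, sample_rate, fade_in_s=1.0, fade_out_s=1.0):
    """Apply linear fade-in and fade-out gain ramps to a mono signal."""
    n = len(samples)
    n_in = min(int(fade_in_s * sample_rate), n)
    n_out = min(int(fade_out_s * sample_rate), n)
    out = list(samples)
    for i in range(n_in):            # ramp gain from 0 up to 1
        out[i] *= i / n_in
    for i in range(n_out):           # ramp gain from 1 down to 0, from the end
        out[n - 1 - i] *= i / n_out
    return out
```

A DAW's automation lanes generalize exactly this: any parameter (level, pan, send amount) becomes a function of time that scales or steers the signal.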

While automation has historically been subtle, current pop, dance and hip-hop use it as an active element of the musical foreground. In place of drum fills, producers will have the rhythm track simply drop out unexpectedly for the last beat or bar of a section. Hip-hop producers will also sometimes mute the downbeat of a drum part at the beginning of a section. Entire genres of dance music are based on the idea of a synth playing a very simple and repetitive loop while the filter’s cutoff frequency and resonance sweep slowly in and out. A classic example is “Little Fluffy Clouds” by The Orb — listen to the synth bass beginning at 0:24.

One of my favorite examples of expressive automation is “Love Lockdown” by Kanye West. Thirty seconds in, he suddenly adds guitar-style distortion to his vocal, but just for half a line. It’s a startling effect. Later in the song, he sings a “guitar solo” with the distortion, but without the Auto-Tune.

An even more dramatic piece of automation comes at two minutes into another Kanye track, “No Church In The Wild,” with Jay-Z and Frank Ocean.

Under Frank Ocean’s verse, the entire backing track gets swallowed by an extreme low-pass filter (a filter that removes higher frequencies, only allowing the lower frequencies through). The effect is like being plunged underwater. As the filter gradually releases, it feels like the track emerges from the water and rushes up to meet you. Very hip.
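For the curious, that underwater-and-back effect can be approximated with a one-pole low-pass filter whose cutoff frequency sweeps upward over the course of the clip. This is a simplified sketch of the general technique, not the actual production chain used on the record:

```python
import math

def one_pole_lowpass_sweep(samples, sample_rate, cutoff_start, cutoff_end):
    """One-pole low-pass filter whose cutoff sweeps linearly across the
    clip. A low starting cutoff muffles the signal ("underwater"); as the
    cutoff rises, the high frequencies gradually return."""
    out = []
    y = 0.0
    n = len(samples)
    for i, x in enumerate(samples):
        # Linearly interpolate the cutoff across the clip.
        cutoff = cutoff_start + (cutoff_end - cutoff_start) * i / max(n - 1, 1)
        # Standard one-pole smoothing coefficient for this cutoff.
        a = math.exp(-2.0 * math.pi * cutoff / sample_rate)
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out
```

Real productions typically use steeper (resonant) filters, but even this single pole produces the characteristic sensation of the track rushing back up to full brightness.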

The next post discusses the expressive use of effects in more detail.