Sound On Sound ran this highly detailed account of mixing the inescapable summer jam of 2012. It’s the most thorough explanation of how a contemporary pop song gets mixed that I’ve ever read.
I’m interested in this article not so much for the specifics of the gear and the plugins, but rather out of sheer awe at the complexity and nuance of the track’s soundscape. My cadre of pop-oriented music academics likes to say that the creativity in recordings lies not so much in their melodies and chords as in their timbre and space. “Call Me Maybe” is an excellent case in point. Its melody and chords are fun, but not exactly groundbreaking. Yet the track leaps out of the speakers at you, demanding your attention, managing both to pound you with sonic force and to intrigue you with quiet detail. Whether you want your attention grabbed in this way is a matter of taste. I happen to love the song, but even if it isn’t your cup of tea, the craft behind it bears some thinking about.
I recently began my second semester of teaching Music Technology 101 at Montclair State University. In a perfect world, I’d follow Mike Medvinsky’s lead and dive straight into creative music-making on day one. However, there are logistical reasons to save that for day two. Instead, I started the class with a listening party, a kind of electronic popular music tasting menu. I kicked things off with “Umbrella” by Rihanna.
I chose this song because of its main drum loop, which is a factory sound that comes with GarageBand called Vintage Funk Kit 03: slow it down to 90 bpm and you’ll hear it. The first several class projects use GarageBand, and I like the students to feel empowered to create real music in the class, not just to perform academic exercises.
My students at NYU and Montclair State are beginning to venture into producing their own tracks. There are two challenges facing them: a small one and a big one. The small challenge is learning the tools: remembering where the menus are and which key you hold down to turn the mouse pointer into a pencil, learning to conceive of notes and beats as rectangles on the piano roll, troubleshooting when you play notes on the MIDI keyboard and no sound comes out. The big challenge is option paralysis. Even a lightweight tool like GarageBand comes with a staggeringly large collection of software instruments, loops and effects, even before you start dealing with recording your own sounds. Where do you even begin?
The solution I’m using with my classes is the shared-sample project. Students are challenged to build a track out of a particular sound, or set of sounds. The easy version requires that they use the given sound, along with any additional sounds they see fit to include. The hard version, and for me the really interesting one, requires that they use the given sound(s) and absolutely nothing else. I was inspired in creating these assignments by the many Disquiet Junto shared-sample projects I’ve had the pleasure of participating in. I’m trying out my own project ideas on MSU advanced audio production independent study students Dan Bui and Matt Skouras, and will soon be giving shared-sample projects to my beginner-level classes as well.
The first assignment I gave Dan and Matt was to use eight GarageBand factory loops to build a track. They were free to do whatever processing they wanted, but they could not use other sounds. Also, they only had an hour to put their tracks together. Here are the loops:
Right now I’m teaching music technology to a lot of classical musicians. I came up outside the classical pipeline, and am always surprised to be reminded how insulated these folks are from the rest of the culture. I was asked today for some electronic music recommendations by a guy who basically never listens to any of it, and I expect I’ll be asked that many more times in this job. So I put together this playlist. It’s not a complete, thorough, or representative sampling of anything; it mostly reflects my own tastes. In more or less chronological order:
This lady did cooler stuff with tape recorders than most of us are doing with computers. See her in action. Here’s a proto-techno beat she made in 1971.
In my first post in this series, I briefly touched on the problem of option paralysis facing all electronic musicians, especially the ones who are just getting started. In this post, I’ll talk more about pedagogical strategies for keeping beginners from being overwhelmed by the infinite possibilities of sampling and synthesis.
This is part of a larger argument for why Ableton Live and software like it really needs a pedagogy specifically devoted to it. The folks at Ableton document their software extremely well, but their materials presume familiarity with their own musical culture. Most people aren’t already experimental techno producers. They need to be taught the musical values, conventions and creative approaches that Ableton Live is designed around. They also need some help in selecting raw musical materials. We music teachers can help, by putting tools like Ableton into musical context, and by curating finitely bounded sets of sounds to work with. Doing so will lower barriers to entry, which means happier users (and better sales for Ableton).
My music-making life has revolved heavily around Ableton Live for the past few years, and now the same thing is happening to my music-teaching life. I’m teaching Live at NYU’s IMPACT program this summer, and am going to find ways to work it into my future classes as well. My larger ambition is to develop an all-around electronic music composition/improvisation/performance curriculum centered around Live.
While the people at Ableton have done a wonderful job documenting their software, they mostly presume that users already know what they want to accomplish and just don’t know how to get there. But my experience of beginner Ableton users (and newbie producers generally) is that they don’t even know what the possibilities are, what the workflow looks like, or how to get a foothold. My goal is to fill that vacuum, and I’ll be documenting the process extensively here on the blog.
You hear musicians talk all the time about groove. You might wonder what they mean by that. A lot of musicians couldn’t explain exactly, beyond “the thing that makes music sound good.” The term comes from vinyl records: musicians ride the groove the way a phonograph needle physically rides the groove in the vinyl.
But what is groove, exactly? It isn’t just a matter of everyone playing with accurate rhythm. When a classical musician executes a passage flawlessly, you don’t usually talk about their groove. Meanwhile, it’s possible for loosely executed music to have a groove to it. Most of my musician friends talk about groove as a feeling, a vibe, an ineffable emotional quality, and they’re right. But groove is something tangible, too, and even quantifiable.
Using digital audio production software, you can learn to understand the most mystical aspects of music in concrete terms. I’ve written previously about how electronic music quantifies the elusive concept of swing. Music software can similarly help you understand the even more elusive concept of groove. In music software, “groove” means something specific and technical: the degree to which a rhythm deviates from the straight metronomic grid.
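To make that definition concrete, here’s a minimal sketch of the arithmetic a DAW does when it extracts a groove: for each note onset, find the nearest position on the straight metronomic grid and measure how far the onset lands from it. The function name, the example hi-hat pattern, and the 35-millisecond lateness are all hypothetical illustration values, not taken from any particular piece of software.

```python
def groove_deviations(onset_times, bpm, subdivisions_per_beat=4):
    """Return each onset's signed offset (in ms) from the nearest grid line.

    onset_times are in seconds; subdivisions_per_beat=4 means a 16th-note grid.
    """
    beat_len = 60.0 / bpm                        # seconds per beat
    grid = beat_len / subdivisions_per_beat      # seconds per grid step
    deviations = []
    for t in onset_times:
        nearest = round(t / grid) * grid         # closest grid position
        deviations.append((t - nearest) * 1000)  # signed offset in ms
    return deviations

# Example: a hi-hat pattern at 120 bpm (16th grid = 125 ms) where every
# off-beat 16th lands 35 ms late -- a simple swung feel.
hats = [0.0, 0.16, 0.25, 0.41, 0.5, 0.66, 0.75, 0.91]
print(groove_deviations(hats, bpm=120))
```

A perfectly quantized performance returns all zeros; a groove, in this technical sense, is precisely the pattern of nonzero offsets, which software like Ableton Live can capture from one recording and apply to another.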
Later this week I’m doing a teaching demo for a music technology professor job. The students are classical music types who don’t have a lot of music tech background, and the task is to blow their minds. I’m told that a lot of them are singers working on Verdi’s Requiem. My plan, then, is to walk the class through the process of remixing a section of the Requiem with Ableton Live. This post is basically the script for my lecture.
I participate in Marc Weidenbaum’s Disquiet Junto whenever I have the time and the brain space. Once a week, he sends out an assignment, and you have a few days to produce a new piece of music to fit. Marc asks that you discuss your process in the track descriptions on SoundCloud, and I’m always happy to oblige. But my descriptions are usually terse. This week I thought I’d dive deep and document the whole process from soup to nuts, with screencaps and everything.
Here’s this week’s assignment, which is simpler than usual:
Please answer the following question by making an original recording: “What is the room tone of the Internet?” The length of your recording should be two minutes.
I just completed a batch of new music, which was improvised freely in the studio and then later shaped into structured tracks.
I thought it would be helpful to document the process behind this music, for a couple of reasons. First of all, I expect to be teaching this kind of production a lot more in the future. Second, knowing how the tracks were made might be helpful to you in enjoying them. Third, composing the music during or after recording rather than before has become the dominant pop production method, and I want to help my fellow highbrow musicians to get hip to it.