This month I’ve been teaching music production and composition as part of NYU’s IMPACT program. A participant named Michelle asked me to critique some of her original compositions. I immediately said yes, and then immediately wondered how I was actually going to do it. I always want to evaluate music on its own terms, and to do that, I need to know what the terms are. I barely know Michelle. I’ve heard her play a little classical piano and know that she’s quite good, but beyond that, I don’t know her musical culture or intentions or style. Furthermore, she’s from China, and her English is limited.
I asked Michelle to email me audio files, and also MIDI files if she had them. Then I had an epiphany: I could just remix her MIDIs, and give my critique totally non-verbally.
I’m working on a long paper right now with my colleague at Montclair State University, Adam Bell. The premise is this: In the past, metaphors came from hardware, which software emulated. In the future, metaphors will come from software, which hardware will emulate.
The first generation of digital audio workstations has taken its metaphors from multitrack tape, the mixing desk, keyboards, analog synths, printed scores, and so on. Even the purely digital audio waveforms and MIDI clips behave like segments of tape. Sometimes the metaphors are graphically abstracted, as they are in Pro Tools. Sometimes the graphics are more literal, as in Logic. Propellerhead Reason is the most skeuomorphic software of them all. This image from the Propellerhead web site makes the designers' intent crystal clear; the original analog synths dominate the image.
In Ableton Live, by contrast, hardware follows software. The metaphor behind Ableton’s Session View is a spreadsheet. Many of the instruments and effects have no hardware predecessor.
For the benefit of Play With Your Music participants and anyone else we end up teaching basic audio production to, MusEDLab intern Robin Chakrabarti and I created this video on recording audio in less-than-ideal environments.
This video is itself quite a DIY production, shot and edited in less than twenty-four hours, with minimal discussion beforehand and zero rehearsal.
Most Americans who study music formally do so using common-practice era western tonal theory. Tonal theory is very useful in understanding the music of the European aristocracy in the eighteenth and early nineteenth centuries, and the music derived from it. Tonal theory is not, however, very useful for understanding the blues, or any of the music that derives from it.
The blues is based around a set of harmonic expectations that are quite different from the classical ones. Major and minor tonality are freely intermingled. Dominant seventh chords can function as tonics. Tritones may or may not resolve. The blues scale is as basic in this context as the major scale is in tonal harmony. We need some new and better vocabulary. In this post, I propose that we teach blues tonality as a distinct category from major or minor, combining elements of both with elements not found in either.
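To make the "dominant sevenths can function as tonics" point concrete, here's a quick sketch in Python (my own illustration, not from any particular curriculum) of a standard 12-bar blues in C. The I, IV and V chords are all dominant seventh chords, and C7, a chord that tonal theory insists must resolve, sits happily as home base:

```python
# Pitch classes: C=0, C#=1, ... B=11 (flat spellings chosen for blues context)
NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def dom7(root):
    """Build a dominant seventh chord: root, major 3rd, 5th, flat 7th."""
    return [NAMES[(root + i) % 12] for i in (0, 4, 7, 10)]

C, F, G = 0, 5, 7
twelve_bar = [C, C, C, C, F, F, C, C, G, F, C, G]  # one chord root per bar

for bar, root in enumerate(twelve_bar, start=1):
    print(f"bar {bar:2}: {NAMES[root]}7 = {dom7(root)}")
```

In tonal theory, C7 is "V7 of F" and demands resolution; in the blues, four straight bars of it open the form and nobody's ear complains.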
In tonal theory, the basis of all harmony is the major scale and its associated chords, which you modify in various ways to make all the other scales and chords. For example, in C major, you modify the "natural" seventh, B, to get the "flat" seventh, B♭. If you come into music through rock, hip-hop, R&B, country or any other musical form originating in the African diaspora (what the music academy calls "pop"), you develop a quite different sense of what "natural" harmony is. When you're enculturated by blues-based music, you're likely to hear B♭ as more natural than B in C major. And you are likely to be closely familiar with the blues scale, for which western tonal theory has no explanation whatsoever.
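As a side-by-side comparison (a minimal sketch of my own, using semitone offsets rather than proper diatonic spelling), here are the C major scale and the C blues scale. The blues scale's ♭3, ♭5 and ♭7 are exactly the notes that tonal theory treats as chromatic alterations:

```python
NOTE_NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale
BLUES_SCALE = [0, 3, 5, 6, 7, 10]     # root, b3, 4, b5, 5, b7

def spell(intervals, root=0):
    """Map semitone offsets to note names relative to a root pitch class."""
    return [NOTE_NAMES[(root + i) % 12] for i in intervals]

print(spell(MAJOR_SCALE))  # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(spell(BLUES_SCALE))  # ['C', 'Eb', 'F', 'Gb', 'G', 'Bb']
```

Only three notes (C, F, G) are shared between the two. Hear the blues scale as the default, and the "natural" seventh B starts to sound like the exotic alteration.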
When you absorb the blues rule set first, as I did, the classical one seems painfully counterintuitive. Many of my pop musician friends think they don't "get" music theory because the rules of eighteenth-century western European court music are frequently at odds with their intuition. Who can blame them for being confused, and why should we be surprised when so many give up? Most of the working musicians I know outside of classical and jazz are substantially self-taught. How can we design a harmony pedagogy that applies to the music that students actually know and care about?
I’m continuing to crank out educational videos for Play With Your Music. Part of the process involves remaking old videos as both my chops and the facilities in NYU’s Blended Learning Lab improve. Here’s my series on the basics of rhythm:
We’ve made several improvements, some technical, some creative. The most immediately noticeable one is multiple camera angles. NYU’s Blended Learning Lab now has three cameras instead of just one. We can now cut back and forth between various angles, rather than showing a continuous talking head shot. It doesn’t seem like it would make such a big difference, but it does.
In my first post in this series, I briefly touched on the problem of option paralysis facing all electronic musicians, especially the ones who are just getting started. In this post, I’ll talk more about pedagogical strategies for keeping beginners from being overwhelmed by the infinite possibilities of sampling and synthesis.
This is part of a larger argument for why Ableton Live and software like it really needs a pedagogy specifically devoted to it. The folks at Ableton document their software extremely well, but their materials presume familiarity with their own musical culture. Most people aren't already experimental techno producers. They need to be taught the musical values, conventions and creative approaches that Ableton Live is designed around. They also need some help in selecting raw musical materials. We the music teachers can help, by putting tools like Ableton into musical context, and by curating finitely bounded sets of sounds to work with. Doing so will lower barriers to entry, which means happier users (and better sales for Ableton).
My music-making life has revolved heavily around Ableton Live for the past few years, and now the same thing is happening to my music-teaching life. I’m teaching Live at NYU’s IMPACT program this summer, and am going to find ways to work it into my future classes as well. My larger ambition is to develop an all-around electronic music composition/improvisation/performance curriculum centered around Live.
While the people at Ableton have done a wonderful job documenting their software, they mostly presume that users know what they want to accomplish and just don't know how to get there. But my experience of beginner Ableton users (and newbie producers generally) is that they don't even know what the possibilities are, what the workflow looks like, or how to get a foothold. My goal is to fill that vacuum, and I'll be documenting the process extensively here on the blog.
You hear musicians talk all the time about groove. You might wonder what they mean by that. A lot of musicians couldn't explain exactly, beyond "the thing that makes music sound good." The term itself comes from vinyl records: musicians ride the groove the way a phonograph needle physically rides the groove in the vinyl.
But what is groove, exactly? It isn’t just a matter of everyone playing with accurate rhythm. When a classical musician executes a passage flawlessly, you don’t usually talk about their groove. Meanwhile, it’s possible for loosely executed music to have a groove to it. Most of my musician friends talk about groove as a feeling, a vibe, an ineffable emotional quality, and they’re right. But groove is something tangible, too, and even quantifiable.
Using digital audio production software, you can learn to understand the most mystical aspects of music in concrete terms. I’ve written previously about how electronic music quantifies the elusive concept of swing. Music software can similarly help you understand the even more elusive concept of groove. In music software, “groove” means something specific and technical: the degree to which a rhythm deviates from the straight metronomic grid.
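Here's a toy example of that definition (my own sketch, not how any particular DAW implements its groove engine): measure each note onset's signed offset, in beats, from the nearest sixteenth-note grid line. Zero offsets everywhere means the part is rigidly quantized; consistent small positive offsets mean the player sits "behind the beat":

```python
GRID = 0.25  # sixteenth-note grid, in beats (assuming 4/4)

def grid_deviation(onsets, grid=GRID):
    """Return each onset's signed offset (in beats) from its nearest grid line."""
    return [onset - round(onset / grid) * grid for onset in onsets]

# A hi-hat pattern played slightly behind the beat (hypothetical timings):
played = [0.02, 0.26, 0.53, 0.77, 1.01, 1.28]
deviations = grid_deviation(played)
print([round(d, 2) for d in deviations])  # [0.02, 0.01, 0.03, 0.02, 0.01, 0.03]
```

The pattern of those offsets, not just their size, is the fingerprint of a groove: a drummer who drags the backbeats but rushes the hi-hats produces a very different feel from one who does the reverse, even if the average deviation is identical.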
Later this week I’m doing a teaching demo for a music technology professor job. The students are classical music types who don’t have a lot of music tech background, and the task is to blow their minds. I’m told that a lot of them are singers working on Verdi’s Requiem. My plan, then, is to walk the class through the process of remixing a section of the Requiem with Ableton Live. This post is basically the script for my lecture.
You may have noticed a lot of writing about Peter Gabriel on the blog lately. This is because I’ve been hard at work with Alex Ruthmann, the NYU MusEDLab, and the crack team at Peer To Peer University on a brand new online class that uses some of Peter’s eighties classics to teach audio production. We’re delighted to announce that the class is finished and ready to launch.
Here’s Alex’s video introduction: