I’m writing a chapter of the forthcoming Oxford Handbook of Technology and Music Education. Here’s a section of what I wrote, about my own music learning experiences.
Most of my music education has happened outside of the classroom. It has come about intentionally, through lessons and disciplined practice, and it has come about unintentionally, through osmosis or accidental discovery. There has been no separation between my creative practice, my learning, and my teaching.
My formal music education has been a mixed bag. In elementary school, I did garden-variety general music, with recorders and diatonic xylophones. I don’t remember enjoying or not enjoying it in particular. I engaged more deeply with the music my family listened to at home: classical and jazz on public radio; the Beatles, Paul Simon and Motown otherwise. Like every member of my age cohort, I listened to a lot of Michael Jackson, and because I grew up in New York City, I absorbed some hip-hop as well.
In middle school we started on traditional classical music. I chose the cello, for no good reason except that I had braces and so was steered away from wind instruments. I liked the instrument, and still do, but the cello parts in basic-level Baroque music are mostly sawing away at quarter notes, and I lost interest quickly. Singing showtunes in chorus didn’t hold much appeal for me either, and I abandoned formal music as soon as I was able.
Starting this week, I’m teaching my very first full course at NYU, the undergraduate Music Education Technology Practicum. Exciting! Here’s the syllabus.
This month I’ve been teaching music production and composition as part of NYU’s IMPACT program. A participant named Michelle asked me to critique some of her original compositions. I immediately said yes, and then immediately wondered how I was actually going to do it. I always want to evaluate music on its own terms, and to do that, I need to know what the terms are. I barely know Michelle. I’ve heard her play a little classical piano and know that she’s quite good, but beyond that, I don’t know her musical culture or intentions or style. Furthermore, she’s from China, and her English is limited.
I asked Michelle to email me audio files, and also MIDI files if she had them. Then I had an epiphany: I could just remix her MIDIs, and give my critique totally non-verbally.
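To give a flavor of what a MIDI-level "non-verbal critique" can look like, here is a toy sketch in Python. The note data and the specific edits are invented for illustration; a real remix would work on her actual files in a DAW. The point is that MIDI stores notes as numbers, so commentary can literally be arithmetic on pitch and dynamics:

```python
# Toy model of a MIDI remix: each note is (start_tick, pitch, velocity).
# The phrase and the edits below are invented for illustration only.

phrase = [(0, 60, 100), (480, 64, 100), (960, 67, 100), (1440, 72, 100)]

def remix(notes, transpose=0, velocity_scale=1.0):
    """Return a transformed copy: shift pitches, rescale dynamics."""
    return [(t, p + transpose, min(127, round(v * velocity_scale)))
            for t, p, v in notes]

# For example, drop the phrase a minor third and soften it:
softer = remix(phrase, transpose=-3, velocity_scale=0.6)
print(softer)  # [(0, 57, 60), (480, 61, 60), (960, 64, 60), (1440, 69, 60)]
```

In practice you would make edits like these directly in a DAW's piano roll rather than in code, but the underlying data being manipulated is the same.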
I’m working on a long paper right now with my colleague at Montclair State University, Adam Bell. The premise is this: In the past, metaphors came from hardware, which software emulated. In the future, metaphors will come from software, which hardware will emulate.
The first generation of digital audio workstations took its metaphors from multitrack tape, the mixing desk, keyboards, analog synths, printed scores, and so on. Even purely digital audio waveforms and MIDI clips behave like segments of tape. Sometimes the metaphors are graphically abstracted, as they are in Pro Tools. Sometimes the graphics are more literal, as in Logic. Propellerhead Reason is the most skeuomorphic software of them all. This image from the Propellerhead web site makes the designers’ intent crystal clear; the original analog synths dominate the image.
In Ableton Live, by contrast, hardware follows software. The metaphor behind Ableton’s Session View is a spreadsheet. Many of the instruments and effects have no hardware predecessor.
For the benefit of Play With Your Music participants and anyone else we end up teaching basic audio production to, MusEDLab intern Robin Chakrabarti and I created this video on recording audio in less-than-ideal environments.
This video is itself quite a DIY production, shot and edited in less than twenty-four hours, with minimal discussion beforehand and zero rehearsal.
I’m continuing to crank out educational videos for Play With Your Music. Part of the process involves remaking old videos as both my chops and the facilities in NYU’s Blended Learning Lab improve. Here’s my series on the basics of rhythm:
We’ve made several improvements, some technical, some creative. The most immediately noticeable one is multiple camera angles. NYU’s Blended Learning Lab now has three cameras instead of just one. We can now cut back and forth between various angles, rather than showing a continuous talking head shot. It doesn’t seem like it would make such a big difference, but it does.
In my first post in this series, I briefly touched on the problem of option paralysis facing all electronic musicians, especially the ones who are just getting started. In this post, I’ll talk more about pedagogical strategies for keeping beginners from being overwhelmed by the infinite possibilities of sampling and synthesis.
This is part of a larger argument that Ableton Live and software like it really need a pedagogy specifically devoted to them. The folks at Ableton document their software extremely well, but their materials presume familiarity with their own musical culture. Most people aren’t already experimental techno producers. They need to be taught the musical values, conventions, and creative approaches that Ableton Live is designed around. They also need some help in selecting raw musical materials. We the music teachers can help, by putting tools like Ableton into musical context, and by curating finitely bounded sets of sounds to work with. Doing so will lower barriers to entry, which means happier users (and better sales for Ableton).
My music-making life has revolved heavily around Ableton Live for the past few years, and now the same thing is happening to my music-teaching life. I’m teaching Live at NYU’s IMPACT program this summer, and am going to find ways to work it into my future classes as well. My larger ambition is to develop an all-around electronic music composition/improvisation/performance curriculum centered on Live.
While the people at Ableton have done a wonderful job documenting their software, they mostly presume that users know what they want to accomplish and just don’t know how to get there. But in my experience, beginner Ableton users (and newbie producers generally) don’t even know what the possibilities are, what the workflow looks like, or how to get a foothold. My goal is to fill that vacuum, and I’ll be documenting the process extensively here on the blog.
You hear musicians talk all the time about groove. You might wonder what they mean by that. A lot of musicians couldn’t explain it exactly, beyond “the thing that makes music sound good.” The term itself comes from vinyl records: musicians ride the groove the way a phonograph needle physically rides the groove in the vinyl.
But what is groove, exactly? It isn’t just a matter of everyone playing with accurate rhythm. When a classical musician executes a passage flawlessly, you don’t usually talk about their groove. Meanwhile, it’s possible for loosely executed music to have a groove to it. Most of my musician friends talk about groove as a feeling, a vibe, an ineffable emotional quality, and they’re right. But groove is something tangible, too, and even quantifiable.
Using digital audio production software, you can learn to understand the most mystical aspects of music in concrete terms. I’ve written previously about how electronic music quantifies the elusive concept of swing. Music software can similarly help you understand the even more elusive concept of groove. In music software, “groove” means something specific and technical: the degree to which a rhythm deviates from the straight metronomic grid.
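As a concrete illustration of that technical sense of “groove,” here is a minimal Python sketch. It applies a groove template, a list of per-step timing offsets, to a straight eighth-note grid to produce a basic swing feel. The numbers are invented for illustration, not taken from any real DAW’s groove engine:

```python
# "Groove" in the software sense: per-step timing offsets applied to an
# otherwise metronomic grid. All values here are invented for illustration.

PPQ = 480         # MIDI ticks per quarter note, a common resolution
STEP = PPQ // 2   # eighth-note grid

# A groove template: delaying every second eighth note by a fixed number
# of ticks gives a simple swing feel.
swing = [0, 80, 0, 80, 0, 80, 0, 80]

def apply_groove(grid_times, template):
    """Shift each metronomic onset by the template offset for its step."""
    return [t + template[i % len(template)] for i, t in enumerate(grid_times)]

straight = [i * STEP for i in range(8)]   # one bar of straight eighths
grooved = apply_groove(straight, swing)

print(straight)  # [0, 240, 480, 720, 960, 1200, 1440, 1680]
print(grooved)   # [0, 320, 480, 800, 960, 1280, 1440, 1760]
```

The downbeats stay put while the offbeats land late, which is exactly the kind of deviation-from-the-grid that a DAW’s groove settings measure and manipulate.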
Later this week I’m doing a teaching demo for a music technology professor job. The students are classical music types who don’t have a lot of music tech background, and the task is to blow their minds. I’m told that a lot of them are singers working on Verdi’s Requiem. My plan, then, is to walk the class through the process of remixing a section of the Requiem with Ableton Live. This post is basically the script for my lecture.