My music-making life has revolved heavily around Ableton Live for the past few years, and now the same thing is happening to my music-teaching life. I’m teaching Live at NYU’s IMPACT program this summer, and am going to find ways to work it into my future classes as well. My larger ambition is to develop an all-around electronic music composition/improvisation/performance curriculum centered around Live.
While the people at Ableton have done a wonderful job documenting their software, they mostly presume that users already know what they want to accomplish and just don’t know how to get there. But my experience of beginner Ableton users (and newbie producers generally) is that they don’t even know what the possibilities are, what the workflow looks like, or how to get a foothold. My goal is to fill that vacuum, and I’ll be documenting the process extensively here on the blog.
You hear musicians talk all the time about groove. You might wonder what they mean by that. A lot of musicians couldn’t explain exactly, beyond “the thing that makes music sound good.” The etymology of the term comes from vinyl records. Musicians ride the groove the way a phonograph needle physically rides the groove in the vinyl.
But what is groove, exactly? It isn’t just a matter of everyone playing with accurate rhythm. When a classical musician executes a passage flawlessly, you don’t usually talk about their groove. Meanwhile, it’s possible for loosely executed music to have a groove to it. Most of my musician friends talk about groove as a feeling, a vibe, an ineffable emotional quality, and they’re right. But groove is something tangible, too, and even quantifiable.
Using digital audio production software, you can learn to understand the most mystical aspects of music in concrete terms. I’ve written previously about how electronic music quantifies the elusive concept of swing. Music software can similarly help you understand the even more elusive concept of groove. In music software, “groove” means something specific and technical: the degree to which a rhythm deviates from the straight metronomic grid.
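To make that definition concrete, here’s a minimal sketch (in Python, with made-up onset times) of what a groove looks like as numbers: take each note’s onset, find the nearest sixteenth-note grid line, and record how far early or late it lands. This is an illustration of the general idea, not how any particular DAW implements its groove pool.

```python
# Quantifying "groove" as deviation from the metronomic grid:
# given note onsets measured in beats, report how far each one
# lands from its nearest sixteenth-note grid line.

GRID = 0.25  # sixteenth notes in 4/4, measured in beats

def groove_offsets(onsets):
    """Return (grid_position, offset) pairs; positive = late, negative = early."""
    result = []
    for t in onsets:
        nearest = round(t / GRID) * GRID
        result.append((nearest, t - nearest))
    return result

# A hypothetical lightly swung hi-hat pattern: offbeat sixteenths pushed late.
onsets = [0.0, 0.27, 0.5, 0.78, 1.0, 1.27, 1.5, 1.78]
for pos, off in groove_offsets(onsets):
    print(f"grid {pos:.2f}  offset {off:+.3f} beats")
```

A perfectly quantized performance would show all zeros; a groove is the consistent pattern in those nonzero offsets.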
Later this week I’m doing a teaching demo for a music technology professor job. The students are classical music types who don’t have a lot of music tech background, and the task is to blow their minds. I’m told that a lot of them are singers working on Verdi’s Requiem. My plan, then, is to walk the class through the process of remixing a section of the Requiem with Ableton Live. This post is basically the script for my lecture.
You may have noticed a lot of writing about Peter Gabriel on the blog lately. This is because I’ve been hard at work with Alex Ruthmann, the NYU MusEDLab, and the crack team at Peer To Peer University on a brand new online class that uses some of Peter’s eighties classics to teach audio production. We’re delighted to announce that the class is finished and ready to launch.
Walking to the subway this morning, I had a bright idea for how to make the Drum Loop more kid-friendly by representing the radial grid as a pizza. Here’s a very quick concept sketch:
To really make this work, I wouldn’t just plop a JPEG of a pizza under the existing UI. I’d want a cartoon pizza rendered in a flat-color style. Instead of colored wedge cells, drum hits would be represented by stylized pepperoni, sausage, anchovies, olives, mushrooms and so on. I’ll throw it on the ever-expanding “future work” pile.
Usually I like to make everything on this blog freely available to whoever wants to use it, but The Groove Pizza is © Ethan Hein 2013, all rights reserved.
Soundation uses the same basic interface paradigm as other audio recording and editing programs like Pro Tools and Logic. Your song consists of a list of tracks, each of which can contain a particular sound. The tracks all play back at the same time, so you can use them to blend together sounds as you see fit. You can either record your own sounds, or use the loops included in Soundation, or both. The image below shows six tracks. The first two contain loops of audio; the other four contain MIDI, which I’ll explain later in the post.
Alex is fond of the phrase “pedagogies of timbre and space.” By that, he means: ways of studying those aspects of recorded music beyond the notes being played and words being sung. Timbre is the combination of overtones, noise content, attack and decay that makes one instrument sound different from another. Space refers to the environment that the sound exists in, real or simulated. These are the aspects of music that get shaped by recording engineers, producers and DJs. Audio creatives usually don’t have much input into the stuff you see on sheet music. But they end up significantly shaping the end result, because the sonic surface is the main thing that most non-specialist listeners pay attention to (along with the beat). For many pop and dance styles, the surface texture is the most salient component of the music.
The work of audio professionals, be they recording artists, engineers, producers, remixers or DJs, consists mostly of close listening. Making recordings means constantly asking yourself: Does this sound good? If not, why not? Is there something missing? Or does something need to be taken out? Is the blend of timbres satisfying? Are the sounds placed well in the stereo field? Are they at the right perceptual distance from the listener? No one is born able to ask these questions, much less to answer them. You have to learn how, and then you have to practice. In a sense, music production software is like the Microsoft Office suite. Before you learn about the fine points of formatting or making equations, you need to learn how to write coherently, how to organize data, how to structure a presentation. So it is with music. There’s no point in learning the nuts and bolts of particular software until you know what you’re listening for and what you want to achieve.
I’ve undergone some evolution in my thinking about the intended audience for my thesis app. My original idea was to aim it at the general public. But the general public is maybe not quite so obsessed with breakbeats as I am. Then I started working with Alex Ruthmann, and he got me thinking about the education market. There are certainly a lot of kids in the schools with iPads, so that’s an attractive idea. But hip-hop and techno are a tough sell for traditionally-minded music teachers. I realized that I’d find a much more receptive audience in math teachers. I’ve been thinking about the relationship between music and math for a long time, and it would be cool to put some of those ideas into practice.
The design I’ve been using for the Drum Loop UI poses some problems for math usage. Since early on, I’ve had it so that the centers of the cells line up with the cardinal angles. However, if you’re going to measure angles and things, the grid lines really need to be on the cardinal angles instead. Here’s the math-friendly design:
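Here’s a small sketch of what that math-friendly layout implies, using Python and a hypothetical 16-step wheel: each cell becomes a wedge whose boundaries sit on the grid angles, so the cardinal angles (0°, 90°, 180°, 270°) are lines between cells rather than cell centers. The function names and step count are my own illustration, not the app’s actual code.

```python
# Math-friendly radial grid: wedge *boundaries* fall on the grid
# angles, so cardinal angles are grid lines, not cell centers.

STEPS = 16  # hypothetical sixteen-step drum wheel

def wedge(i, steps=STEPS):
    """Boundary angles (degrees, clockwise from 12 o'clock) of cell i."""
    width = 360 / steps
    return i * width, (i + 1) * width

for i in range(4):
    start, end = wedge(i)
    print(f"cell {i}: {start:.1f}° to {end:.1f}°")
```

With this layout, a quarter of the circle is exactly four cells, which is what makes protractor-style angle measurement come out in round numbers.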
My thesis is supposed to include a quantitative research component. This had been causing me some anxiety. It’s educational and creative software. What exactly could I measure? I had this vague notion of testing people’s rhythmic ability before and after using the app. But how do you quantify rhythmic ability? Even if I had a meaningful numerical representation, how could I possibly measure a big enough sample size over a long enough time to get a statistically significant result? The development of my app was going okay, but I was really stressing about the experimental component.
Then my advisor introduced me to Andrew Brown’s notion of software development as research, or SoDaR. As Brown puts it, “SoDaR involves computers, but is about people.” Humans are complex, our interactions with computers are complex, the way we learn is complex. The only method of inquiry that can encompass all that complexity is qualitative, anthropological inquiry, involving a substantial amount of introspection on the part of the researcher.