A Quora user asks whether artificial intelligence will ever replace human musicians.
My students at NYU and Montclair State are beginning to venture into producing their own tracks. There are two challenges facing them: the small one and the big one. The small challenge is learning the tools: remembering where the menus are and which key you hold down to turn the mouse pointer into a pencil, learning to conceive of notes and beats as rectangles on the piano roll, troubleshooting when you play notes on the MIDI keyboard and no sound comes out. The big challenge is option paralysis. Even a lightweight tool like GarageBand comes with a staggeringly large collection of software instruments, loops and effects, even before you start dealing with recording your own sounds. Where do you even begin?
The solution I’m using with my classes is the shared-sample project. Students are challenged to build a track out of a particular sound, or set of sounds. The easy version requires that they use the given sound, along with any additional sounds they see fit to include. The hard version, and for me the really interesting one, requires that they use the given sound(s) and absolutely nothing else. I was inspired to create these assignments by the many Disquiet Junto shared-sample projects I’ve had the pleasure of participating in. I’m trying out my own project ideas on MSU advanced audio production independent study students Dan Bui and Matt Skouras, and will soon be giving shared-sample projects to my beginner-level classes as well.
The first assignment I gave Dan and Matt was to use eight GarageBand factory loops to build a track. They were free to do whatever processing they wanted, but they could not use other sounds. Also, they only had an hour to put their tracks together. Here are the loops:
Right now I’m teaching music technology to a lot of classical musicians. I came up outside the classical pipeline, and am always surprised to be reminded how insulated these folks are from the rest of the culture. I was asked today for some electronic music recommendations by a guy who basically never listens to any of it, and I expect I’ll be asked that many more times in this job. So I put together this playlist. It’s not a complete, thorough, or representative sampling of anything; it mostly reflects my own tastes. In more or less chronological order:
This month I’ve been teaching music production and composition as part of NYU’s IMPACT program. A participant named Michelle asked me to critique some of her original compositions. I immediately said yes, and then immediately wondered how I was actually going to do it. I always want to evaluate music on its own terms, and to do that, I need to know what the terms are. I barely know Michelle. I’ve heard her play a little classical piano and know that she’s quite good, but beyond that, I don’t know her musical culture or intentions or style. Furthermore, she’s from China, and her English is limited.
I asked Michelle to email me audio files, and also MIDI files if she had them. Then I had an epiphany: I could just remix her MIDIs, and give my critique totally non-verbally.
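As a hypothetical aside (not something described in the original post), the spirit of a remix-as-critique can even be sketched in code: treat the MIDI clip as a list of notes and apply musical transformations to it. The note representation and function names below are purely illustrative, using plain (pitch, start, duration) tuples rather than a real MIDI library.

```python
# Illustrative sketch: a "remix" as a transformation of MIDI note data.
# Notes are (pitch, start_beat, duration_beats) tuples; all names here
# are hypothetical, not drawn from any real MIDI toolkit.

def transpose(notes, semitones):
    """Shift every pitch by a fixed number of semitones."""
    return [(p + semitones, s, d) for p, s, d in notes]

def half_time(notes):
    """Stretch the clip to half tempo by doubling start times and durations."""
    return [(p, s * 2, d * 2) for p, s, d in notes]

# A four-note motif: C4, E4, G4, C5 on successive beats.
motif = [(60, 0, 1), (64, 1, 1), (67, 2, 1), (72, 3, 1)]

# Drop it an octave and slow it down: a wordless editorial comment.
remix = half_time(transpose(motif, -12))
print(remix)  # [(48, 0, 2), (52, 2, 2), (55, 4, 2), (60, 6, 2)]
```

In practice a DAW does this kind of manipulation interactively, but the point is the same: the transformations themselves carry the feedback, no shared spoken language required.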
I’m working on a long paper right now with my colleague at Montclair State University, Adam Bell. The premise is this: In the past, metaphors came from hardware, which software emulated. In the future, metaphors will come from software, which hardware will emulate.
The first generation of digital audio workstations has taken its metaphors from multitrack tape, the mixing desk, keyboards, analog synths, printed scores, and so on. Even the purely digital audio waveforms and MIDI clips behave like segments of tape. Sometimes the metaphors are graphically abstracted, as they are in Pro Tools. Sometimes the graphics are more literal, as in Logic. Propellerhead Reason is the most skeuomorphic software of them all. This image from the Propellerhead web site makes the intent of the designers crystal clear; the original analog synths dominate the image.
In Ableton Live, by contrast, hardware follows software. The metaphor behind Ableton’s Session View is a spreadsheet. Many of the instruments and effects have no hardware predecessor.
Participants in Play With Your Music were recently treated to an in-depth interview with two Peter Gabriel collaborators, engineer Kevin Killen and drummer Jerry Marotta. Both are highly accomplished music pros with a staggering breadth of experience between them. You can watch the interview here:
Kevin Killen engineered So and several subsequent Peter Gabriel albums. His other engineering and mixing credits include Suzanne Vega, Gilbert O’Sullivan, Bobby McFerrin, Elvis Costello, Dar Williams, Sophie B. Hawkins, Ricky Martin, Madeleine Peyroux, U2, Allen Toussaint, Duncan Sheik, Bob Dylan, Ennio Morricone, Tori Amos, Rosanne Cash, Shakira, Talking Heads, John Scofield, Anoushka Shankar, Patti Smith, Laurie Anderson, Stevie Nicks, Los Lobos, Kate Bush, Roy Orbison and Bryan Ferry.
Jerry Marotta played drums on all of Peter Gabriel’s classic solo albums. He has also performed and recorded with a variety of other artists, including Hall & Oates, the Indigo Girls, Ani DiFranco, Sarah McLachlan, Marshall Crenshaw, Suzanne Vega, John Mayer, Iggy Pop, Tears for Fears, Elvis Costello, Cher, Paul McCartney, Carly Simon, and Ron Sexsmith.
In my first post in this series, I briefly touched on the problem of option paralysis facing all electronic musicians, especially the ones who are just getting started. In this post, I’ll talk more about pedagogical strategies for keeping beginners from being overwhelmed by the infinite possibilities of sampling and synthesis.
This is part of a larger argument for why Ableton Live and software like it really needs a pedagogy specifically devoted to it. The folks at Ableton document their software extremely well, but their materials presume familiarity with their own musical culture. Most people aren’t already experimental techno producers. They need to be taught the musical values, conventions and creative approaches that Ableton Live is designed around. They also need some help in selecting raw musical materials. We music teachers can help, by putting tools like Ableton into musical context, and by curating finitely bounded sets of sounds to work with. Doing so will lower barriers to entry, which means happier users (and better sales for Ableton).
My music-making life has revolved heavily around Ableton Live for the past few years, and now the same thing is happening to my music-teaching life. I’m teaching Live at NYU’s IMPACT program this summer, and am going to find ways to work it into my future classes as well. My larger ambition is to develop an all-around electronic music composition/improvisation/performance curriculum centered around Live.
While the people at Ableton have done a wonderful job documenting their software, they mostly presume that users know what they want to accomplish and just don’t know how to get there. But my experience of beginner Ableton users (and newbie producers generally) is that they don’t even know what the possibilities are, what the workflow looks like, or how to get a foothold. My goal is to fill that vacuum, and I’ll be documenting the process extensively here on the blog.
Later this week I’m doing a teaching demo for a music technology professor job. The students are classical music types who don’t have a lot of music tech background, and the task is to blow their minds. I’m told that a lot of them are singers working on Verdi’s Requiem. My plan, then, is to walk the class through the process of remixing a section of the Requiem with Ableton Live. This post is basically the script for my lecture.
I participate in Marc Weidenbaum’s Disquiet Junto whenever I have the time and the brain space. Once a week, he sends out an assignment, and you have a few days to produce a new piece of music to fit. Marc asks that you discuss your process in the track descriptions on SoundCloud, and I’m always happy to oblige. But my descriptions are usually terse. This week I thought I’d dive deep and document the whole process from soup to nuts, with screencaps and everything.
Here’s this week’s assignment, which is simpler than usual:
Please answer the following question by making an original recording: “What is the room tone of the Internet?” The length of your recording should be two minutes.