I participate in Marc Weidenbaum’s Disquiet Junto whenever I have the time and the brain space. Once a week, he sends out an assignment, and you have a few days to produce a new piece of music to fit. Marc asks that you discuss your process in the track descriptions on SoundCloud, and I’m always happy to oblige. But my descriptions are usually terse. This week I thought I’d dive deep and document the whole process from soup to nuts, with screencaps and everything.
I just completed a batch of new music, which was improvised freely in the studio and then shaped into structured tracks after the fact.
I thought it would be helpful to document the process behind this music, for a few reasons. First, I expect to be teaching this kind of production a lot more in the future. Second, knowing how the tracks were made might deepen your enjoyment of them. Third, composing the music during or after recording, rather than before, has become the dominant pop production method, and I want to help my fellow highbrow musicians catch up to it.
Peter Gabriel’s songwriting and recording process in the early 1980s was unusual in its technological sophistication, playfulness and reliance on improvisation. But now that the technology is a lot cheaper and more accessible, most pop, dance and hip-hop music is produced using similar methods.
The South Bank Show’s lengthy 1983 documentary on the making of Peter Gabriel’s fourth solo album, Security, follows its production from earliest conception through release and critical reception, offering fascinating insight into the creative process along the way.
This is the third in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first and second posts.
So, you’ve learned how to listen closely and analytically. The next step is to get your hands on some multitrack stems and do mixes of your own. Participants in PWYM do a “convergent mix” — you’re given a set of separated instrumental and vocal tracks, and you need to mix them so they match the given finished product. PWYM folks work with stems of “Air Traffic Control” by Clara Berry, using our cool in-browser mixing board. The beauty of the browser mixer is that the fader settings get automatically inserted into the URL, so once you’re done, anyone else can hear your mix by opening that URL in their own browser.
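The shareable-URL idea behind the browser mixer can be sketched in a few lines. This is my own illustration of the general technique (serializing fader gains into a query string so a link fully reproduces a mix), not PWYM’s actual implementation; all names here are hypothetical.

```python
# Hypothetical sketch of a shareable mix URL: per-track fader gains are
# serialized into the query string, so opening the link recreates the mix.
# Names and structure are illustrative, not PWYM's real code.
from urllib.parse import urlencode, parse_qs, urlsplit

def mix_to_url(base_url, faders):
    """Encode per-track fader gains (0.0 to 1.0) into a shareable URL."""
    params = {track: f"{gain:.2f}" for track, gain in faders.items()}
    return f"{base_url}?{urlencode(params)}"

def url_to_mix(url):
    """Recover the fader settings from a shared mix URL."""
    query = parse_qs(urlsplit(url).query)
    return {track: float(values[0]) for track, values in query.items()}

faders = {"vocals": 0.8, "bass": 0.65, "drums": 0.9}
url = mix_to_url("https://example.com/mixer", faders)
assert url_to_mix(url) == faders
```

The appeal of the design is that the URL itself is the saved mix: no accounts, no server-side storage, and anyone with the link hears exactly what you dialed in.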
Computers have revolutionized the composition, production and recording of music. However, they have not yet revolutionized music education. While a great deal of educational software exists, it mostly follows traditional teaching paradigms, offering ear training, flash cards and the like. Meanwhile, nearly all popular music is produced in part or in whole with software, yet electronic music producers typically have little to no formal training with their tools. Somewhere between the ad-hoc learning methods of pop and dance producers and traditional music pedagogy lies a rich untapped vein of potential.
This paper will explore the problem of how software can best be designed to help novice musicians access their own musical imagination with a minimum of frustration. I will examine a variety of design paradigms and case studies, with the goal of discovering software interface designs that present music in a visually intuitive way, that are discoverable, and that promote flow.
Hearing the Beatles Love album in class motivated me to create a surround remix of a Beatles song of my own.
I chose “Here Comes The Sun” because I have the multitracks, and because I heard potential to find new musical ideas within it. Remixing an existing recording is always an enjoyable undertaking, but the process takes on new levels of challenge and reward when the source material is so well-known and widely revered. Much as I enjoy Beatles Love, I feel that it didn’t take enough liberties with the original tracks. I wanted to depart further from the original mix and structure of “Here Comes The Sun.”
For Paul Geluso’s Advanced Audio Production midterm, we were assigned to choose two tracks from his recommended listening list, and compare and contrast them sonically. I chose “Regiment” by David Byrne and Brian Eno, and “Little Fluffy Clouds” by The Orb.
Recorded ten years apart using very different technology, both tracks nevertheless share a similar structure: dance grooves at medium-slow tempos centered around percussion and bass, overlaid with radically decontextualized vocal samples. Both are dense and abstract soundscapes with an otherworldly quality. However, the two tracks have some profound sonic differences as well. “Regiment” is played by human instrumentalists into analog gear, giving it a roiling organic murk. “Little Fluffy Clouds” is a pristine digital recording built entirely from DJ tools, quantized neatly and clinically precise.
As I contemplate my masters thesis, I’m looking for good examples of beginner-centric musical user interface design. Propellerhead’s new Figure app has been a source of inspiration for me. It’s mostly wonderful, and even its design flaws are instructive.
I have a long history with Propellerhead’s software, beginning with ReBirth in 1998. I’ve made a lot of good music with their tools, but I’ve also experienced a lot of frustration, mostly due to their insistence on slathering everything in unhelpfully “realistic” design.
There’s no music I love more in the world than Duke Ellington’s.
When I was a kid, the New York Transit Museum had a commercial in heavy rotation on local TV that used “Take The A Train,” and I remember being riveted by it. I should point out that Billy Strayhorn, not Ellington, wrote the tune, though it became the Ellington Orchestra’s theme song for decades.