In the past three weeks, thanks to the magic of the Disquiet Junto, I’ve participated in the creation of three musical trios with six strangers from the internet. Here’s a family tree of the nine tracks we all did:
Artist names are in black, “part one” tracks are in blue, “part two” tracks are in red, and “part three” tracks are in green. We followed Marc Weidenbaum’s prompts for part one, part two, and part three. Hear all the music we all made below.
Every semester in Intro to Music Tech, we have Kanye West Day, when we listen analytically to some of Ye’s most sonically adventurous tracks (there are many to choose from). The past few semesters, Kanye West Day has centered on “Ultralight Beam,” especially Chance The Rapper’s devastating verse. That has naturally led to a look at Chance’s “All We Got.”
All the themes of the class are here: the creative process in the studio, “fake” versus “real” sounds, structure versus improvisation, predictability versus surprise, and the way that soundscape and groove do much more expressive work than melody or harmony.
Last week I was Ableton’s guest for Loop, their delightful “summit for music makers.” I was on a panel about technology in music education, and I got to meet a lot of amazing people and hear some good music too. Here’s my live Twitter feed from the event if you want a fine-grained accounting. Otherwise, read on for some high points.
Writing assignment for History of Science and Technology class with Myles Jackson. See a more informal introduction to the vocoder here.
Casual music listeners know the vocoder best as the robotic voice effect popular in disco and early hip-hop. Anyone who has heard pop music of the last two decades has heard Auto-Tune. The two effects are frequently mistaken for one another, and for good reason—they share the same mathematical and technological basis. Auto-Tune has become ubiquitous in recording studios, in two very different incarnations. There is its intended use, as an expedient way to correct out-of-tune notes, replacing various tedious and labor-intensive manual methods. Pop, hip-hop and electronic dance music producers have also found an unintended use for Auto-Tune, as a special effect that quantizes pitches to a conspicuously excessive degree, giving the voice a synthetic, otherworldly quality. In this paper, I discuss the history of the vocoder and Auto-Tune, in the context of broader efforts to use science and technology to mathematically analyze and standardize music. I also explore how such technologies problematize our ideas of virtuosity.
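The “conspicuously excessive” quantization described above comes down to one small operation: snapping a measured frequency to the nearest equal-tempered semitone. Here is a minimal sketch of that core move in Python; the function name and the A4 = 440 Hz reference are my own illustrative choices, not anything from Auto-Tune itself.

```python
import math

def quantize_pitch(freq_hz, a4=440.0):
    """Snap a frequency to the nearest equal-tempered semitone,
    the basic move behind pitch correction. With zero transition
    time, this snapping is what produces the robotic effect."""
    semitones = 12 * math.log2(freq_hz / a4)   # distance from A4 in semitones
    return a4 * 2 ** (round(semitones) / 12)   # back to Hz, rounded to the grid

quantize_pitch(450.0)  # -> 440.0 (a slightly sharp A4 is pulled down to A4)
```

Real pitch correction adds a retune-speed parameter that glides toward the target instead of jumping; setting that speed to zero is the classic “Auto-Tune effect.”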
This post documents a presentation I’m giving in my History of Science and Technology class with Myles Jackson. See also a more formal history of the vocoder.
The vocoder is one of those mysterious technologies that’s far more widely used than understood. Here I explain what it is, how it works, and why you should care.
Casual music listeners know the vocoder best as a way to make the robot voice effect that Daft Punk uses all the time.
Here’s Huston Singletary demonstrating the vocoder in Ableton Live.
You may be surprised to learn that you use a vocoder every time you talk on your cell phone. Also, the vocoder gave rise to Auto-Tune, which, love it or hate it, is the defining sound of contemporary popular music. Let’s dive in!
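The basic mechanism, both in Daft Punk’s robot voices and in your phone’s speech codec, is analysis by frequency band: measure how much energy the voice (the modulator) has in each band, then impose that band-by-band envelope on another signal (the carrier). Here is a toy FFT-based sketch in Python with NumPy; the band count, frame size, and test signals are my own assumptions, and a real channel vocoder would use proper filter banks and overlapping windows.

```python
import numpy as np

def channel_vocoder(modulator, carrier, n_bands=16, frame=512):
    """Toy channel vocoder: per frame, measure the modulator's energy in
    each frequency band, then rescale the carrier's bands to match."""
    n = min(len(modulator), len(carrier)) // frame * frame
    out = np.zeros(n)
    # band edges over the rfft bins (frame//2 + 1 of them)
    edges = np.linspace(0, frame // 2 + 1, n_bands + 1, dtype=int)
    for start in range(0, n, frame):
        m = np.fft.rfft(modulator[start:start + frame])
        c = np.fft.rfft(carrier[start:start + frame])
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            m_rms = np.sqrt(np.mean(np.abs(m[lo:hi]) ** 2) + 1e-12)
            c_rms = np.sqrt(np.mean(np.abs(c[lo:hi]) ** 2) + 1e-12)
            c[lo:hi] *= m_rms / c_rms  # impose the voice's envelope
        out[start:start + frame] = np.fft.irfft(c, frame)
    return out

# toy demo: noise stands in for a voice, a 110 Hz sawtooth is the carrier
sr = 16000
t = np.arange(sr) / sr
carrier = 2 * (110 * t % 1) - 1
modulator = np.random.default_rng(0).standard_normal(sr)
robot = channel_vocoder(modulator, carrier)
```

A cell-phone codec does the analysis half of this: it transmits the compact band envelopes (plus pitch information) instead of the raw waveform, then resynthesizes the voice on the other end.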
I use variations on this project list for all of my courses. In Advanced Digital Audio Production at Montclair State University, students do all of these assignments. Students in Music Technology 101 do all of them except the ones marked Advanced. My syllabus for the NYU Music Education Technology Practicum has an additional recording studio project in place of the final project. Here’s the project list in Google Spreadsheet format.
I talk very little about microphone technology or technique in my classes. This is because I find that information useful only in the context of actual recording studio work, and my classes do not have regular access to a studio. I do spend one class period on home recording with the SM58 and SM57, and talk a bit about mic technique for singers. I encourage students who want to go deeper into audio recording to take a class specifically on that subject, or to read something like the Moylan book.
My project-based approach is informed strongly by Matt Mclean and Alex Ruthmann. Read more about their methods here.
I do not require any text. However, for education majors, I strongly recommend Teaching Music Through Composition by Barbara Freedman and Music Technology and Education: Amplifying Musicality by Andrew Brown.
Try a very early alpha of the scale wheel visualization here.
The MusEDLab will soon be launching a revamped version of the aQWERTYon with some enhancements to its visual design, including a new scale picker. Beyond our desire to make our stuff look cooler, the scale picker represents a challenge that we’ve struggled with since the earliest days of aQW development. On the one hand, we want to offer users a wide variety of intriguing and exotic scales to play with. On the other hand, our audience of beginner and intermediate musicians is likely to be horrified by a list of terms like “Lydian dominant mode.” I recently had the idea to represent all the scales as colorful icons, like so:
Read more about the rationale and process behind this change here. In this post, I’ll explain what the icons mean, and how they can someday become the basis for a set of new interactive music theory visualizations.
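Under the hood, any of these scales can be represented the same way: as a set of semitone offsets from the root, arranged around a twelve-position wheel. Here is a small illustrative sketch in Python; the scale dictionary and the text-based “icon” are my own stand-ins for the aQWERTYon’s actual data and colorful graphics.

```python
# Hypothetical pitch-class sets: semitone offsets from the root.
SCALES = {
    "major": [0, 2, 4, 5, 7, 9, 11],
    "lydian dominant": [0, 2, 4, 6, 7, 9, 10],
    "minor pentatonic": [0, 3, 5, 7, 10],
}

def wheel_icon(scale):
    """Render a scale as 12 clock positions: '#' where the scale has a
    note, '.' where it doesn't -- a text stand-in for a colorful icon."""
    return "".join("#" if pc in SCALES[scale] else "." for pc in range(12))

print(wheel_icon("major"))  # #.#.##.#.#.#
```

Because every scale reduces to the same twelve-slot shape, a beginner can compare “Lydian dominant” to plain major visually, without needing the terminology first; that is the idea the icon set builds on.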
I have started working with a startup called Musicto, which creates playlists curated by humans around particular themes. For example: music to grieve to, music to clean house to, music to fight evil. My first playlist is music to sing your hipster baby to sleep.
These are songs I have been singing to my kids, and that I recommend you sing to yours. It isn’t just a playlist, though. Each track is accompanied by a short blog post explaining what’s so special about it. New tracks will be added regularly in the coming weeks. If you’d like, you can follow the playlist on Twitter. If this sounds like the kind of thing you might enjoy putting together, the company is seeking more curators.
The good people at Noteflight have started doing weekly challenges. I love constraint-based music prompts, like the ones in the Disquiet Junto, so I thought I would try this one: compose a piece of music using only four notes.
The music side of this wasn’t hard. My material tends not to use that many pitches anyway. If you really want to challenge me, tell me I can’t use any rhythmic subdivisions finer than a quarter note.

Before you listen to my piece, though, let’s talk about this word, “compose.” When you write using notation, the presumption is that you’re creating a set of instructions for a human performer. However, actually getting your composition performed is a challenge, unless you have a band or ensemble at your disposal. I work in two music schools, and I would have a hard time making it happen. (When I have had my music performed, the musicians either used a prose score, learned by ear from a recording, or just improvised.) The kids in school who make up Noteflight’s target audience are vanishingly unlikely to ever hear their work performed, or at least performed well. Matt Mclean formed the Young Composers and Improvisers Workshop to address this problem, and he’s doing amazing work, but most Noteflight compositions will only ever exist within the computer.
Given this fact, I wanted to create a piece of music that would actually sound good when played back within Noteflight. This constraint turned out to be a significantly greater challenge than using four notes. I started with the Recycled Percussion instrument, and chose the notes B, E, F, and G, because they produce the coolest sounds. Then I layered in other sounds, chosen because they sound reasonably good. Here’s what I came up with:
Ableton recently launched a delightful web site that teaches the basics of beatmaking, production and music theory using elegant interactives. If you’re interested in music education, creation, or user experience design, you owe it to yourself to try it out.