Every semester in Intro to Music Tech, we have Kanye West Day, when we listen analytically to some of Ye’s most sonically adventurous tracks (there are many to choose from). The past few semesters, Kanye West Day has centered on “Ultralight Beam,” especially Chance The Rapper’s devastating verse. That has naturally led to a look at Chance’s “All We Got.”
All the themes of the class are here: the creative process in the studio, “fake” versus “real” sounds, structure versus improvisation, predictability versus surprise, and the way that soundscape and groove do much more expressive work than melody or harmony.
This is a writing assignment for my History of Science and Technology class with Myles Jackson. See a more informal introduction to the vocoder here.
Casual music listeners know the vocoder best as the robotic voice effect popular in disco and early hip-hop. Anyone who has heard pop music of the last two decades has heard Auto-Tune. The two effects are frequently mistaken for one another, and for good reason—they share the same mathematical and technological basis. Auto-Tune has become ubiquitous in recording studios, in two very different incarnations. There is its intended use, as an expedient way to correct out-of-tune notes, replacing various tedious and labor-intensive manual methods. Pop, hip-hop and electronic dance music producers have also found an unintended use for Auto-Tune, as a special effect that quantizes pitches to a conspicuously excessive degree, giving the voice a synthetic, otherworldly quality. In this paper, I discuss the history of the vocoder and Auto-Tune, in the context of broader efforts to use science and technology to mathematically analyze and standardize music. I also explore how such technologies problematize our ideas of virtuosity.
This post documents a presentation I’m giving in my History of Science and Technology class with Myles Jackson. See also a more formal history of the vocoder.
The vocoder is one of those mysterious technologies that’s far more widely used than understood. Here I explain what it is, how it works, and why you should care.
Casual music listeners know the vocoder best as a way to make the robot voice effect that Daft Punk uses all the time.
Here’s Huston Singletary demonstrating the vocoder in Ableton Live.
You may be surprised to learn that you use a vocoder every time you talk on your cell phone. Also, the vocoder gave rise to Auto-Tune, which, love it or hate it, is the defining sound of contemporary popular music. Let’s dive in!
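For the curious, the channel-vocoder idea behind both the robot voice and cell-phone speech coding can be sketched in a few lines of code: split the voice (the “modulator”) and a synth tone (the “carrier”) into matching frequency bands, track how loud the voice is in each band, and use those loudness contours to shape the same bands of the synth. Here is a minimal NumPy/SciPy sketch; the band count, filter settings, and toy test signals are illustrative choices of mine, not any particular instrument’s design.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(modulator, carrier, sr, n_bands=16):
    """Impose the modulator's spectral envelope onto the carrier:
    bandpass both signals into matching log-spaced bands, follow the
    amplitude envelope of each modulator band (rectify + lowpass),
    and use it to scale the corresponding carrier band."""
    edges = np.geomspace(100.0, 0.45 * sr, n_bands + 1)           # band edges, Hz
    follower = butter(2, 50.0, btype="low", fs=sr, output="sos")  # envelope lowpass
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        envelope = sosfilt(follower, np.abs(sosfilt(band, modulator)))
        out += envelope * sosfilt(band, carrier)
    return out

# Toy demo: pulsed noise stands in for a voice, a sawtooth for the synth
sr = 22050
t = np.arange(sr) / sr                                  # one second of audio
rng = np.random.default_rng(0)
voice = rng.standard_normal(sr) * (np.sin(2 * np.pi * 4 * t) > 0)  # 4 Hz bursts
saw = 2.0 * (110.0 * t % 1.0) - 1.0                     # 110 Hz sawtooth carrier
robot = channel_vocoder(voice, saw, sr)                 # bursts of "robot" tone
```

The same analysis-by-filter-bank trick is why your phone call fits in so little bandwidth: the codec transmits the band envelopes rather than the raw waveform.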
Over on Quora, David Leigh complains that it doesn’t take much musical ability to be a popular singer these days, not like when Enrico Caruso sold a million records. People had taste back then. Kids today, amirite?
Here’s my response.
Update: I’ve turned this post into an academic article. Here’s a draft.
The title of this post is also the title of a tutorial I’m giving at ISMIR 2016 with Jan Van Balen and Dan Brown. Here are the slides:
The conference is organized by the International Society for Music Information Retrieval, and it’s the fanciest of its kind. You may well be wondering what Music Information Retrieval is. MIR is a specialized field in computer science devoted to teaching computers to understand music, so they can transcribe it, organize it, find connections and similarities, and, maybe, eventually, create it.
So why are we going to talk to the MIR community about hip-hop? So far, the field has mostly studied music using the tools of Western classical music theory, which emphasizes melody and harmony. Hip-hop songs don’t tend to have much going on in either of those areas, which makes the genre seem like it’s either too difficult to study, or just too boring. But the MIR community needs to find ways to engage with this music, if for no other reason than the fact that hip-hop is the most-listened-to genre in the world, at least among Spotify listeners.
Hip-hop has been getting plenty of scholarly attention lately, but most of it has been coming from cultural studies. Which is fine! Hip-hop is culturally interesting. When humanities people do engage with hip-hop as an art form, they tend to focus entirely on the lyrics, treating them as a subgenre of African-American literature that just happens to be performed over beats. And again, that’s cool! Hip-hop lyrics have significant literary interest. (If you’re interested in the lyrical side, we recommend this video analyzing the rhyming techniques of several iconic emcees.) But what we want to discuss is why hip-hop is musically interesting, a subject which academics have given approximately zero attention to.
This summer, I’m teaching Cultural Significance of Rap and Rock at Montclair State University. It’s my first time teaching it, and it’s also the first time anyone has taught it completely online. The course is cross-listed under music and African-American studies. Here’s a draft of my syllabus, omitting details of the grading and such. I welcome your questions, comments and criticism.
My computer dictionary says that a melody is “a sequence of single notes that is musically satisfying.” There are a lot of people out there who think that rap isn’t music because it lacks melody. My heart broke when I found out that Jerry Garcia was one of these people. If anyone could be trusted to be open-minded, you’d think it would be Jerry, but no.
I’ve always instinctively believed this position to be wrong, and I finally decided to test it empirically. I took some rap acapellas and put them into Melodyne. What I found is that rap vocals use plenty of melody. The pitches rise and fall in specific and patterned ways. The pitches aren’t usually confined to the piano keys, but they are nevertheless real and non-arbitrary. (If you say a rap line with the wrong pitches, it sounds terrible.) Go ahead, look and listen for yourself. Click each image to hear the song section in question.
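If you want to try a crude version of this experiment in code (a sketch only; Melodyne’s actual analysis is proprietary and far more sophisticated), an autocorrelation pitch tracker is enough to show that a vocal’s pitch is measurable, patterned, and under no obligation to land on a piano key:

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=80.0, fmax=500.0):
    """Estimate a frame's fundamental frequency in Hz by finding
    the autocorrelation peak within a plausible period range."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # candidate periods in samples
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# A 227 Hz test tone: a definite, patterned pitch that sits between piano keys
sr = 22050
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 227.0 * t)
f0 = estimate_pitch(tone, sr)
midi = 69 + 12 * np.log2(f0 / 440.0)   # falls between A3 (57) and Bb3 (58)
```

Run frame-by-frame over an acapella, this kind of tracker traces exactly the rising and falling contours Melodyne displays, just less accurately.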
When we talk about Auto-Tune, we’re talking about two different things. There’s the intended use, which is to subtly correct pitch problems (and not just with vocalists; it’s extremely useful for horns and strings). The ubiquity of pitch correction in the studio should be no great mystery; it’s a tremendous time-saver.
But usually when we talk about Auto-Tune, we’re talking about the “Cher Effect,” the sound you get when you set the Retune Speed setting to zero. The Cher Effect is used so often in pop music because it’s richly expressive of our emotional experience of the world: technology-saturated, alienated, unreal. My experience with Auto-Tune as a musician has felt like stepping out the door of a spaceship to explore a whole new sonic planet. Auto-Tune turns the voice into a keyboard synth, and we are only just beginning to understand its creative possibilities. (Warning: explicit lyrics throughout.)
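Mechanically, “retune speed zero” means every detected pitch snaps instantly to the nearest allowed note instead of gliding there. Here is a toy model in Python; the math of snapping to twelve-tone equal temperament is standard, but the `retune` interpolation (and its 0-to-1 `amount` parameter, unlike the plugin’s millisecond-based knob) is my simplification, not Antares’ algorithm.

```python
import math

def snap_to_semitone(freq_hz, a4=440.0):
    """Nearest 12-tone equal-tempered frequency: the hard
    correction you get with Retune Speed at zero."""
    midi = 69 + 12 * math.log2(freq_hz / a4)     # continuous MIDI pitch
    return a4 * 2 ** ((round(midi) - 69) / 12)   # round to a piano key

def retune(freq_hz, amount):
    """Move a fraction of the way (geometrically, i.e. in pitch)
    toward the snapped note. amount=1.0 is the instant Cher Effect;
    small amounts give gentle, mostly inaudible correction."""
    return freq_hz * (snap_to_semitone(freq_hz) / freq_hz) ** amount

snap_to_semitone(452.0)   # a sharp A4 (452 Hz) snaps to exactly 440.0
retune(452.0, 0.5)        # half-corrected: about 445.96 Hz
```

The audible difference between transparent correction and the quantized robot effect is essentially how fast and how completely each note is pulled to its target.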
Here’s an email conversation I’ve been having with my friend Greg Brown about Kanye West’s recent albums. Greg is a classical composer and performer with a much more avant-garde sensibility than mine. The exchange is lightly edited for clarity.
Greg: I’ve been listening to 808s and Heartbreak and Twisted Fantasy. I’m really enjoying them, far more than I thought I would. I think Auto-Tune here is somehow protective for Kanye when he is expressing emotion in a genre where that is not really smiled upon. I haven’t quite put my finger on it, but I think the dehumanizing of the human voice is somehow a foil for the expression of inner turmoil. It’s haunting.
Ethan: Yes! Absolutely. The Auto-Tune gives Ye a way to be the sensitive, vulnerable singer, as opposed to the swaggering rapper. And I like how 808s and Fantasy share a sonic palette, except 808s is sparse and Fantasy is full. And the thing of using tuned 808 kick drums to play the basslines is so hip.
Greg: The hard part for me to wrap my head around is the fact that Auto-Tune is a filter, a dehumanizer, and yet it manages to make Kanye both closer and more human.
Ethan: I have a broader philosophical idea brewing about the concepts of “dehumanizing” and “posthuman” and how they’re really kind of meaningless, at least as applied to music. How can things that humans create be dehumanizing? Everyone involved in the production of Kanye’s albums is human. Auto-Tune is a novel way of sounding human, but it’s still human, just like the sound of reverb or EQ or compression.
Greg: Yes — I have similar issues with natural vs. unnatural in general. Humans are natural, therefore everything we do is also natural.
Auto-Tune was already a well-established studio tool by the time Cher’s “Believe” came out, though it was unknown outside the music industry.