The title of this post is also the title of a tutorial I’m giving at ISMIR 2016 with Jan Van Balen and Dan Brown. Here are the slides:
The conference is organized by the International Society for Music Information Retrieval, and it’s the fanciest of its kind. You may well be wondering what Music Information Retrieval is. MIR is a specialized field in computer science devoted to teaching computers to understand music, so they can transcribe it, organize it, find connections and similarities, and, maybe, eventually, create it.
So why are we going to talk to the MIR community about hip-hop? So far, the field has mostly studied music using the tools of Western classical music theory, which emphasizes melody and harmony. Hip-hop songs don’t tend to have much going on in either of those areas, which makes the genre seem either too difficult to study or just too boring. But the MIR community needs to find ways to engage with this music, if for no other reason than that hip-hop is the most listened-to genre in the world, at least among Spotify listeners.
Hip-hop has been getting plenty of scholarly attention lately, but most of it has come from cultural studies. Which is fine! Hip-hop is culturally interesting. When humanities people do engage with hip-hop as an art form, they tend to focus entirely on the lyrics, treating them as a subgenre of African-American literature that just happens to be performed over beats. And again, that’s cool! Hip-hop lyrics have significant literary interest. (If you’re interested in the lyrical side, we recommend this video analyzing the rhyming techniques of several iconic emcees.) But what we want to discuss is why hip-hop is musically interesting, a subject to which academics have given approximately zero attention.
This summer, I’m teaching Cultural Significance of Rap and Rock at Montclair State University. It’s my first time teaching it, and it’s also the first time anyone has taught it completely online. The course is cross-listed under music and African-American studies. Here’s a draft of my syllabus, omitting details of the grading and such. I welcome your questions, comments and criticism.
My computer dictionary says that a melody is “a sequence of single notes that is musically satisfying.” There are a lot of people out there who think that rap isn’t music because it lacks melody. My heart broke when I found out that Jerry Garcia was one of these people. If anyone could be trusted to be open-minded, you’d think it would be Jerry, but no.
I’ve always instinctively believed this position to be wrong, and I finally decided to test it empirically. I took some rap acapellas and put them into Melodyne. What I found is that rap vocals use plenty of melody. The pitches rise and fall in specific and patterned ways. They aren’t usually confined to the piano keys, but they are nevertheless real and non-arbitrary. (If you say a rap line with the wrong pitches, it sounds terrible.) Go ahead, look and listen for yourself. Click each image to hear the song section in question.
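If you want to try a rough version of this experiment yourself without Melodyne, here’s a minimal sketch using librosa’s pyin pitch tracker. The filename is a placeholder for whatever vocal stem you have; the plot it produces is a crude version of the pitch contours you see in the images above.

```python
# Sketch: extract and plot the pitch contour of a rap acapella.
# Uses librosa's pyin tracker as a rough stand-in for Melodyne.
import librosa
import matplotlib.pyplot as plt

y, sr = librosa.load("acapella.wav")  # placeholder filename

# pyin estimates the fundamental frequency frame by frame;
# frames it judges unvoiced come back as NaN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y,
    fmin=librosa.note_to_hz("C2"),
    fmax=librosa.note_to_hz("C6"),
    sr=sr,
)

times = librosa.times_like(f0, sr=sr)
plt.plot(times, f0)
plt.xlabel("Time (seconds)")
plt.ylabel("Fundamental frequency (Hz)")
plt.title("Pitch contour of a rap vocal")
plt.show()
```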
When we talk about Auto-Tune, we’re talking about two different things. There’s the intended use, which is to subtly correct pitch problems (and not just for vocalists; it’s extremely useful for horns and strings). The ubiquity of pitch correction in the studio should be no great mystery; it’s a tremendous time-saver.
But usually when we talk about Auto-Tune, we’re talking about the “Cher Effect,” the sound you get when you set Retune Speed to zero. The Cher Effect is used so often in pop music because it’s richly expressive of our emotional experience of the world: technology-saturated, alienated, unreal. My experience with Auto-Tune as a musician has felt like stepping out the door of a spaceship to explore a whole new sonic planet. Auto-Tune turns the voice into a keyboard synth, and we are only just beginning to understand its creative possibilities. (Warning: explicit lyrics throughout.)
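To make the “Retune Speed zero” idea concrete, here’s a toy sketch of the underlying math. This is my illustration, not Antares’ actual algorithm: every incoming pitch gets snapped instantly to the nearest equal-tempered semitone, with none of the gradual glide a human singer would use.

```python
# Toy sketch of hard pitch quantization, the core of the "Cher Effect."
# Real Auto-Tune also handles scale selection and smooth resynthesis;
# this just shows the snapping math.
import math

def snap_to_semitone(f_hz, a4=440.0):
    """Return the frequency of the nearest equal-tempered note."""
    midi = round(69 + 12 * math.log2(f_hz / a4))  # nearest MIDI note number
    return a4 * 2 ** ((midi - 69) / 12)

# A smooth upward slide becomes a staircase of discrete jumps --
# the robotic warble you hear on "Believe":
for f in [215.0, 220.0, 226.0, 233.0, 241.0, 247.0]:
    print(f"{f:6.1f} Hz  ->  {snap_to_semitone(f):6.1f} Hz")
```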
Here’s an email conversation I’ve been having with my friend Greg Brown about Kanye West’s recent albums. Greg is a classical composer and performer with a much more avant-garde sensibility than mine. The exchange is lightly edited for clarity.
Greg: I’ve been listening to 808s and Heartbreak and Twisted Fantasy. I’m really enjoying them, far more than I thought I would. I think Auto-Tune here is somehow protective for Kanye when he is expressing emotion in a genre where that is not really smiled on. I haven’t quite put my finger on it, but I think the dehumanizing of the human voice is somehow a foil for the expression of inner turmoil. It’s haunting.
Ethan: Yes! Absolutely. The Auto-Tune gives Ye a way to be the sensitive, vulnerable singer, as opposed to the swaggering rapper. And I like the similar sonic palettes between 808s and Fantasy, except 808s is sparse and Fantasy is full. And the thing of using tuned 808 kick drums to play the basslines is so hip.
Greg: The hard part for me to wrap my head around is the fact that Auto-Tune is a filter, a dehumanizer, and yet it manages to make Kanye both closer and more human.
Ethan: I have a broader philosophical idea brewing about the concepts of “dehumanizing” and “posthuman” and how they’re really kind of meaningless, at least as applied to music. How can things that humans create be dehumanizing? Everyone involved in the production of Kanye’s albums is human. Auto-Tune is a novel way of sounding human, but it’s still human, just like the sound of reverb or EQ or compression.
Greg: Yes — I have similar issues with natural vs. unnatural in general. Humans are natural, therefore everything we do is also natural.
Auto-Tune was already a well-established studio tool by the time Cher’s “Believe” came out, though it was unknown outside the music industry.
If you’re a guitarist, you may have noticed that it’s hard to get your instrument perfectly in tune. This is not your imagination. If you tune each string perfectly to the one next to it, the low E string will end up out of tune with the high E string. If you use an electronic tuner to make sure the individual strings are tuned to the correct pitch, they won’t sound fully in tune with each other. It has nothing to do with the quality of your instrument or your skill at tuning: it’s a fundamental fact of Western music theory. This post attempts to explain why. It’s very geeky stuff, but if you like math (and who doesn’t?), read on.
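As a teaser for that post, here’s the arithmetic in miniature. Assume you tune each adjacent pair of strings beatless, which means pure just-intonation intervals: four perfect fourths (frequency ratio 4:3) plus the major third from G to B (5:4). Chaining those intervals falls about 21.5 cents short of the two pure octaves that should separate low E from high E, a gap known as the syntonic comma.

```python
# Tune E->A, A->D, D->G, and B->E as pure fourths (4:3) and G->B as a
# pure major third (5:4), then compare the total span to two exact octaves.
import math

def cents(ratio):
    """Size of an interval in cents (100 cents = 1 equal-tempered semitone)."""
    return 1200 * math.log2(ratio)

pure_span = (4 / 3) ** 4 * (5 / 4)  # four pure fourths plus one pure major third
two_octaves = 4.0                   # low E to high E: a 4:1 frequency ratio

print(f"pure intervals span: {cents(pure_span):7.2f} cents")    # ~2378.49
print(f"two octaves:         {cents(two_octaves):7.2f} cents")  # 2400.00
print(f"shortfall:           {cents(two_octaves / pure_span):5.2f} cents")  # ~21.51
```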
This week I’ve been all about Kanye West’s “Lost In The World,” the most gripping track on My Beautiful Dark Twisted Fantasy. Kanye is one of the few commercial producers with a high enough profile to be able to license whatever samples he wants, so he carries the banner of memetastic collage-based music in the mainstream, and god bless him for it. Click through for the song on YouTube.
There’s nothing going on in contemporary music that interests me more than the vibe of this track. The blend of electronic and tribal drums and Auto-Tuned singing draws on the same sonic palette as “Love Lockdown,” which continues to be my favorite song of the 21st century, but “Lost In The World” is much bigger and denser.
Meet the most fascinating and problematic pop star of the moment, Antoine Dodson.
If you’re a follower of internet memes, you know the story by now. If not: Antoine, his sister Kelly and her daughter were asleep in their apartment in the Lincoln Park housing project in Huntsville, Alabama. An intruder broke in and sexually assaulted Kelly before Antoine chased him off. The family complained to the housing project authorities, who were unmoved. So on July 28, 2010, the Dodsons took their story to the local news.
Revival Revival vs Primus
MP3 download, iPod format download
Vocals by Barbara Singer. Samples and programming by me. The guitar licks were originally played by Alex Torovic but have been chopped up pretty dramatically. This is part of our ongoing strategy, learned from hip-hop, of taking a familiar chorus and coming up with new verses.