Marvin Gaye is one of the great singers and songwriters of all time, with a status deservedly approaching secular sainthood. Robin Thicke is a sleazy dirtbag who made a giant pile of money by knocking off one of Marvin’s songs to produce a rapey earworm, accompanied by a porn video. Naturally, I side with Team Marvin, and am delighted that Thicke and Pharrell lost the lawsuit.
While my fellow musicians are gleefully crowing, other observers are worried that this case sets a bad precedent. Michaelangelo Matos is among them.
I encourage vocal fans of this verdict to demonstrate their solidarity by deleting and/or destroying every piece of music they own featuring an unlicensed sample or bearing a notable resemblance to an earlier piece of music. But they won’t, and they shouldn’t, because that would entail deleting just about everything. Even if you loathe Thicke, this is no cause for celebration, because the size of the Gaye estate’s bounty is only going to encourage more lawsuits like this one.
Bennett, J. (2011). Collaborative songwriting – the ontology of negotiated creativity in popular music studio practice. Journal on the Art of Record Production, (5), online.
My professional life at the moment mostly consists of teaching classical and jazz musicians how to write pop songs. While every American is intuitively familiar with the norms of pop music, few of us think about them explicitly, even trained musicians. It’s worth considering them, though. While individual pop songs might be musically uninteresting, in the aggregate they’re a rich source of information about the way our culture evolves. Bennett describes popular song as an “unsubsidized populist art form,” like Hollywood movies and video games. The marketplace exerts strong Darwinian pressures on songwriters and producers, polishing pop conventions like pebbles being tumbled in a river.
A Quora user asks whether artificial intelligence will ever replace human musicians. TL;DR No.
If music composition and improvisation could be expressed as algorithmic rule sets, then human musicians would have reason for concern. Fortunately, music can’t be completely systematized, much as some music theorists would like to believe it can. Music is not an internally consistent logical system like math or physics. It’s an evolved body of mostly arbitrary patterns and memes. This should be no surprise; music emerges from our consciousness, and our consciousness is an evolved system, not an algorithmic one. We can do algorithmic reasoning if we work really hard at it, but our minds are chaotic and unpredictable, and it isn’t our strong suit. That’s a good thing, too: we may not be so hot at performing algorithms, but we’re good at inventing possible new ones. Computers are great at performing algorithms, but lousy at inventing new ones.
Recently, WNYC’s great music show Soundcheck held a contest to see who could do the best version of the hundred-year-old song “Yellow Dog Blues” by W.C. Handy.
Marc Weidenbaum had the members of the Disquiet Junto enter the contest en masse. I did my track, put it on SoundCloud, and promptly forgot all about it.
A month later, I was surprised and delighted to learn from Marc’s blog that the contest winner was Junto stalwart Westy Reflector.
There’s an interview on the Creative Commons blog with Disquiet Junto instigator and Aphex Twin historian Marc Weidenbaum. It’s full of his usual keen insight.
Here are some key quotes.
Continuing my series of posts on the ways that science might explain why we like the music we like. See also my posts on the science of rock harmony, harmony generally, and Afro-Cuban rhythms.
Quora user Marc Ettlinger recently sent me a paper by Sherri Novis-Livengood, Richard White, and Patrick CM Wong entitled Fractal complexity (1/f power law) determines the stability of music perception, emotion, and memory in a repeated exposure paradigm. (The paper isn’t on the open web, but here’s a poster-length version.) The authors think that fractals explain our music preferences. Specifically, they find that note durations, pitch intervals, phrase lengths and other quantifiable musical parameters tend to follow a power law distribution. Power-law distributions have the nifty property of scale invariance, meaning that patterns in such entities resemble themselves at different scales. Music is full of fractals, and the more fractal-filled it is, the more we like it.
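To make the power-law idea concrete, here’s a small sketch of my own (not the authors’ method): draw some hypothetical “note durations” from a heavy-tailed Pareto distribution, then recover the exponent by fitting a line in log-log space, which is where a power law shows its characteristic straightness.

```python
import math
import random
from collections import Counter

random.seed(0)

# Hypothetical note durations (in beats) drawn from a heavy-tailed Pareto
# distribution, standing in for durations extracted from a real piece.
durations = [random.paretovariate(1.5) for _ in range(20_000)]

# A power law p(x) ~ x^(-alpha) plots as a straight line in log-log space,
# so bin the data in powers of two and fit a line by least squares.
bins = Counter(int(math.log2(d)) for d in durations)
xs = sorted(b for b in bins if bins[b] >= 30)       # drop sparse tail bins
ys = [math.log(bins[b] / 2 ** b) for b in xs]       # log of density per bin

mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
alpha = -slope / math.log(2)  # convert slope per log2-bin back to an exponent

print(f"estimated power-law exponent: {alpha:.2f}")  # near 2.5 for this sample
```

The scale-invariance point falls out of the math: multiplying x by a constant just shifts the log-log line sideways without changing its slope, so the pattern looks the same at every zoom level.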
One of the best discoveries I made while researching my thesis is the mathematician Godfried Toussaint. While the bookshelves groan with mathematical analyses of western harmony, Toussaint is the rare scholar who uses the same tools to understand Afro-Cuban rhythms. He’s especially interested in the rhythm known to Latin musicians as 3-2 son clave, to Ghanaians as the kpanlogo bell pattern, and to rock musicians as the Bo Diddley beat. Toussaint calls it “The Rhythm that Conquered the World” in his paper of the same name. Here it is as programmed by me on a drum machine:
The image behind the SoundCloud player is my preferred circular notation for son clave. Here are eight more conventional representations, as rendered by Toussaint:
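One of those representations, box notation, translates directly into code: a cycle of sixteen steps with a 1 wherever a stroke lands. Here’s a sketch (my own, in the spirit of Toussaint’s analyses) that also computes the pattern’s inter-onset intervals, the distances between consecutive strokes:

```python
# 3-2 son clave in 16-step box notation: 1 = stroke, 0 = rest,
# one box per sixteenth note.
SON_CLAVE = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0]

def inter_onset_intervals(pattern):
    """Distances (in steps) between consecutive strokes, wrapping around the cycle."""
    onsets = [i for i, hit in enumerate(pattern) if hit]
    n = len(pattern)
    return [(b - a) % n for a, b in zip(onsets, onsets[1:] + onsets[:1])]

print(inter_onset_intervals(SON_CLAVE))  # [3, 3, 4, 2, 4]
```

That 3-3-4-2-4 signature is the clave: five strokes spread almost, but not quite, evenly around the sixteen-step cycle, which is exactly the near-even asymmetry Toussaint’s work dwells on.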
My last post argued that we should derive music theory from empirical, ethnomusicological observation of what people actually like. Another good strategy would be to derive music theory from observation of what’s going on between our ears. Daniel Shawcross Wilkerson has attempted just that in his essay, Harmony Explained: Progress Towards A Scientific Theory of Music. The essay has an endearingly old-timey subtitle:
The Major Scale, The Standard Chord Dictionary, and The Difference of Feeling Between The Major and Minor Triads Explained from the First Principles of Physics and Computation; The Theory of Helmholtz Shown To Be Incomplete and The Theory of Terhardt and Some Others Considered
Wilkerson begins with the observation that music theory books read like medical texts from the middle ages: “they contain unjustified superstition, non-reasoning, and funny symbols glorified by Latin phrases.” We can do better.
Wilkerson proposes that we derive a theory of harmony from first principles drawn from our understanding of how the brain processes audio signals. We evolved to be able to detect sounds with natural harmonics, because those usually come from significant sources, like the throats of other animals. Musical harmony is our way of gratifying our harmonic-series detectors.
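To make the harmonic-series idea concrete, here’s a small sketch of my own (not Wilkerson’s code): compute the first few partials of a fundamental, which are just its integer multiples, and measure how far each falls from the nearest equal-tempered pitch.

```python
import math

def harmonics(fundamental_hz, n=8):
    """First n partials of a harmonic sound: integer multiples of the fundamental."""
    return [fundamental_hz * k for k in range(1, n + 1)]

def cents_from_tempered(freq_hz, ref_hz):
    """Signed distance in cents from freq to the nearest 12-tone equal-tempered
    pitch in the chromatic scale built on ref."""
    semitones = 12 * math.log2(freq_hz / ref_hz)
    return 100 * (semitones - round(semitones))

# Partials of A2 (110 Hz) and how far each sits from the nearest tempered pitch.
for k, f in enumerate(harmonics(110), start=1):
    print(f"partial {k}: {f:6.1f} Hz, {cents_from_tempered(f, 110):+6.1f} cents")
```

Running this shows why the major triad feels built in: partials 2, 3, 4, 5, and 6 land on the octave, fifth, and major third, with the third partial only about two cents off its tempered approximation and the fifth partial about fourteen cents flat of it.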
Update: a version of this post appeared on Slate.com.
I seem to have touched a nerve with my rant about the conventional teaching of music theory and how poorly it serves practicing musicians. I thought it would be a good idea to follow that up with some ideas for how to make music theory more useful and relevant. The goal of music theory should be to explain common practice music. I don’t mean “common practice” in its present pedagogical sense. I mean the musical practices that are most prevalent in a given time and place, like America in 2013. Rather than trying to identify a canonical body of works and a bounded set of rules defined by that canon, we should take an ethnomusicological approach. We should be asking: what is it that musicians are doing that sounds good? What patterns can we detect in the broad mass of music being made and enjoyed out there in the world?
I have my own set of ideas about what constitutes common practice music in America in 2013, but I also come with my set of biases and preferences. It would be better to have some hard data on what we all collectively think makes for valid music. Trevor de Clercq and David Temperley have bravely attempted to build just such a data set, at least within one specific area: the harmonic practices used in rock, as defined by Rolling Stone magazine’s list of the 500 Greatest Songs of All Time. Temperley and de Clercq transcribed the top 20 songs from each decade between 1950 and 2000. You can see the results in their paper, “A corpus analysis of rock harmony.” They also have a web site where you can download their raw data and analyze it yourself. The whole project is a masterpiece of descriptivist music theory, as opposed to the bad prescriptivist kind.
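The basic move in a corpus analysis like theirs is simple to sketch. Here’s a toy version with made-up transcriptions (not the actual de Clercq/Temperley encoding, which is much richer): tally the Roman-numeral chord symbols across songs and report each chord’s share of the total.

```python
from collections import Counter

# Hypothetical Roman-numeral transcriptions of three songs, purely for
# illustration; the real corpus encodes keys, rhythms, and more.
transcriptions = [
    ["I", "IV", "V", "I", "IV", "I", "V", "I"],
    ["I", "bVII", "IV", "I", "I", "bVII", "IV", "I"],
    ["vi", "IV", "I", "V", "vi", "IV", "I", "V"],
]

counts = Counter(chord for song in transcriptions for chord in song)
total = sum(counts.values())
for chord, n in counts.most_common():
    print(f"{chord:>5}: {n / total:.0%}")
```

Even this toy tally is descriptivist in spirit: it reports what the transcribed musicians actually played, weighted by how often they played it, rather than ranking chords by what a rulebook says they should have done.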
Another thought-provoking Quora question: Are there any hereditary units in music? The question details give some context:
In his blog post “The Music Genome Project is no such thing,” David Morrison makes an edifying distinction between a genotype and a phenotype. He also makes the bold statement “there are no hereditary units in music.” Is this true?
Morrison’s post is a valuable read, because it’s so precisely wrong as to be quite useful in clarifying your thinking.