Musical repetition has become a repeating theme of this blog. Seems appropriate, right? This post looks at a wonderful Aeon Magazine article by Elizabeth Hellmuth Margulis investigating why we love repetition in music.
The simple act of repetition can serve as a quasi-magical agent of musicalisation. Instead of asking: ‘What is music?’ we might have an easier time asking: ‘What do we hear as music?’ And a remarkably large part of the answer appears to be: ‘I know it when I hear it again.’
Quora user Marc Ettlinger recently sent me a paper by Sherri Novis-Livengood, Richard White, and Patrick CM Wong entitled Fractal complexity (1/f power law) determines the stability of music perception, emotion, and memory in a repeated exposure paradigm. (The paper isn’t on the open web, but here’s a poster-length version.) The authors think that fractals explain our music preferences. Specifically, they find that note durations, pitch intervals, phrase lengths and other quantifiable musical parameters tend to follow a power law distribution. Power-law distributions have the nifty property of scale invariance, meaning that patterns in such entities resemble themselves at different scales. Music is full of fractals, and the more fractal-filled it is, the more we like it.
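The paper itself isn't accompanied by code, but the flavor of a 1/f process is easy to sketch. Here's a minimal Python illustration using the Voss algorithm, a standard way to approximate pink (1/f) noise; the pentatonic scale and the number of random sources are my own assumptions, not anything from the paper.

```python
import random

def pink_sequence(n, num_sources=8):
    """Approximate 1/f (pink) noise via the Voss algorithm:
    several random sources, each updated half as often as the last.
    Their sum has a spectrum that falls off roughly as 1/f."""
    sources = [random.random() for _ in range(num_sources)]
    out = []
    for i in range(n):
        for bit in range(num_sources):
            # source `bit` is refreshed every 2**bit steps
            if i % (2 ** bit) == 0:
                sources[bit] = random.random()
        out.append(sum(sources))
    return out

# Map the values onto a pentatonic scale to hear the self-similar contour
SCALE = [60, 62, 64, 67, 69]  # MIDI note numbers, C major pentatonic

def to_melody(values):
    lo, hi = min(values), max(values)
    return [SCALE[int((v - lo) / (hi - lo + 1e-9) * (len(SCALE) - 1))]
            for v in values]

melody = to_melody(pink_sequence(16))
```

A melody generated this way wanders at every time scale at once, which is the scale-invariance property the authors are measuring in real music.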
Soundation uses the same basic interface paradigm as other audio recording and editing programs like Pro Tools and Logic. Your song consists of a list of tracks, each of which can contain a particular sound. The tracks all play back at the same time, so you can use them to blend together sounds as you see fit. You can either record your own sounds, or use the loops included in Soundation, or both. The image below shows six tracks. The first two contain loops of audio; the other four contain MIDI, which I’ll explain later in the post.
Karen Brennan’s doctoral dissertation looks at the ways people teach and learn Scratch, and asks how the study of programming can help or hinder kids’ agency in their own learning. Agency, in this sense, refers to your ability to define and pursue learning goals, so you can play a part in your self-development, adaptation, and self-renewal. This is interesting to me, because every single argument Brennan makes about the teaching of programming applies equally well to the teaching of music.
I am mercifully finished with music theory in grad school and couldn’t be happier about it. You may find this surprising. My blog is full of music theory. How could a guy who enjoys thinking about music in analytical terms as much as I do have such a wretched time in my graduate music theory classes? It wasn’t the work; I mostly breezed through that. No, it was the grinding Eurocentrism. Common-practice period classical music theory is fine and good, but in the hands of the music academy, it’s dry, tedious, and worst of all, largely useless. The strict rules of eighteenth-century European art music are distantly removed from the knowledge a person needs to do anything in the present-day music world (except, I guess, to be a professor of common-practice tonal theory).
The title of this post is a reference to the Susan Sontag essay, “Against Interpretation.” She argues that by ignoring the subjective sensual pleasures of art and instead looking for rigorously logical theories of its inner workings, academics are missing the point. She calls scholarly interpretation “the intellect’s revenge upon art.” I’m with her. Music theory as practiced at NYU and elsewhere is the intellectual’s revenge on music. Sontag’s punchline is right on: “[I]n place of a hermeneutics we need an erotics of art.” Speak it, sister!
Nearly getting scooped by Loopseque lit a fire under me to get some more concept images for my thesis app together. So here are some examples of the beat programming lessons that form the intellectual heart of my project. The general idea is that you’re given an existing drum pattern, a famous breakbeat or something more generic. Some of the beats are locked down, guaranteeing that anything you do will sound musical. Click each one to see it bigger.
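To make the idea concrete, here's a toy Python model of a lesson: a 16-step drum grid where some rows are locked (always correct) and the rest are open for the learner to edit. The pattern, track names, and function are hypothetical illustrations, not code from the actual app.

```python
# A toy model of the lesson idea: a 16-step drum grid where some rows
# are locked (guaranteed to sound musical) and others are editable.
STEPS = 16

pattern = {
    "kick":  [1,0,0,0, 0,0,1,0, 0,0,1,0, 0,0,0,0],
    "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
    "hat":   [0] * STEPS,   # learner fills this row in
}
locked = {"kick", "snare"}  # these rows can't be changed

def toggle(track, step):
    """Flip one cell on or off, unless the track is locked."""
    if track in locked:
        return False
    pattern[track][step] ^= 1
    return True

toggle("hat", 0)    # allowed: learner adds a hi-hat hit
toggle("kick", 3)   # ignored: the kick row is locked
```

Because the kick and snare stay fixed, any hi-hat pattern the learner toggles in will still groove against a solid foundation, which is the point of the exercise.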
For me, I have to wait for the right inspiration, which comes very irregularly. But it seems others can compose with chords deliberately. How do you compose, and do you feel proud of it every time (i.e., do you know you couldn’t have done better)?
I have two methods of composition: improvisation and collage. I use the computer for both. At the moment, my software of choice is Ableton Live. Before that I mostly used Pro Tools and Reason. It’s been a long time since I “composed” something on a piece of paper (except for music school assignments).
Alex Ruthmann, in a blog post discussing music-making with the educational multimedia programming environment Scratch, has this to say:
What’s NOT easy in Scratch for most kids is making meaningful music with a series of “play note”, “rest for”, and “play drum” blocks. These blocks provide access to music at the phoneme rather than morpheme levels of sound. Or, as Jeanne Bamberger puts it, at the smallest musical representations (individual notes, rests, and rhythms) rather than the simplest musical representations (motives, phrases, sequences) from the perspective of children’s musical cognition. To borrow a metaphor from chemistry, yet another comparison would be the atomic/elemental vs. molecular levels of music.
To work at the individual note, rest, and rhythm levels requires quite a lot of musical understanding and fluency. It can often be hard to “start at the very beginning.” One needs to understand and be able to dictate proportional rhythm, as well as to divine musical metadimensions by ear such as key, scale, and meter. Additionally, one needs to be fluent in chromatic divisions of the octave, and to know that in MIDI, “middle C” = the note value 60. In computer science parlance, one could describe the musical blocks included with Scratch as “low level,” requiring a lot of prior knowledge and understanding with which to work.
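To make the “middle C = 60” convention concrete, here's a quick Python sketch. (Scratch itself uses graphical blocks, not text code; this is just an illustration of the prior knowledge Ruthmann is describing, and the motive at the end is my own example of the “molecular” level.)

```python
# Note-name -> MIDI number, illustrating the "middle C = 60" convention.
# MIDI divides each octave into 12 chromatic steps; C4 (middle C) is 60.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_number(name, octave):
    """Convert a note name and octave to its MIDI note number."""
    return 12 * (octave + 1) + NAMES.index(name)

midi_number("C", 4)  # middle C -> 60

# The "molecular" level: a motive is a reusable chunk of (pitch, duration)
# pairs -- closer to how kids think about music than one play-note block
# per atomic event.
motive = [(midi_number("C", 4), 0.5),
          (midi_number("E", 4), 0.5),
          (midi_number("G", 4), 1.0)]
```

Even this tiny conversion function shows how much arbitrary convention (twelve steps, the octave offset, the number 60) a child has to absorb before the atomic blocks become usable.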