Teaching audio and MIDI editing in the MOOC

This is the fifth in a series of posts documenting the development of Play With Your Music, a music production MOOC jointly presented by P2PU, NYU and MIT. See also the first, second, third and fourth posts.

Soundation uses the same basic interface paradigm as other audio recording and editing programs like Pro Tools and Logic. Your song consists of a list of tracks, each of which holds a particular sound. The tracks all play back at the same time, so you can use them to blend sounds together however you see fit. You can record your own sounds, use the loops included in Soundation, or do both. The image below shows six tracks. The first two contain loops of audio; the other four contain MIDI, which I’ll explain later in the post.

Audio and MIDI tracks in Soundation
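
To make the paradigm concrete, here is a minimal sketch of the track-and-clip model in Python. The class and field names are my own invention for illustration; Soundation doesn’t expose any programming interface like this.

```python
# A minimal sketch of the track-and-clip paradigm described above.
# The names here are invented for illustration; Soundation exposes
# no programming interface like this.

from dataclasses import dataclass, field

@dataclass
class Clip:
    start_beat: float     # where the clip begins on the timeline
    length_beats: float
    source: str           # e.g. an audio loop, or a MIDI pattern

@dataclass
class Track:
    name: str
    kind: str             # "audio" or "midi"
    clips: list = field(default_factory=list)

@dataclass
class Song:
    tempo_bpm: float
    tracks: list = field(default_factory=list)

    def sounding_at(self, beat):
        """All clips that overlap a given beat -- every track plays in parallel."""
        return [clip for track in self.tracks for clip in track.clips
                if clip.start_beat <= beat < clip.start_beat + clip.length_beats]

# Two audio loop tracks and a MIDI track, as in the screenshot above.
song = Song(tempo_bpm=120, tracks=[
    Track("Drums", "audio", [Clip(0, 16, "drum_loop.wav")]),
    Track("Bass",  "audio", [Clip(8, 8,  "bass_loop.wav")]),
    Track("Synth", "midi",  [Clip(8, 8,  "chord pattern")]),
])
print([c.source for c in song.sounding_at(10)])   # drums, bass and synth together
```

The point is simply that a song is a stack of parallel timelines, and the mix at any moment is whatever clips happen to overlap that beat.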


The radial drum machine: background and inspiration

Update: I now have a functioning prototype of my app. If you’d like to try it, get in touch.

My NYU master’s thesis is a drum programming tutorial system for beginner musicians. It uses a novel circular interface for displaying drum patterns. This presentation explains the project’s goals, motivations and scholarly background.

If you prefer, see it on Slideshare.
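
To give a rough sense of what a circular interface implies, here is a small sketch (my own illustration, not the thesis code) that maps each step of a 16-step drum pattern to a point on a circle.

```python
# A rough sketch of how a radial drum display might place pattern steps
# around a circle -- my own illustration, not the actual thesis code.

import math

def step_position(step, total_steps=16, radius=1.0):
    """Return (x, y) for a step, with step 0 at twelve o'clock and time
    moving clockwise, in conventional math coordinates (y pointing up)."""
    angle = math.pi / 2 - 2 * math.pi * step / total_steps
    return (radius * math.cos(angle), radius * math.sin(angle))

# The four quarter-note positions of a 16-step bar land at the compass points.
for step in (0, 4, 8, 12):
    x, y = step_position(step)
    print(f"step {step:2d}: x = {x:+.2f}, y = {y:+.2f}")
```

The appeal of the layout is that step 16 lands back on step 0, so the loop point that a left-to-right grid hides is built into the picture.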

User Interface Design for Music Learning Software

Computers have revolutionized the composition, production and recording of music. However, they have not yet revolutionized music education. While a great deal of educational software exists, it mostly follows traditional teaching paradigms, offering ear training, flash cards and the like. Meanwhile, nearly all popular music is produced in part or in whole with software, yet electronic music producers typically have little to no formal training with their tools. Somewhere between the ad-hoc learning methods of pop and dance producers and traditional music pedagogy lies a rich untapped vein of potential.

This paper will explore the problem of how software can best be designed to help novice musicians access their own musical imagination with a minimum of frustration. I will examine a variety of design paradigms and case studies. I hope to discover software interface designs that present music in a visually intuitive way, that are discoverable, and that promote flow.


Using Ableton Live to teach music

Here’s a presentation I gave at the December 2012 Advanced Ableton User Meetup at Tekserve, hosted by Hank Shocklee of Public Enemy. I speak about how useful Ableton Live is as a music teaching tool, using Gnarls Barkley’s “Crazy” as an example.

Very shortly after I concluded my talk, my wife went into labor with our son Milo. Quite a memorable night.

User interface case study: iOS Garageband

Apple has long made a practice of giving away cool software with their computers. One of the coolest such freebies is Garageband. It’s a stripped-down version of Logic aimed at beginners, and it’s a surprisingly robust tool. The software instruments and loops sound terrific, the interface is approachable, and it’s generally a great scratchpad. However, Garageband isn’t a good way to learn about music. It gives you a lot of nice sounds, but offers no indication whatsoever as to what you’re supposed to do with them. To get a decent-sounding track, you need to come pre-equipped with a fair bit of musical knowledge.

A young guitar student of mine is a good example. After only his third lesson, he jumped on Garageband and tried writing a song, mostly by throwing loops together. I admire his initiative, but the result was jagged and disjointed, lacking any kind of structural logic. It’s natural that a first effort would be a mess, but it felt like a missed opportunity. At no point did the program suggest that the kid’s loops would sound best in groups of two, four, eight or sixteen. It didn’t suggest that he organize his track into sections with symmetrical lengths. And it didn’t suggest any connection between one chord and another. Having seen enough other beginners struggle with Garageband, I’ve come to think that it isn’t really for novices after all. It seems to be pitched more toward dads in cover bands, who have some half-remembered knowledge of chord progressions and song forms and who just need a minimum of prodding to start putting together tracks on the computer.

The iPad version of Garageband is a much better experiential learning tool. Its new touch-specific interfaces encourage the playful exploration at the heart of music-making. The program isn’t trying to be particularly pedagogical, but its presets and defaults nevertheless implicitly give valuable guidance to the budding producer or songwriter. And while it’s quite a bit more limited than the desktop version, those limitations are strengths for beginner purposes.


Inside Morton Subotnick’s studio

Update: one of the photos below currently appears on Mort’s Wikipedia page. Pretty cool.

The seminar I’ve been taking with Morton Subotnick is sadly drawing to a close. To mark the end of the semester, we were invited to Professor Subotnick’s home studio, a few blocks from NYU, for a demonstration of the setup he uses in performances.
Morton Subotnick's World Of Music


Originality in Digital Music

This post is longer and more formal than usual because it was my term paper for a class in the NYU Music Technology Program.

Questions of authorship, ownership and originality surround all forms of music (and, indeed, all creative undertakings). Nowhere are these questions more acute or more challenging than in digital music, where it is effortless and commonplace to exactly reproduce sonic elements generated by others. Sometimes this copying is relatively uncontroversial, as when a producer uses royalty-free factory sounds from Reason or Ableton Live. Sometimes the copying is legally permissible but artistically dubious, as when one downloads a public-domain Bach or Scott Joplin MIDI file and copies and pastes sections from it into a new composition. Sometimes one may have creative approval but no legal sanction; within the hip-hop community, creative repurposing of copyrighted commercial recordings is a cornerstone of the art form, and the best crate-diggers are revered figures.

Even in purely noncommercial settings untouched by copyright law, issues of authorship and originality continue to vex us. Some electronic musicians feel the need to generate all of their sounds from scratch, out of a sense that using samples is cheating or lazy. Others freely use samples, presets and factory sounds for reasons of expediency, but feel guilt and a weakened sense of authorship. Some electronic musicians view it as a necessity to create their tools from scratch, be they hardware or software. Others feel comfortable using off-the-shelf products but try to avoid common riffs, rhythmic patterns, chord progressions and timbres. Still others gleefully and willfully appropriate familiar recordings and put their “theft” front and center.

Is a mashup of two pre-existing recordings original? Is a new song based on a sample of an old one original? What about a new song using factory sounds from Reason or Ableton Live? Is a DJ set consisting entirely of other people’s recordings original? Can a bright-line standard for originality or authenticity even exist in the digital realm?

I intend to parse out our varied and conflicting notions of originality, ownership and authorship as they pertain to electronic music. I will examine perspectives from musicians and fans, jurists and journalists, copyright holders and copyright violators. In so doing, I will advance the thesis that complete originality is neither possible nor desirable, in digital music or elsewhere, and that the spread of digital copying and manipulation has done us a service by bringing the issue into stark relief.


Encoding emotion

Steven R. Livingstone, Ralf Muhlberger, Andrew R. Brown, and William F. Thompson. Changing Musical Emotion: A Computational Rule System for Modifying Score and Performance. Computer Music Journal, 34:1, pp. 41–64, Spring 2010.

The authors present CMERS, “a Computational Music Emotion Rule System for the real-time control of musical emotion that modifies features at both the score level and the performance level.” The paper compares CMERS to other computer-based musical expressiveness algorithms, as part of a larger effort to find a complete systematic categorization of all of the emotions that can be expressed and evoked through music.

The authors first conducted a survey of past efforts to categorize emotions and, after a meta-analysis of the results, devised a two-dimensional graph. The vertical axis runs from Active to Passive. The horizontal axis runs from Negative to Positive. The Negative/Active quadrant includes such emotions as anger and agitation. The Passive/Positive quadrant includes serenity and tenderness. The authors then paired each emotion with particular musical devices, both compositional and performative. For example, sadness is correlated with slow tempo, minor mode, low pitch height, complex harmony, legato articulation, soft dynamics, slow note onset, and so on.
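
To give a feel for how such a rule system works, here is a toy sketch loosely in the spirit of CMERS. The two axes follow the paper’s emotion space, but the specific features and numeric values are invented for illustration, not taken from the authors’ published rule set.

```python
# A toy rule table loosely in the spirit of CMERS. The two axes follow the
# paper's emotion space, but the specific features and values here are
# invented for illustration, not the authors' published rules.

RULES = {
    # (valence, activity): adjustments at the score and performance levels
    ("negative", "passive"): dict(tempo_scale=0.80, mode="minor",
                                  articulation="legato", dynamics="soft"),
    ("negative", "active"):  dict(tempo_scale=1.20, mode="minor",
                                  articulation="staccato", dynamics="loud"),
    ("positive", "passive"): dict(tempo_scale=0.90, mode="major",
                                  articulation="legato", dynamics="soft"),
    ("positive", "active"):  dict(tempo_scale=1.15, mode="major",
                                  articulation="staccato", dynamics="loud"),
}

def apply_emotion(score, valence, activity):
    """Return a copy of a simple score description, adjusted toward the
    requested quadrant of the emotion space."""
    rules = RULES[(valence, activity)]
    adjusted = dict(score)
    adjusted["tempo_bpm"] = round(score["tempo_bpm"] * rules["tempo_scale"])
    adjusted["mode"] = rules["mode"]
    adjusted["articulation"] = rules["articulation"]
    adjusted["dynamics"] = rules["dynamics"]
    return adjusted

# Nudge a neutral score toward sadness (negative valence, passive activity).
print(apply_emotion({"tempo_bpm": 120, "mode": "major"}, "negative", "passive"))
```

The real system operates on actual score and performance data in real time; the point of the sketch is only the shape of the mapping from a position in the emotion space to a bundle of musical feature changes.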


Tales of an Apple fanboy

I’ve now had a couple of opportunities to play around with an iPad, and to surreptitiously watch other people use it. I have strong and mixed feelings. The touchscreen interface is pretty wonderful and I have no doubt that it’s going to send the mouse the way of the floppy disk. But the walled-garden aspect disturbs me. It smells a little Microsoft-y. As long as Apple’s products are so delightful, I guess I don’t care that deeply what their business philosophy is. But not everything that Apple makes is equally delightful, and gorgeous though it is, the iPad gives me some qualms.

A little background: I got my first Mac exposure in 1988, in eighth grade, back in the days of System 6 and Pagemaker 1.0. It was love at first use. The mouse interface is old hat now, but back then it was a tremendous improvement over typing arcane DOS commands.
