I’ve been asked enough times for mobile music app recommendations that I decided to collect all of them here. The iOS apps are ones that I’ve personally used and enjoyed. I haven’t tried most of the Android ones, but they were recommended by people whose opinions I trust. If you have suggestions, please add them in the comments.
When we talk about Auto-Tune, we’re talking about two different things. There’s the intended use, which is to subtly correct pitch problems (and not just with vocalists; it’s extremely useful for horns and strings). The ubiquity of pitch correction in the studio should be no great mystery; it’s a tremendous time-saver.
But usually when we talk about Auto-Tune, we’re talking about the “Cher Effect,” the sound you get when you set the Retune Speed setting to zero. The Cher Effect is used so often in pop music because it’s richly expressive of our emotional experience of the world: technology-saturated, alienated, unreal. My experience with Auto-Tune as a musician has felt like stepping out the door of a spaceship to explore a whole new sonic planet. Auto-Tune turns the voice into a keyboard synth, and we are only just beginning to understand its creative possibilities. (Warning: explicit lyrics throughout.)
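Setting Retune Speed to zero amounts to hard pitch quantization: every detected frequency snaps instantly to the nearest equal-tempered semitone, with none of the gradual glide a human singer would produce. Here’s a minimal sketch of that idea in Python. To be clear, this is an illustration of the snapping concept, not Auto-Tune’s actual algorithm; the 440 Hz reference and the example frequencies are my own assumptions.

```python
import math

A4 = 440.0  # assumed reference pitch: A above middle C

def snap_to_semitone(freq):
    """Quantize a frequency to the nearest 12-TET pitch (retune speed zero)."""
    semis = 12 * math.log2(freq / A4)      # distance from A4 in semitones
    return A4 * 2 ** (round(semis) / 12)   # snap to nearest semitone, back to Hz

# A slightly flat A (435 Hz) jumps all the way to 440 Hz instantly.
# That stepwise jumping between notes, with no glide, is the robotic
# quality we hear as the "Cher Effect."
print(snap_to_semitone(435.0))  # -> 440.0
```

A real pitch corrector would add a nonzero retune speed, interpolating toward the target over time instead of jumping; zeroing that time constant is what exposes the effect.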
I want to expand my private teaching and speaking practice. If you were to book me for a workshop or seminar, what would you want it to be about? Music production? Intellectual property and authorship? Music and math? Music and science? Music pedagogy? Improvisation and flow, both in music and in life generally? Something else?
I’d be happy to visit your music classroom, non-music classroom, company, co-working space, or community organization. Here are some instructional videos of mine to give you a sense of my style.
I do traditional music teaching and production too, but I’m pitching here to people who don’t consider themselves to be “musicians” (spoiler alert: everybody is a musician; you just might not have found your instrument yet). Group improvisation on iOS devices or laptops is always a good time, and it’s easier than you would think to attain musical-sounding results. Instrument design with the Makey Makey is a fun one too. If you have Ableton Live and are wondering what to do with it, a remix and mashup workshop would be just the thing. All of the above activities are revelatory windows into user interface and experience design. Group music-making is an excellent team-building exercise, and is just generally a spa treatment for the soul. Get in touch with your suggestions, requests and questions.
I’m interested in this article not so much for the specifics of the gear and the plugins, but out of sheer awe at the complexity and nuance of the track’s soundscape. My cadre of pop-oriented music academics likes to say that the creativity in recordings lies not necessarily in their melodies and chords, but in their timbre and space. “Call Me Maybe” is an excellent case in point. Its melody and chords are fun, but not exactly groundbreaking. Yet the track leaps out of the speakers at you, demanding your attention, managing both to pound you with sonic force and intrigue you with quiet detail. Whether you want your attention grabbed in this way is a matter of taste. I happen to love the song, but even if it isn’t your cup of tea, the craft behind it bears some thinking about.
DJ Earworm is the foremost practitioner of the art of the mashup. I don’t think there’s a more interesting musician in the world right now. I was on public radio with him once! His main claim to fame is the United State of Pop series, where he combines the top 25 US pop songs of a given year into a single, seamlessly coherent track. I’ve scattered several of them throughout this post. He has started doing more seasonal mashups as well; here’s one from this past summer:
It’s rare that an artist talks you through their production process in depth, so I was delighted to discover that DJ Earworm wrote an entire book about mashup production. He wrote it in 2007 and focused it on Sony Acid, so from a technical standpoint, it might not be super useful to you. But as with the KLF’s pop songwriting tutorial, the creative method he espouses transcends technology and time period, and it would be of value to any musician. Some choice passages follow.
This semester, I had the pleasure of leading an independent study for two music students at Montclair State University. One was Matt Skouras, a grad student who wants to become a high school music tech teacher. First of all, let me just say that if you’re hiring for such a position in New Jersey, you should go right ahead and hire Matt; he’s an exceptionally serious and well-versed musician and technologist. But the reason for this post is a question that Matt asked me after our last meeting yesterday: What should he be studying in order to teach music tech?
Matt is a good example of a would-be music tech teacher. He’s a classical trumpet player by training who has found little opportunity to use that skill after college. Wanting to keep his life as a musician moving forward, he started learning guitar, and, in his independent study with me, has been producing adventurous laptop music with Ableton Live. Matt is a broad-minded listener and a skilled audio engineer, but his exposure to non-classical music is limited in the way typical of people who came up through the classical pipeline. It was at Matt’s request that I put together this electronic music tasting menu.
So. How to answer Matt’s question? How does one go about learning to teach music technology? My first impulse was to say, I don’t know, but if you find out, please tell me. The answer I gave him was less flip: that the field is still taking shape, and it evolves rapidly as the technology does. Music tech is a broad and sprawling subject, and you could approach it from any number of different philosophical and technical angles. I’ll list a few of them here.
I’m wrapping up my first semester as a legit college professor, and that means my first round of student evaluations. Here’s what my Intro to Music Tech students at Montclair State University had to say about me.
The creation of original music was a big hit, predictably. Everyone in the class is from the classical pipeline, and producing pop tracks was well outside of their comfort zone. After their initial resistance, though, everybody quickly got caught up in it, and I started having to chase them out of the room at the end of class. People thought I was a supportive and effective songwriting teacher, which is nice. One student wanted to learn more about song structure; I’d like to teach more about it too, and in future semesters I plan to start doing so on day one.
I also got rave reviews for talking through Beatles and Michael Jackson stems. Classical musicians don’t often get exposure to the creative use of the recording studio. Those stems are a rich resource for examining songwriting, arrangement, recording, mixing and editing. I wish I didn’t have to acquire them illegally from the shadiest corners of the internet.
Teaching Observation of Ethan Hein – MUTC-101: Introduction to Music Technology
As the students began to trickle into the music technology lab and power up their iMacs, discussions immediately hatched about an upcoming assignment. A young woman turned on her speakers and played a work in progress made with the program Logic. “That’s cool!” responded one of her classmates as he listened intently. The piece commenced with a heavy guitar riff and shared sonic similarities with the “nu-metal” style of the early 2000s, comprising the traditional trio of rock instruments: guitar, bass, and drumset. “Can we all listen to your song again? All the way through and more loudly?” asked Professor Hein. If there was a distinct moment when class had officially begun, this was it, and this was the first of many indications that the education occurring in this room under the guidance of Professor Hein is a continuing conversation that his students are engaged in and enjoying.
Music theory is hard. But we make it harder by holding on to naming and notational conventions that are hundreds of years old, and that were designed to describe very different music than what we’re playing now. Here are some fantasies for how note naming might be improved.
Right now, the “default setting” for western diatonic harmony is the C major scale. It’s the One True Scale, from which all else is derived by adding sharps and flats. Why do we use the C major scale for this purpose? Why not the A major scale? Wouldn’t it make more sense if ground zero for our whole harmonic system was the sequence ABCDEFG? I know there are historical reasons why the unmodified first seven letters of the alphabet denote the natural minor scale, but so what? How is a person supposed to make sense of the fact that scale degree one falls on the third letter of the alphabet?
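You can make the oddity concrete by measuring the gaps between the unmodified letters in semitones: the pattern you get is natural minor, not major. A quick sketch (the pitch-class numbers are standard; the code itself is just an illustration):

```python
# Semitone positions of the seven unmodified letters, counted up from A
NATURAL = {"A": 0, "B": 2, "C": 3, "D": 5, "E": 7, "F": 8, "G": 10}

letters = ["A", "B", "C", "D", "E", "F", "G", "A"]
steps = [(NATURAL[b] - NATURAL[a]) % 12
         for a, b in zip(letters, letters[1:])]

print(steps)  # -> [2, 1, 2, 2, 1, 2, 2], the natural minor (aeolian) pattern
# The major pattern is [2, 2, 1, 2, 2, 2, 1], and the only unmodified letter
# you can start from to get it is C -- hence C major as the default scale.
```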
Furthermore, I question whether the major scale really is the one we should consider to be the most basic. I’d prefer that we use mixolydian instead. The crucial pitches in mixo are close to the natural overtone series, for one thing. For another, Americans hear flat seven as being at least as “natural” as natural seven, if not more so. While the leading tone is common inside chords, it’s rare to hear it in a popular melody. Flat seven is ubiquitous in the music most of us listen to, and in the music of plenty of other cultures besides.
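The overtone-series point is easy to check numerically. The seventh harmonic, reduced into the octave, sits about 969 cents above the fundamental, which is much closer to mixolydian’s flat seven (1000 cents in equal temperament) than to the major scale’s leading tone (1100 cents). A quick calculation (just arithmetic, not a claim about any particular tuning practice):

```python
import math

def cents(ratio):
    """Interval size in cents for a given frequency ratio."""
    return 1200 * math.log2(ratio)

# 7th harmonic, octave-reduced to the ratio 7:4
seventh_harmonic = cents(7 / 4)
print(round(seventh_harmonic, 1))        # -> 968.8 cents
print(abs(seventh_harmonic - 1000.0))    # ~31 cents from flat seven
print(abs(seventh_harmonic - 1100.0))    # ~131 cents from the leading tone
```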