Big thoughts on music tech

A student interviewed me for a class project on “the impact of music technology on the music industry.” Her questions and my answers follow.

How did you get interested in music technology?

I got interested in music technology the first time I touched an instrument. So did you! I don’t think we should even have a subject called “music technology”, because it properly includes every aspect of music other than unaccompanied singing. Saxophones and pianos are no more “natural” or non-technological than computers. For that reason, I don’t do much teaching about technology in class; I teach the creative processes of music production, specifically, recorded music of the African diasporic vernacular tradition, what the music academy calls “popular.” I talk about that because other college courses don’t, and I think it’s important for music educators to know how to make the music that their students like. I have the freedom to do that because there is no standard way to teach music tech – when I was hired, I was told to pretty much do whatever I saw fit.

[Image: Bouncy Synth - Ableton Arrange View]

I got interested in recording technology when I first tried recording myself with a tape recorder at age six. I got interested in learning how to do it well when I was in college, and my folk band went into a studio. We spent a bunch of money and got back a result that was so-so. It became clear that my money would be better spent on a computer, an interface, some software and a couple of microphones. This was in the late 90s, when the price of all of those things was falling dramatically, and it was becoming possible to get professional-sounding results in your apartment without spending tens of thousands of dollars. At first I was only interested in recording voices and live instruments. I started programming drums, samples, and synth parts as placeholders for “real” instruments. But then I got interested in making those sound better, because so much of the music I like uses synths and samples. The world helped push me in that direction, since there’s a lot more demand for producers than for guitarists.

How do you think programs like GarageBand have changed the music industry?

When we talk about the “music industry,” we’re talking about a lot of different things: the recorded music industry, which includes consumers paying for recordings or being advertised to via streaming services, as well as licensing for movies, TV shows and games; the musical instrument industry; the live music industry; and of course, the music education industry. GarageBand has impacted the world of music sales in a limited way. Commercial music is rarely made with GarageBand; instead, professionals use Pro Tools, Logic, or Ableton Live. But GarageBand has created an entire new generation of amateur producers in bedrooms and classrooms. Apple accidentally revolutionized music education by putting GarageBand free on every Mac and iOS device, and a huge number of kids are getting their first experience with musical creativity via that program. It’s too early to tell what the effect of that is going to be. My hope is that it bridges the gap between “school music” and “real music” – that kids get more of a chance to make the music they consider to be valid and legitimate in school, and that they can connect their school experiences to their non-school lives. As to what the commercial impact is going to be of all this bedroom producing, no one knows, though it has already made the bottom fall out of guitar sales – most kids that would formerly have been forming rock bands are now producing hip-hop and EDM.

How does a rise in amateur musicians affect the recording industry?

The recorded music industry is such a tiny fraction of all the music being made out there. I don’t know how many tracks get uploaded to SoundCloud and YouTube every day, but it must be astronomical. Most independently released commercial recordings make a few bucks. The number of people who are paying their rent with recorded music is tiny. Every once in a while, a bedroom producer attains commercial success – thinking here of Danger Mouse, Grimes, M.I.A., Desiigner, and Clams Casino. Grimes did that using GarageBand, but the rest were using more professional-level software and hardware. The fact that the professional-level gear is so cheap tends to blur the meaning of the words “professional” and “amateur” – a lot of people use that top-shelf software who never make a dime from it. I have made about a hundred times as much money teaching and writing about music as I have producing or composing it, though of course I wouldn’t have been able to get those teaching gigs without doing all that unpaid production and composition.

How do you feel about using technology to simulate instruments instead of using live instruments?

Live instruments are great. There’s a whole universe of nuance and expression you can only get from a live performer. That said, recording live performances is expensive and logistically difficult. If I want bass clarinet on a track, it takes me ten seconds and zero dollars to use the one in Ableton Live, whereas booking a bass clarinetist and finding a studio to record them in is a very different story. Once you’re talking about drum kits or orchestras, the costs and logistics of recording get prohibitive quickly. So for me, the choice is not really between synthetic instruments or live instruments, it’s between synthetic instruments or nothing. I’m only going to use live musicians if I have access to a substantial budget, which happens about once every ten years.

I know you have lots of experience working in a studio. How has the studio process changed over time?

My college folk band did our one and only album on tape. Everything I’ve recorded since then has been on the computer. The difference between tape and the computer is like the difference between setting lead type and Microsoft Word – it’s such a profoundly different experience as to barely even be the same art form. This is especially true when you’re doing that computer-based production in your apartment. Being in a studio is stressful because it’s so expensive and clinical, so you’re constantly watching the clock and feeling self-conscious about everything. Recording in your house is the least stressful thing imaginable. It opens up opportunities for playfulness, experimentalism and open-ended messing around that in the analog tape era were only available to superstars like the Beatles. So it’s an enormous change. I prefer hip-hop and electronic music to most other genres right now because of the wide-open creative freedom that musicians in those styles can express. They make rock, jazz and classical seem hopelessly stodgy and conservative.

How have different genres of music adapted to changing technology?

Changing technology has created huge new genres and subgenres from scratch. Just like the electric guitar and the modern PA system made rock possible, so too have cheap computers and synths made hip-hop, EDM, and all their related styles possible. Hip-hop is the most popular genre of music on earth, and it would be inconceivable without the accessibility of recording and sampling gear. More tradition-bound genres are having a hard time adapting. When rock was a youth music, it was constantly at the cutting edge of technology. The Beatles are the best example of that, but the same is true of Pink Floyd, Queen, Led Zeppelin, et al – we don’t hear that music as being “high-tech” anymore, but it very much was in its time. But now rock is almost a classical music in its concern for preserving its past, so it has an uneasy relationship with computers. Some artists try to work samples and synths and modern production techniques into their sounds – every metal band seems to have a DJ in it now. Others try to pretend that nothing has happened since 1975 – looking here at Dave Grohl and Jack White. Jazz and classical are having an even worse time. A few experimentalists are trying to reconcile their music and the broader culture, with mixed results. Others take the Jack White approach and live in the past.

How have you seen technology and live/acoustic music work together?

Live “acoustic” music has been inextricably wrapped up in technology since the invention of microphones. Outside of unamplified classical performances, every live performance is a high technology production. After that it just becomes a matter of degree. The tension has been uncomfortable for many decades. Rock bands have been hiding their keyboard players offstage since the 1970s because the idea of synthesizers doesn’t fit with their fans’ ideas of what “real” music is. More current genres embrace their own artificiality. Michael Jackson started lip-syncing to his recordings in the early 1980s when their production got to be too elaborate to be reproducible by live musicians. He treated those shows as dance performances rather than “musical” ones. And they were bangers!

People like Daft Punk know it’s boring to watch people standing in front of laptops twisting knobs and pushing buttons, so they compensate by having mind-boggling light shows and video projections. The Grateful Dead had an interesting approach in their later years – they used tons of synthesizers onstage, but used them for live improvisation, so there was a sense of organic unpredictability balancing out the seeming fakery of the sound. I also admire the approach taken by Björk – she has all of her beats and synths playing off a computer, but then has different combinations of live musicians playing on top of them for every tour. It might be a tabla, harp, and pedal steel guitar; or an all-female brass section, a drummer, and a person playing a giant touchscreen; or a women’s chorus and several people playing instruments that she invented; or an orchestral string ensemble and a guy doing synth manipulation on a DJ controller. It’s always exciting to see what she’s going to come up with next.

How do you feel about the argument that music technology is destroying different genres of music? Are certain genres in danger of becoming extinct due to music technology? If so, what genres and how?

Music tech doesn’t destroy genres of music; lack of creativity and cultural relevance does. Rock lost out in popularity to hip-hop and electronic dance music not because of computers, but because it ran out of ideas. Jazz has been unpopular for generations, with the conspicuous exception of those artists who have evolved their music to express the social realities of the world they live in. Herbie Hancock stays popular, year in and year out, because his music stays in dialogue with the culture outside of jazz. When I went to see him play a few months ago, he had one of Kendrick Lamar’s producers in the band, adding sample-based ambiance and vocoder singing. Miles Davis did that too; the album he was working on when he died was a hip-hop album with Biggie Smalls’ producer.

Have you noticed any controversy surrounding specific technologies?

Every new technology is controversial. The piano was controversial when it was introduced. So were the phonograph, the microphone, the tape recorder, and the synthesizer. Among current technologies, Auto-Tune certainly provokes strong emotions, because of the way it challenges the concept of virtuosity. Other technologies have been tools we manipulate with our hands, but Auto-Tune affects the voice, which feels much more profoundly intimate and bodily. I think the biggest controversy should be the fact that computers are getting very close to being able to produce good-sounding music from scratch. Software can already write “new” Bach and Chopin pieces that are good enough to fool professional musicologists. The day when Spotify will be able to generate new Michael Jackson songs is not far off. The only reason this isn’t provoking more emotion is that it isn’t widely understood outside of computer science circles. But it’s coming.

How do you think music technology and acoustic music should be taught in schools? Should one be taught first? Should one be focused on more than the other?

The split between teaching “music technology” and acoustic instruments is a false dichotomy. The real conflict in music education is between teaching creativity and score-following. Right now, schools are having a hard time figuring out what to do with computers, not because they’re difficult to use, but because they fit awkwardly with the band-choir-orchestra paradigm. It’s very rare for kids to create their own music in school. Historically, there were practical reasons for that. You have to be a very adept trumpet player before you can improvise, and you have to know notation very well before you can do pencil and paper composition. But rock instruments have a lower barrier to entry, and rock songwriting doesn’t require notation. So schools in the UK, Australia and Scandinavia have been making big strides in incorporating informal learning and songwriting into music education. With the advent of GarageBand, meaningful musical creativity has never been more accessible. The question facing the music education profession is, what are our priorities? If kids are going to be creative, it will probably be in the styles that are meaningful to them: pop, hip-hop, EDM, rock, country, salsa, reggae, etc. As popular culture gets more Afrocentric, it gets further and further removed from the Western canon, from orchestral instruments, from notation, and from the large ensemble model. How much are schools willing to let go of tradition to accommodate this change? I don’t feel much attachment to the Western canon and would happily toss it overboard in favor of helping kids be able to express themselves in whatever stylistic idiom makes sense to them. But I’m in the political and philosophical minority on that.

2 replies on “Big thoughts on music tech”

  1. Do you think there’s something inherently wrong with conservative music-making a la Dave Grohl and Jack White? I would certainly fault someone for disparaging people who use newer technologies and social influences to create music, but I don’t think that sticking to older ways is an objectionable choice for one’s own personal music-making. What are your thoughts?

    1. Traditionalism is fine, to a point, but there’s something particularly grating about rockist dad traditionalism. It bothers me that these guys are fetishizing a past music that, in its time, was striving to be at the technological and imaginative cutting edge. Imagine if Led Zeppelin had had the same attitude as Dave Grohl – they would have limited themselves to the musical styles and recording techniques of forty years earlier, so they would have only made big band jazz recorded to disc via a single microphone. The modern equivalent of Zeppelin isn’t Dave Grohl, it’s Future or SZA.
