AI slop and musical creativity

Next week, my NYU graduate seminar on technology in music education is supposed to start talking about AI: large language models, prompt-based generators, stem separation and so on. I am not feeling much enthusiasm for this unit, for a couple of reasons. First of all, we are currently talking about YouTube, which is a rich, complicated and important topic for music ed, and for music generally. I decided that we should push the AI unit back a week to spend more time on YouTube. But then I got to thinking: maybe we could just give AI a miss entirely? Or is it irresponsible of me to deprive the grad students of the material just because I don’t enjoy thinking about it?

As I was working all this through, I wanted to put my distaste into words. I wrote a free-associative BlueSky thread and then figured it deserved expansion into a proper essay. So here we go. First, to set the stage, let’s contemplate this image of Shrimp Jesus, sourced from Wikipedia’s AI Slop article.

This is a representative example of AI slop: kind of interesting at first glance, and then emptier and emptier the more you think about it. This has been my experience of AI text, video and music output too: a mildly intriguing surface with nothing underneath it.

But just because I hate AI, maybe I’m missing legitimate educational purposes for it? I read a report by the National Association for Music Education (NAfME) entitled Guiding Principles, Frameworks, and Applications for AI in Music Education. The report is illustrated throughout with AI slop images, which tells you a lot. On page 16, it lists some ways to integrate AI tools into creative music-making:

  • Use an AI tool to suggest various parameters of a song/composition (e.g., song title, lyrics, form, instrumentation).
  • Have students use AI tools to generate a still pictures slideshow or a video that corresponds to a recording of a programmatic composition (based on the idea behind Mussorgsky’s Pictures at an Exhibition).
  • Use an AI composition app to transform a student melody into an arrangement in the style of various composers throughout history (e.g., see https://doodles.google/doodle/celebrating-johann-sebastian-bach/).
  • As a class or in groups, study a classic pop/rock tune, listening to various parameters of the music (e.g., key, tempo, instrumentation, style/genre, form, length). Next, input these parameters into an AI song generator. Compare the original hit with the AI imitation, comparing/contrasting each parameter.

On their face, none of these ideas seem terrible, but they don’t seem like good ideas, either. I guess there might be some value in asking students to articulate the difference between an actual song and an AI song. The lower reaches of the pop charts are full of songs that sound AI-generated, so it’s good to be able to understand why those songs are boring. Continuing:

To maximize the educational value of AI tools, music educators can incorporate structured assignments that balance AI-generated material with human interpretation. For example, students might use an AI-generated melody or harmonic progression as a starting point, then refine and expand upon it to develop a complete composition. Similarly, students could analyze AI-generated pieces in different styles, comparing them to historical examples and discussing how AI interprets stylistic traits.

I don’t know what you need AI for here; it’s not like there’s any shortage of actual music in the world to talk about.

By framing AI as a tool for exploration rather than an endpoint, educators can encourage students to engage with technology in a way that fosters creativity rather than diminishes it. Projects in which students interact with AI-generated material—modifying, rearranging, and responding to it—can deepen their understanding of both musical form and expressive decision-making. Through these approaches, AI can be used not only to generate music but also to inspire deeper musical engagement and creative thinking.

Can it inspire deeper musical engagement and creative thinking, though? I haven’t seen any evidence that this is true. It might be true! But I wouldn’t expect it to be. There are so many ways to deepen your understanding of both musical form and expressive decision-making that don’t require slop generation.

The following are examples of ways to employ AI tools that facilitate musical creativity for diverse learners:

  • Use an AI text-to-speech app to “speak” the script of a podcast, the voiceover for a mock radio ad, etc. (i.e., selective mute, speech impairment, etc.)
  • Use an AI music production tool to generate accompaniment tracks at various tempi to use in classroom rhythm clapping, ensemble warmup, jazz improvisation practice, etc.

Here is the one argument for generative AI that I can’t dismiss out of hand. Text-to-speech is genuinely useful; I use it constantly to listen to news articles and such while I’m walking around. And maybe generative music could be great for all those kids who can’t play instruments or who don’t have access to the tools. Though, who are these kids? If you can use Suno or Udio, it seems like you could use FL Studio.

The accompaniment track suggestion is mildly interesting. I do like the idea of using Moises or something similar to make “karaoke tracks” for practice and analysis. If you’re a bass player, it’s pretty great to be able to remove the bassline from all the Beatles songs and become Paul McCartney yourself. I have some music educator friends who swear by this approach.

I myself have so far mainly used Moises in class to demonstrate that it exists. This week, though, I did use it for a specific teaching task: I separated the vocals from “You Never Give Me Your Money” by the Beatles and “Fly Me To The Moon” by Frank Sinatra so I could lay them over the instrumental to “I Will Survive” by Gloria Gaynor, to show how all three tunes use the same chord progression. The students found this amusing, and also musically appealing, but I could have achieved the same purpose by just singing each of the songs over the instrumental myself, like I did in this podcast episode.

Anyway, when I think about AI, I’m not mainly thinking about stem separation, I’m thinking about using text prompts to generate media, and that is what makes me feel all the hostility. NAfME concludes:

AI has the potential to make music creation more accessible to a wider range of students, particularly those who may not have traditional musical training. Many AI-based composition tools allow users to create complex musical structures without requiring proficiency in notation or instrumental performance. This opens the door for students who may not have had access to formal music education to explore composition and production in meaningful ways. By reducing technical barriers, AI can potentially empower more students to participate in the creative process.

Is this true? Has AI empowered anyone to participate in the creative process? Is prompting ChatGPT or Suno a creative act? The product superficially resembles the end result of creativity. But I don’t think the product matters all that much in creativity, especially not at the student level. Creativity is a practice, a process, an experience. Prompt-based generation skips the entire process. That’s why it’s so poisonous for education.

I support the impulse to make musical creativity more accessible, but the thing is, it is already extremely accessible. It’s the easiest thing in the world from a technical perspective. Little kids make up songs constantly. The question is not, how do we help people be creative? They are already, from birth. The question is, why do we grind creativity out of kids so thoroughly, and how do we stop doing that? Teaching songwriting and other creative music-making requires only that you disinhibit the strong creative impulse that is already there.

Okay, but what about lowering technical barriers to creativity? The thing is, those barriers are one inch tall. If you can play the black keys on the piano, or play a single chord on the guitar or ukulele, or select loops from the GarageBand loop library, or pound out a steady rhythm on a table, that is all the instrumental backing you need to get going. The obstacles to musical creativity have nothing to do with equipment or money or education or anything else, they are one hundred percent psychological. The challenge of music education is to help the kids give themselves emotional permission to be as musical as they were when they were toddlers. Everything else is extra.

The technical aspects of music do matter. Once you have some ideas and you want to express them in a more refined and effective way, it is nice to have DAWs and instruments and mics and notation and vocabulary and theory. However, none of those things are necessary at the outset, nor are they sufficient. You just have to have ideas, remember them, and communicate them. People have been doing that for 40,000 years at least. It’s not extremely difficult! The hardest part of making up a song is just having the nerve to do it. You have to take an emotional risk. Everything intellectual and technical is downstream from that. If you remove the emotional risk, you remove the entire foundation of the structure. It doesn’t matter what you pile on top after that.

If you are already a fully formed musician, producer or songwriter and you are trying to meet a deadline, I don’t see any harm in generating AI slop and using it as a sample library or whatever. But for kids? Students? People on the way up? Prompt-based AI generation is harmful to them in the same way that high-fructose corn syrup is harmful. It fools the body into thinking it’s getting nutrition. I think passing off AI prompts as creativity is worse than doing nothing. Kids who are sufficiently driven can find their way into music with or without music teachers’ help. But if that drive gets misdirected into generating AI slop, that could do lasting harm.

From a BlueSky follower:

Gen AI is the final form of the Capitalist notion that your productivity is your only worth. Something entirely antithetical to disability & human rights. That so many buy into it as “accessible” is pathetic, honestly.

🍉Reborn Ruffian🌟 (@reburnruffian12.bsky.social) 2025-10-03T15:57:55.508Z

One last thing, back on the subject of YouTube. I asked my undergrads whether they had used online videos for meaningful learning, or just for edutainment. One of the students pointed out that it’s less about the content of the videos and more about their own mental attitude when watching. So if they are in the practice room with instrument in hand, ready to get down to business, then YouTube videos can be great for learning. If they are just on the couch or the subway in passive viewing mode, a YouTube video might make them say, huh, that’s interesting, but then it just slips from their memory. I know some academics who approach LLMs with that “let’s get down to business” mindset, and I don’t doubt that they are useful for research. But undergrads and younger kids need help getting to that “let’s get down to business” mindset in the first place, and staying there, and the last thing we want to do is encourage them to sidestep it.


2 Comments


  1. Totally agree! AI might generate “music”, but it can’t replace the real creative process. Kids already have natural creativity; what they need is guidance to explore it, not shortcuts that skip the hard, fun work of making music.

  2. I am working from the Jerry Bergonzi Inside Improvisation books and I had a question about something that was confusing me and I asked chatgpt and it gave me a really clear and excellent answer. It was discomforting but helpful.

    Last night I was listening to some music on youtube music (Jeff Beck playing “You Know What I Mean” Live (Excellent, btw)) and the algorithm went to some AI Blues slop. It was very odd.