Never.
You're being ridiculous.
Me, I'm having a blast with claude code, MCP, and Ableton. I'm directing harmony and asking for arrangements and variations in rhythm, mixing, and production. Don't know if that counts as "making it myself", but then I was writing music before I could actually play any instrument at all, so :shrug:
For me personally, music composition begins and ends with the motif - the melody itself. It’s the part I enjoy the most, and it’s also the part I have the most individual control over since I can sing.
Everybody makes music differently, but if you lack the ability to play an instrument and you also can’t whistle or sing, it’s hard for me to imagine how you’d have any meaningful control over the melody.
How would a non‑musician express an actual melody that they came up with (beyond simple things like instrumentation and general “feelings”) in text? RED RED RED BLUE. (Sorry couldn't resist a Mission Hill reference here.)
With all that out of the way, there's still lots of room for using AI in music. I’ve used it to take some of my existing songs, mostly pianistic in nature, and swap out instrumentation and arrangements just to play around with different soundscapes. It's like BIAB on steroids.
Like if at some point I can just say “Generate a song similar to Smooth Criminal, different enough to not trigger copyright claims” and it just works, and everyone loves it… well is that creative thinking?
The amount of creative expression does not necessarily correlate with impact. Something created with nearly zero creative expression can end up making a significant impact. In that case you are more of a director than an artist, I suppose, in that you direct the high-level process and only make decisions there. You can call it creative thinking in the same way a good businessman makes smart high-level decisions and then delegates what is downstream to others, with decisions being optimized for impact.
I think you can be creative "within a frame" in that sense, e.g. creative in the way you wield an LLM for instance, which is on a different scale compared to being creative on the piano roll with how you organize and brainstorm your melodies. It's just a different skill set at a different granularity altogether. But the one thing that I think holds, is that higher level methods have less creative expression by definition, because you are delegating more decisions to other faculties; you are seeing less of the "creator" in the work.
I think there will always be more to it than just a simple prompt, but having the vision to make a song that sounds pleasing and is unique enough is certainly creative to me.
Of course, there's also a huge demand for generic, inoffensive music (think theme/intro songs, waiting room and elevator music). If we could make that more enjoyable to listen to, would anyone care if that's not creative thinking?
You could make (and many do) the same arguments over covers of songs, even when the covers end up eclipsing the original. Where was the creative thinking in that?
Or just cheaper to license so that Spotify/Pandora promote it in your algorithmic feed. It's audio skimpflation!
It is NOT a digital tool to create art. Yes, people used to be snobbish about digital art. Some still are. This doesn't say anything about generative AI because that isn't a tool.
The closest equivalent is hiring someone on fiverr to create music for you and claiming you created the music because you wrote the "prompt".
There is nothing creative about using generative AI. It is a form of management. The difference is that instead of extracting labor directly, you are extracting dead labor from the millions of artists whose work was stolen to train the AI.
I used to play around for days just making sounds on my synth. The process of creating them was often just turning random knobs and dials. If the AI is turning those for me, that's not a tool?
How many "DJs" today could even find two records that they could key and beat match? Then physically mix them on the turntables with no software or sync buttons? AI is just going to make this worse...
A lot of them. The barriers to entry have been lowered, which also means there are way more DJs around. And some of them will start to expand their horizon.
I don't know, but I would not be surprised if the total amount of people who can mix without sync increased. Though the percentage of DJs who need sync is probably higher.
I started DJing with 'rona and now sometimes mix vinyl. And I also hosted open deck nights with CDJs where a lot of beginners did not use sync, unless they were only a few months into the game.
I don't think it's a negative.
But I realize I have not seen any criticisms of AI generated music that are meaningfully different from criticisms I've heard of other advances/changes in music technology, whether performance or recording.
Sampling, scratching, drum machines, autotune, electric guitars even.
There's a difference between technology/technique that adds a new sonic palette to the canon, and one that takes away the necessity to have any direct input in the process of production. I guess we'll find out which this is if there's a wave of novel AI assisted genres that emerge, or not, as may be the case.
If all you care about is the raw sound file created and you don't care about the connection you might feel with the artist behind it then maybe intent isn't relevant to you.
Warping my mind back into a hobby-enthusiast music producer mindset:
an MCP that generates presets for a limited pipeline with many sweet spots sounds... interesting?
To me, the idea of being able to have, say, a chain of a simple VA synth + delay + compressor and a very simple step sequencer, combined with prompting and a genAI model that spits out patches, sounds very endearing and interesting.
Much more interesting than Gemini or Suno for example.
Depends on the training and input space of course.
I deliberately described a limited setup, the controls of which could be described in less than a kilobyte.
Many dance music synth patterns could be described by simple means (tracker/step sequencer, looping, a few knobs).
That's what makes a lot of music interesting.
I can easily imagine a producer creating very individual and interesting output by unleashing the right models.
I think, just like with human producers, constraints liberate.
An AI controlling a very limited synthesis chain is more interesting than a very complex synthesis chain controlled by a human with no musical "vibe".
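The "under a kilobyte" claim above is easy to check. Here is a minimal sketch, with illustrative parameter names (not Ableton's actual ones), of the whole chain described: a simple VA synth, delay, compressor, and a 16-step sequence, serialized to JSON:

```python
import json

# Hypothetical, minimal description of the chain described above.
# All names are illustrative stand-ins, not real plugin parameters.
patch = {
    "synth": {"osc": "saw", "cutoff": 0.62, "resonance": 0.3,
              "attack": 0.01, "decay": 0.25, "sustain": 0.5, "release": 0.4},
    "delay": {"time": "1/8d", "feedback": 0.35, "mix": 0.25},
    "compressor": {"threshold_db": -18, "ratio": 4.0, "mix": 1.0},
    # 16-step sequence: MIDI note number, or None for a rest
    "steps": [36, None, 36, 48, None, 36, None, 51,
              36, None, 36, 48, None, 36, 39, None],
}

encoded = json.dumps(patch).encode("utf-8")
print(len(encoded))  # comfortably under one kilobyte
assert len(encoded) < 1024
```

A control space this small is exactly the kind of input/output space a generative model could plausibly be trained on or prompted over.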
With this I can keep my hands on my keyboard or guitar and direct Codex to make a quick backing track.
[1] http://www.computermusicjournal.org/
[2] https://en.wikipedia.org/wiki/David_Cope#Emily_Howell
Addendum: I would highly recommend the Margaret Boden book referenced in the wiki on David Cope/Emily Howell, which is an absolutely fascinating read and was incredibly far-sighted in its enquiries on this topic.
I have such respect for those who can do the good work of comments like yours, trying to pry the closed mind open just a little more. This is an essential outlook that needs to be taught and reinforced: a sense of exploring potential progress rather than sinking merely into conserving, out-grouping, or denying.
It's really cool that the human agency loop is improving. Ableton & DAWs should be so much better with expanded more language native interfacing!
A dishwasher that may have been taught about Markov chains ...
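Markov chains really are the grandparent of generative music, going back to the early computer-music experiments referenced elsewhere in this thread. A toy first-order sketch: train transition probabilities on a seed melody, then sample a new one.

```python
import random
from collections import defaultdict

def train(melody):
    """Collect first-order transitions: note -> list of observed next notes."""
    transitions = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Random-walk the transition table to sample a new melody."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:
            break  # dead end: the last note was never followed by anything
        out.append(rng.choice(choices))
    return out

seed_melody = [60, 62, 64, 62, 60, 64, 67, 64, 62, 60]  # C-major noodling
transitions = train(seed_melody)
new_melody = generate(transitions, start=60, length=8)
print(new_melody)
```

The output stays inside the seed melody's vocabulary but wanders its own path, which is both the charm and the limitation of the technique.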
To me, it seems like the "do it for me" aspect is similar, just at different levels of abstraction.
* I suppose in the early days, running on a mainframe would belie the definition of ownership per se, as it required access and was limited to that specific machine/institution, but then we are talking about a time when personal computing wasn't available.
Whether these then extend to AI and LLMs I still can't fully say. There is, obviously, some kind of qualitative leap here. I'm not fully settled.
But I guess I lean more towards - it is a tool, let people use it to make their own beauty.
I wonder how one is supposed to exercise intent when the tool in question is specifically designed with the purpose of removing your ability to have direct influence on the result it produces. At best we get curation/collage, which in itself is no big change from the way things have been for decades (sample packs, premade loops, and going back further, sample CDs, for instance), but what goes away is the human touch.
I'm afraid Codex ignored that one.
https://github.com/Ardour/ardour/commit/d582a0b042a68ccb22c0...
I've got 25 years of loops that basically to finish them need better arrangements. Using AI to auto generate sections is what I'm missing.
Welcome to the club. You need to learn how to actually finish a track, which is the most difficult but also the most rewarding part. Why would you use AI for that? I mean, just listen to that demo track Codex made in the above repo, you surely don't want that.
There's a good book about this, published by Ableton, you can read it for free here:
https://cdn-resources.ableton.com/resources/uploads/makingmu...
It's a garbage-in, garbage-out situation. If you give it more musical direction you will get more out of it.
The book I mentioned has a good suggestion when struggling with arrangements: just copy. Take a track you really like, put it into your DAW, sync the speed and replicate its structure. You'll see that in many genres, structure is often exactly the same anyway. This can be an eye opener, and once you've realized this, you'll be able to experiment with structure in ways you couldn't do before. That's the fun part.
I think you would get much better feedback if you'd focus on these use cases: flattening the learning curve for newcomers, and new ideas for experienced users, rather than creating tracks completely by AI. Because in that case, why even go through a DAW and not use Suno directly?
A cool thing about this MCP is you can ask Codex Ableton questions and it will go and read the state of your current Live Set and answer based on that. You don't have to have it change anything for you if you don't want.
I don't think you understand. I've got thousands of songs. Why would I use AI to generate arrangements... Maybe for ideas?
Maybe because certain things I'm lazy about?
Maybe because I've got thousands of songs?
It's not actually difficult to finish a song if your output is high enough. Sometimes the songs just come out without any struggle. But most of the time they don't.
I wrote and finished my first song around 1996. Using Cakewalk plugged into a midi keyboard.
HN is full of people who think using AI means you are lazy or can't do something. The fault is yours, not mine. Adapt.
This kind of automation will allow impaired people to have access to a whole new world of creation. Blind and motor-impaired people come to mind.
Being a banger is not enough.
1. Generating track layouts (add tracks + empty audio/midi clips throughout)
2. Generating MIDI sequences
3. Generating Serum patches
4. Extracting stems from existing audio
5. Automating common workflows (eg sidechaining)
6. Semantic search of sample library
That being said, I don't think I want a full agentic workflow for vibe-producing. Point solutions seems like a better fit for me, personally.
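Use case 2 above (generating MIDI sequences) can be sketched as plain data: a clip is ultimately just a list of (pitch, start_beat, duration_beats, velocity) tuples, which is the shape most DAW scripting layers consume. The function name and tuple layout here are illustrative, not the MCP's actual tool schema:

```python
def four_on_the_floor(bars=2, kick=36, velocity=100):
    """Generate a kick on every quarter note as (pitch, start, duration, velocity)."""
    notes = []
    for beat in range(bars * 4):
        notes.append((kick, float(beat), 0.25, velocity))
    return notes

clip = four_on_the_floor()
print(len(clip))  # 8 quarter-note kicks over 2 bars
```

A point solution like this is also trivially testable, which is part of why it feels safer than a fully agentic workflow.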
I just pushed another tool for that! It wasn't exposed in the Live API but the implementation just issues the same queries to the underlying sqlite db that the Live GUI queries for the "Find Similar Sounds" feature.
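The mechanism described above can be sketched in a few lines. To be clear, Live's browser database schema is internal and undocumented, so the `samples` table and its columns below are hypothetical stand-ins, not Ableton's actual schema; the point is only that an MCP tool can issue plain SQL against the same sqlite file the Live GUI reads:

```python
import sqlite3

def find_similar(conn, query_text, limit=10):
    """Hedged sketch: parameterized LIKE query against a stand-in samples table."""
    rows = conn.execute(
        "SELECT name FROM samples WHERE descriptor LIKE ? LIMIT ?",
        (f"%{query_text}%", limit),
    ).fetchall()
    return [name for (name,) in rows]

# Demo against an in-memory stand-in database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (name TEXT, descriptor TEXT)")
conn.executemany(
    "INSERT INTO samples VALUES (?, ?)",
    [("kick_01.wav", "punchy low-end kick"),
     ("snare_03.wav", "tight acoustic snare"),
     ("kick_02.wav", "boomy 808 kick")],
)
print(find_similar(conn, "kick"))  # ['kick_01.wav', 'kick_02.wav']
```

Reading the db directly neatly sidesteps the gap in the official Live API while reusing the exact index the GUI already maintains.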
[0] https://variousbits.net/2026/02/22/building-generative-music...
https://www.muse.art/home
One question I keep coming back to is where tools like this MCP go beyond templating, like what I already use heavily in Ableton Live. In music production, many tasks are repetitive but not identical and that’s exactly where something like this can shine.
At the same time, music is widely seen as a "manual" craft. Every step in the chain from playing an instrument to a final music piece has both a technical and a creative side, and part of the process is staying curious and critical about what could be improved / done differently in the next project. That makes it an open question where automation actually adds value versus where it takes something away.
Where I’d personally love to see AI make a difference is in audio engineering / post-processing, which also requires a lot of creativity besides a solid foundation of experience to really excel. There’s often a big gap between a great musical idea and a polished mix or master. If AI could help close that gap and contextually improve things like tone, space, EQ, and loudness, that would be hugely valuable.
But the key for me is trust and transparency about modifications. I don’t want a black box that just makes things "better™." I’d want something that clearly explains its actions, like: "I've added an EQ to the piano on track 3 at 01:23 to open it up for the bridge so it sits better in the mix."
That kind of assistive, explainable approach would feel much more aligned with how people would actually be open to an assistant helping them create music.
And yea, I kept the MCP general purpose so you can use it however you want. You can use it to simply ask questions about your live set; it can see the arrangement, see all the settings of all your devices, see all the MIDI, etc. I had it "fix a discordant sounding section" of one of my songs; it was like "oh yea tracks 3, 5, 6 all had some unfortunate notes colliding"
https://github.com/bschoepke/ableton-live-mcp/blob/main/READ...