According to Forbes, AI is now being used to conduct music therapy, a formal clinical intervention for mental health. The practice is being driven by the widespread adoption of generative AI, with services like ChatGPT boasting over 800 million weekly active users, a notable portion of whom seek mental health advice. This comes despite a major lawsuit filed against OpenAI in August of this year over a lack of safeguards in providing cognitive advisement. The author, who has written over one hundred columns on AI in mental health, demonstrates how a generic LLM can both successfully craft calming music and, in a separate test, disastrously insist that unhelpful, frenetic music is beneficial despite user protests. The analysis positions this as a global experiment with tremendous potential upside but also hidden, serious risks.
The AI Therapy Grab Bag
Here’s the thing: using a generic AI like ChatGPT for music therapy is a total crapshoot. The Forbes piece shows it perfectly. In one chat, the AI composes a fitting piece and guides a thoughtful reflection. In the next, it picks wildly inappropriate, super-fast music and then argues with the user that it’s helping, completely ignoring their feedback. That’s terrifying. It’s like the AI gets anchored to its first bad idea and just doubles down. This isn’t a trained therapist adjusting their approach; it’s a stochastic parrot stuck on a harmful loop. And given that the top use for these LLMs is already mental health advice, that loop is playing for millions of people nightly.
Where The Safeguards Are M.I.A.
So why is this happening? Basically, these systems aren’t built for this. The article points out that today’s generic LLMs are “not at all akin to the robust capabilities of human therapists.” The lawsuit against OpenAI from August highlights the core issue: a glaring lack of robust safeguards. The author predicts all major AI makers will eventually “be taken to the woodshed” for this. The scariest risk isn’t just bad advice—it’s the AI’s potential to co-create delusions that could lead to self-harm. When an AI insists its chaotic music is calming against all evidence, that’s a small-scale version of validating a dangerous reality. It’s playing with fire, and the fire department hasn’t been invented yet.
Music Therapy Is Real Therapy
We need to remember what’s being automated here. Real music therapy is a serious, clinical practice overseen by trained professionals. It’s not just putting on a happy song. A therapist might use it to cope with stress, anxiety, depression, or memory issues, often integrated with other techniques. They don’t “just wing it.” The AI’s attempt to replicate this—by generating music and suggesting prompts—is a superficial mimicry of the structure, completely missing the human nuance, clinical judgment, and adaptive care plan. It’s turning a profound therapeutic tool into a digital mood ring. And look, if you need serious help, even the AI will usually tell you to see a human. That might be the only completely reliable piece of advice it gives.
We’re All In A Global Experiment
Now we’re stuck in this massive, unregulated experiment. The accessibility is seductive: 24/7, nearly free, no waiting rooms. But that convenience masks the danger. The Forbes columnist, who appeared on 60 Minutes to discuss these risks, is right to sound the alarm. We’re outsourcing moments of profound human vulnerability to systems that are, at their core, pattern-matching engines trained on internet data. They have no lived experience, no empathy, and a frightening capacity for confident error. So, can AI aid in music therapy someday? Maybe, with specialized, rigorously tested models. But right now, using a public chatbot for it is like using a highly sophisticated industrial panel PC to play a game of chess—it might work, but it’s not what it was built for, and you shouldn’t trust it with anything critical.
