Systemic Design
AI
Constraint-Driven Innovation
Mental Health
Business Strategy

The Ghost in the Machine: Why AI Therapy is a Feature, Not a Bug

Critics worry that AI chatbots are a poor substitute for human empathy. They’re missing the point. In a world of friction-heavy systems, the 'good enough' availability of AI is a masterclass in inclusive design.

Cristian Brownlee

Author

December 26, 2025
4 min read

There is a certain type of hand-wringing that occurs whenever technology begins to colonise the "sacred" territory of human emotion. The latest target is the rise of the AI therapist. Critics are lining up to explain why a Large Language Model cannot possibly replicate the nuance, soul, or professional ethics of a human practitioner. They argue that leaning on a chatbot for emotional support is, at best, a hollow substitute and, at worst, a danger to our collective health.

They are, of course, technically correct and strategically wrong. By focusing on what the AI lacks, they are failing to see what it provides: a zero-friction entry point into a system that has spent decades making itself inaccessible to the people who need it most. In business, we call this a market failure. In the world of disability and neurodivergence, we just call it Tuesday.


The Friction Tax on Human Empathy

The primary argument against AI therapy is that it lacks "true" empathy. This is a lovely sentiment, but it ignores the logistical nightmare of actually obtaining human empathy in the current market. To see a human therapist, you usually need a combination of three things: significant disposable income, the executive function to manage a booking system, and the physical ability to get to an office.

For many, particularly those navigating the world from a wheelchair or managing a neurodivergent brain, these are not mere inconveniences. They are systemic barriers. A chatbot doesn't require you to navigate a flight of stairs; it doesn't charge £120 an hour; and it doesn't care if you have a meltdown at 3:00 AM on a Sunday. The AI isn't winning because it's "better" than a human; it's winning because it's present.

"Innovation doesn't always come from making a service more 'premium.' It often comes from removing the psychological and physical tax required to access it in the first place."

Designing for the Extreme User

In my work in high-stakes environments, we often look at "failure data." We want to know what happens when a system is pushed to its absolute limit. If you look at the people currently using AI for emotional support, you aren't looking at people who are "settling" for a robot. You are looking at extreme users who have been priced out or shut out of the traditional model.

When we design for these margins, we accidentally create better systems for everyone. This is the logic of the "Curb Cut Effect." By making the world navigable for a wheelchair, you make it better for the parent with a pram and the traveller with a suitcase. If we can refine AI to be a safe, effective emotional support tool for those in crisis who have no other options, we create a scalable layer of "emotional infrastructure" that supports the entire population.

  • Radical Availability: AI scales in a way that humans cannot. It provides a baseline of support that prevents minor issues from escalating into systemic collapses.
  • The Judgment-Free Zone: For many, the "human" element of therapy is actually a barrier. The fear of being judged by another person is a form of friction that AI eliminates entirely.
  • Data-Driven Resilience: AI can spot patterns in language and behaviour across millions of interactions, identifying risks that a single human practitioner might miss.

The Business Logic of the "Good Enough"

We need to stop asking if AI is "as good" as a human therapist and start asking if it is "better than nothing." For a huge swathe of the population, "nothing" is the current alternative. From a business perspective, the AI chatbot is a classic disruptor. It starts at the bottom of the market, serving the people the incumbents don't want or can't reach, and it improves until the incumbents have to rethink their entire value proposition.

The "danger" isn't the AI. The danger is a rigid adherence to a legacy model of care that refuses to adapt to the reality of the 21st century. We should be looking at how to integrate these "ghosts in the machine" into a tiered system of support. Use the AI to handle the high-volume, low-complexity emotional maintenance, and free up the humans to do the deep, complex work they are actually trained for.

As it turns out, the most "human" thing we can do is use our tools to ensure no one is left to deal with their constraints alone. If that requires a bit of silicon and a few billion parameters, so be it.