How AI Is Exposing the Limits of Institutional Mental Health Care
The Relational Crisis Behind the AI Debate

Over the past year, the phrase “AI therapy” has moved from the margins into mainstream conversation. Media outlets, professional organisations, clinicians, and academic institutions are publishing a steady stream of warnings, opinion pieces, and position statements about the use of AI for emotional and psychological support.
Much of the public conversation is framed through risk and worst-case scenarios: suicide, self-harm, dependency, privacy, and liability. The debate keeps returning to whether AI is dangerous, ethical, or capable of replacing human therapy, while rarely asking why people turn to it in the first place.
Outside these debates, people are already making practical decisions.
They turn to AI because it offers immediate access when human care is unavailable, delayed, or stretched beyond capacity. It doesn’t require referrals, assessments, insurance approvals, or waiting for someone else to be available. It’s accessible outside office hours, at low or no cost, to anyone with an internet connection.
As institutional responses tighten around risk and control, a gap is widening between how AI is discussed publicly and how it is used privately.
This piece sits inside that widening gap.
How Language Becomes Authority
The perspective in this article is shaped by the work I do and the systems I’ve worked inside. My current work sits at the intersection of psychosomatic practice and attachment-aware supervision and mentorship. It puts me in regular contact with individuals and practitioners internationally who hold significant responsibility, face complex challenges, and experience relational overwhelm, often without sufficient support. Many of them are encountering the limits of standard models of care.
Before this work, I spent close to a decade in digital and organisational environments, working across marketing, communications, and online platforms. These were spaces where language mattered enormously, where the words used shaped trust, credibility, and who people listened to.
Across both contexts, I saw the same pattern repeat. Language does more than describe experience. It signals legitimacy, defines expertise, and quietly teaches people who to defer to and when. Over time, that same language can begin to replace contact. It can make things feel clearer, but it can also become a way to avoid the discomfort of actually feeling, relating, and being seen as you are.
From this vantage point, the current panic and fascination around AI in therapy are not surprising. When access to language decentralises, two things happen at once: authority destabilises, and the limits of language-as-care are exposed.
When specialised language that once required permission begins circulating freely, established systems lose their monopoly on meaning.
That shift is structural. It changes how care is organised, how trust is built, and who defines what counts as legitimate support.
This Isn’t Really About AI
Concerns about safety, privacy, and harm are understandable. Any technology used in vulnerable moments raises ethical questions, and those risks deserve serious attention. But stopping the conversation there misses what is actually unfolding. AI has become a focal point because it reveals something many people already feel: that institutional mental health care no longer feels reliably supportive or responsive enough to meet them where they are.
Western mental health systems set specific rules for who defines suffering and how it is understood. Professionals use their training, diagnostic categories, and regulated care systems to interpret distress. This often means that people’s experiences are labelled, with survival responses framed as pathology rather than adaptation. Over time, care has become more focused on protocols and risk management than on relational presence and lived context.
AI disrupts this system not by offering a better therapist, but by letting people make sense of their distress without professional permission. For the first time at scale, people can access psychological language, reflection, and pattern recognition without being assessed, diagnosed, or corrected, and without deferring to professional authority.
This marks a shift toward autonomy.
It also reflects something challenging back to the field:
When people choose a machine over a human therapist, the question isn’t only whether AI is safe. It’s why being vulnerable with a person has started to feel riskier than speaking to a tool.
The tension in AI debates, especially in mental health, did not start with the technology. It reflects longstanding limits in how distress, authority, and relational complexity have been handled in existing systems of care. AI is simply making those limits clearer.
When Risk Management Replaces Curiosity
What stands out in the current moment is how institutions and professional bodies are responding to people using AI for psychological support. As AI use becomes visible, official reactions from boards, organisations, and academic voices converge around the same themes: risk statements, ethics guidance, and worst-case scenarios. Suicide, self-harm, dependency, and catastrophic harm are foregrounded as the main concerns.
What’s interesting is how quickly this framing shuts down further inquiry.
Many people are not turning to AI because they reject human care. They turn to it because they already feel like a burden. They doubt what they are carrying is serious enough, and fear it may be minimised, dismissed, or misunderstood. They worry about taking up time, space, or emotional resources. For many, especially those who learned early to manage themselves, silence has always felt safer than asking for help. AI doesn’t interrupt that pattern. It allows people to avoid the risk of being seen. There’s no need to explain, justify, or worry about how distress will land with another person. This context matters when we look at how the field is responding.
When institutional responses frame AI use largely through warnings, corrections, and moral alarm, they collide directly with the very sensitivities many people already carry. Rather than creating relational safety, the message often received is that these choices will be judged, scrutinised, or disapproved of.
As risk becomes the dominant frame, curiosity drops out of the conversation. Instead of asking why people turn to these tools, responses shift into a top-down posture of instruction and warning. The public is treated less as a group of adults making careful choices with limited options and more as a problem to be managed. That is where institutions begin relating to people as if they cannot be trusted with their own judgment. And that wider professional posture sets the emotional and relational tone of the field, with clear consequences for how people experience care.
For many, this echoes earlier relational experiences in which needs were not taken seriously, compliance was safer than expression, and being managed replaced being met. As a result, instead of feeling safer, people become quieter. They stop disclosing what they are doing. They stop bringing questions into human relationships. They keep their coping strategies private and turn toward options that feel predictable, non-reactive, and less exposing.
They go deeper into isolation.
When practitioners move into management, advice-giving, or correction, it is often done with good intent. But good intent does not reliably predict impact, especially in relationships where one person holds more power. It should also be said that many people are drawn to this work from a helping or rescuing posture, often shaped by early attachment experiences and later reinforced by care-based professions that reward being needed, competent, and directive. What tends to go unnoticed is how easily that posture is experienced as controlling when no one has asked to be managed. At that point, something in the relationship begins to shift. The practitioner may still feel engaged, but the space itself becomes narrower. There is less room to breathe, to hesitate, or to not know.
Clients rarely name this change directly. Most don’t have clear language for it, and many aren’t consciously aware that anything is “wrong.” What they notice instead is a subtle sense that something no longer feels quite right. They may feel less curious, less open, or less willing to bring forward uncertainty or complexity. The relationship starts to feel less like a place where they can arrive as they are and more like a place where they need to manage themselves. When that happens, people don’t usually argue or confront it. They begin to pull back. Sessions become less alive. Commitment thins. And often, clients leave without explanation because the shift was never recognised or addressed as a rupture in the first place.
When clients disengage in this context, it’s usually interpreted as something about them: a lack of readiness, a lack of commitment, an inability to go deeper. More often, it reflects a loss of relational trust.
Authority without real contact does not build commitment. It dissolves it.
The same pattern appears at the institutional level. Talk about “risk” often becomes a way of keeping things under control. It sets the rules for what support is considered acceptable and who gets to decide. Control tightens at the exact moment people are seeking more agency. And instead of slowing AI use, it accelerates it. What remains largely unexamined is why professional authority feels so threatened when people begin choosing differently.
The Relational Capacity Crisis
AI has become a mirror for a capacity problem within the mental health field. Its impact has little to do with whether a machine can feel empathy, and everything to do with what it exposes: how often language has been used to stand in for a relationship. Now that therapeutic language is widely available, its limits are easier to notice. Much of the work has been carried by words and meaning-making, while less attention has been placed on what happens in the room, in the body, moment to moment, between two people.
At the same time, many clients now arrive with a clear story, a strong vocabulary, and a solid grasp of their patterns. Some speak in diagnostic or therapeutic language. They can name attachment styles, trauma responses, and nervous system states. And still, they feel stuck. What’s missing is not more understanding, but the capacity to stay with experience as it unfolds emotionally, relationally, and in real time.
This exposes a gap between understanding something and staying present with what those words point to. And inside that gap sits an uncomfortable question for the field:
What is therapy offering that cannot be automated?
The answer isn’t more warnings, tighter rules, or louder ethical positioning. It lies in human capacities that can’t be outsourced to a tool: the ability to stay present as intensity rises, to remain in contact when things get messy, and to resist the impulse to manage, fix, or explain simply to feel in control. This question asks us to look honestly at our capacity to be with another person in real time, without hiding behind technique.
These are not abstract ideals, but advanced relational skills. They develop over time through supervision and consultation, sustained relational work, and attention to a practitioner’s own attachment patterns. They are also the first things to be squeezed out by systems that prioritise efficiency, risk management, and volume over depth. When this capacity erodes, the impact is felt directly by the people seeking care.
AI can organise language, mirror patterns, and help people make sense of their experience. What it cannot offer is a living relationship — the felt experience of being with someone who can stay present through activation, uncertainty, and emotional weight, without asking the client to self-manage or taking over control of the process.
When care centres on talking about experience rather than staying with it, people may leave with better language but no greater ability to feel, relate, or act differently in their lives. This is not a lack of care or intelligence on the part of practitioners. It points to a wider capacity issue shaped by the conditions many work under: heavy caseloads, limited time, and pressure to manage risk, document outcomes, and keep things moving, often without enough relational support of their own. In those conditions, care can quietly shift toward management and explanation, even when the intention is to be present.
Beyond Access: What People Are Actually Turning Toward
When people talk about AI and therapy, they often jump straight to ethics or risk, without asking whether people can even get support in the first place. For many people, therapy is not something they can reliably reach when distressed. Public systems are overstretched. Insurance-based care is capped, fragmented, and time-limited. Waitlists stretch for months. Crisis services are brief, procedural, and often experienced as intimidating rather than supportive. Even sliding-scale private therapy remains financially out of reach for many. At the same time, this needs to be said clearly: good therapy is expensive, and should be. Depth, presence, continuity, and ethical responsibility require time, training, supervision, and practitioners whose nervous systems are not already depleted.
This creates a difficult reality:
The problem is not that therapy costs money.
The problem is that access to relationally adequate care has narrowed while demand has exploded.
This is one of the conditions under which AI comes into play. People turn to it not because they believe it is superior to human care, but because it is available when human care is unavailable, delayed, or stretched beyond capacity. But access alone does not explain what is happening.
Much of the public conversation relies on a narrow view of who uses AI for psychological support. Use is often framed as a last resort — associated with distress, marginalisation, or lack of access — and treated as if choosing a tool reflects desperation or poor judgment. The problem with that framing is that it does not reflect what shows up in practice. Many adults are making deliberate, informed choices. They understand data privacy and platform limitations. They are accustomed to weighing trade-offs in digital environments that have never been neutral or risk-free. For most, AI is simply the latest version of a digital landscape they have been navigating for years, often decades. They are not bypassing care blindly. They are choosing between imperfect options and deciding which risks feel more tolerable in their current reality.
Importantly, many people using AI are not cut off from care or lacking resources. Many are founders, leaders, creatives, clinicians, and therapists themselves: people who carry responsibility and decision-making pressure, who have access to support, and who are still choosing to use AI alongside, or sometimes instead of, human care. Their use of AI complicates the dominant narrative. It suggests this is not only an access problem, a cost problem, or a crisis-driven choice made under distress. Even those with resources turn to AI because what they seek is not just availability, but a kind of contact that feels increasingly difficult to find within existing systems.
In that sense, AI isn’t pulling people away from good care. It’s showing where the gap already is between what people need relationally and what the system can actually offer.
The Opportunity This Moment Creates
The disruption AI is causing to clinical therapy marks a threshold for the field. As therapeutic language decentralises, the work’s centre of gravity shifts. The task is no longer to guard knowledge or interpret experience for another. It is to show up in a relationship in a way that can actually be felt.
This shift is already underway. Many institutions will resist it. Credentialing bodies, legacy models, academia, and insurance-based systems are likely to continue issuing warnings, risk-framing, and tighter definitions of what counts as legitimate care. That response is understandable. It protects existing structures.
But it does not stop the movement.
When people discover spaces where they feel met rather than managed, they do not return easily to systems organised around control. Trust moves into the relationship itself. Meaning becomes shared. And momentum follows that shift.
For practitioners, this creates a real opportunity. What differentiates therapeutic work is no longer what a practitioner knows. It is how they are with another human being. What they can hold. What they can stay with. What they do not need to fix, rush, or correct. And these capacities cannot be automated. They also cannot be performed convincingly without sustained self-reflection, supervision, and ongoing relational development. This work asks practitioners to keep working on themselves, not to perfect a technique, but to deepen their ability to be in contact.
What defines therapy in the years ahead is unlikely to be a specific modality or theoretical language. It will be shaped by felt experience: the quality of presence, the depth of attunement, and the sense of being met by someone who can hold complexity without needing to control it.
That evolution is already happening.
The question is not whether the field will change, but who is willing to change with it.
About this perspective
I’m Tanya Master. My work focuses on relational capacity, nervous system intelligence, and psychosomatic integration — particularly in the places where institutional models of care reach their limits.
The perspective in this essay comes from a framework I’ve developed called Psychosomatic Restoration™. It’s a way of working that brings together parts-based inquiry, nervous system regulation, relational power dynamics, and lived patterns of adaptation, especially where people are highly functional, carrying responsibility, or navigating complex relational fields.
I work with individuals, leaders, and practitioners who are holding complexity and strain and exploring what becomes possible when contact, authority, and meaning are no longer outsourced to systems, roles, or techniques.
If you’d like to stay with these themes, you can explore my work or subscribe to future essays.

