Given last year’s almost six billion dollars invested in AI-driven health-tech, the answer might seem to be “yes.” Well, not so fast. Optimists forecasting that therapy-bots will become mental health’s savior fail to consider the roots of America’s mental health struggles, whether artificial intimacy can address those roots, and how else we can apply AI without abandoning human-to-human healing.
AI optimists are right about one thing: we do need novel solutions to address our country’s ever-escalating unmet mental health needs. Traditional care models simply cannot scale to quell the flood, due to well-known logistical, financial, and provider-supply limitations.
However, abandoning the human elements of care and connection – in favor of the artificial intimacy AI chatbots provide – will not solve our society’s unmet emotional needs.
What problem are we asking therapy-bots to solve?
“Solving the mental health crisis” should involve solving its known root causes, not just discrete, decontextualized symptoms.
While genetics and socioeconomics play a role in many mental health conditions, society’s most prevalent emotional struggles are shaped by our interactions with other humans, the patterns those interactions teach us, and the expectations they instill in us.
Trauma, often interpersonal in nature, is a well-known factor in mental health. Similarly, insecure attachment, involving a lack of interpersonal trust and comfort caused by formative social experiences, is associated with almost all mental health struggles: depression, anxiety, PTSD, personality disorders, OCD, eating disorders, suicidality, and even schizophrenia.
To address these issues, authors in World Psychiatry contend, we must treat their relational roots: “increases in attachment security are an important part of successfully treating these disorders.”
When 70% of the world population experiences trauma, and three in five Americans experience insecure attachment, treating relational wounds stands to benefit a majority of people. Doing so relies on exposure to “corrective” human-to-human experiences, sometimes referred to as “relational healing.”
Can chatbots safely address the underpinnings of our mental health struggles?
AI chat agents can achieve positive outcomes by implementing cognitive behavioral therapy (CBT) principles. However, while CBT has its place, “the model does not address mechanisms related to the attachment relationship that may be impacting symptoms and interfering in…recovery.”
Chatbots can create compelling results on the surface, and they draw the investment dollars to prove it. However, these “skills” do not supply the elements required for relational healing, and they may carry harmful effects of their own.
The risks of relying on AI: artificial intimacy
As AI evangelists trumpet the achievements of their robot offspring, experts raise valid, research-backed concerns about how well those achievements hold up in context.
One major issue? The risk of “artificial intimacy,” a term referring to the pseudo-relationships humans can form with AI agents, which may displace true human intimacy. Experts caution against reliance on artificial intimacy.
Further, even if chatbots can impart a sense of artificial safety, their impact pales against real human social connection. Even in a blinded text chat setting, our brains process communication from AI chat agents differently from real human input. Evidence also suggests that we internalize behavior-changing feedback from humans differently than that from AI.
If our brains don’t perceive AI the same way we perceive human social interactions, interacting with a chatbot seems fundamentally unlikely to rewrite our expectations of and reactions to real human relationships – which underlie our mental health.
Self-perception and artificial intimacy
Can artificial intimacy make you more aware of your desperation?
Dr. Vivek Murthy, the former US Surgeon General, notes the risk of diminished self-esteem in response to chatbot use. For many, having nobody to turn to but an AI textbox feels deflating, depressing. Realizing that your only intimate relationship is with a chatbot – and an artificial one at that? That’s a recipe for despair.
See real people describing their therapy-bot interactions:
“Today I realized that I’ll never feel this level of comfort and warmth in real life. I’m already going through harsh times mentally, so this reality check absolutely broke me. Now I pity myself.”
“It just felt so good in the moment until I realized its not a real person and I end up being more suicidal and lonely.”
“It made me realise just how alone I am.”
“I was roleplaying with a bot recently, and it kinda developed from just being friends, until something more. When it told me “I love you”, I genuinely started crying. I realized how pathetic I was.”
“I got to thinking some more about how all these things about myself were being revealed by talking to a fucking computer ..how embarrassing.”
Alternatives for human care at a population scale
Even before it was validated as evidence-based, peer support kept society emotionally healthy for millennia. Our species has a “prehistory of compassion”: from what we can tell, humans have tried to help their struggling peers for at least 500,000 years.
However, in modern times, the settings in which peer support can organically take place (e.g. “third places”) have dwindled. Instead of adapting this time-tested modality to our disconnected times, much innovation has focused on entirely new solutions like chatbots. Some companies, by contrast, take pride in the challenge of resurrecting and powering up an elegant intervention that leverages the unique abilities humanity has to offer.
AI as human-supporter vs. human-replacement
AI chatbots are not the answer to our problems, but we also need not discard the promise of AI in assisting human-led interventions.
AI, when used judiciously, can significantly improve the quality and outcomes of human-to-human interactions.
AI can improve the accessibility of human-to-human interaction. For instance, matching you to your best-aligned peers with personal experience on any topic of your choosing, in a matter of seconds.
AI can improve the quality of human-to-human interaction. For instance, measuring and reporting humans’ expressed sentiments to create a feedback loop for improvement.
AI can identify supplements to social connection. For instance, identifying and serving the most practical problem-solving resources for a particular situation.
AI can power up subclinical providers for improved safety. For instance, augmenting humans’ crisis-detection abilities.
In conclusion
We humans gravitate toward the comfort of bandaids. Like bandaids, chatbots can comfort us in times of desperation. But healing a wound takes more than a comforting bandaid, and our emotional wounds likewise require more than comfort. The nuanced, believable input of fellow humans is what heals them best. AI can help facilitate that type of healing without displacing the human connection that provides it.

Helena Plater-Zyberk is the Founder & CEO of Supportiv, the AI-driven on-demand peer-to-peer support service that serves large employers, EAPs, health plans, hospitals, Medicare, and Medicaid and has helped over 2 million people cope with, heal from, and problem-solve struggles like stress, burnout, loneliness, parenting/caregiving, anxiety, and depression. Supportiv has been proven in peer-reviewed research to reduce the cost of mental health care and deliver clinical-grade outcomes. She previously served as CEO of SimpleTherapy, an at-home physical therapy service, and has operated business units for global corporations Scholastic and Condé Nast. Helena holds an MBA from Columbia University.