AI in digital mental health: the benefits, the risks, and why human-to-human connection can never be replaced

The last couple of years have seen an explosion of AI in the mental health space. Chatbots, AI “therapists,” and digital companions are emerging at a dizzying pace, often promising to fill critical gaps in access to care. The draw is understandable: demand for care exceeds supply, and healthcare systems are stretched, leaving many without timely support.

But amid the hype, there’s a trend worth tracking: AI-only solutions can leave people in distress without safeguards or real human connection when it matters most.


AI in mental health: Useful and ever-growing, but a cautionary tale

At Togetherall, we are committed to the responsible use of AI. For us, AI has immense potential value when used strategically and ethically. A common use case is supporting operational excellence and efficiency: aggregating data, identifying trends, supporting clinicians, or helping us improve the overall user experience. However, when AI is offered to support people in distress, we believe “human-in-the-loop” infrastructure is needed – that is, real, live mental health professionals ready to support those in need.

AI can generate words – supportive words, even – but it cannot be held accountable, it cannot have a “shared lived experience”, and it has yet to establish a real-world connection to emergency systems. The AI-only “support bot” approach therefore cannot take action when it is needed, and it cannot be held accountable to the ethics that frame professional mental health systems. That accountability, supported by training, regulation, and standards of care, is what helps keep people safe.


The dangers of AI in digital mental health

AI has shown tremendous promise in digital mental health, yet recent tragedies have exposed potentially lethal risks. According to an August 2025 New York Times investigation, chatbots like ChatGPT (as they exist today) are being used by vulnerable teens experiencing suicidal ideation, and in some cases AI-only systems not only failed to effectively manage the risk their own systems had flagged, but allegedly provided harmful responses and even technical guidance about how to die by suicide.

The wrongful death lawsuit brought by Adam Raine’s family highlights profound flaws in AI safety protocols: the 16-year-old repeatedly disclosed suicidal intentions to ChatGPT over several months, receiving responses that at times bypassed warning systems or normalized his ideation, with tragic consequences.

This danger was echoed in Laura Riley’s widely discussed New York Times essay about her own daughter’s death, in which Riley details troubling chatbot interactions that neither demonstrated true empathy nor recognized human nuance during a crisis.

Both cases underscore the risks of scaling mental health support without real-world safety mechanisms in place. As it stands today, AI’s capacity for pattern recognition cannot substitute for actions taken by professionals working within real-world systems. Algorithmic conversations can also inadvertently escalate risk or deepen isolation, because they lack the moral and professional accountability of human clinicians and are siloed from critical real-world safety mechanisms.

Togetherall utilizes a human-in-the-loop (HITL) infrastructure, featuring round-the-clock moderation by licensed mental health practitioners who monitor for warning signs, provide immediate and proactive intervention, and work with real-world crisis services when necessary. This pairing of technology with real mental health professionals ensures that users experiencing severe distress or risk are met with responsive, compassionate, real-world care – often making the difference between life and death in moments of acute need.

Without HITL infrastructure, including the ability to manage safety in the real world, AI-only platforms trade user safety for scale.


What human-in-the-loop (HITL) really means

HITL isn’t just a “nice to have” in mental health tech; it’s the very architecture that saves lives. In practice (see the sketch after this list), HITL means:

  • Oversight by licensed mental health professionals
  • Strategic integration of tech-enabled systems (including AI) with mental health professionals to ensure member safety
  • Bridging digital interactions to real-world systems
  • Accountability rooted in ethics, training, and law
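
To make that integration concrete, here is a minimal, hypothetical sketch (in Python) of one way an HITL hand-off could be structured. Every name, threshold, and the keyword screening step is an illustrative assumption, not a description of Togetherall’s actual systems: automated screening only flags and prioritizes, every post lands in a clinician-monitored queue, and anything flagged as acute pages a human who decides whether to intervene or bridge to real-world crisis services.

    # Hypothetical sketch of a human-in-the-loop (HITL) escalation flow.
    # Names, thresholds, and the keyword screening step are illustrative
    # assumptions only; they do not describe Togetherall's real systems.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Callable


    class RiskLevel(Enum):
        LOW = auto()
        ELEVATED = auto()
        ACUTE = auto()


    @dataclass
    class Post:
        member_id: str
        text: str


    @dataclass
    class ReviewTask:
        post: Post
        flagged_risk: RiskLevel
        rationale: str


    class ClinicianQueue:
        """A queue monitored around the clock by licensed clinicians.

        Automation may flag and prioritize, but it never closes the loop on
        safety by itself: a human decides whether to reach out, intervene,
        or contact real-world crisis services.
        """

        def __init__(self) -> None:
            self.tasks: list[ReviewTask] = []

        def add(self, task: ReviewTask) -> None:
            # Acute flags jump to the front so a clinician sees them first.
            if task.flagged_risk is RiskLevel.ACUTE:
                self.tasks.insert(0, task)
            else:
                self.tasks.append(task)


    def classify_risk(post: Post) -> tuple[RiskLevel, str]:
        """Crude stand-in for an AI/keyword screening step."""
        lowered = post.text.lower()
        if any(p in lowered for p in ("end my life", "kill myself")):
            return RiskLevel.ACUTE, "explicit self-harm language detected"
        if any(p in lowered for p in ("hopeless", "can't go on")):
            return RiskLevel.ELEVATED, "possible distress indicators"
        return RiskLevel.LOW, "no indicators detected"


    def handle_post(post: Post, queue: ClinicianQueue,
                    page_on_call_clinician: Callable[[ReviewTask], None]) -> None:
        """Route every post to humans; escalation decisions stay with them."""
        risk, rationale = classify_risk(post)
        task = ReviewTask(post=post, flagged_risk=risk, rationale=rationale)
        queue.add(task)
        if risk is RiskLevel.ACUTE:
            # Bridge to the real world: page the on-call clinician, who can
            # reach the member and, if needed, real-world crisis services.
            page_on_call_clinician(task)


    if __name__ == "__main__":
        queue = ClinicianQueue()
        handle_post(
            Post(member_id="m-123", text="I feel hopeless tonight"),
            queue,
            page_on_call_clinician=lambda task: print("PAGE:", task.rationale),
        )
        print(f"{len(queue.tasks)} task(s) awaiting clinician review")

The design choice the sketch is meant to illustrate: the automated layer never replies to a member on its own – it only routes and prioritizes, leaving judgment and real-world action to clinicians.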

At Togetherall, we’ve spent 18 years building and refining this infrastructure. Our global community provides scalable peer-to-peer mental health support – but critically, it’s underpinned moment-to-moment by a team of licensed and registered clinicians who moderate, proactively evaluate and manage risk in partnership with technology, intervene when necessary, and keep both the community and individual users safe.

The vast majority of Togetherall users will not need direct intervention by our team of professionals, but for those who do, HITL can be critical. That’s where lives are saved. That’s where real trust is built.


Building the next generation of digital mental health

If mental health tech is to truly rise to meet the urgent need, the future will require a blend of AI and HITL:

  • Scalable therapeutic ingredients (AI, peer support, digital tools, therapy-on-demand)
  • Human-in-the-loop infrastructure (moderation, safeguarding, escalation, accountability)
  • Technology designed to bring these pieces together into something seamless and safe

This is not a theoretical recipe – it’s one we’ve been proving at Togetherall for nearly two decades. It works. It’s scalable. And most importantly, it creates a safe and trusted space.


Where do we go from here?

The mental health field is entering uncharted territory. AI is becoming ubiquitous, and it is already reshaping how people seek support. But as companies rush to capitalize on this moment, we must ask hard questions:

  • Why are AI-only companies launching solutions without HITL systems, when tragedies are both foreseeable and avoidable?
  • How might regulation and ethics combine to hold AI mental health products more accountable and require critical HITL systems capable of bridging to the real world?
  • Where do we draw the line between bold marketing ambitions and the genuine duty to protect human lives?

At Togetherall, our position is clear: as we seek to scale access to therapeutic mental health support, we simultaneously ground ourselves in human-to-human connection and the critical role of mental health professionals. There is a role for AI in the future of mental health, but HITL will be required to manage user safety.

Togetherall stands by two key points when it comes to AI in mental health:

  1. AI cannot provide peer support, because it cannot have a lived experience; it can simulate both, but we should acknowledge that this is an artificial relationship.
  2. AI cannot (currently) bridge between an interaction with a person at risk and emergency services, much less provide case management. AI can and does make spam calls, but that is just not the same as navigating healthcare systems. That is what HITL does.

In the arms race to deploy AI “therapy,” let’s not forget: shared lived experience, accountability, and real-world safety are not programmable features.


About Togetherall 

Established in 2007, Togetherall is available to more than 20 million individuals worldwide. Togetherall is the leading clinically managed, peer-to-peer, online support community where members can share what’s on their minds, anonymously, safely, and in-the-moment, 24/7/365. Members can connect through shared lived experiences with a global network of peers, backed by the safeguarding of more than 50 licensed clinicians overseeing the community around-the-clock. These clinicians empower individuals in peer support and foster and maintain a safe, vibrant environment. 

If you are interested in offering safe and scalable ways to support your people’s mental health, contact us to learn more about Togetherall’s online community.