2026: Who drives AI in mental health? Platforms, clinicians, or governance?

At the 10th eMHIC Congress in Toronto, one message surfaced again and again across plenaries, panels, and side conversations:

We are no longer lacking ideas, pilots, or technological capability in digital mental health. What we are lacking is operational governance.

Over 700 delegates from 44 countries gathered to discuss equity, AI, lived experience, prevention, workforce transformation, and system integration. The diversity of contexts was striking — but so was the convergence around a shared gap: we keep saying “human oversight” without defining who is responsible, how it operates, and where accountability truly sits.

When Coordination Replaces Accountability

Across multiple sessions, strong leadership and coordination models were presented — but often without clear responsibility chains.

Anil Thapliyal opened the Congress by marking the evolution of the eMental Health International Collaborative (eMHIC) into a global movement and reminding us that digital mental health now operates at population scale. Yet scale without governance is fragile.

In her keynote, Rachel Green shared the story of Amber — a tragic illustration of what happens when systems are fragmented, directories are outdated, and responsibility is diffused. Her call was radical and practical: stop building more tools, and start fixing navigation. Build once, deploy everywhere, and make accountability visible.

From Dubai, Khulood Alsayegh brought the conversation back to trust, transparency, and human values. AI, she argued, is not neutral — it mirrors what we feed it. Without accountability, bias and opacity scale faster than care.

Ethics Are Not the Same as Operations

Several speakers articulated strong ethical principles — but ethics alone do not govern systems.

Smriti Joshi laid out five non-negotiables for digital mental health: safety before scale, transparency before trust, privacy as dignity, equity by design, and human oversight. Crucially, her work showed that translation without cultural adaptation destroys empathy — and that equitable AI requires co-design, not retrofitting.

From an industry and systems perspective, Kana Enomoto highlighted the sheer scale of the challenge: a global mental health market measured in trillions, with task-sharing models as a necessity — not a choice. But task-sharing without governance simply shifts risk downward.

Megan Jones Bell reinforced a critical boundary: AI should not be unleashed directly into treatment. Readiness assessments, low-risk entry points, and organizational safeguards must come before deployment — not after harm occurs.

The Workforce Is Carrying a Systemic Burden

In the Humans at Work plenary, William Ajayi asked the uncomfortable but necessary question: Who is actually responsible for wellbeing at work? Too often, digital tools become a way to offload organizational responsibility onto individuals.

From Singapore, Daniel Fung spoke candidly about evidence standards, data integration, and the tension between innovation and regulation. The problem is not lack of data — it is lack of operational clarity around its use.

Across sessions, one pattern was consistent:

  • Clinicians are told they are “in the loop”
  • Leaders are told they are “accountable”
  • Systems assume responsibility will emerge organically

It doesn’t.

The Missing Layer: Operational Governance

What was largely absent — despite being repeatedly implied — is a defined operational layer between technology and care:

  • Who supervises AI-mediated decisions day to day?
  • Who can override systems — and under what criteria?
  • Who is accountable when something goes wrong?
  • Who trains, certifies, and audits this new reality?

Saying “everyone is responsible” is not governance. It is risk diffusion.

Why 2026 Matters

By 2026, digital mental health will be unavoidable — not experimental. The real shift ahead is not technological. It is executive and operational. We need:

  • C-level fluency in AI governance, not just AI strategy
  • Clearly defined human oversight roles, not abstract principles
  • Workforce upskilling that includes ethics, data, and accountability
  • Systems designed for continuous care, not episodic intervention

The conversations at eMHIC showed maturity, honesty, and readiness. The next step is execution.

2026 must be the year we stop talking about AI in mental health — and start governing how it actually operates.

This article is published with permission from Dr Juan José Martí Noguera. Juanjo is also the 2025 recipient of the Leadership Excellence Award for Spain. Visit the eMHIC Global Awards Hall of Fame to learn more. 

About the Author

Juanjo Marti Noguera, Digital Mental Health Consortium (DMHC)

I help organizations shape ethical, scalable solutions in governance, innovation, and human-centered design, drawing on experience in regional development, digital transformation, and global health. My work, including contributions to UNESCO and the EU Commission, focuses on AI governance, responsible innovation, and human-centered change.
