Rethinking Mental Health When Machines Learn to Care

Scott Wallace, PhD (Clinical Psychology), examines the growing role of artificial intelligence in mental health care, highlighting its potential to extend the reach of compassionate support, reduce global disparities, and assist clinicians in delivering timely, personalized care. Rather than replacing human connection, he argues, AI can reinforce it—offering new ways to meet rising demand while preserving empathy at the heart of care.

Scott Wallace, PhD (Clinical Psychology)

Consider a teenager lying awake at night, her mind spiraling with fears she can’t name. She doesn’t reach for a friend, a parent, or even a journal. She reaches for her phone. Not to scroll, not to self-diagnose, but to speak with something—something that listens without judgment, won’t interrupt, and won’t disappear when the conversation gets tough.

A chatbot replies instantly, greeting her and acknowledging her distress. She keeps typing. Somehow, it feels safe.


Or take the middle-aged man who’s just lost his job and lies in bed at 3 a.m., staring into the dark. The weight of debt, identity, and shame makes his chest ache. Too embarrassed to talk to a therapist, and unable to afford one even if he wanted to, he turns to a mental health chatbot and types, “I think I’ve failed.” The bot replies, “That’s a strong statement. It sounds like something is weighing heavy on you.” And strangely, that’s enough—for now.

These scenes are no longer rare. A new mother unraveling under the weight of postpartum depression. A retired police officer silenced by trauma he can’t put into words. Across professions and generations, people are turning to AI-driven chatbots—now numbering in the hundreds among the 10,000-plus mental health apps available on the App Store, Google Play, and other platforms. They’re not doing so because they believe a machine can care. They’re doing it because, in that moment, the machine is what’s there. Always on. Never overwhelmed. Ready to respond, even when no one else is.

And that’s where things start to shift. Because what we’re witnessing is not about tech for good, or access for all. It’s something deeper. We are witnessing the emergence of a new kind of therapeutic presence. One that doesn’t breathe, but listens. One that doesn’t feel, but responds. And if we’re not careful—one that will come to redefine not just how care is delivered, but what it means to be cared for.

The question isn’t whether AI can help us manage mental distress. It already does. The question is what happens when AI starts shaping how we define distress, which feelings count, and what healing is supposed to look like.

Artificial intelligence isn’t just scaling mental health support—it’s reshaping the very story we tell about what it means to suffer, to heal, to be whole. It’s redefining how we name our distress, how we quantify resilience, and how we make sense of our inner lives. But if we’re not careful, we won’t just outsource the delivery of care; we’ll surrender the meaning of care itself.

We’re not merely digitizing mental health care. We’re reimagining it through systems that don’t feel what we feel, yet still claim to understand us.

This is the inflection point. A moment to ask what we’re gaining, what we’re risking, and what must be preserved—before the shape of care slips quietly beyond human hands.


Mental Health Used to Start With a Conversation. Now It Starts With Data.

Traditionally, mental health care began with a conversation. A person walked into a room, sat down, and tried to put pain into words. Now, the conversation often starts before we even know one is happening. A smartwatch notices a dip in heart rate variability. A phone logs fewer steps, more late-night screen time. An app pushes a prompt: You might be struggling. Want to talk?

In a twist of digital irony, the therapeutic dialogue today often begins with a notification delivered not by a person, but by a system that’s been quietly listening all along.

This shift is not just technical—it’s existential. When machines interpret our emotions before we speak, the question isn’t only about accuracy. It’s about authorship. Are we still telling our own stories? Are we still authors of our experience? Or have we become supporting characters in a story shaped by data?

The StudentLife project at Dartmouth captured this perfectly. By tracking students’ digital footprints—their location, sleep, screen time—researchers could predict mental health changes in real time. It worked. But it also bypassed what has long been the heart of care: being seen, being heard, being known in our own words.

And yet, there’s potential here—not just to detect distress earlier, but to reimagine support that meets people where they are, without judgment or delay. If we use these tools with care and conscience, they can help widen the doorway to help, not narrow it. They can make space for conversations to begin sooner, with more context, and perhaps more compassion.

The future of mental health doesn’t have to be data instead of dialogue. It can be data in service of dialogue—restoring the human voice at the center, not replacing it.

Chatbots That Listen and Feel Like Therapists


Apps like Woebot* and Wysa* aren’t just digital tools. They are emotionally responsive systems trained to simulate therapeutic dialogue—engaging users with empathy-adjacent cues, contextual memory, and natural language flow. These aren’t decision trees disguised as conversation. They ask follow-up questions, track prior disclosures, and respond with a coherence and immediacy that, for many users, feels more attuned than rushed or distracted human interactions. And they do it at 2 a.m., without fatigue, bias, or shame.

Clinical studies offer measurable support. A randomized controlled trial published in JMIR Mental Health found that users of Woebot experienced significant reductions in anxiety and depressive symptoms within just two weeks. But the most striking insights come from subjective user reports. Many describe the interaction as calming, validating—even intimate. Some go further: the bot feels safer than a human. Easier to be honest with. Less likely to judge, interrupt, or miss the mark.

This is no longer a novelty—it’s a phenomenon. Researchers call it algorithmic attachment: the formation of genuine emotional bonds with artificial agents. It’s not about confusion. Users understand they’re speaking to code. But if the experience of being understood is authentic, does it matter that the understanding is simulated?

If AI can successfully evoke the experience of being supported, what exactly are we defending when we insist on “real” therapists?

This is where the ground begins to shift. If a conversational agent can foster trust, provoke insight, and deliver psychological relief, what becomes of the sacred architecture of the therapeutic alliance? What, precisely, are we protecting when we insist that healing must come from a licensed, flesh-and-blood clinician?

Is it clinical training? Ethical accountability? Or is it something more existential—our belief that only human presence can confer meaning, that healing is inseparable from humanity?

These tools don’t just challenge the boundaries of therapy. They challenge its essence.

The Hybrid Model: Human and Machine, Together

Let’s be clear. AI isn’t replacing therapists—not now, and maybe not ever. But it is redefining their role. And that shift is more than cosmetic.

At platforms like Spring Health and Lyra Health*, AI is already the first to act. It triages new users, predicts dropout risk, flags elevated symptoms, and recommends treatment plans based on patterns no human could see. Behind the scenes, algorithms process thousands of data points—doing in seconds what would take a clinician hours, if not days. It’s quiet, efficient, and invisible. But it’s not neutral.

Other systems take this even further. Chatbots handle check-ins. Apps monitor mood, track sleep, and surface trends. Dashboards turn experience into metrics. The therapist enters the loop later—for sense-making, escalation, or depth. It’s a division of labor: scale and structure to machines; context and complexity to humans.

In theory, this is the future: augmented care. But theory rarely accounts for history or for bias.

Take the now-infamous case of an AI model trained to detect depression in social media posts. It worked well for White users. It failed for Black users. Not because it malfunctioned—but because the emotional language it learned to recognize was culturally narrow. The model did exactly what it was trained to do. The failure was in the training—and in the assumption that one size fits all.
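
To make that failure mode concrete, here is a deliberately simplified, hypothetical sketch in Python. It is not the model from that case; it simply shows how a detector whose distress vocabulary is learned from one group’s language can catch every case in that group while missing every case expressed in a different idiom. The posts and the lexicon are invented for illustration.

    # Toy illustration (hypothetical data and vocabulary, not any deployed system):
    # a "depression detector" whose distress lexicon was learned from one group's
    # language performs well for that group and silently misses the other.

    # Hypothetical lexicon learned from a training corpus drawn almost entirely from Group A.
    lexicon = {"hopeless", "worthless", "empty", "can't sleep", "numb"}

    def flags_distress(post: str) -> bool:
        """Flag a post if it contains any phrase from the learned lexicon."""
        text = post.lower()
        return any(phrase in text for phrase in lexicon)

    # Held-out posts, all written by people reporting depression (ground truth: distressed).
    group_a_posts = [
        "I feel so hopeless lately",
        "everything feels empty and I can't sleep",
        "I'm numb most days",
    ]
    group_b_posts = [  # the same underlying distress, expressed in a different idiom
        "I'm just tired of carrying all this",
        "been keeping to myself, nothing feels right",
        "haven't been myself in a long while",
    ]

    def false_negative_rate(posts):
        """Share of genuinely distressed posts the detector fails to flag."""
        misses = sum(1 for p in posts if not flags_distress(p))
        return misses / len(posts)

    print("False negatives, Group A:", false_negative_rate(group_a_posts))  # 0.0
    print("False negatives, Group B:", false_negative_rate(group_b_posts))  # 1.0

The code runs exactly as designed; the harm is entirely in what the training data taught it to hear.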

Now imagine that system at the front door of a mental health service. Deciding who gets flagged. Who gets help. Who gets missed. That’s not just a data gap. It’s a clinical blind spot. A structural risk. A question of equity disguised as efficiency.

The hybrid model is coming—arguably, it’s already here. But if we don’t interrogate the systems that underpin it, we risk building a mental health infrastructure that is fast, scalable, and deeply unequal. One where some receive nuanced care, and others get misread by an algorithm that was never trained to recognize them in the first place.

When AI becomes a gatekeeper to care—and increasingly, it already is—its errors aren’t just technical glitches. They’re clinical risks.

So yes, the hybrid model may be the future. But unless we get it right, machine-guided care will work beautifully for some and dangerously misfire for others.

When Mental Health Becomes Data

There’s a part of AI in mental health that too often stays in the margins of our conversations, even though it should be front and center: the data itself. Not just what users type into a chatbot, but the full spectrum of passive digital signals we emit every day. Sleep patterns. GPS pings. Voice inflections. Typing cadence. Social media activity. These aren’t fringe inputs. They are the raw material of algorithmic mental health.

Increasingly, AI systems are trained to interpret these signals as emotional proxies—to detect risk, estimate mood, and adapt support in real time. At their best, these systems can personalize care, surface early warnings, and offer interventions with a speed and scale that human clinicians simply can’t match. But there’s a catch: once emotional data becomes the engine of care, it also becomes a potential site of surveillance, commodification, and harm.
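
To ground what “interpreting these signals as emotional proxies” can look like in practice, here is a minimal, hypothetical sketch in Python. The field names, thresholds, and weights are invented for illustration; real systems use trained models rather than hand-set rules, but the inputs are the same kinds of intimate, passively collected behavioral data.

    # A minimal, hypothetical sketch (not any vendor's algorithm) of turning
    # passive signals into a mood-risk nudge. Thresholds and weights are invented.
    from dataclasses import dataclass

    @dataclass
    class DailySignals:
        sleep_hours: float          # from a wearable
        steps: int                  # from the phone's pedometer
        late_night_screen_min: int  # minutes of screen use after midnight
        typing_speed_change: float  # fractional change vs. the user's own baseline

    def risk_score(s: DailySignals) -> float:
        """Combine passive signals into a rough 0-1 'risk' score (illustrative weights only)."""
        score = 0.0
        if s.sleep_hours < 6:
            score += 0.3
        if s.steps < 2000:
            score += 0.25
        if s.late_night_screen_min > 90:
            score += 0.25
        if s.typing_speed_change < -0.2:   # typing noticeably slower than usual
            score += 0.2
        return min(score, 1.0)

    today = DailySignals(sleep_hours=5.2, steps=1400,
                         late_night_screen_min=130, typing_speed_change=-0.3)

    if risk_score(today) >= 0.7:
        print("You might be struggling. Want to talk?")  # the nudge described earlier

Every field in that toy record is intimate behavioral data that someone must collect, transmit, store, and retain, which is exactly why the questions that follow about ownership and jurisdiction are not abstractions.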

Who owns this data? Is it the individual? The platform hosting the app? The employer who licensed the program? The third-party vendor managing the backend? Ownership isn’t just a legal issue—it defines who gets to benefit, and who gets to decide how care is delivered. And where is the data stored? Under which jurisdiction? Some platforms fall under HIPAA or GDPR. Others, conveniently, categorize what they collect as “wellness data”—exempt from the stricter standards that govern medical information. That legal gray zone isn’t an accident. It’s a design feature.

Then there’s the question of use. Could an employer quietly act on “aggregate” mental health scores to inform retention strategy? Could insurers factor mood volatility into coverage decisions? Could advertisers target a teenager’s loneliness for profit?

This isn’t conjecture. It’s happening now. In her memoir Careless People, Sarah Wynn-Williams, a former Director of Public Policy at Facebook, provides a striking example of how emotional data can be exploited for profit—particularly by targeting vulnerable teenagers. She claims that Facebook tracked when 13- to 17-year-old girls deleted selfies on its platforms and passed that signal along to advertisers, who could then serve those girls beauty ads at precisely that moment.

So these aren’t far-off hypotheticals. As AI becomes further embedded in teletherapy platforms, wellness apps, and digital clinics, the boundary between support and surveillance is eroding. And if we don’t draw ethical, legal, and technological lines in the sand now, it may soon be too late.


Rethinking Mental Health in the Age of Intelligent Systems

One of the most important changes AI is bringing to mental health isn’t about how care is delivered. It’s about how we define mental health itself.

In the past, mental illness was something you talked about. You met with a therapist, described what you were feeling, and made sense of it together. Later, in the era of brain chemistry and pharmaceuticals, mental illness became something to diagnose and treat, something happening inside your biology.

But now, in the age of algorithms, mental health is becoming something you measure and predict. Your smartwatch picks up on changes in your sleep or heart rate. Your phone tracks your social activity and screen time. Machine learning models notice shifts in your voice or writing tone. These systems use patterns in behavior, often gathered passively, to flag potential problems before you’re even aware of them. Sometimes they’re astonishingly accurate, catching signs of depression, anxiety, even suicidal risk early enough to intervene. That’s powerful. But it’s also unsettling, and it raises new questions: What happens when your wearable tells you you’re anxious before you feel anxious? When a chatbot suggests you’re depressed before anything feels wrong?

We’re entering a world where self-awareness is increasingly outsourced to sensors, where inner experience risks being flattened into data points.

Over time, we risk handing over self-knowledge to systems that don’t understand our lives, our values, or our stories. A flagged drop in energy might be sadness, or it might be rest, introspection, change, meaning. These systems can detect signals, but they cannot interpret stories. They lack context. They don’t understand culture. They don’t hold space for contradiction. That’s not just a technical limitation. It’s an existential one.

What We Must Choose to Protect

Let me be clear: I’m not anti-AI. Quite the opposite. I’ve spent decades designing, evaluating, and advising on digital mental health tools. My interest in technology-enabled care goes back long before apps existed—before smartphones, before Google, before we even imagined therapy could be delivered through a screen.

There were only 27,000 websites on the entire internet when I first hard-coded a psychoeducational mental health site. No templates. No automation. Just raw HTML and a modem. Around that same time, I collaborated with a colleague to develop one of the earliest interactive ‘cybertherapy’ programs: a rules-based system that delivered tailored support via floppy disk. We were building digital therapy before the digital age.

So yes, I believe in the promise of AI. It can extend care, personalize treatment, and prevent suffering on a scale we’ve never seen before. But not everything that can be automated should be.

Real care isn’t just about detecting patterns—it’s about sitting with pain. About understanding context, culture, and change. About holding space for ambiguity, complexity, and meaning. These are the dimensions of care that don’t show up in data—but define the human experience. The question now isn’t whether AI can simulate empathy. It already does. The real questions are:

  • What must remain human in the age of intelligent care?
  • What kind of mental health system are we building—not just technically, but ethically, relationally, and culturally?


These aren’t engineering questions. They’re questions about values.

And the future of care depends on how we answer.

If this resonates…

I’m writing here at the intersection of AI, ethics, psychology, and digital health. If you’re building in this space—or wrestling with its consequences—consider joining my LinkedIn group Artificial Intelligence in Mental Health. No ads, no marketing or self-promotion allowed.

https://www.linkedin.com/groups/14227119/


* Note: I have no affiliations, financial interests, or relationships with any companies or products mentioned in this article.

This article was originally published on LinkedIn by Scott Wallace, PhD. It is republished here with permission. Read the original article.


The views shared are those of the authors and do not necessarily reflect those of eMHIC. This content is for general informational or educational purposes only and is not a substitute for professional mental health advice, diagnosis, or treatment. If you are experiencing a mental health crisis, please immediately contact local emergency services or a crisis support service in your area.