Responsible AI for Youth Mental Health: What We Are Learning and What Must Change

Young people are increasingly turning to generative artificial intelligence (AI) chatbots, such as ChatGPT, Snapchat AI, and Replika, for emotional support, companionship, and even therapy. Seventy-two percent of U.S. youth aged 13–17 have tried AI companions at least once, and more than half use them regularly. In a 2025 nationally representative study, 13.1% of youth aged 12–21, around 5.4 million young people, reported using generative AI tools for mental health or emotional support when feeling sad, angry, or nervous. Among those aged 18–21, that figure rises to 22.2%.

This may be partly because AI tools are free and always available, and conversations with them feel private. For many young people facing long waitlists, high costs, or stigma around traditional counseling, AI can feel like the only accessible option.

But these benefits come with serious risks. Emotional dependency, inaccurate advice, and the inability of current AI systems to handle crises, such as self-harm or suicidal ideation, have already led to tragic outcomes. Several major AI companies are now facing lawsuits following teen suicides linked to harmful chatbot interactions. 

With stakes this high, a central question emerges: How can generative AI be used to support, rather than harm, young people’s mental health?

Bringing Researchers, Tech Companies, Civil Society, and Young People Together

To address this question, we recently organized a series of interdisciplinary workshops focused on responsible uses of generative AI for youth mental health.

The first, which I co-hosted with Vicki Harrison at the Stanford Center for Youth Mental Health and Wellbeing, with support from Hopelab, brought together researchers, clinicians, policymakers, tech leaders, civil society organizations, and young people themselves. Participants included Stanford faculty; representatives from the California State Assembly; AI companies such as OpenAI, Anthropic, Character.AI, Google, and TikTok; and organizations including Koko, the Family Online Safety Institute, the Young People’s Alliance, the Rithm Project, and the Center for Youth and AI.

We held the second meeting as a pre-event to the eMHIC25 conference, co-hosted with Kids Help Phone Canada. It included university faculty from Canada and the Netherlands, members of Kids Help Phone’s youth board, the Mental Health Commission of Canada, Kooth, and government representatives from Australia and New Zealand.

Across both meetings, the shared goal was to develop targeted priorities to advance responsible practices for generative AI chatbots through research, policy, and industry action.

Discussions centered on two categories of tools: general-purpose AI systems not designed for mental health support, and well-being-oriented AI tools, whether or not they are grounded in clinical evidence.

We focused on teenagers aged 13–18 and used the World Health Organization’s definition of mental health as “a state of mental well-being that enables people to cope with the stresses of life, realize their abilities, learn and work well, and contribute to their community.”

What Young People Told Us

Young participants shared the many ways teenagers and young adults are using generative AI: to build social skills, work through tough situations, and serve as an emotional confidant. They found these conversations helpful, but also raised critical questions: What is the right balance between digital and human mental health support? What safeguards should AI companies implement? How can we protect young people without infringing on their rights or autonomy?

Young people emphasized that digital tools should complement, not replace, human support, and that safeguards should be embedded through positive design rather than bans: outright restrictions are often ineffective, and young people can find their way around them.

As one youth panelist put it:

“Banning things outright with little explanation and little understanding has never worked… However, trusting young people with tools and context creates resilience.” (Nico Fisher, 18)

AI literacy emerged as a key priority. Young people need support to critically evaluate information, understand bias in AI systems, and protect the privacy of their data.

The discussions also showed the clear limitations of current AI systems as therapeutic tools and the urgent need for stronger protocols around crisis situations such as suicidality and self-harm. Can AI systems engage with difficult mental health topics while consistently directing users to professional support?

Our Next Steps

Importantly, AI systems are often developed without meaningful engagement with young people, especially marginalized youth, including racial and ethnic minority youth, LGBTQ+ youth, and those from socioeconomically disadvantaged backgrounds. This is problematic: young people have the right to be heard on issues that affect their lives, and including them leads to better digital services. Young people have their own ways of communicating, distinct developmental needs, and unique digital realities, and they may foresee ethical issues that adults do not fully understand.

There was strong agreement on the need for better, more structural ways to involve young people; guidelines for AI companies on handling mental health crises; and independent, peer-reviewed research, including studies of the long-term effects of AI chatbot use on youth mental health, to guide policy and design.

We are now working on next steps, including establishing working groups and launching new research projects.

Stay tuned for what comes next and contact me if you want to join our efforts!


About the Author

Caroline Figueroa, PhD, Delft University of Technology

I am an Assistant Professor at Delft University of Technology in the Netherlands. My research focuses on digital interventions for mental health and healthy living, with an emphasis on developing cutting-edge innovations tailored to the needs of underserved populations.



