Developing Responsible AI for the Mental Health of Young People

Caroline Figueroa’s work at Delft University of Technology focuses on developing responsible AI for the mental health of young people. She stresses that for AI to truly benefit adolescents, it must be co-designed directly with them. Her research, including an upcoming Harkness Fellowship, will lead to policy recommendations for ethical AI for youth mental health, ensuring these tools meet young people’s actual needs and concerns rather than being developed in isolation.

Caroline Figueroa, Assistant Professor at Delft University of Technology, the Netherlands

Over the past few years, my team and I have been working closely with adolescents to co-design an ethical AI language model aimed at supporting mental well-being. Our work reinforces a critical lesson: if we want AI to benefit young people, we must build it with them, not just for them. 

Adolescents and young adults (ages 12–25) increasingly experience mental health challenges, and they often turn to technology for support. Digital mental health interventions, especially those powered by artificial intelligence (AI), could make mental health care more accessible and personalized. A rising number of interventions are incorporating AI into mental health support, for example as chatbots, prediction algorithms, and AI diagnostics. Meanwhile, generative AI platforms not originally designed for mental health, such as ‘companion chatbots’, are being used by young people to find support for mental health concerns. Companionship and therapy are among the most common uses of generative AI chatbots such as ChatGPT, according to a recent survey from the Harvard Business Review.

Even though the use of AI in mental healthcare is growing, existing ethical guidelines for AI and digital mental health often fail to address the unique developmental needs and technology preferences of young people. AI systems are frequently developed without meaningful involvement of youth, and they especially leave out marginalized youth, such as racial and ethnic minority, LGBTQIA+, and socio-economically disadvantaged young people. For example, my ongoing systematic review shows that most studies do not involve young people in the development of digital mental health apps.

I am an Assistant Professor at Delft University of Technology in the Netherlands, where I direct the RISE Group (Research on Inclusive Solutions and Empowerment in Digital Health). My research focuses on the potential and pitfalls of AI-based digital mental health tools, such as apps and chatbots. My group particularly studies how we can ensure that emerging technologies are designed with ethics and well-being in mind, benefiting everyone.


What Young People Are Telling Us (Preliminary Findings From Our Project)

Our project involved six co-design workshops with youth ages 12–25 involved in professional youth work in the Netherlands. Together we explored their experiences with AI, mental health, and digital tools. Had they ever used AI chatbots? Did they trust them? What would they want from an AI-based mental well-being chatbot?

The insights were valuable and sobering. Overall, youth were skeptical about using AI chatbots for psychological therapy. They saw chatbots as tools for feeling better, for example by finding information, suggesting fun activities, or teaching new skills; when it came to more serious mental health issues, they expressed a clear preference for engaging with real people. Still, they saw mental health apps, with or without chatbots, as most needed for combating loneliness, coping with financial stress, and supporting self-development. They also worried that young people, especially those who are lonely, may become dependent on chatbots and even more addicted to their phones.

Further, design matters. If an AI-based chatbot were to be used in a well-being app, young people wanted the interaction to feel natural, for example with short texts and a professional tone. They told us they want chatbots to give advice, encourage reflection, and adapt to their personality. Trust also plays a role: they are wary of sharing private information. Young people agreed that developers must include them in the design process, or AI for mental well-being will not work for them.

Based on their input, we developed a first prototype of an AI-based well-being chatbot. Along the way, we learned not only about what young people want, but also about the power and challenges of participatory research with young people.


The Gaps We Must Address

I would like to highlight several priorities for research and development of AI for youth mental well-being. First, we need to co-design these tools with young people from diverse backgrounds. That includes using participatory approaches, such as Participatory Action Research, where youth are equal collaborators with researchers, or Value Sensitive Design, a method for translating youth’s abstract core values, such as well-being and human support, into concrete design features.

Second, we need more experimental research to understand the value and risks of AI chatbots for young people. Some clinical trials have emerged, but, as far as I know, they have not yet been conducted with adolescents and children.

Further, while responsible AI frameworks for mental health have been developed to ensure technologies uphold bioethical principles, they often do not focus specifically on youth. AI systems are impacting young people without being designed with the needs, wishes, and well-being of young people from diverse backgrounds in mind. Global policy and industry efforts have yet to fill this gap.

This is what I will work on during my upcoming Harkness Fellowship in the United States. I will spend 12 months at Stanford University and the social innovation lab Hopelab, where I will collaborate with researchers, youth advocates, and technology developers at the forefront of digital mental health innovation.


What Is the Harkness Fellowship?

The Harkness Fellowships in Health Care Policy and Practice provide a unique leadership development experience for mid-career health policy researchers and professionals from Australia, Canada, France, Germany, the Netherlands, New Zealand, Norway, Singapore, and the United Kingdom. Fellows spend a fully funded year conducting internationally comparative research in the United States.


What I Hope to Accomplish

Through my Harkness research, I aim to build a clearer picture of what responsible AI in youth mental health looks like in practice. I will explore how companies and investors approach ethical decision-making around these technologies, how youth experience these tools, and what youth view as key ethical priorities. From there, I will develop concrete policy recommendations that can support responsible AI development in both the U.S. and Europe.


The views shared are those of the authors and do not necessarily reflect those of eMHIC. This content is for general informational or educational purposes only and is not a substitute for professional mental health advice, diagnosis, or treatment. If you are experiencing a mental health crisis, please immediately contact local emergency services or a crisis support service in your area.