Imagine having a compassionate, non-judgmental therapist available 24/7 at your fingertips. This is the emerging promise of generative AI chatbots. But are people actually using ChatGPT and similar tools for mental health support? And if so, what is the experience like? Researchers at King’s College London and Harvard Medical School set out to explore these questions by interviewing 19 participants about their real-life experiences of using generative AI chatbots, such as ChatGPT and Pi, to work on their mental health. Their study has recently been posted as a preprint, pending peer review.

Key Findings

The findings revealed several positive impacts of these new AI tools, including improved mood, reduced anxiety, healing from trauma and loss, and enhanced relationships; some participants described these effects as life-changing. Using reflexive thematic analysis, the researchers developed four main themes:

  • Emotional Sanctuary: Participants valued the safe, non-judgmental, always-available space the chatbots provided, where they could express their emotions freely.
  • Insightful Guidance: Many participants found the advice given by the chatbots to be insightful and helpful, especially in the realm of relationships.
  • Joy of Connection: Participants reported a sense of connection and companionship, leading to high levels of engagement.
  • Comparisons with Human Therapy: Some participants found AI a helpful supplement to traditional therapy, while others noted its limitations in replicating the empathy and depth of human interaction.

Potential and Challenges

Generative AI chatbots are built on a different technology from the rule-based chatbots that have become prevalent in digital mental health in recent years: they are trained on vast amounts of data rather than programmed with scripted responses to user inputs. This can give rise to novel capabilities, reflected in some participants’ experiences, such as the sense of being deeply understood, or the ability to work on mental health in flexible and creative ways, including role-play, imagery, and fiction.
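
To make this distinction concrete, here is a minimal sketch in Python. It is illustrative only: `SCRIPTED_RESPONSES`, `call_language_model`, and the other names are hypothetical stand-ins, and the generative side assumes some language-model service behind the placeholder rather than any specific product’s API.

```python
# Illustrative contrast between the two chatbot designs discussed above.

SCRIPTED_RESPONSES = {
    "anxious": "Let's try a breathing exercise together.",
    "lonely": "I'm sorry you're feeling lonely. Would you like to talk about it?",
}

def rule_based_reply(user_message: str) -> str:
    """Rule-based chatbot: matches keywords to pre-written scripts."""
    for keyword, script in SCRIPTED_RESPONSES.items():
        if keyword in user_message.lower():
            return script
    return "I'm not sure I understand. Could you rephrase that?"

def call_language_model(prompt: str) -> str:
    """Hypothetical stand-in: a real system would query an LLM service here."""
    return f"(model-generated reply conditioned on: {prompt[-60:]!r})"

def generative_reply(user_message: str, history: list[str]) -> str:
    """Generative chatbot: composes a novel reply conditioned on the whole
    conversation, drawing on patterns learned from large-scale training data
    rather than a fixed script."""
    prompt = "\n".join(history + [user_message])
    return call_language_model(prompt)
```

The scripted bot can only ever return one of its pre-written lines, whereas the generative bot composes a fresh response from the full conversation, which is what opens the door to the flexible, creative uses participants described.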

However, the study also identified several challenges and areas for improvement:

  • Safety Guardrails: Participants reported frustration with the AI’s safety protocols, which sometimes interrupted the flow of conversation and made users feel unheard during critical moments. 
  • Human-like Memory: The chatbots’ current inability to remember past interactions and build a coherent understanding of the user over time was a significant drawback (see the sketch after this list for one commonly discussed workaround).
  • Leading the Therapeutic Process: Users expressed a need for the chatbots to take a more proactive role in guiding the therapeutic process, helping them stay accountable and providing structured support over time.
  • Trust and Accuracy: While some users trusted the chatbot’s advice, others were skeptical about its accuracy and reliability. Ensuring that AI chatbots provide correct and beneficial guidance is essential for their credibility and effectiveness.
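
On the memory point above: one commonly discussed approach is to persist the conversation history and prepend it to each new prompt, so the model can build on past sessions. The Python sketch below illustrates that idea under stated assumptions; the file name and helper functions are hypothetical, not a description of how any current chatbot actually works.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("chat_memory.json")  # hypothetical storage location

def load_memory() -> list[dict]:
    """Load prior exchanges so the chatbot can 'remember' past sessions."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def save_exchange(user_message: str, bot_reply: str) -> None:
    """Append the latest exchange to persistent memory."""
    memory = load_memory()
    memory.append({"user": user_message, "bot": bot_reply})
    MEMORY_FILE.write_text(json.dumps(memory))

def build_prompt(user_message: str) -> str:
    """Prepend remembered exchanges to the new message, giving the model a
    running history of its relationship with this user."""
    history = load_memory()
    lines = [f"User: {m['user']}\nBot: {m['bot']}" for m in history]
    lines.append(f"User: {user_message}\nBot:")
    return "\n".join(lines)
```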

Future Directions

The study suggests that people may already be receiving meaningful mental health support from widely available, consumer-focused generative AI chatbots, which underscores the need for further research to better understand their safety and effectiveness.

Developers have an important role to play in helping these tools reach their full potential, by enhancing the listening skills, memory, and interactive capabilities of AI chatbots, and by making them accessible to a broader audience.

In conclusion, generative AI chatbots represent a promising avenue for digital mental health support. With continued research and development, these tools could become a vital part of the mental health care ecosystem, offering scalable and effective support to those in need.