A new study published in Frontiers in Digital Health explores how mental health professionals view the growing use of generative AI chatbots in clinical settings.
As AI tools like ChatGPT become increasingly integrated into everyday life, understanding how they might support or complicate mental health care is essential.
What Was the Study About?
Researchers in Australia interviewed 23 mental health clinicians, including psychologists, psychiatrists, and social workers, to understand their perspectives on the risks and benefits of using generative AI chatbots in mental health care. Clinicians were also shown a demonstration of a chatbot and asked how it influenced their views.
Key Takeaways
Potential Benefits
- Offers 24/7 support, which may increase access for people in remote or underserved areas
- Can deliver multilingual services, supporting culturally and linguistically diverse users
- May help clients stay engaged between sessions with reminders and structured tasks
- Could support early intervention and routine mental health needs
- Might improve affordability and reduce pressure on clinical services
- Seen as potentially engaging for youth or those uncomfortable with face-to-face interaction
Risks and Concerns
- Lack of regulation and accountability in critical situations such as crises
- Concerns about privacy, data protection, and ethical handling of sensitive information
- Inability to understand emotional nuance, tone, and non-verbal cues
- Risk of inaccurate advice or unsafe responses for high-risk individuals
- Possible bias or harm if AI is trained on flawed or limited datasets
- Over-reliance on chatbots may reduce human connection in care
Clinicians’ Overall Views
Most clinicians held mixed or cautious views. While some saw potential in limited, low-risk scenarios such as administrative support or basic education, many felt that the risks outweighed the benefits in more complex cases.
The chatbot demonstration reinforced these mixed perspectives. Some clinicians were impressed by its efficiency in summarising notes, while others felt it responded inadequately in emotionally sensitive contexts.
What Needs to Happen Next?
The study highlights a need for:
- Strong regulatory frameworks and ethical guidelines
- Greater clinician training and hands-on exposure to AI tools
- Ongoing research into AI safety, effectiveness, and real-world application
- A balanced approach, where AI enhances rather than replaces human-delivered care
Read the full study: Frontiers in Digital Health