Balancing Risks and Benefits: Clinicians’ Perspectives on the Use of Generative AI Chatbots in Mental Healthcare

A new study published in Frontiers in Digital Health explores how mental health professionals view the growing use of generative AI chatbots in clinical settings.

As AI tools like ChatGPT become increasingly integrated into everyday life, understanding how they might support or complicate mental health care is essential.

What Was the Study About?

Researchers in Australia interviewed 23 mental health clinicians, including psychologists, psychiatrists, and social workers, to understand their perspectives on the risks and benefits of using generative AI chatbots in mental healthcare. Clinicians were also shown a demonstration of a chatbot and asked how it influenced their views.

Key Takeaways

Potential Benefits

  • Offers 24/7 support, which may increase access for people in remote or underserved areas

  • Can deliver multilingual services, supporting culturally and linguistically diverse users

  • May help clients stay engaged between sessions with reminders and structured tasks

  • Could support early intervention and routine mental health needs

  • Might improve affordability and reduce pressure on clinical services

  • Seen as potentially engaging for youth or those uncomfortable with face-to-face interaction

Risks and Concerns

  • Lack of regulation and accountability in critical situations such as crises

  • Concerns about privacy, data protection, and ethical handling of sensitive information

  • Inability to understand emotional nuance, tone, and non-verbal cues

  • Risk of inaccurate advice or unsafe responses for high-risk individuals

  • Possible bias or harm if AI is trained on flawed or limited datasets

  • Over-reliance on chatbots may reduce human connection in care

Clinicians’ Overall Views

Most clinicians held mixed or cautious views. While some saw potential in limited, low-risk scenarios such as administrative support or basic education, many felt that the risks outweighed the benefits in more complex cases.

The chatbot demonstration reinforced these mixed perspectives. Some clinicians were impressed by its efficiency in summarising notes, but others felt it responded inadequately in emotionally sensitive contexts.

What Needs to Happen Next?

The study highlights a need for:

  • Strong regulatory frameworks and ethical guidelines

  • Greater clinician training and hands-on exposure to AI tools

  • Ongoing research into AI safety, effectiveness, and real-world application

  • A balanced approach in which AI enhances, rather than replaces, human-delivered care


Read the full study: Frontiers in Digital Health

References

Hipgrave L, Goldie J, Dennis S, Coleman A. Balancing risks and benefits: clinicians’ perspectives on the use of generative AI chatbots in mental healthcare. Frontiers in Digital Health. 2025;7:1606291.
