As AI chatbots become increasingly integrated into the daily lives of children and adolescents, eMHIC emphasizes the critical need for thoughtful implementation and robust safeguards. A growing body of research highlights both the benefits these tools offer and the significant risks they pose for young people’s mental health and well-being.
The American Psychological Association (APA) Health Advisory: Protecting Adolescent AI Users
The APA’s “Artificial Intelligence and Adolescent Well-being: An APA Health Advisory” (June 3, 2025) examines the nuanced and complex effects of AI on adolescents, which the advisory defines as ages 10-25. The advisory urges developers to prioritize features that prevent exploitation, manipulation, and the erosion of real-world relationships. Key recommendations include:
- Healthy Boundaries with Simulated Relationships: AI systems that mimic human companionship should include safeguards, such as clear notifications that the user is interacting with a bot and resources promoting human contact, especially when distress is expressed (a minimal code sketch of such a safeguard follows this list). The advisory notes that adolescents are less likely than adults to question the accuracy of AI-generated information.
- Age-Appropriate Design: AI systems for youth should ship with age-appropriate default privacy settings, content limits, and minimal persuasive design. Transparency and explainability should be prioritized so that young users can understand how AI works.
- Encouraging Healthy Development: AI can assist with learning tasks such as brainstorming, organizing, summarizing, and synthesizing information, but students must be aware of AI’s limitations.
- Limiting Harmful Content Exposure: Developers should implement protections against inappropriate, dangerous, illegal, biased, or discriminatory content.
- Protecting Data Privacy and Likenesses: Adolescents’ data and likenesses should be protected, with limits on the use of data for targeted advertising or sale to third parties. Unauthorized use of images or voices (e.g., deepfakes) poses severe psychological risks.
- Human Oversight and Support: Mechanisms for human intervention should be readily available so that users can report concerns or seek help.
- Rigorous Testing: AI systems require thorough, continuous testing with diverse groups of young people to mitigate unintended negative impacts.
- Comprehensive AI Literacy Education: The APA calls for AI literacy to be integrated into core curricula, teaching young people to understand AI’s benefits, limitations, privacy implications, and the risks of over-reliance.
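To make the disclosure and human-oversight recommendations concrete, here is a minimal illustrative sketch in Python of a chatbot wrapper that always discloses its non-human nature and routes users toward human support when distress is expressed. The `generate_reply` callable, the keyword list, and the resource messages are all hypothetical placeholders; a real system would rely on validated distress classifiers and clinically reviewed, locale-appropriate resources.

```python
DISCLOSURE = "Reminder: you are chatting with an AI, not a person."

# Illustrative placeholder only; real deployments need clinically
# reviewed, locale-appropriate crisis resources.
HUMAN_SUPPORT_MESSAGE = (
    "It sounds like you might be going through something difficult. "
    "Talking to a trusted adult, friend, or counsellor can help."
)

# Crude, incomplete keyword list standing in for a validated classifier.
DISTRESS_TERMS = {"hopeless", "self-harm", "can't go on", "want to die"}


def is_distress(message: str) -> bool:
    """Keyword check standing in for a proper distress classifier."""
    text = message.lower()
    return any(term in text for term in DISTRESS_TERMS)


def respond(message: str, generate_reply) -> str:
    """Wrap a chatbot reply with disclosure and escalation logic.

    `generate_reply` is a hypothetical callable representing whatever
    model or service actually produces the chatbot's answer.
    """
    if is_distress(message):
        # Prioritize routing toward human contact over continuing the chat.
        return f"{DISCLOSURE}\n{HUMAN_SUPPORT_MESSAGE}"
    return f"{DISCLOSURE}\n{generate_reply(message)}"


if __name__ == "__main__":
    print(respond("I feel hopeless lately", lambda m: "(model reply)"))
```

The choice to intercept before generating a reply, rather than appending resources afterward, reflects the advisory’s emphasis on promoting human contact over continued bot engagement.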
Internet Matters Research: “Me, Myself and AI”
The “Me, Myself and AI” chatbot research report by Internet Matters (published July 1, 2025) provides insight into how children use AI chatbots and the risks associated with that use. Key findings include:
- Widespread Use: Children use AI chatbots for schoolwork (47% of 15-17 year olds), for advice (nearly a quarter), and for companionship (over a third say chatting feels like talking to a friend, rising to 50% among vulnerable children).
- Vulnerability: Vulnerable children are more likely to use AI chatbots and to rely on them for companionship; 12% of all child users say they talk to chatbots because they have no one else.
- Concerns over Trust and Accuracy: 40% of children have no concerns about following advice from chatbots and 36% are uncertain, despite documented instances of misleading or inaccurate responses. Children often prefer chatbots to traditional search engines.
- Blurred Boundaries: Some children personify AI chatbots, and experts warn that growing emotional reliance is a risk.
- Parental and Educational Gaps: Although many parents are concerned about AI, only 34% have discussed with their children how to judge the truthfulness of AI content, and teaching about AI in schools is inconsistent.
The report calls for a system-wide, coordinated approach, including safety-by-design for platforms, clear government guidance, and enhanced AI and media literacy education in schools.
Insights from the Integrity Institute
The Integrity Institute also contributes to the understanding of these issues; its perspectives on safeguarding children’s and adolescents’ use of AI chatbots align with the concerns above. Its insights emphasize:
- Identifying the Core Problem: AI chatbot use by children and adolescents creates potential for over-reliance and for blurring the boundary between real and artificial relationships.
- Vulnerability of Youth: Young people, especially those experiencing mental health challenges, may be particularly susceptible to these risks because of their developmental stage and their tendency to seek emotional support from chatbots that were not designed to provide it.
- Importance of Human Connection: While AI can offer support, it must complement, rather than displace, genuine human relationships and professional mental health care.
- Ethical AI Design: AI systems should be designed with children’s developmental stages in mind, incorporating transparency, ethical data use, and mechanisms to prevent exposure to harmful content; a sketch of what age-appropriate defaults might look like follows this list.
- Empowering Users: Promoting AI literacy among youth and caregivers is crucial so that they can make informed choices and critically evaluate AI-generated content.
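As one way to picture the safety-by-design, age-appropriate defaults that both the APA advisory and the Internet Matters report call for, here is a second illustrative sketch. The configuration object, its field names, the content-rating labels, and the age threshold are all invented for illustration; actual age-assurance and content-rating schemes vary by platform and jurisdiction.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ChatbotSettings:
    """Hypothetical configuration object; field names are illustrative."""
    share_data_with_third_parties: bool
    targeted_advertising: bool
    persuasive_notifications: bool  # e.g., streaks and re-engagement nudges
    content_rating_limit: str       # highest content rating the bot may serve
    show_ai_disclosure: bool        # label every response as AI-generated


def default_settings(age: int) -> ChatbotSettings:
    """Privacy-protective defaults, strictest for minors (safety by design)."""
    minor = age < 18  # illustrative threshold; real age bands vary
    return ChatbotSettings(
        share_data_with_third_parties=False,  # never on by default
        targeted_advertising=False,           # off by default at every age
        persuasive_notifications=not minor,   # disabled entirely for minors
        content_rating_limit="teen" if minor else "general_adult",
        show_ai_disclosure=True,              # always identify AI output
    )


if __name__ == "__main__":
    print(default_settings(13))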
Access the Integrity Institute’s Google Slides presentation here
The eMental Health International Collaborative (eMHIC) remains dedicated to monitoring developments in AI chatbot technology and its impact on children and adolescents. Our commitment is to foster best practices in this crucial domain, ensuring digital tools truly support mental well-being while safeguarding young users.
If you have resources, research, or best practices related to this critical area, please do not hesitate to contact us at [email protected].
