FTC to Investigate AI Chatbots’ Impact on Children

The U.S. Federal Trade Commission is preparing to demand internal documents from leading AI companies, including OpenAI, Meta, and Character.AI, as it examines potential mental health risks that chatbots pose to children, according to the Wall Street Journal.

The FTC is expected to send formal letters to companies running popular chatbots. Character.AI says it has not yet received one but has expressed willingness to cooperate with regulators; the FTC, OpenAI, and Meta have not commented publicly.

This development follows growing concern about AI’s interactions with minors. A recent Reuters investigation revealed that Meta’s chatbots had engaged in romantic and sexual conversations with children. In response, Meta announced new safeguards: training its AI to avoid flirtatious exchanges and discussions of self-harm, and restricting teen access to certain chatbot characters.

Consumer advocacy groups have also filed complaints, alleging that AI platforms hosting “therapy bots” risk practicing medicine without a license. In parallel, Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI, accusing them of misleading children with AI-driven mental health services and violating privacy laws.

The FTC’s review signals intensifying regulatory scrutiny as U.S. officials seek to balance innovation in AI with the safety and well-being of young people.

Read the original article on the Wall Street Journal

About the Author

eMHIC (eMental Health International Collaborative)
