The U.S. Federal Trade Commission is preparing to demand internal documents from leading AI companies, including OpenAI, Meta, and Character.AI, as it examines potential mental health risks that chatbots pose to children, according to the Wall Street Journal.
The FTC is expected to send formal letters to companies running popular chatbots. Character.AI says it has not received one but has expressed willingness to cooperate with regulators. The FTC, OpenAI, and Meta have not commented publicly.
This development follows growing concern about AI's interactions with minors. A recent Reuters investigation revealed that Meta's chatbots engaged in romantic and sexual conversations with children. In response, Meta announced new safeguards: training its AI to avoid flirtatious exchanges and discussions of self-harm, and restricting teen access to certain chatbot characters.
Consumer advocacy groups have also filed complaints, alleging that AI platforms hosting “therapy bots” risk practicing medicine without a license. In parallel, Texas Attorney General Ken Paxton launched an investigation into Meta and Character.AI, accusing them of misleading children with AI-driven mental health services and violating privacy laws.
The FTC’s review signals intensifying regulatory scrutiny as U.S. officials seek to balance innovation in AI with the safety and well-being of young people.
