Mind has announced a year-long global inquiry into the intersection of artificial intelligence and mental health, following a Guardian investigation that found “very dangerous” mental health advice appearing in Google’s AI Overviews.
The investigation found that AI-generated summaries, displayed above traditional search results and viewed by billions of users each month, had delivered inaccurate or misleading health information, including content relating to psychosis, eating disorders, cancer, liver disease and women’s health. Some experts described the mental health guidance as incorrect, harmful, or capable of deterring people from seeking help.
In response, Mind will convene leading clinicians, people with lived experience, policymakers, health providers and technology companies to examine both the risks and opportunities of AI in mental health. The inquiry aims to identify appropriate safeguards, standards and regulatory approaches to ensure innovation does not compromise safety or wellbeing.
Dr Sarah Hughes, Chief Executive of Mind, emphasised that AI has significant potential to widen access to support and strengthen public services, provided it is developed and deployed responsibly. The inquiry will gather evidence, create space for lived experience to shape digital design, and seek to define what a safer digital mental health ecosystem should look like.
The initiative is described as the first of its kind globally and reflects growing concern about the influence of generative AI tools on public understanding of health information.
