The States Step In to Regulate AI: A Policy Vacuum Filled by Local Action in the United States

A political and legislative vacuum created by the US federal government’s focus on promoting AI innovation has prompted individual states to take the lead in establishing AI regulation, particularly within the critical and rapidly evolving healthcare sector.

State Regulation Targets Mental Health AI

A systematic search of state legislation on health AI within the United States from January 2023 to June 2025 revealed a dramatic surge in activity. The search identified 99 bills introduced and 16 enacted; regulation of AI use by health professionals was a key focus, and approximately half of these bills specifically targeted mental health professionals.

The primary goal of this legislation is to ensure patient safety, professional oversight, and transparency when AI tools are integrated into mental and behavioral healthcare.

Key regulatory efforts centered on the use of AI by mental health professionals include:

  • Disclosure and Transparency: Multiple states now require practitioners to inform patients when AI is being used. Utah, for example, requires disclosure when providers enable patients to interact with generative AI for medical advice, a provision highly relevant to automated or AI-powered mental health support tools.
  • Professional Oversight: Bills in states such as Massachusetts, Rhode Island, and Texas proposed that mental health professionals must obtain approval from their licensing board or another agency before using AI and, crucially, must remain involved in monitoring patient care.
  • Substitution Limits: Other legislation has sought to prevent AI from replacing human judgment. Bills adopted in Illinois and Nevada specifically limit what AI tools may do without a mental health professional’s direct involvement, underscoring the necessity of human supervision in diagnostic and treatment processes.

Broader Health Regulation and Governance

While mental health receives specific attention, state regulations are also vigorously addressing other areas of healthcare where AI presents risks:

Insurers’ Use of AI

States are heavily focused on regulating insurers’ use of algorithms in prior authorization and medical necessity reviews, an area critical to mental healthcare access. These bills aim to ensure that AI-based decisions:

  • Are based on relevant clinical information about the individual patient.
  • Do not supplant reviews by qualified medical professionals.
  • Do not harm or discriminate against patients seeking care.

Algorithmic Discrimination

Several US states introduced broad anti-discrimination regulations, often modeled after Colorado’s SB 205. These laws mandate monitoring and information disclosure requirements to detect and mitigate algorithmic bias, which is essential for ensuring equitable access to mental health treatment across different demographics.

A Critical Gap: The Call for AI Safety Assurance

The current analysis finds that the prevailing consumer protection orientation of state bills often diverts attention from safety assurance. This is a major concern, particularly in mental health where the risk of poor outcomes from misdiagnosis or inappropriate treatment is significant.

To address this, it is strongly recommended that US states mandate that healthcare organizations adopt a robust AI governance process. This governance process should ensure that AI tools are safe and effective, and must include steps to:

  • Confirm an AI tool performs well, including in patient subgroups.
  • Ensure the integration plan into the workflow minimizes risks and maximizes benefits.
  • Identify and mitigate potential ethical concerns.
  • Confirm a robust, adequately resourced plan to monitor the AI tool’s ongoing performance.

While a sensible federal scheme would better balance the goals of protecting innovation and promoting responsible use of AI across the United States, state experimentation, particularly in defining the professional use of AI in mental health, is a necessary and welcome intervention until the US Congress acts.

This article is based on an open-access journal article and is republished under the terms of the CC-BY License. Link to Original Article: https://jamanetwork.com/journals/jama-health-forum/fullarticle/2810052

About the Author

eMHIC

eMental Health International Collaborative
