State AI Laws Are Now Setting the Rules for Healthcare AI: Here’s What Colorado, Texas, Utah, Nevada, California and Others Actually Require

Overview

U.S. states have moved from exploratory AI bills to enforceable rules. In 2024–2025, several states enacted laws that directly affect healthcare AI: comprehensive risk-management statutes (Colorado, Texas), sector-specific bans and guardrails (Utah, Nevada, Illinois), and patient-facing disclosure mandates (California). If you build, validate, buy, or deploy AI in clinical workflows, this patchwork now determines what is permissible, how you document it, and how you monitor it post-deployment. The bottom line: treat state AI compliance like HIPAA—foundational, not optional.

What’s Happening

  • From principles to obligations. Colorado’s AI Act (CAIA), enacted in 2024, created the first comprehensive duty-of-care regime for “high-risk” AI systems, with developer/deployer obligations, risk assessments, notices, and appeal rights. Texas followed in 2025 with a cross-sector statute that ties its safe harbor to NIST AI RMF practices.

  • Health-specific guardrails. States have zeroed in on high-risk clinical exposure points—mental-health chatbots, insurer utilization review, and patient communications—imposing outright prohibitions or strict conditions.

  • Consumer-facing transparency. California now requires disclosure when generative AI drafts patient communications, and New York added guardrails for AI companions (safety checks, regular disclosures).

  • Government use and inventories. Connecticut’s 2023 law requires impact assessments before state agencies deploy AI; other states expanded public-sector inventories and disclaimers.

  • Momentum continues. New York’s broad “RAISE Act” advanced in 2025 (status: pending governor action at time of writing), while several states proposed insurer-AI limits similar to California’s insurance-specific rules.

This is not theoretical. These laws carry civil penalties, AG enforcement, and—in healthcare—regulator scrutiny where patient safety, non-discrimination, and human oversight are core.

Snapshot: Key State AI Laws Shaping Healthcare (2024–2025)

Use this as a working map for product, compliance, and clinical leadership.

  • Colorado – SB24-205 (Colorado AI Act) (effective Feb 1, 2026): Risk-based duties for “high-risk” AI; requires risk management, disclosures, adverse-action notices, and a path to human appeal.

    • Implication: expect algorithmic discrimination controls, documentation, and post-deployment monitoring for any consequential clinical decision support.

  • Texas – HB 149 (Responsible AI Governance Act) (signed 2025; effective Jan 1, 2026): Establishes a safe harbor aligned to NIST AI RMF and mandates documentation on inquiry; prohibits unfair/discriminatory AI practices.

    • Implication: formalize an AI risk program to qualify for safe harbor.

  • Utah – HB 452 (2025) + AIPA updates (SB 226/SB 332) (effective May 7, 2025): Regulates mental-health chatbots; bans selling/sharing identifiable mental-health inputs, adds disclosures for regulated services, and creates rebuttable presumptions for compliant policies.

    • Implication: mental health features must include strict data handling and safety policies; disclosures are not optional.

  • Nevada – AB 406 (2025) (effective Jul 1, 2025): Prohibits offering AI systems that provide professional mental/behavioral healthcare and bars licensed professionals from using AI to deliver therapy (administrative use allowed).

    • Implication: no direct AI therapy; clinical tools must be clinician-mediated.

  • Illinois – HB 1806 (WOPR Act, 2025) (signed Aug 4, 2025): Prohibits AI therapy and bars licensed professionals from using AI to make independent therapeutic decisions or directly interact with clients in therapeutic communication; IDFPR can levy penalties.

    • Implication: wellness chatbots are distinct from therapy; marketing and product claims are now high-risk.

  • California – AB 3030 (2024): Requires disclosure when patient-facing communications are generated by AI (exempt when a licensed or certified provider reads and reviews the communication).

    • Implication: build provenance flags into patient messaging and portals.

  • California – SB 1120 (2024) (“Physicians Make Decisions Act”): Restricts health plan utilization-review AI from supplanting provider judgment or discriminating; mandates processes and documentation.

    • Implication: payer-facing AI must preserve human primacy and fairness.

  • Tennessee – ELVIS Act (2024): Protects voice and likeness against AI impersonation.

    • Implication: patient-education, voice bots, and marketing content require rights clearances and model training provenance.

  • New York – AI Companions & Algorithmic Pricing (2025): Requires safety measures and regular disclosures for AI companions (effective Nov 5, 2025); separate pricing transparency took effect Jul 8, 2025.

    • Implication: mental-health-adjacent “companion” features must detect self-harm signals and clearly disclose non-human status.

  • Connecticut – SB 1103 (2023): State agencies must conduct AI impact assessments before deploying AI and avoid disparate impacts. 2024–2025 business-regulatory bills stalled, but deepfake and education funding measures progressed.

    • Implication: expect revived comprehensive proposals; government procurement may require assessments.

  • New York – ADS Inventory (2025): Requires agencies to publish information on automated decision tools they use.

    • Implication: public-sector buyers will demand supplier transparency.

  • Pending/near-term: New York RAISE Act (frontier-model governance) and multiple state bills targeting health insurance utilization review AI.

    • Implication: even where not enacted, requirements are converging on human override, reasoned explanations, and bias controls.

State Rules Raise the Bar on Trustworthy AI

  • Clinical safety & liability: Mental-health chatbots and insurer-review AI are now regulated. Violations risk state enforcement, payer contract exposure, and malpractice-adjacent claims if tools “supplant” clinicians.

  • Regulatory risk: State AGs can compel documentation, risk assessments, and training/validation evidence—before or after harm.

  • Equity & bias: Statutes explicitly target “algorithmic discrimination.” Subpopulation performance gaps become compliance risks, not just quality issues.

  • Trust & disclosure: Patient-facing AI must be labeled. Companion apps must detect suicidal ideation and route appropriately.

  • Business impact: Cross-state variation raises delivery costs. Teams that align to the NIST AI RMF, FDA Good Machine Learning Practice (GMLP), and predetermined change control plan (PCCP) patterns will scale faster across jurisdictions.

Strategic Recommendations for AI Teams

  • Adopt a single, auditable risk program mapped to NIST AI RMF, ISO/IEC 42001, FDA GMLP, and PCCP patterns. Make it your master template for state add-ons (CO, TX).

  • Classify your models by clinical consequence. Flag any system that can influence utilization review, diagnosis/triage, medication, or mental-health support as “high-risk.”

  • Document subpopulation performance (by age, sex, race/ethnicity proxies where lawful, SDoH proxies) and justify clinical relevance. Track shifts post-deployment (a minimal reporting sketch follows this list).

  • Build human-in-the-loop by design: Ensure clinicians can override, appeal, and audit decisions. Log rationale and human involvement for each consequential decision (see the decision-log sketch after this list).

  • Disclose patient-facing AI automatically (portals, SMS, email). Add provenance markers in content and persist them in the EHR/CRM.

  • Harden mental-health features: If any UX resembles “companion” or “emotional support,” implement crisis detection and escalation, and block therapy-like outputs in restricted states.

  • Prepare insurer-AI controls: If you sell to payers, show that your tool does not supplant provider judgment, avoids discriminatory impact, and supports individualized clinical context.

  • Prove your training/validation chain: Maintain data lineage, consent/licensing where applicable (voice/likeness), and an immutable audit trail for all versions.

  • Run red-team and bias audits quarterly for high-risk systems; tie findings to corrective actions and model cards.

  • Maintain a state law heatmap (owner: Regulatory or Compliance) with binding requirements, effective dates, and product implications; a machine-readable slice is sketched below.
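
To make the subpopulation-performance recommendation concrete, here is a minimal sketch in Python. It assumes a pandas validation DataFrame with illustrative column names (y_true, y_score, age_band, sex) and a binary classifier scored with AUROC; the attributes, metric, and gap tolerance are placeholders that should come from your own validation plan and risk assessment, not from this example.

    # Minimal sketch: per-subgroup performance documentation for a binary
    # classifier. Column names (y_true, y_score, age_band, sex) are placeholders;
    # the metric and gap tolerance should come from your validation plan.
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    GAP_ALERT = 0.05  # illustrative tolerance for AUROC gap vs. overall

    def subgroup_report(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
        """Compute AUROC per subgroup and flag gaps against overall performance."""
        overall = roc_auc_score(df["y_true"], df["y_score"])
        rows = []
        for col in group_cols:
            for level, part in df.groupby(col):
                if part["y_true"].nunique() < 2:
                    continue  # AUROC is undefined for single-class strata
                auc = roc_auc_score(part["y_true"], part["y_score"])
                rows.append({
                    "attribute": col,
                    "level": level,
                    "n": len(part),
                    "auroc": round(auc, 3),
                    "gap_vs_overall": round(overall - auc, 3),
                    "flag": (overall - auc) > GAP_ALERT,
                })
        return pd.DataFrame(rows)

    # Example: subgroup_report(validation_df, ["age_band", "sex"]) produces a
    # table you can version with the model card and re-run on post-deployment data.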
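
For the human-in-the-loop and training/validation-chain items, the decision-log sketch below shows one way to capture consequential decisions with clinician involvement and a tamper-evident hash chain. The field names, in-memory list, and SHA-256 chaining are assumptions for illustration; a production system would persist records to a write-once store and tie them to the EHR audit log.

    # Minimal sketch: append-only log of consequential AI-assisted decisions with
    # a tamper-evident hash chain. Field names and the in-memory list are
    # illustrative; a real deployment would persist to a write-once store.
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        case_id: str
        model_version: str
        model_output: str       # e.g., "flag for triage"
        clinician_id: str
        clinician_action: str   # "accepted" | "overridden" | "escalated"
        rationale: str          # justification captured at decision time
        timestamp: str = ""     # filled in when the record is appended
        prev_hash: str = ""     # links each record to the previous one

    def append_record(log: list[dict], record: DecisionRecord) -> None:
        """Stamp, chain, and append a decision record."""
        record.prev_hash = log[-1]["hash"] if log else "GENESIS"
        record.timestamp = datetime.now(timezone.utc).isoformat()
        body = asdict(record)
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        log.append(body)

    # Example:
    # log: list[dict] = []
    # append_record(log, DecisionRecord(
    #     case_id="case-001", model_version="triage-risk-2.3.1",
    #     model_output="flag for triage", clinician_id="clin-042",
    #     clinician_action="accepted", rationale="consistent with vitals trend"))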
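
Finally, the state law heatmap is most useful when product code can query it at runtime. The sketch below encodes a small slice of the statutes summarized earlier (CA AB 3030 disclosure, NV AB 406 and IL HB 1806 therapy prohibitions); the flag names, dates, and fail-closed default are illustrative, not legal advice, and the authoritative table should be owned and versioned by Regulatory or Compliance.

    # Minimal, illustrative slice of the state-law heatmap; not legal advice.
    # Flags and effective dates are assumptions drawn from the summaries above
    # and must be validated and maintained by Regulatory/Compliance.
    from datetime import date

    STATE_POLICIES = {
        "CA": {"patient_facing_ai_disclosure": True,   # AB 3030
               "ai_therapy_prohibited": False,
               "effective": date(2025, 1, 1)},
        "NV": {"patient_facing_ai_disclosure": False,
               "ai_therapy_prohibited": True,          # AB 406
               "effective": date(2025, 7, 1)},
        "IL": {"patient_facing_ai_disclosure": False,
               "ai_therapy_prohibited": True,          # HB 1806 (WOPR Act)
               "effective": date(2025, 8, 4)},
    }

    def guardrails(state: str, today: date) -> dict:
        """Return the guardrails to enforce for a deployment in `state`."""
        policy = STATE_POLICIES.get(state)
        if policy is None:
            # Fail closed: unmapped jurisdictions get the most restrictive
            # posture until Compliance reviews them.
            return {"disclose": True, "block_therapy_output": True}
        in_force = today >= policy["effective"]
        return {
            "disclose": in_force and policy["patient_facing_ai_disclosure"],
            "block_therapy_output": in_force and policy["ai_therapy_prohibited"],
        }

    # Example: guardrails("NV", date(2025, 9, 1))
    # -> {"disclose": False, "block_therapy_output": True}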

How Gesund.ai Helps

  • Bias & Fairness Validation: Quantifies algorithmic-discrimination risk with subpopulation performance tracking and drift alerts—core to CO CAIA, payer-AI fairness, and equity mandates.

  • Explainability Workflows: Case-level and cohort-level explanations support clinician override, payer review, and appeal rights.

  • Regulatory-Grade Documentation: Auto-generates auditable model cards, risk assessments, change logs, and PCCP-style documentation to satisfy state inquiries and payer audits.

  • Audit Trails & Version Control: Immutable lineage for datasets, code, prompts, parameters, and deployments—evidence for TX safe harbor and CO high-risk obligations.

  • Human-in-the-Loop Integration: Structured review queues, double-read workflows, and decision holds to keep clinicians in charge.

  • Post-Deployment Monitoring: Continuous quality, drift, and bias monitoring with alerting and rollback; supports insurer-AI constraints and agency inventories.

  • Guardrails for Mental-Health & Companion Use Cases: Policy-based suppression of therapy-like outputs in restricted states, crisis-signal detection, and jurisdiction-aware disclosures.

Build trust the hard way—through evidence

State AI laws now set real constraints on healthcare AI. If your governance is ad hoc, you will bleed time on one-off remediations. If you operationalize a single risk program mapped to NIST/FDA and layer state deltas on top, you’ll ship safely across jurisdictions and defend your AI in front of regulators, payers, and clinicians.

Book a demo to see how Gesund.ai operationalizes state-ready AI validation and monitoring across the full lifecycle.

Note: New York’s RAISE Act passed the legislature in 2025 but was still awaiting the governor’s signature as of this writing; several insurer-AI bills in other states also remained pending.

About the Author

Enes HOSGOR

CEO at Gesund.ai

Dr. Enes Hosgor is an engineer by training and an AI entrepreneur by trade, driven to unlock scientific and technological breakthroughs; he has spent the last 10+ years building AI products and companies in high-compliance environments. After selling his first ML company, based on his Ph.D. work at Carnegie Mellon University, he joined a digital surgery company named Caresyntax to found and lead its ML division. His penchant for healthcare comes from his family of physicians, including his late father, his sister, and his wife. A former Fulbright Scholar at the University of Texas at Austin, he has published scientific work in Medical Image Analysis, the International Journal of Computer Assisted Radiology and Surgery, Nature Scientific Reports, and the British Journal of Surgery, among other peer-reviewed outlets.