As artificial intelligence becomes deeply embedded in healthcare delivery, clinical decision-making, administrative operations, and patient engagement, the question is no longer “should we use AI?” but “how do we use AI responsibly?”
To address this need, the American Medical Association (AMA), in partnership with Manatt Health, released a first-of-its-kind AI Governance Framework for Healthcare Providers—a free, interactive toolkit designed to help health systems, clinics, and provider groups implement AI in a safe, ethical, and operationally sound manner.
This marks a pivotal moment for healthcare: a professional society is stepping in to lead the creation of AI governance standards—not just for developers, but for end users.
Most guidance to date has focused on how to build safe models (e.g., FDA's Good Machine Learning Practice (GMLP) principles and predetermined change control plans (PCCPs)); this new framework focuses on how healthcare organizations can govern AI at the point of use. It recognizes that even the most rigorously validated algorithm can become dangerous if deployed without proper oversight, clinical integration, and transparency.
The AMA’s emphasis is on augmented intelligence—that is, AI that enhances human capabilities rather than replaces them. This foundational value underpins the framework’s structure, which is divided into five core pillars:
First, establish a cross-functional AI governance body that includes clinicians, legal, IT, ethics, quality, and patient representatives. This group:
Sets AI strategy and policy
Approves and prioritizes AI projects
Ensures ongoing accountability and review
Why it matters: Without centralized oversight, AI projects risk becoming fragmented, unvetted, or misaligned with clinical priorities.
Second, develop lifecycle policies for:
AI procurement and onboarding
Risk stratification of models (low-risk vs. high-risk)
Human-in-the-loop workflows
Documentation and version control
Decommissioning or retraining AI models
Why it matters: Just as with medical devices and pharmaceuticals, AI needs lifecycle policies in place to ensure safety and efficacy over time.
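The intake and risk-stratification policies above can be made concrete in code. The sketch below is purely illustrative (it is not part of the AMA-Manatt toolkit); the record fields and the high-risk rule are assumptions that a governance body would tailor to its own policy:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass
class ModelIntakeRecord:
    """Minimal intake record captured when a model is onboarded."""
    name: str
    version: str
    intended_use: str
    influences_diagnosis_or_treatment: bool  # touches clinical decisions?
    fully_automated: bool                    # acts without human sign-off?
    onboarded: date = field(default_factory=date.today)

    def risk_tier(self) -> RiskTier:
        # Illustrative rule: anything that shapes clinical decisions, or
        # runs without a human in the loop, is treated as high-risk and
        # routed to deeper review by the governance body.
        if self.influences_diagnosis_or_treatment or self.fully_automated:
            return RiskTier.HIGH
        return RiskTier.LOW


scheduler = ModelIntakeRecord(
    name="no-show-predictor", version="1.2.0",
    intended_use="Flag likely appointment no-shows for outreach staff",
    influences_diagnosis_or_treatment=False, fully_automated=True)
print(scheduler.risk_tier())  # RiskTier.HIGH: it acts with no human review
```

Even a simple two-tier rule like this gives the governance body a consistent trigger for deciding which models get lightweight review and which get full clinical validation.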
Third, ensure that AI tools:
Are integrated into clinical workflows
Have clear escalation pathways and override mechanisms
Do not replace clinician judgment
Include post-implementation monitoring
Why it matters: Models often perform well in labs but break in real clinical contexts. Governance must extend beyond deployment into real-world use.
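The escalation pathways, override mechanisms, and human checkpoints described above can be sketched as a small workflow. This is a minimal illustration under assumed names and thresholds, not a prescribed implementation:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Recommendation:
    """One model suggestion awaiting explicit clinician sign-off."""
    patient_id: str
    score: float                  # e.g., predicted deterioration risk
    suggested_action: str
    accepted: Optional[bool] = None
    override_reason: str = ""


def review(rec: Recommendation,
           clinician_accepts: Callable[[Recommendation], bool],
           escalate: Callable[[Recommendation], None],
           escalation_threshold: float = 0.9) -> Recommendation:
    """The model proposes, the clinician disposes; nothing auto-executes."""
    if rec.score >= escalation_threshold:
        escalate(rec)  # escalation pathway, e.g., page rapid-response team
    rec.accepted = clinician_accepts(rec)  # mandatory human checkpoint
    if not rec.accepted:
        rec.override_reason = "clinician override"  # keep overrides auditable
    return rec


paged = []
rec = review(
    Recommendation(patient_id="p-17", score=0.95,
                   suggested_action="start sepsis bundle"),
    clinician_accepts=lambda r: False,          # clinician disagrees
    escalate=lambda r: paged.append(r.patient_id))
print(rec.accepted, paged)  # False ['p-17']
```

The design point is that the model's output is a proposal object, not an action: escalation and acceptance are separate, explicit steps, and every override leaves a record.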
Fourth, organizations should:
Disclose AI use to patients in understandable ways
Provide options for human review
Train staff on how to explain AI’s role in care decisions
Why it matters: Trust in AI isn’t built through performance alone—it’s built through communication and consent.
Fifth, evaluate all AI systems for:
Performance across subpopulations
Sources of bias in training data
Equity implications in deployment (e.g., access to AI-driven services)
Why it matters: AI can perpetuate and amplify disparities unless bias is proactively monitored and mitigated.
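Subgroup evaluation does not have to be elaborate to be useful. A minimal sketch in plain Python (the groups and labels below are invented for illustration):

```python
from collections import defaultdict


def subgroup_accuracy(records):
    """Accuracy per subgroup; records are (group, y_true, y_pred) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}


records = [
    ("18-40", 1, 1), ("18-40", 0, 0), ("18-40", 1, 1), ("18-40", 0, 1),
    ("65+",   1, 0), ("65+",   0, 0), ("65+",   1, 0), ("65+",   1, 1),
]
print(subgroup_accuracy(records))  # {'18-40': 0.75, '65+': 0.5}
```

A gap like the one above (0.75 vs. 0.50) is exactly the kind of signal that should trigger a bias review before, and drift monitoring after, deployment; the same pattern extends to sensitivity, specificity, and calibration per subgroup.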
The AMA-Manatt toolkit includes:
An interactive self-assessment tool for governance maturity
Policy templates and examples
Model intake and risk stratification forms
Sample AI use case evaluation rubrics
Playbooks for clinician training and patient disclosure
It’s designed to be actionable for real-world use—whether you’re a large academic health center or a mid-sized group practice evaluating your first clinical algorithm.
At Gesund.ai, we enable healthcare organizations to operationalize AI governance with infrastructure—not just documents.
Here’s how we directly support the AMA-Manatt framework:
Audit trails & model versioning: Track who approved each model, what data it used, and when/why it was modified.
Clinician-in-the-loop workflows: Embed human review checkpoints and escalation pathways directly into AI use.
Bias and subgroup analysis: Validate models across age, race, gender, and geography, and monitor for drift post-deployment.
Patient transparency tooling: Support AI tagging, disclosure scripting, and log-level traceability for human-readable outputs.
Post-market surveillance: Continuously monitor models, detect performance degradation, and trigger retraining or decommissioning workflows.
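As one concrete illustration of the audit-trail idea above, here is a minimal hash-chained, append-only log. It is a sketch of the general technique for tamper-evident records, not Gesund.ai's actual implementation:

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    """Append-only audit trail: each entry embeds the hash of the
    previous one, so any edit to history is detectable at review time."""

    def __init__(self):
        self.entries = []

    def record(self, model, version, actor, action, detail=""):
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "model": model, "version": version,
            "actor": actor, "action": action, "detail": detail,
            "prev": prev_hash,
        }
        # Hash the entry body (which includes the previous hash).
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute every hash and check the chain is unbroken."""
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Because each entry answers who, what, which version, and when, a log like this directly supports the "who approved each model and when/why it was modified" questions a governance body, or an external reviewer, will ask.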
In short: we make AI governance executable, repeatable, and ready for review.
Every clinician knows how to review a new drug: evidence, side effects, dosage, interactions. We now need the same mindset for AI—because models are becoming core to care delivery.
The AMA-Manatt toolkit gives us the map. Gesund.ai gives you the tools to make it work in practice.
→ See how Gesund.ai can help your organization govern AI with confidence: https://gesund.ai/get-in-touch-gesund
Sources:
American Medical Association. Governance for Augmented Intelligence: Establish a Framework for the Safe, Effective, and Equitable Use of AI in Health Care. https://edhub.ama-assn.org/steps-forward/module/2833560
Manatt Health. An AI Governance Framework for Providers from the AMA and Manatt. https://www.manatt.com/insights/newsletters/health-highlights/an-ai-governance-framework-for-providers-from-the-ama-and-manatt
American Medical Association. Advancing Health Care AI Through Ethics, Evidence, and Equity. https://www.ama-assn.org/practice-management/digital-health/advancing-health-care-ai-through-ethics-evidence-and-equity