The AMA & Manatt AI Governance Toolkit: A Blueprint for Responsible AI in Healthcare

Overview

As artificial intelligence becomes deeply embedded in healthcare delivery, clinical decision-making, administrative operations, and patient engagement, the question is no longer “should we use AI?” but “how do we use AI responsibly?”

To address this need, the American Medical Association (AMA), in partnership with Manatt Health, released a first-of-its-kind AI Governance Framework for Healthcare Providers—a free, interactive toolkit designed to help health systems, clinics, and provider groups implement AI in a safe, ethical, and operationally sound manner.

This marks a pivotal moment for healthcare: a professional society is stepping in to lead the creation of AI governance standards—not just for developers, but for end users.

A Shift from Technical to Organizational Readiness

While most guidance in the past focused on how to build safe models (e.g., Good Machine Learning Practice [GMLP] and Predetermined Change Control Plans [PCCPs]), this new framework focuses on how healthcare organizations can govern AI at the point of use. It recognizes that even the most rigorously validated algorithm can become dangerous if deployed without proper oversight, clinical integration, and transparency.

The AMA’s emphasis is on augmented intelligence—that is, AI that enhances human capabilities rather than replaces them. This foundational value underpins the framework’s structure, which is divided into five core pillars:

The Five Components of the AMA-Manatt Governance Framework

1. Governance Structure & Oversight

Establish a cross-functional AI governance body that includes clinicians along with representatives from legal, IT, ethics, quality, and patient advocacy. This group:

  • Sets AI strategy and policy

  • Approves and prioritizes AI projects

  • Ensures ongoing accountability and review

Why it matters: Without centralized oversight, AI projects risk becoming fragmented, unvetted, or misaligned with clinical priorities.

2. Policy & Process Frameworks

Develop policies for:

  • AI procurement and onboarding

  • Risk stratification of models (low-risk vs. high-risk)

  • Human-in-the-loop workflows

  • Documentation and version control

  • Decommissioning or retraining AI models

Why it matters: Just like medical devices or pharmaceuticals, AI must have lifecycle policies in place to ensure safety and efficacy over time.

3. Clinical Integration & Safety

Ensure that AI tools:

  • Are integrated into clinical workflows

  • Have clear escalation pathways and override mechanisms

  • Do not replace clinician judgment

  • Include post-implementation monitoring

Why it matters: Models often perform well in the lab but break down in real clinical contexts. Governance must extend beyond deployment into real-world use.
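An escalation pathway with a clinician override can be wired directly into the workflow code. Here is a hypothetical sketch; the threshold value and the `clinician_review` callback are assumptions for illustration, not a prescribed mechanism:

```python
def route_ai_recommendation(recommendation: str, confidence: float,
                            clinician_review) -> str:
    """Hypothetical human-in-the-loop gate: the AI output is only a draft.
    Low-confidence outputs escalate automatically; even high-confidence
    outputs still require clinician sign-off, preserving clinician judgment."""
    ESCALATION_THRESHOLD = 0.80  # illustrative cutoff, set by governance policy
    if confidence < ESCALATION_THRESHOLD:
        return clinician_review(recommendation, reason="low confidence")
    return clinician_review(recommendation, reason="routine sign-off")

def clinician_review(recommendation, reason):
    # In a real system this would open a review task in the EHR worklist.
    return f"pending clinician review ({reason}): {recommendation}"

print(route_ai_recommendation("start sepsis bundle", 0.65, clinician_review))
# pending clinician review (low confidence): start sepsis bundle
```

The key design point is that no branch bypasses the human: the AI never acts directly, and the override path is part of the workflow rather than an afterthought.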

4. Transparency & Patient Engagement

Organizations should:

  • Disclose AI use to patients in understandable ways

  • Provide options for human review

  • Train staff on how to explain AI’s role in care decisions

Why it matters: Trust in AI isn’t built through performance alone—it’s built through communication and consent.

5. Equity, Ethics, and Bias Mitigation

All AI systems must be evaluated for:

  • Performance across subpopulations

  • Sources of bias in training data

  • Equity implications in deployment (e.g., access to AI-driven services)

Why it matters: AI can perpetuate and amplify disparities unless bias is proactively monitored and mitigated.
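Evaluating performance across subpopulations is straightforward to operationalize. The sketch below uses synthetic rows and plain Python to show the shape of the analysis; the subgroup labels and data are invented for illustration:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute accuracy per subgroup from (subgroup, prediction, label) rows."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += int(pred == label)
    return {g: hits[g] / totals[g] for g in totals}

# Synthetic example rows: (age_band, model_prediction, ground_truth)
rows = [
    ("18-40", 1, 1), ("18-40", 0, 0), ("18-40", 1, 1), ("18-40", 1, 0),
    ("65+",   1, 0), ("65+",   0, 1), ("65+",   1, 1), ("65+",   0, 0),
]
by_group = subgroup_accuracy(rows)
print(by_group)  # {'18-40': 0.75, '65+': 0.5}
```

A large gap between subgroups, as in this toy output, is exactly the kind of signal that should trigger a bias review before (and after) deployment.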

The Toolkit in Practice: Not Just Philosophy—Execution

The AMA-Manatt toolkit includes:

  • An interactive self-assessment tool for governance maturity

  • Policy templates and examples

  • Model intake and risk stratification forms

  • Sample AI use case evaluation rubrics

  • Playbooks for clinician training and patient disclosure

It’s designed to be actionable for real-world use—whether you’re a large academic health center or a mid-sized group practice evaluating your first clinical algorithm.

Where Gesund.ai Fits In

At Gesund.ai, we enable healthcare organizations to operationalize AI governance with infrastructure—not just documents.

Here’s how we directly support the AMA-Manatt framework:

  • Audit trails & model versioning: Track who approved each model, what data it used, and when/why it was modified.

  • Clinician-in-the-loop workflows: Embed human review checkpoints and escalation pathways directly into AI use.

  • Bias and subgroup analysis: Validate models across age, race, gender, and geography, and monitor for drift post-deployment.

  • Patient transparency tooling: Support AI tagging, disclosure scripting, and log-level traceability for human-readable outputs.

  • Post-market surveillance: Continuously monitor models, detect performance degradation, and trigger retraining or decommissioning workflows.
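The surveillance loop above boils down to comparing recent performance against a baseline and alerting on degradation. A minimal sketch, assuming an illustrative AUC metric and tolerance value (both would be set by the governance body in practice):

```python
def performance_drifted(baseline_auc: float, recent_auc: float,
                        tolerance: float = 0.05) -> bool:
    """Flag drift when recent performance drops below baseline minus tolerance."""
    return recent_auc < baseline_auc - tolerance

monthly_auc = [0.86, 0.85, 0.84, 0.79]  # illustrative monitoring series
baseline = 0.86
alerts = [m for m in monthly_auc if performance_drifted(baseline, m)]
print(alerts)  # [0.79]
```

Each alert would then route into the retraining-or-decommissioning workflow defined in the organization's lifecycle policies, closing the loop between monitoring and governance.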

In short: we make AI governance executable, repeatable, and ready for review.

Final Thought: It’s Time for Healthcare to Govern AI Like Medication

Every clinician knows how to review a new drug: evidence, side effects, dosage, interactions. We now need the same mindset for AI—because models are becoming core to care delivery.

The AMA-Manatt toolkit gives you the map. Gesund.ai gives you the tools to make it work in practice.

→ See how Gesund.ai can help your organization govern AI with confidence: https://gesund.ai/get-in-touch-gesund

Bibliography

  1. American Medical Association

    Governance for Augmented Intelligence: Establish a Framework for the Safe, Effective, and Equitable Use of AI in Health Care: https://edhub.ama-assn.org/steps-forward/module/2833560

  2. Manatt Health

    An AI Governance Framework for Providers from the AMA and Manatt: https://www.manatt.com/insights/newsletters/health-highlights/an-ai-governance-framework-for-providers-from-the-ama-and-manatt

  3. American Medical Association

    Advancing Health Care AI Through Ethics, Evidence, and Equity: https://www.ama-assn.org/practice-management/digital-health/advancing-health-care-ai-through-ethics-evidence-and-equity

About the Author


Enes HOSGOR

CEO at Gesund.ai

Dr. Enes Hosgor is an engineer by training and an AI entrepreneur by trade, driven to unlock scientific and technological breakthroughs. He has spent more than a decade building AI products and companies in high-compliance environments. After selling his first ML company, built on his Ph.D. work at Carnegie Mellon University, he joined a digital surgery company named Caresyntax to found and lead its ML division. His penchant for healthcare comes from his family of physicians, including his late father, his sister, and his wife. Formerly a Fulbright Scholar at the University of Texas at Austin, he has published scientific work in Medical Image Analysis, the International Journal of Computer Assisted Radiology and Surgery, Nature Scientific Reports, and the British Journal of Surgery, among other peer-reviewed outlets.