The FDA Is Now Using Generative AI to Review Drugs and Devices. Here’s What That Means for You.

Overview

In a bold move that underscores how seriously it takes artificial intelligence, the U.S. Food and Drug Administration (FDA) is rolling out a generative AI tool across all of its scientific review divisions by the end of June 2025. After a successful pilot earlier this year, the agency is now scaling the technology across its drug, biologics, and medical device centers to help staff process and review complex regulatory documents more efficiently.

The AI system won’t make final decisions—but it will assist reviewers by scanning, summarizing, and organizing thousands of pages of technical data, allowing human scientists to focus on critical judgment calls rather than paperwork.

Commissioner Marty Makary has called the initiative “aggressive,” and that’s no understatement. It’s a high-speed modernization of internal review operations—one that sets a powerful precedent for the entire healthcare ecosystem. If the FDA is using AI to review your AI, the bar for transparency, traceability, and trust just got raised.

What’s Actually Happening Inside the FDA?

The FDA’s generative AI tool, built in-house, will be used to (a simplified sketch follows the list):

  • Summarize large sections of clinical trial data

  • Cross-reference previous applications and review standards

  • Help reviewers organize unstructured inputs (e.g., PDFs, scanned reports, tables)
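
The agency hasn’t published the tool’s architecture, so implementation details are unknown. Still, reviewer-assist pipelines of this kind usually reduce to a simple shape: split a long submission into sections, summarize each with a generative model, and keep provenance so a human can trace every summary back to its source. Below is a minimal Python sketch of that shape; the summarize stub stands in for a real model call, and every name in it is illustrative, not the FDA’s.

```python
# Illustrative only: the FDA has not disclosed its tool's design.
# The sketch splits a submission into sections, summarizes each with
# a generative model (stubbed below), and keeps provenance for review.

from dataclasses import dataclass

@dataclass
class SectionSummary:
    title: str
    summary: str
    source_chars: int  # provenance: size of the text the summary covers

def summarize(text: str, max_words: int = 40) -> str:
    """Stand-in for a generative model call. A real system would invoke
    an LLM here; this stub just keeps the first max_words words."""
    return " ".join(text.split()[:max_words])

def summarize_submission(sections: dict[str, str]) -> list[SectionSummary]:
    """Summarize each named section, retaining enough provenance that a
    human reviewer can trace every summary back to its source."""
    return [
        SectionSummary(title=name, summary=summarize(body), source_chars=len(body))
        for name, body in sections.items()
    ]

if __name__ == "__main__":
    submission = {
        "Clinical Efficacy": "placeholder section text ...",
        "Adverse Events": "placeholder section text ...",
    }
    for s in summarize_submission(submission):
        print(f"[{s.title}] ({s.source_chars} chars) {s.summary}")
```

The provenance field is the part worth copying: a summary that can’t be traced back to its source text creates risk instead of saving time.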

Importantly, this is not an external pilot or a narrow proof-of-concept. The rollout spans all three major review centers: drugs, biologics, and medical devices. The goal is to enhance consistency, reduce administrative overload, and improve review timelines for both agency staff and applicants.

While AI outputs won’t replace expert human reviewers, the FDA is integrating AI directly into its regulatory muscle—and that’s a first.

What This Signals for the Healthcare AI Industry

If you develop AI tools for diagnostics, triage, patient monitoring, drug development, or any other regulated use case, this news carries weight. Here’s why:

1. The FDA is becoming AI-literate—fast.

The agency is no longer just evaluating AI products—it’s using them. Expect reviewers to ask sharper, more technically grounded questions during submissions. Vague validation claims or non-transparent models will have a harder time getting through.

2. Governance expectations will increase.

If FDA staff now work with AI under strict controls—versioning, logs, human-in-the-loop oversight—those same expectations will likely be passed on to vendors and applicants.
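
To make that concrete, here is a hypothetical sketch of the kind of controls just described: an append-only, hash-chained audit log in which every model output is versioned and a named human reviewer signs off before it counts. The schema is an assumption for illustration, not an FDA or vendor format.

```python
# Hypothetical schema, not an FDA or Gesund.ai format: an append-only,
# hash-chained audit log covering versioning, logging, and human sign-off.

import hashlib
import json
import time

class AuditLog:
    """Each entry commits to the hash of the previous entry, so any
    silent edit to history is detectable after the fact."""

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {"ts": time.time(), "prev": prev, **event}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**payload, "hash": digest})
        return digest

log = AuditLog()
log.append({"type": "model_output",
            "model_version": "summarizer-1.3.0",   # versioning
            "doc": "submission-123/clinical-efficacy"})
log.append({"type": "human_signoff",               # human-in-the-loop
            "reviewer": "j.doe",
            "decision": "approved_with_edits"})
print(len(log.entries), "entries; last hash:", log.entries[-1]["hash"][:12])
```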

3. Review timelines may shift.

Streamlined internal workflows could eventually mean shorter review times, but also tighter feedback loops and faster back-and-forth on technical questions. Be prepared.

Why This Strengthens GMLP, PCCP & Lifecycle Guidance

The FDA’s internal use of AI aligns closely with its external frameworks:

  • Good Machine Learning Practice (GMLP)

  • Predetermined Change Control Plan (PCCP)

  • AI/ML Lifecycle Guidance Document

Until now, these were viewed by many as theoretical. But this rollout shows that the FDA isn’t just recommending lifecycle monitoring, explainability, and traceability—it’s living them. Any team submitting an AI tool for review should treat these frameworks as operational mandates, not paperwork.
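
The PCCP idea in particular translates naturally into something executable: a predetermined change control plan pre-specifies which kinds of model changes are authorized and how each will be re-validated. The sketch below encodes that as data plus a gate check; the field names and thresholds are invented for illustration and are not the FDA’s format.

```python
# Illustrative only: field names and thresholds are assumptions, not
# the FDA's PCCP format. Changes inside the pre-authorized envelope
# follow the streamlined pathway; anything else does not.

from dataclasses import dataclass

@dataclass
class PlannedChange:
    validation_protocol: str  # how the change will be re-verified
    min_auroc: float          # acceptance criterion on a locked test set

PCCP = {
    "retrain_same_architecture": PlannedChange(
        validation_protocol="re-run locked test set with subgroup analysis",
        min_auroc=0.90,
    ),
}

def change_is_pre_authorized(kind: str, measured_auroc: float) -> bool:
    """A change outside the plan, or one that misses its acceptance
    criterion, needs a new submission rather than the PCCP pathway."""
    plan = PCCP.get(kind)
    return plan is not None and measured_auroc >= plan.min_auroc

print(change_is_pre_authorized("retrain_same_architecture", 0.93))  # True
print(change_is_pre_authorized("new_architecture", 0.99))           # False
```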

The Regulator Has Become the Reviewer

Commissioner Makary is not waiting for a slow culture change. According to reporting from STAT and RAPS, the agency’s leadership wants this tool in full deployment across review centers by June 30.

Of course, concerns remain:

  • Will these systems be auditable or publicly explainable?

  • How will proprietary data be protected?

  • Could AI-driven reviews introduce new forms of risk or automation bias?

But the signal is clear: AI inside the FDA is here to stay.

Who Needs to Pay Attention

This development affects more than just pharmaceutical giants.

  • Digital health startups submitting AI-powered clinical decision support

  • Medtech companies filing De Novo or 510(k) applications

  • Health systems and payors implementing AI in care pathways

  • AI vendors supplying triage, detection, or workflow tools

If you submit to—or sell into—organizations that follow FDA standards, you’re in scope.

What AI Teams Should Do Now

  • Conduct internal audits

    Would your model documentation stand up to FDA-style scrutiny?

  • Elevate explainability

Regulators increasingly expect interpretable, per-prediction outputs; a minimal sketch follows this list.

  • Track lifecycle changes

    Use automated logs and change controls.

  • Map to guidance

    Operationalize GMLP and PCCP with real workflows.
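
On the explainability point, “interpretable outputs” can be as simple as emitting, alongside every score, the features that drove it. The toy sketch below does this for a linear risk model; the features and weights are made up, and real submissions would use validated attribution methods appropriate to the model class.

```python
# Toy example: features and weights are invented. For a linear risk
# model, "interpretable output" can mean reporting each feature's
# contribution alongside the score itself.

WEIGHTS = {"age": 0.04, "creatinine": 0.80, "on_anticoagulant": 0.50}

def score_with_explanation(patient: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return the risk score plus per-feature contributions, ranked by
    absolute impact, so a reviewer sees why the score came out as it did."""
    contributions = {f: w * float(patient.get(f, 0)) for f, w in WEIGHTS.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, why = score_with_explanation(
    {"age": 71, "creatinine": 1.9, "on_anticoagulant": 1}
)
print(f"risk score = {score:.2f}")
for feature, contribution in why:
    print(f"  {feature}: {contribution:+.2f}")
```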

Where Gesund.ai Fits In

At Gesund.ai, we’re building the AI validation infrastructure this new regulatory era demands. Our platform helps teams:

  • Develop and validate GMLP-compliant AI models

  • Ensure full traceability via audit trails and model/dataset lineage

  • Annotate data with no-code tooling and built-in human oversight

  • Run end-to-end pre-market and post-market validation

  • Deploy securely at scale (on-prem, air-gapped, or multi-cloud)

Whether you’re preparing for FDA review or working with customers who are, we help ensure your AI is compliant, equitable, and trusted.

📍 The FDA is using AI to evaluate the future of AI in healthcare. Are your tools ready for that level of scrutiny?

Let’s make sure they are.

→ Learn how Gesund.ai helps clinical-grade AI meet the moment: gesund.ai/get-in-touch-gesund

About the Author

Enes HOSGOR

CEO at Gesund.ai

Dr. Enes Hosgor is an engineer by training and an AI entrepreneur by trade, driven to unlock scientific and technological breakthroughs. He has spent more than a decade building AI products and companies in high-compliance environments. After selling his first ML company, based on his Ph.D. work at Carnegie Mellon University, he joined the digital surgery company Caresyntax to found and lead its ML division. His penchant for healthcare comes from his family of physicians, including his late father, his sister, and his wife. A former Fulbright Scholar at the University of Texas at Austin, he has published scientific work in Medical Image Analysis, the International Journal of Computer Assisted Radiology and Surgery, Nature Scientific Reports, and the British Journal of Surgery, among other peer-reviewed outlets.