A recent FDA debate over radiology AI is being framed too narrowly.
At the center of it is a petition from Harrison.ai asking the FDA to exempt certain radiology AI products from additional 510(k) review once a company has demonstrated competence within a device category. In exchange, manufacturers would have to meet conditions around transparency, training, and post-market monitoring.
The public comment period has now closed. But the real significance of this debate is not whether the petition is accepted or rejected.
It’s what the discussion reveals about where clinical AI regulation is heading.
Reading the petition and the public comments submitted to the FDA docket makes one thing clear:
The system is moving away from one-time approval toward continuous, lifecycle oversight.
And it is not ready for that shift.
The current conversation is being framed as a familiar tradeoff:
faster innovation vs stricter oversight
premarket review vs post-market monitoring
That framing is misleading.
The real shift underway is not about how much regulation there is.
It is about where the burden of evidence sits.
If the FDA reduces reliance on repeated premarket review, then evidence does not disappear.
It moves downstream — into real-world clinical deployment.
The FDA’s regulatory model was built for static devices.
Clinical AI is different.
models evolve
training data changes
performance varies across sites, equipment, and patient populations
A study at a single point in time cannot fully capture how an algorithm behaves once deployed across diverse clinical environments.
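To make that concrete, here is a minimal sketch in Python of how a single pooled metric can hide site-level variation. The data, field names, and numbers are entirely synthetic and hypothetical; the point is the pattern, not the values.

```python
# Hypothetical sketch: a pooled sensitivity number can look fine while
# one deployment site underperforms badly. All data is synthetic.
from collections import defaultdict

# Rows of (site, model_prediction, ground_truth), as if joined from
# deployment logs and independently adjudicated reads.
records = [
    ("site_a", 1, 1), ("site_a", 1, 1), ("site_a", 1, 1), ("site_a", 1, 1),
    ("site_a", 0, 0), ("site_a", 0, 0),
    ("site_b", 0, 1), ("site_b", 0, 1), ("site_b", 1, 1), ("site_b", 0, 0),
]

def sensitivity(rows):
    """True-positive rate over (prediction, truth) pairs."""
    positives = [pred for pred, truth in rows if truth == 1]
    return sum(positives) / len(positives) if positives else float("nan")

by_site = defaultdict(list)
for site, pred, truth in records:
    by_site[site].append((pred, truth))

print(f"pooled: {sensitivity([(p, t) for _, p, t in records]):.2f}")  # 0.71
for site, rows in sorted(by_site.items()):
    print(f"{site}: {sensitivity(rows):.2f}")  # site_a: 1.00, site_b: 0.33
```

A premarket study that pooled these cases would report a number no one at site_b should rely on.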
Supporters of the petition are right about this.
Today’s system often forces radiology AI into narrow, single-finding approvals that do not reflect how clinicians actually use these tools. Even STAT’s reporting on the petition highlights how fragmented this approach can be relative to real-world practice.
But identifying the problem is not the same as solving it.
Once evidence moves downstream, a more important question appears:
Who generates that evidence?
This is where the current debate becomes incomplete.
If the answer is the vendors themselves, then the system is introducing a structural conflict of interest.
No safety-critical industry operates that way for long.
finance relies on independent auditors
pharmaceuticals rely on independent clinical investigators
medical devices already incorporate third-party review mechanisms
software relies on independent certification frameworks such as SOC 2, which emerged because enterprise customers could no longer rely solely on vendor assurances.
The FDA itself provides a precedent. Through its 510(k) Third Party Review Program, accredited organizations can review certain devices while the FDA retains final authority. The model is simple: distributed technical evaluation, centralized regulatory control.
Clinical AI is approaching the same need — but across the entire lifecycle.
This concern surfaced repeatedly across the FDA docket.
Different stakeholders used different language, but the underlying issue was consistent:
vendor self-monitoring alone is not a sufficient foundation for trust.
That is not surprising.
In real-world radiology workflows:
multiple AI models from multiple vendors interact
systems are deployed across heterogeneous environments
models are updated, retrained, and reconfigured over time
Performance is not static — and neither is risk.
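One way to picture what continuous oversight means operationally: compare each window of real-world outcomes against a baseline and flag degradation, including after model updates. A minimal sketch follows; the thresholds, window size, and outcomes are hypothetical assumptions, not a prescription.

```python
# Hypothetical sketch of post-market drift monitoring: compare a rolling
# performance window against a baseline and alert on degradation.
from statistics import mean

BASELINE_SENSITIVITY = 0.90   # e.g. the figure from the premarket study
ALERT_MARGIN = 0.05           # tolerated drop before alerting

def check_window(outcomes: list[int]) -> None:
    """outcomes: 1 if the model caught a confirmed positive, else 0."""
    observed = mean(outcomes)
    if observed < BASELINE_SENSITIVITY - ALERT_MARGIN:
        print(f"ALERT: sensitivity {observed:.2f} is below baseline "
              f"{BASELINE_SENSITIVITY:.2f}; review recent model updates.")
    else:
        print(f"ok: sensitivity {observed:.2f}")

# After a model update, the same check can tell a different story.
check_window([1] * 46 + [0] * 4)   # pre-update window: 0.92 -> ok
check_window([1] * 41 + [0] * 9)   # post-update window: 0.82 -> alert
```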
If the same entity that builds the model is also responsible for generating and interpreting the evidence of its performance:
comparability breaks down
trust becomes fragile
oversight becomes fragmented
Hospitals are not equipped to rebuild validation infrastructure for every vendor they use. Regulators cannot reconstruct real-world evidence across thousands of deployments.
Something is missing.
If clinical AI is moving toward lifecycle oversight, it will require a new layer of infrastructure:
independent lifecycle validation.
In practice, this means:
evaluating models against standardized datasets (see the sketch after this list)
generating expert ground truth
running reader studies
monitoring real-world performance
auditing model updates
comparing results across vendors and clinical environments
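To make the first and last items concrete, here is a minimal sketch of what a shared evaluation harness could look like. The VendorModel interface, the case IDs, and the labels are all hypothetical; real vendor integrations are far messier.

```python
# Hypothetical sketch of an independent validation harness: every vendor
# model is scored against the same expert-labeled dataset with the same
# metric, so results are directly comparable.
from typing import Callable

VendorModel = Callable[[str], int]  # case ID -> binary finding (assumed interface)

def evaluate(model: VendorModel, labeled_cases: dict[str, int]) -> float:
    """Fraction of cases where the model agrees with expert ground truth."""
    correct = sum(model(case) == truth for case, truth in labeled_cases.items())
    return correct / len(labeled_cases)

# Ground truth adjudicated once, independently (synthetic here).
ground_truth = {"case_1": 1, "case_2": 0, "case_3": 1, "case_4": 0}

# Stand-ins for vendor submissions.
vendors = {
    "vendor_a": lambda case: 1,                   # flags every case
    "vendor_b": lambda case: ground_truth[case],  # agrees with the reference
}

for name, model in vendors.items():
    print(f"{name}: {evaluate(model, ground_truth):.2f}")
```

The value is not the code; it is that the dataset, the ground truth, and the metric sit outside any single vendor.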
This layer does not replace regulators.
It does not centralize authority.
And it does not slow innovation.
It does something more important:
It creates a shared, trusted evidence base.
The Harrison.ai petition has done the field a service by surfacing a real structural problem.
Clinical AI no longer fits inside a regulatory model designed for static products and one-time review.
But replacing repeated premarket review with vendor-led monitoring would not solve that problem.
It would simply trade one bottleneck for a trust gap.
Clinical AI does not need less evidence.
It needs better evidence — generated continuously, validated independently, and trusted across vendors, clinicians, health systems, and regulators.
That is where the field is heading.
And the companies and institutions that recognize this early will shape what comes next.
For inquiries, please contact us.