FDA denied the Harrison.ai petition. The bigger signal is what comes next for clinical AI

On April 1, the FDA denied Harrison.ai’s petition to partially exempt certain radiology CAD and CADt devices from 510(k) review. It did so after receiving more than 45 comments, a majority of which opposed the proposal. That was the right outcome. But the more important signal is not that FDA said no. It is why.

The petition tried to solve a real problem. Clinical AI does not fit neatly inside a regulatory framework designed for static devices. Models evolve. Indications expand. Performance varies across sites, scanners, workflows, and patient populations. The industry is right to push for a better way to manage that reality. But FDA’s response makes clear that blanket exemption tied to prior clearance and vendor-led post-market monitoring is not that way.

1. FDA rejected the idea that one clearance proves future competence

A central premise of the petition was that if a manufacturer has already cleared one device in a category, that should count as evidence that it can safely develop and market future devices in related categories without new 510(k) review.

FDA rejected that logic directly.

The agency explained that clearance of one CAD or CADt device does not automatically translate into proficiency for future devices. Different indications require different data curation methods, different reference standards, different workflow understanding, and different performance metrics. FDA also noted that the manufacturer holding a 510(k) may not even have built the original device itself; it could have acquired it or relied heavily on third parties during development.

One detail in the letter is especially telling: FDA said that over several years, all 510(k)s cleared for one CAD product code had been placed on hold during review, in each case for reasons that included deficiencies in performance testing methodology or results, even when the manufacturer already had prior CAD authorization. That is not what a “passport” model looks like. It is what an indication-specific evidence problem looks like.

2. FDA rejected vendor-defined post-market monitoring as a substitute for review

This is the most important part of the response.

The petition proposed that manufacturers be allowed to define and implement a “robust post-market plan” proportionate to device risk. FDA said that approach would “flip the standard … on its head” because Congress directed the agency, not the manufacturer, to determine what is necessary to assure safety and effectiveness.

That line matters because it goes straight to the core governance question in clinical AI: who decides what evidence is sufficient once more of the evidentiary burden moves downstream?

FDA’s answer is clear. It is not enough for the manufacturer to decide that for itself.

The letter also explains why. For these devices, agreement between model output and the “true” disease state is often not routinely captured, and there are no standard feedback mechanisms linking final diagnosis or ground truth back to the interpreting radiologist or the device output. In plain language: the system does not yet have a mature way to generate reliable post-market evidence at scale.

3. FDA said the evidence layer is not mature enough

The denial is not just a rejection of one petition. It is also an implicit diagnosis of the market’s infrastructure problem.

FDA said there are no consensus standards or performance standards for these device types. It also pushed back on the idea that clinicians can always detect problematic changes or drift through routine use. For some outputs, especially where final truth arrives later or depends on follow-up testing, users may not know whether the system is wrong until much later — if at all.

FDA also dismissed the idea that low adverse-event reporting in MAUDE is proof of safety. A false positive or false negative may not itself be captured as an adverse event, and the contribution of a model to downstream harm may be hard to detect in real time. The agency even cited user comments describing inconsistent sensitivity and specificity, degradation across protocols, lack of indication-specific validation documentation, false positives, and missed findings in products used overseas.

That is the deeper takeaway from the denial: the issue is not just whether FDA should review more or fewer AI submissions. The issue is that the evidence layer needed for lifecycle oversight is still underbuilt.

4. FDA still signaled where it wants the system to go

The denial should not be read as a defense of the status quo.

FDA’s response explicitly says the agency remains committed to innovative approaches for regulating digital health technology, and it points manufacturers toward Predetermined Change Control Plans (PCCPs) and the Q-Submission Program as the practical path forward. FDA noted that PCCPs are already an existing mechanism to implement certain future modifications without a new marketing submission each time, and said it believes familiarity with PCCPs will help facilitate and expedite access to safe and effective device advances.

That is an important signal.

FDA is not saying clinical AI should remain frozen inside a purely static premarket model. It is saying the evolution has to happen through controlled, reviewable pathways rather than broad exemption built on assumptions about manufacturer competence and self-defined monitoring.

5. What the field still needs: independent lifecycle validation

This is where the denial matters beyond this single petition.

The real debate is no longer “premarket versus post-market.” It is who generates trustworthy evidence once more of that burden moves downstream.

If the answer is “the vendor,” the system runs into a structural problem. The same company building the model becomes the primary party generating, interpreting, and presenting evidence of its own performance across time. That is not a stable trust architecture for a safety-critical field.

Clinical AI needs a stronger outside evidence layer: independent lifecycle validation.

That means standardized evidence generation, cross-vendor comparability, real-world performance monitoring, and auditable model updates. It does not mean replacing FDA or building a single centralized authority. It means creating credible mechanisms that let hospitals, regulators, and vendors evaluate performance over time without relying on self-certification alone.

There is already a precedent for this broader logic in the device world. FDA’s 510(k) Third Party Review Program allows accredited review organizations to review certain eligible low-to-moderate-risk devices through a voluntary alternative process, while FDA retains final decision authority. FDA says the program is intended to speed review, help the agency focus on higher-risk devices, and still maintain oversight; it also says approximately half of 510(k)s are eligible for the program.

Clinical AI needs the lifecycle version of that idea.

The denial was right. The infrastructure gap remains.

FDA denied the shortcut. It did not eliminate the destination.

Clinical AI is still moving toward lifecycle oversight. The petition simply exposed, and the denial confirmed, that the health care system does not yet have the standards, feedback loops, and independent evidence infrastructure to support that shift at scale.

That is the real lesson of the last week.

Clinical AI does not need less evidence. It needs better evidence — continuous, comparable, and independent enough to be trusted across vendors, health systems, and regulators.

For any inquiries, please contact us.

About the Author

Enes Hosgor

CEO at Gesund.ai

Dr. Enes Hosgor is an engineer by training and an AI entrepreneur by trade, driven to unlock scientific and technological breakthroughs. He has spent the last 10+ years building AI products and companies in high-compliance environments. After selling his first ML company, based on his Ph.D. work at Carnegie Mellon University, he joined a digital surgery company named Caresyntax to found and lead its ML division. His passion for healthcare comes from his family of physicians, including his late father, his sister, and his wife. Formerly a Fulbright Scholar at the University of Texas at Austin, he has published scientific work in Medical Image Analysis, the International Journal of Computer Assisted Radiology and Surgery, Nature Scientific Reports, and the British Journal of Surgery, among other peer-reviewed outlets.