The lack of public validation data for AI-based Software as a Medical Device


More and more clinical AI products are entering the market, but what is the state of publicly available validation data for these applications? A recent commentary in Nature Medicine underscores the gap between regulatory approval and publicly available clinical validation data for AI software as a medical device (SaMD).

The study aimed to evaluate the extent of clinical validation in FDA-authorized AI devices. It is important to note that clinical validation is a prerequisite for FDA authorization; this research focuses on which of these data are publicly available. The researchers analyzed all 521 FDA authorizations of AI medical devices from 1995 to 2022, 75% of which targeted radiology. Of the 521 authorizations, 292 (56%) included publicly available reports of clinical validation. Among these, 144 devices (27.6% of all authorizations) were retrospectively validated and 148 (28.4%) were prospectively validated. A smaller subset of 22 devices (4.2% of all authorized devices) was validated through randomized controlled trials (RCTs).
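A minimal sketch of the arithmetic behind these figures (the counts are taken from the study; the labels and script itself are purely illustrative), showing how each reported percentage is calculated against all 521 authorizations rather than against the 292 validated devices:

```python
# Counts reported in the Nature Medicine commentary.
TOTAL_AUTHORIZATIONS = 521  # FDA AI device authorizations, 1995-2022

counts = {
    "any clinical validation report": 292,
    "retrospectively validated": 144,
    "prospectively validated": 148,
    "validated via RCT": 22,
}

# Each share is expressed relative to all 521 authorizations,
# which is why the subgroup percentages do not sum to 56%.
for label, n in counts.items():
    print(f"{label}: {n}/{TOTAL_AUTHORIZATIONS} = {n / TOTAL_AUTHORIZATIONS:.1%}")

# Sanity check: retrospective + prospective = all devices with validation reports.
assert counts["retrospectively validated"] + counts["prospectively validated"] == 292
```

Running this reproduces the article's figures: 56.1%, 27.6%, 28.4% and 4.2% respectively.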

These findings underscore a significant gap in the public availability of validation data for AI medical devices in the US. The situation in Europe is even more concerning: no centralized database exists to track validation activities for AI applications, so the primary source of publicly available information is scientific publications. This lack of transparency highlights the relevance of resources like the Health AI Register, which monitors and compiles relevant publications. Each product listed on healthairegister.com includes a section detailing related scientific studies, helping to bridge the information gap in clinical validation.

Read full study


Not all AI health tools with regulatory authorization are clinically validated

Nature Medicine (2024), commentary

Abstract

Advances in artificial intelligence (AI) are beginning to revolutionize healthcare. AI algorithms attempt various combinations of statistical equations to find patterns in data that solve real-world problems. AI-powered devices can detect cancers and strokes on radiology scans, accurately predict the onset of disease and dose insulin. However, the implementation of medical AI devices has led to concerns about patient harm, liability, patient privacy, device accuracy, scientific acceptability and lack of explainability, sometimes called the ‘black box’ problem1–5. These concerns underscore the importance of the validation of AI technologies. Patients and providers need a gold-standard indicator of efficacy and safety for medical AI devices. Such a standard would build public trust and increase the rate of device adoption by end users. As the chief legal regulatory body for medical devices in the USA, the Food and Drug Administration (FDA) currently authorizes AI software as medical devices (SaMD)6. However, for the public to accept FDA authorization as an indication of effectiveness, the agency and device manufacturers must publish ample clinical validation data.