Validation of AI applications beyond regulatory compliance


This recent publication underscores the critical need for rigorous validation of AI applications to ensure their safety and efficacy in clinical settings. The paper highlights that current regulatory frameworks and post-market surveillance fail to fully capture the real-world performance of AI tools, noting that clinical trial results often diverge from outcomes observed in actual practice.

To bridge this gap, the authors argue for independent post-market audits and real-world clinical evaluations of AI applications. They propose a structured framework for such evaluations, comprising in-lab benchmarking and clinical audits, which could contribute substantially to the safer integration of AI technologies into healthcare. This approach aims to mitigate the risks of using AI in radiology and thereby accelerate its adoption in clinical practice.
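
The paper itself contains no code; the following is a minimal, hypothetical sketch of what the proposed in-lab benchmarking step could look like in practice: re-measuring a vendor-claimed performance figure on a locally curated, radiologist-labelled test set. All names and numbers here (Case, load of local cases, the 0.90 claimed sensitivity) are illustrative assumptions, not taken from the study.

    """Illustrative sketch of in-lab benchmarking for an approved AI app:
    re-measure a vendor-claimed metric on an institution-specific test set.
    All identifiers and figures are hypothetical, not from the paper."""

    from dataclasses import dataclass


    @dataclass
    class Case:
        ground_truth: bool  # radiologist-confirmed finding present?
        ai_positive: bool   # AI app flagged the finding?


    def sensitivity(cases: list[Case]) -> float:
        # true positives / (true positives + false negatives)
        tp = sum(c.ai_positive and c.ground_truth for c in cases)
        fn = sum((not c.ai_positive) and c.ground_truth for c in cases)
        return tp / (tp + fn) if (tp + fn) else float("nan")


    if __name__ == "__main__":
        # Hypothetical local test set; in practice these would be
        # institutionally curated, radiologist-labelled cases.
        local_cases = [
            Case(True, True), Case(True, False), Case(False, False),
            Case(False, True), Case(True, True), Case(False, False),
        ]
        claimed_sensitivity = 0.90  # hypothetical figure from vendor labelling
        measured = sensitivity(local_cases)
        print(f"claimed sensitivity : {claimed_sensitivity:.2f}")
        print(f"measured sensitivity: {measured:.2f}")
        if measured < claimed_sensitivity:
            print("Local performance below vendor claim; escalate to clinical audit.")

The point of the final check mirrors the paper's argument: a shortfall between claimed and locally measured performance is exactly the kind of trial-versus-practice discrepancy that independent benchmarking is meant to surface before, or alongside, clinical deployment.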



Abstract


The implementation of artificial intelligence (AI) applications in routine practice, following regulatory approval, is currently limited by practical concerns around reliability, accountability, trust, safety, and governance, in addition to factors such as cost-effectiveness and institutional information technology support. When a technology is new and relatively untested in a field, professional confidence is lacking, and there is a perceived need to go beyond the baseline level of validation and compliance. In this article, we propose an approach that goes beyond standard regulatory compliance for AI apps that are approved for marketing, including independent benchmarking in the lab as well as clinical audit in practice, with the aims of increasing trust and preventing harm.