The lack of public documentation on medical AI products

Lack of transparency

This study highlights the critical need for transparency regarding AI-based radiology products in the European Union. It examines 14 CE-certified AI radiology products in risk class IIb, a medium-to-high risk class reflecting their potential impact on medical decisions. The authors developed and applied a survey to assess the transparency of these products against several criteria, including intended use, algorithmic development, ethical considerations, technical validation, and deployment caveats. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%, revealing significant gaps in public documentation on training data, ethical considerations, and limitations of use.

This call for transparency is one that we at Health AI Register acknowledge and strive to address through our platform and newsletter.

Read full study


Abstract

Trustworthy medical AI requires transparency about the development and testing of underlying algorithms to identify biases and communicate potential risks of harm. Abundant guidance exists on how to achieve transparency for medical AI products, but it is unclear whether publicly available information adequately informs about their risks. To assess this, we retrieved public documentation on the 14 available CE-certified AI-based radiology products of the IIb risk category in the EU from vendor websites, scientific publications, and the European EUDAMED database. Using a self-designed survey, we reported on their development, validation, ethical considerations, and deployment caveats, according to trustworthy AI guidelines. We scored each question with either 0, 0.5, or 1, to rate if the required information was “unavailable,” “partially available,” or “fully available.” The transparency of each product was calculated relative to all 55 questions. Transparency scores ranged from 6.4% to 60.9%, with a median of 29.1%. Major transparency gaps included missing documentation on training data, ethical considerations, and limitations for deployment. Ethical aspects like consent, safety monitoring, and GDPR-compliance were rarely documented. Furthermore, deployment caveats for different demographics and medical settings were scarce. In conclusion, public documentation of authorized medical AI products in Europe lacks sufficient public transparency to inform about safety and risks. We call on lawmakers and regulators to establish legally mandated requirements for public and substantive transparency to fulfill the promise of trustworthy AI for health.
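For readers who want to see the arithmetic behind these percentages, here is a minimal Python sketch of the scoring scheme described in the abstract. It is illustrative only and not the authors' code; the function name `transparency_score` and the example ratings are hypothetical, but the rule (each of the 55 questions rated 0, 0.5, or 1, summed, and expressed as a percentage of 55) follows the description above.

```python
from typing import Dict

# Illustrative sketch of the scoring scheme described in the abstract
# (hypothetical code, not the authors' implementation).
# Each of the 55 survey questions is rated 0 ("unavailable"),
# 0.5 ("partially available"), or 1 ("fully available"); a product's
# transparency score is the sum of ratings as a percentage of 55.

N_QUESTIONS = 55
VALID_RATINGS = {0.0, 0.5, 1.0}

def transparency_score(ratings: Dict[str, float]) -> float:
    """Return the transparency score (in %) for one product."""
    if len(ratings) != N_QUESTIONS:
        raise ValueError(f"expected ratings for all {N_QUESTIONS} questions")
    if any(r not in VALID_RATINGS for r in ratings.values()):
        raise ValueError("each rating must be 0, 0.5, or 1")
    return 100.0 * sum(ratings.values()) / N_QUESTIONS

# Hypothetical example: 14 questions fully documented, 4 partially,
# the rest undocumented, i.e. 16 of 55 possible points.
# 16 / 55 is roughly 29.1%, which matches the reported median score.
example = {f"Q{i}": (1.0 if i <= 14 else 0.5 if i <= 18 else 0.0)
           for i in range(1, N_QUESTIONS + 1)}
print(f"{transparency_score(example):.1f}%")  # -> 29.1%
```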