A UK retrospective study evaluated Prostate Intelligence™ (Lucida Medical), an AI tool for detecting clinically significant prostate cancer (csPCa, Gleason Grade Group ≥ 2), using multiparametric MRI (mpMRI) in 252 patients across six hospitals.
The AI software was compared to MDT-supported expert radiologists using PI-RADS/Likert scoring, with performance assessed against biopsy-proven histopathology and independent radiologist-verified MRI lesions.
The AI was non-inferior to radiologists, with an AUC of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, AI achieved 95% sensitivity and 67% specificity, while radiologists at PI-RADS/Likert ≥ 3 had 99% sensitivity and 73% specificity. However, AI missed 14% of GG ≥ 2 lesions at this threshold, compared to 7% missed by radiologists. AI performed consistently across scanner vendors and field strengths (AUC ≥ 0.83 per site).
Despite comparable sensitivity, AI had a higher false-positive rate, averaging 1.54 false-positive lesions per patient at 90% sensitivity, versus 0.21 for radiologists. These results highlight the need for further model refinement and site-specific threshold adjustments.
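To connect these operating-point numbers to the underlying counts, the sketch below shows how per-patient sensitivity and specificity at a fixed risk threshold are computed from predictions and biopsy labels. The arrays (`risk_scores`, `has_cspca`) and their values are illustrative stand-ins, not data from the study.

```python
# Minimal sketch (not the study's code): deriving sensitivity/specificity
# at a fixed operating point. Arrays are hypothetical per-patient values.
import numpy as np

risk_scores = np.array([4.2, 1.8, 3.9, 2.4, 4.7, 3.1])  # AI risk per patient
has_cspca = np.array([1, 0, 0, 0, 1, 1])                 # 1 = GG >= 2 on biopsy

THRESHOLD = 3.5  # predetermined operating point reported in the study
predicted_positive = risk_scores >= THRESHOLD

tp = np.sum(predicted_positive & (has_cspca == 1))   # flagged and confirmed
fn = np.sum(~predicted_positive & (has_cspca == 1))  # missed csPCa
tn = np.sum(~predicted_positive & (has_cspca == 0))  # correctly cleared
fp = np.sum(predicted_positive & (has_cspca == 0))   # false alarms

sensitivity = tp / (tp + fn)  # fraction of csPCa patients flagged
specificity = tn / (tn + fp)  # fraction of negative patients cleared
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```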
Read full study
AI-powered prostate cancer detection: a multi-centre, multi-scanner validation study
European Radiology, 2025
Abstract
Objectives
Multi-centre, multi-vendor validation of artificial intelligence (AI) software to detect clinically significant prostate cancer (PCa) using multiparametric magnetic resonance imaging (MRI) is lacking. We compared a new AI solution, validated on a separate dataset from different UK hospitals, to the original multidisciplinary team (MDT)-supported radiologists' interpretations.
Materials and methods
A Conformité Européenne (CE)-marked deep-learning (DL) computer-aided detection (CAD) medical device (Pi) was trained to detect Gleason Grade Group (GG) ≥ 2 cancer using retrospective data from the PROSTATEx dataset and five UK hospitals (793 patients). Our separate validation dataset was acquired on six scanners from two manufacturers across six sites (252 patients). Data included in the study were from MRI scans performed between August 2018 and October 2022. Patients with a negative MRI who did not undergo biopsy were assumed to be negative (90.4% had a prostate-specific antigen density < 0.15 ng/mL²). Receiver operating characteristic (ROC) analysis was used to compare Pi with radiologists, who used a 5-category suspicion score.
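The abstract does not describe the analysis code, but the per-patient ROC comparison can be sketched as follows. Here `y_true`, `ai_risk`, and `rad_score` are hypothetical stand-ins for the biopsy outcomes, the continuous AI risk score, and the 5-category PI-RADS/Likert score; an ordinal 5-point score can be passed to ROC analysis directly.

```python
# Sketch of a per-patient ROC comparison, assuming hypothetical arrays.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 0, 1, 0, 0, 1, 0, 1])                # 1 = GG >= 2 on biopsy
ai_risk = np.array([4.1, 2.2, 3.8, 3.6, 1.9, 4.6, 2.8, 3.3])  # continuous AI score
rad_score = np.array([5, 2, 4, 3, 1, 5, 2, 4])             # PI-RADS/Likert 1-5

print("AI AUC:         ", roc_auc_score(y_true, ai_risk))
print("Radiologist AUC:", roc_auc_score(y_true, rad_score))
```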
Results
GG ≥ 2 prevalence in the validation set was 31%. Evaluated per patient, Pi was non-inferior to radiologists (considering a 10% performance difference as acceptable), with an area under the curve (AUC) of 0.91 vs. 0.95. At the predetermined risk threshold of 3.5, the AI software's sensitivity was 95% and specificity 67%, while radiologists at Prostate Imaging Reporting and Data System (PI-RADS)/Likert ≥ 3 identified GG ≥ 2 with a sensitivity of 99% and specificity of 73%. AI performed well per site (AUC ≥ 0.83) at the patient level, independent of scanner age and field strength.
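The abstract does not state which non-inferiority test was used; one common approach is to bootstrap the paired per-patient AUC difference (radiologist minus AI) and check that its upper confidence bound stays below the stated 10% (0.10 AUC) margin. A minimal sketch with simulated stand-in data:

```python
# Sketch of a bootstrap non-inferiority check on the paired AUC difference.
# The exact test used in the study is not given; all data here are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 252
y = rng.integers(0, 2, n)               # stand-in biopsy labels (1 = GG >= 2)
ai = y * 1.2 + rng.normal(0, 1, n)      # stand-in AI risk scores
rad = y * 1.5 + rng.normal(0, 1, n)     # stand-in radiologist scores

MARGIN = 0.10                           # acceptable AUC difference
diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)         # resample patients with replacement
    if len(np.unique(y[idx])) < 2:      # an AUC needs both classes present
        continue
    diffs.append(roc_auc_score(y[idx], rad[idx]) - roc_auc_score(y[idx], ai[idx]))

upper = np.percentile(diffs, 97.5)      # upper bound of the 95% interval
print(f"Upper 95% bound on AUC difference: {upper:.3f}")
print("Non-inferior" if upper < MARGIN else "Not shown non-inferior")
```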
Conclusion
Real-world data testing suggests that Pi matches the performance of MDT-supported radiologists in GG ≥ 2 PCa detection and generalises to multiple sites, scanner vendors, and models.
Key points
Question: The performance of artificial intelligence-based medical tools for prostate MRI has yet to be evaluated on multi-centre, multi-vendor data to assess generalisability.
Findings: A dedicated AI medical tool matches the performance of multidisciplinary team-supported radiologists in prostate cancer detection and generalises to multiple sites and scanners.
Clinical relevance: This software has the potential to support the MRI process for biopsy decision-making and target identification, but future prospective studies, where lesions identified by artificial intelligence are biopsied separately, are needed.