GPT-4 for detecting errors in radiology reports


A retrospective German study explored the potential of GPT-4 to identify errors in radiology reports, comparing its performance against that of human radiologists in terms of accuracy, time efficiency, and cost-effectiveness.

A total of 150 intentional errors from five common categories (omission, insertion, spelling, side confusion, and other) were inserted into 100 of 200 radiology reports. GPT-4 and six radiologists (two senior radiologists, two attending physicians, and two residents) were tasked with identifying these errors.
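The paper does not publish the exact prompt used, so the following is only a minimal sketch of how such an error check might be issued through the OpenAI Python SDK; the model name, system prompt wording, and report placeholder are assumptions for illustration.

```python
# Hypothetical sketch: asking GPT-4 to flag errors in one radiology report.
# The error categories mirror the study's five categories; the actual prompt
# used by the authors is not published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

report_text = "..."  # placeholder for one radiology report

response = client.chat.completions.create(
    model="gpt-4",  # assumed model identifier
    messages=[
        {
            "role": "system",
            "content": (
                "You are a radiology report proofreader. Check the report for "
                "errors in five categories: omission, insertion, spelling, "
                "side confusion, and other. List each error with its category, "
                "or state that no errors were found."
            ),
        },
        {"role": "user", "content": report_text},
    ],
    temperature=0,  # deterministic output is preferable for error checking
)

print(response.choices[0].message.content)
```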

GPT-4 matched the overall error detection performance of the radiologists and significantly outperformed them in processing time and cost-efficiency. It detected 82.7% of the inserted errors, a rate comparable to that of attending physicians and residents, and required far less time per report (approximately 3.5 seconds) than the fastest human reader (25.1 seconds).

The use of GPT-4 to detect errors in radiology reports could significantly reduce workload and costs in radiology departments. The study suggests that integrating AI tools such as GPT-4 into routine clinical practice could provide a supportive tool for radiologists and potentially improve patient outcomes by minimizing report errors.

Read full study


Potential of GPT-4 for Detecting Errors in Radiology Reports: Implications for Reporting Accuracy

Radiology, 2024

Abstract

Background: Errors in radiology reports may occur because of resident-to-attending discrepancies, speech recognition inaccuracies, and large workload. Large language models, such as GPT-4 (ChatGPT; OpenAI), may assist in generating reports.

Purpose: To assess the effectiveness of GPT-4 in identifying common errors in radiology reports, focusing on performance, time, and cost-efficiency.

Materials and Methods: In this retrospective study, 200 radiology reports (radiography and cross-sectional imaging [CT and MRI]) were compiled between June 2023 and December 2023 at one institution. There were 150 errors from five common error categories (omission, insertion, spelling, side confusion, and other) intentionally inserted into 100 of the reports and used as the reference standard. Six radiologists (two senior radiologists, two attending physicians, and two residents) and GPT-4 were tasked with detecting these errors. Overall error detection performance, error detection in the five error categories, and reading time were assessed using Wald χ2 tests and paired-sample t tests.
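As a worked illustration of the statistics named above, here is a minimal sketch of a paired-sample t test with an accompanying Cohen d for the reading-time comparison, using NumPy and SciPy; the per-report timings are synthetic placeholders, since the study's raw data are not public.

```python
# Sketch of the reading-time comparison: paired-sample t test plus Cohen d.
# All timings below are synthetic stand-ins, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_reports = 200

# Hypothetical seconds per report for GPT-4 and one human reader.
gpt4_times = rng.normal(loc=3.5, scale=0.5, size=n_reports)
human_times = rng.normal(loc=25.1, scale=20.1, size=n_reports).clip(min=1.0)

t_stat, p_value = stats.ttest_rel(gpt4_times, human_times)

# Cohen d for paired samples: mean of the differences over their SD.
diff = gpt4_times - human_times
cohen_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.3g}, Cohen d = {cohen_d:.2f}")
```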

Results: GPT-4 (detection rate, 82.7%; 124 of 150; 95% CI: 75.8, 87.9) matched the average detection performance of radiologists independent of their experience (senior radiologists, 89.3% [134 of 150; 95% CI: 83.4, 93.3]; attending physicians, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; residents, 80.0% [120 of 150; 95% CI: 72.9, 85.6]; P value range, .522–.99). One senior radiologist outperformed GPT-4 (detection rate, 94.7%; 142 of 150; 95% CI: 89.8, 97.3; P = .006). GPT-4 required less processing time per radiology report than the fastest human reader in the study (mean reading time, 3.5 seconds ± 0.5 [SD] vs 25.1 seconds ± 20.1, respectively; P < .001; Cohen d = −1.08). The use of GPT-4 resulted in lower mean correction cost per report than the most cost-efficient radiologist ($0.03 ± 0.01 vs $0.42 ± 0.41; P < .001; Cohen d = −1.12).
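The abstract does not state which confidence interval method was used, but the reported bounds for GPT-4 match a Wilson score interval computed from the raw counts, as this short check with statsmodels shows (the method choice is an inference, not a claim from the paper):

```python
# Recomputing GPT-4's 95% CI from the raw counts (124 of 150 errors found).
# The Wilson score interval reproduces the reported bounds of 75.8 and 87.9;
# the paper itself does not name its CI method.
from statsmodels.stats.proportion import proportion_confint

detected, total = 124, 150
low, high = proportion_confint(detected, total, alpha=0.05, method="wilson")

print(f"detection rate = {detected / total:.1%}")  # 82.7%
print(f"95% CI = ({low:.1%}, {high:.1%})")         # (75.8%, 87.9%)
```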

Conclusion: The radiology report error detection rate of GPT-4 was comparable with that of radiologists, potentially reducing work hours and cost.