- AI in Medicine: Curae ex Machina
Can AI Outperform Radiologists in Trauma Centers?
A New Study Weighs In

By Campion Quinn, MD
Assessing the Performance of Models from the 2022 RSNA Cervical Spine Fracture Detection Competition at a Level I Trauma Center
Purpose: The article evaluates the performance of the top seven models from the RSNA 2022 Cervical Spine Fracture Detection competition. These models were tested on a clinical dataset from a Level I trauma center that included non-contrast and contrast-enhanced CT scans. The goal was to determine how these models perform in a real-world clinical setting, particularly in detecting cervical spine fractures.
Materials and Methods: The study retrospectively reviewed 1,828 CT scans (from 1,779 patients) from an emergency department. These scans included 130 positive fracture cases and 1,699 negative cases. Both non-contrast (1,308) and contrast-enhanced (521) scans were included. The study used metrics such as the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity to assess model performance. Additionally, a neuroradiologist analyzed false positives and false negatives to understand their clinical implications.
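For readers less familiar with these metrics, they can all be computed directly from a model's scores and the ground-truth labels. The sketch below uses made-up illustrative numbers, not data from the study; it implements AUC via its rank interpretation (the probability that a random fracture case scores higher than a random non-fracture case):

```python
def sensitivity(tp, fn):
    # True positive rate: fraction of actual fractures the model flags.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True negative rate: fraction of fracture-free scans correctly cleared.
    return tn / (tn + fp)

def auc(pos_scores, neg_scores):
    # AUC = probability that a randomly chosen positive case scores higher
    # than a randomly chosen negative case (ties count as 0.5).
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model outputs, NOT values from the study:
fracture_scores = [0.9, 0.8, 0.75, 0.4]     # scans with a fracture
no_fracture_scores = [0.1, 0.3, 0.35, 0.6]  # scans without a fracture

print(auc(fracture_scores, no_fracture_scores))  # 0.9375
print(sensitivity(tp=3, fn=1))                   # 0.75: 3 caught, 1 missed
print(specificity(tn=3, fp=1))                   # 0.75: 3 cleared, 1 false alarm
```

Note that sensitivity and specificity depend on the threshold chosen to turn a score into a fracture/no-fracture call, which is why the study reports wide ranges for them across models, while AUC summarizes performance across all thresholds.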
Results:
Noncontrast CT Scans: The models performed well, with an average AUC of 0.89 (range 0.81–0.92). Sensitivity varied widely (30.9%–80.0%), while specificity was high (82.1%–99.0%).
Contrast-enhanced CT Scans: Model performance dropped slightly on contrast-enhanced scans compared with non-contrast scans, with a mean AUC of 0.88 (range 0.76–0.94). Sensitivity was generally higher (42.7%–100%), but specificity was lower and more variable (16.4%–92.8%).
Error Analysis: The models identified ten fractures missed by radiologists. False positives were more common in contrast-enhanced scans, especially in patients with degenerative changes. False negatives were often associated with conditions such as osteopenia and degenerative changes.
Conclusion: The winning models from the RSNA 2022 competition demonstrated robust performance in detecting cervical spine fractures in a real-world clinical dataset, especially on non-contrast scans. Although the models performed slightly worse on contrast-enhanced scans, they still showed potential as clinical support tools, particularly in identifying fractures missed by radiologists. Further research and validation are needed before these models can be integrated into clinical practice.
This article is essential reading for physicians for several key reasons:
Improved Diagnostic Accuracy: The study highlights how AI models for cervical spine fracture detection can identify fractures that radiologists may miss, particularly in challenging cases. This could help reduce diagnostic errors and improve patient outcomes, especially in high-stakes, trauma-related scenarios.
Clinical Integration of AI: The article evaluates the performance of AI models in real-world clinical settings, specifically at a level I trauma center. This is crucial for physicians to understand how AI can be reliably integrated into emergency care to support diagnostic accuracy.
Radiologist Support, Not Replacement: Physicians, particularly radiologists, can benefit from AI tools that complement their expertise. The article emphasizes that AI models are not designed to replace radiologists but to assist them in making quicker, more accurate diagnoses, reducing their workload, and improving efficiency.
Handling Complex Cases: The article discusses how the AI models performed on contrast-enhanced and non-contrast CT scans, identifying fractures even in the presence of degenerative changes or osteopenia. These conditions often complicate diagnosis for human radiologists, suggesting that AI could be particularly useful in complex or ambiguous cases.
Potential for Broader Application: The AI models tested in this study were validated in a clinical environment, demonstrating their potential beyond experimental settings. This gives physicians insight into the future use of AI in trauma and emergency radiology, opening possibilities for broader clinical applications.
AI as a Learning Tool: For medical educators and residents, AI models provide an opportunity to learn from a system that highlights areas prone to human error. This could play a significant role in training the next generation of radiologists and physicians in AI-augmented diagnostic processes.
By understanding how these AI models perform in clinical environments and their potential to improve diagnostic accuracy, physicians can better prepare for the future integration of AI in medical practice.