Selected publication:
Till, T; Scherkl, M; Stranger, N; Singer, G; Hankel, S; Flucher, C; Hržić, F; Štajduhar, I; Tschauner, S.
Impact of test set composition on AI performance in pediatric wrist fracture detection in X-rays.
Eur Radiol. 2025;
doi: 10.1007/s00330-025-11669-z
- Lead authors (Med Uni Graz):
  - Stranger Nikolaus
  - Till Tristan
- Co-authors (Med Uni Graz):
  - Flucher Christina
  - Hankel Saskia
  - Scherkl Mario
  - Singer Georg
  - Tschauner Sebastian
- Abstract:
  OBJECTIVES: To evaluate how different test set sampling strategies (random selection and balanced sampling) affect the performance of artificial intelligence (AI) models in pediatric wrist fracture detection using radiographs, aiming to highlight the need for standardization in test set design.
  MATERIALS AND METHODS: This retrospective study utilized the open-source GRAZPEDWRI-DX dataset of 6091 pediatric wrist radiographs. Two test sets, each containing 4588 images, were constructed: one using a balanced approach based on case difficulty, projection type, and fracture presence, and the other using random selection. EfficientNet and YOLOv11 models were trained and validated on 18,762 radiographs and tested on both sets. Binary classification and object detection tasks were evaluated using metrics such as precision, recall, F1 score, AP50, and AP50-95. Statistical comparisons between test sets were performed using nonparametric tests.
  RESULTS: Performance metrics decreased significantly in the balanced test set with more challenging cases. For example, the precision of the YOLOv11 models decreased from 0.95 in the random set to 0.83 in the balanced set. Similar trends were observed for recall, accuracy, and F1 score, indicating that models trained on easy-to-recognize cases performed poorly on more complex ones. These results were consistent across all model variants tested.
  CONCLUSION: AI models for pediatric wrist fracture detection exhibit reduced performance when tested on balanced datasets containing more difficult cases, compared to randomly selected cases. This highlights the importance of constructing representative and standardized test sets that account for clinical complexity to ensure robust AI performance in real-world settings.
  KEY POINTS: Question: Do different sampling strategies based on sample complexity influence deep learning models' performance in fracture detection? Findings: AI performance in pediatric wrist fracture detection drops significantly when tested on balanced datasets with more challenging cases, compared to randomly selected cases. Clinical relevance: Without standardized and validated AI test datasets that reflect clinical complexity, performance metrics may be overestimated, limiting the utility of AI in real-world settings.