Medizinische Universität Graz - Research portal


Selected Publication:


Napravnik, M; Hržić, F; Urschler, M; Miletić, D; Štajduhar, I.
Lessons learned from RadiologyNET foundation models for transfer learning in medical radiology.
Sci Rep. 2025; 15(1): 21622. doi: 10.1038/s41598-025-05009-w [OPEN ACCESS]

Co-authors (Med Uni Graz):
Urschler Martin
Abstract:
Deep learning models require large amounts of annotated data, which are hard to obtain in the medical field, as the annotation process is laborious and depends on expert knowledge. This data scarcity hinders a model's ability to generalise effectively on unseen data, and recently, foundation models pretrained on large datasets have been proposed as a promising solution. RadiologyNET is a custom medical dataset that comprises 1,902,414 medical images covering various body parts and modalities of image acquisition. We used the RadiologyNET dataset to pretrain several popular architectures (ResNet18, ResNet34, ResNet50, VGG16, EfficientNetB3, EfficientNetB4, InceptionV3, DenseNet121, MobileNetV3Small and MobileNetV3Large). We compared the performance of ImageNet and RadiologyNET foundation models against training from randomly initialised weights on several publicly available medical datasets: (i) segmentation (LUng Nodule Analysis Challenge), (ii) regression (RSNA Pediatric Bone Age Challenge), (iii) binary classification (GRAZPEDWRI-DX and COVID-19 datasets), and (iv) multiclass classification (Brain Tumor MRI dataset). Our results indicate that RadiologyNET-pretrained models generally perform similarly to ImageNet models, with some advantages in resource-limited settings. However, ImageNet-pretrained models showed competitive performance when fine-tuned on sufficient data. The impact of modality diversity on model performance was tested, with the results varying across tasks, highlighting the importance of aligning pretraining data with downstream applications. Based on our findings, we provide guidelines for using foundation models in medical applications and publicly release our RadiologyNET-pretrained models to support further research and development in the field. The models are available at https://github.com/AIlab-RITEH/RadiologyNET-TL-models.
Find related publications in this database (using NLM MeSH Indexing)
Humans
COVID-19 - diagnostic imaging
Deep Learning
Radiology - methods
SARS-CoV-2

Find related publications in this database (Keywords)
Transfer learning
Foundation models
RadiologyNET
Model pretraining
Segmentation
Regression
Classification
© Med Uni Graz | Imprint