Medizinische Universität Graz - Research portal


Selected Publication:


Till, H; Esposito, C; Yeung, CK; Patkowski, D; Shehata, S; Rothenberg, S; Singer, G; Till, T.
Artificial intelligence based surgical support for experimental laparoscopic Nissen fundoplication.
Front Pediatr. 2025; 13:1584628. doi: 10.3389/fped.2025.1584628 [OPEN ACCESS]

Leading authors Med Uni Graz
Till Holger
Co-authors Med Uni Graz
Singer Georg
Till Tristan

Abstract:
BACKGROUND: Computer vision (CV), a subset of artificial intelligence (AI), enables deep learning models to detect specific events within digital images or videos. Especially in medical imaging, AI/CV holds significant promise for analyzing data from x-rays, CT scans, and MRIs. However, the application of AI/CV to support surgery has progressed more slowly. This study presents the development of the first image-based AI/CV model for classifying quality indicators of laparoscopic Nissen fundoplication (LNF).

MATERIALS AND METHODS: Six visible quality indicators (VQIs) for Nissen fundoplication were predefined as parameters to build datasets comprising the correct configuration (360° fundoplication) and incorrect configurations (incomplete, twisted, too long (>four knots), too loose, and malpositioned wraps at/below the gastroesophageal junction). In a porcine model, multiple iterations of each VQI were performed. A total of 57 video sequences were processed, extracting 3,138 images at 0.5-second intervals. These images were annotated according to their respective VQIs. The EfficientNet architecture, a standard deep learning model, was employed to train an ensemble of image classifiers, as well as a multi-class classifier, to distinguish between correct and incorrect Nissen wraps.

RESULTS: The AI/CV models demonstrated strong performance in predicting image-based VQIs for Nissen fundoplication. The individual image classifiers achieved an average F1-score of 0.9738 ± 0.1699 when the decision boundary was set at the Equal Error Rate (EER). A similar performance was observed with the multi-class classifier. The results remained robust despite extensive image augmentation: for three of the five classifiers the results remained identical, while detection of incomplete and too loose LNFs showed a slight decline in predictive power.

CONCLUSION: This experimental study demonstrates that an AI/CV algorithm can effectively detect VQIs in digital images of Nissen fundoplications. This proof of concept does not aim to evaluate clinical Nissen fundoplication, but provides experimental evidence that AI/CV models can be trained to classify laparoscopic images of various surgical configurations. In the future, this concept could be developed into AI-based real-time surgical support to enhance surgical outcomes and patient safety.
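
The dataset construction step described above (still images extracted from video sequences at 0.5-second intervals) can be illustrated with a short sketch. The snippet below is a hypothetical example using OpenCV; the file names, output paths, and fallback frame rate are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: extract still frames from a surgical video at
# 0.5-second intervals, as described in the abstract (not the authors' code).
import cv2
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, interval_s: float = 0.5) -> int:
    """Save one frame every `interval_s` seconds; return the number of frames written."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS metadata is missing (assumption)
    step = max(1, round(fps * interval_s))       # number of source frames between saved images
    Path(out_dir).mkdir(parents=True, exist_ok=True)

    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Example (hypothetical file name): extract_frames("lnf_sequence_01.mp4", "frames/lnf_sequence_01")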
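
Likewise, a single binary VQI classifier with an EER-based decision boundary can be sketched as follows. This is a minimal sketch assuming PyTorch/torchvision for the EfficientNet backbone and scikit-learn for the ROC computation; the abstract does not specify the authors' actual framework, EfficientNet variant, or training setup.

# Hypothetical sketch: one binary "correct vs. incorrect wrap" classifier and
# an Equal Error Rate (EER) decision threshold; not the authors' implementation.
import numpy as np
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_curve, f1_score

# EfficientNet-B0 backbone (assumed variant) with a single output logit
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

def eer_threshold(y_true: np.ndarray, scores: np.ndarray) -> float:
    """Score threshold where the false-positive and false-negative rates are (approximately) equal."""
    fpr, tpr, thresholds = roc_curve(y_true, scores)
    idx = np.nanargmin(np.abs(fpr - (1.0 - tpr)))   # operating point closest to FPR == FNR
    return float(thresholds[idx])

def f1_at_eer(y_true: np.ndarray, scores: np.ndarray) -> float:
    """F1 score when the decision boundary is set at the EER operating point."""
    thr = eer_threshold(y_true, scores)
    return f1_score(y_true, (scores >= thr).astype(int))

# scores would be sigmoid(model(images)) on a held-out validation set; one such
# classifier per incorrect VQI would form the ensemble, with their F1 scores
# averaged to obtain an ensemble-level figure like the one reported above.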

Find related publications in this database (Keywords)
artificial intelligence (AI)
computer vision (CV)
visual quality indicators (VQIs)
Nissen fundoplication
EfficientNet