Selected Publication:
Mallol-Ragolta, A; Pokorny, FB; Bartl-Pokorny, KD; Semertzidou, A; Schuller, BW.
Triplet Loss-Based Models for COVID-19 Detection from Vocal Sounds.
Annu Int Conf IEEE Eng Med Biol Soc. 2022; 2022: 998-1001.
DOI: 10.1109/EMBC48229.2022.9871125
- Co-authors Med Uni Graz:
- Bartl-Pokorny Katrin Daniela
- Pokorny Florian
- Abstract:
- This work focuses on the automatic detection of COVID-19 from the analysis of vocal sounds, including sustained vowels, coughs, and speech recorded while reading a short text. Specifically, we use Mel-spectrogram representations of these acoustic signals to train neural network-based models for the task at hand. The extraction of deep learnt representations from the Mel-spectrograms is performed with Convolutional Neural Networks (CNNs). In an attempt to guide the training of the embedded representations towards more separable and robust inter-class representations, we explore the use of a triplet loss function. The experiments are conducted on the Your Voice Counts dataset, a new dataset of recordings from German speakers collected using smartphones. The results obtained support the suitability of triplet loss-based models for detecting COVID-19 from vocal sounds. The best Unweighted Average Recall (UAR) of 66.5% is obtained using a triplet loss-based model exploiting vocal sounds recorded while reading.
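- The triplet loss mentioned in the abstract can be illustrated with a minimal sketch. The margin value and the toy embedding vectors below are illustrative assumptions, not values from the paper; the idea is simply that the loss pulls an anchor embedding towards a same-class ("positive") embedding and pushes it away from a different-class ("negative") one by at least a margin.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: max(0, d(a, p) - d(a, n) + margin).

    Encourages d(anchor, positive) + margin <= d(anchor, negative),
    i.e. same-class embeddings cluster together while different-class
    embeddings stay separated. The margin of 1.0 is an assumed default.
    """
    return max(0.0, euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)

# Toy 2-D embeddings: anchor and positive from the same class
# (e.g. two COVID-positive recordings), negative from the other class.
a = [0.1, 0.2]
p = [0.15, 0.25]
n = [0.9, 0.8]
print(triplet_loss(a, p, n))  # small loss: the triplet is nearly satisfied
```

In practice the embeddings would be the CNN outputs computed from the Mel-spectrograms, and the loss would be minimised over many sampled triplets during training.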