Enes Yavuz Ugan

2024

DECM: Evaluating Bilingual ASR Performance on a Code-switching/mixing Benchmark
Enes Yavuz Ugan | Ngoc-Quan Pham | Alexander Waibel
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Automatic Speech Recognition has made significant progress, but challenges persist. Code-switched (CSW) speech presents one such challenge, involving the mixing of multiple languages by a speaker. Even when multilingual ASR models are trained, each individual utterance usually remains monolingual. We introduce an evaluation dataset for German-English CSW, with German as the matrix language and English as the embedded language. The dataset comprises spontaneous speech from diverse domains, enabling realistic CSW evaluation in German-English. It includes splits with varying degrees of CSW to facilitate specialized model analysis. As it is difficult to collect CSW data for all language pairs, providing such evaluation data is crucial for developing and analyzing ASR models capable of generalizing across unseen pairs. Detailed data statistics are presented, and state-of-the-art (SOTA) multilingual models are evaluated, highlighting the challenges that CSW speech poses.
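
As a rough illustration of the kind of evaluation such a benchmark enables, the sketch below scores a pretrained multilingual ASR model on a handful of code-switched utterances by word error rate. The model name, audio file names, and reference transcripts are placeholders for illustration only; they are not part of the released dataset or the paper's setup.

```python
# Minimal sketch: WER of a multilingual ASR model on code-switched utterances.
# Model choice, file names, and references are illustrative placeholders.
from transformers import pipeline
import jiwer

asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")

samples = [  # hypothetical (audio file, reference transcript) pairs
    ("cs_utt_001.wav", "wir haben das meeting auf nächste woche geschoben"),
    ("cs_utt_002.wav", "der release candidate ist noch nicht stable"),
]

hypotheses = [asr(path)["text"].lower() for path, _ in samples]
references = [ref for _, ref in samples]

print("WER:", jiwer.wer(references, hypotheses))
```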

The KIT Speech Translation Systems for IWSLT 2024 Dialectal and Low-resource Track
Zhaolin Li | Enes Yavuz Ugan | Danni Liu | Carlos Mullov | Tu Anh Dinh | Sai Koneru | Alexander Waibel | Jan Niehues
Proceedings of the 21st International Conference on Spoken Language Translation (IWSLT 2024)

This paper presents KIT’s submissions to the IWSLT 2024 dialectal and low-resource track. In this work, we build systems for translating into English from speech in Maltese, Bemba, and two Arabic dialects, Tunisian and North Levantine. Under the unconstrained condition, we leverage pre-trained multilingual models by fine-tuning them on the target language pairs to address the data scarcity problem in this track. We build cascaded and end-to-end speech translation systems for the different language pairs and show that the cascaded systems bring slightly better overall performance. In addition, we find that utilizing additional data resources boosts speech recognition performance but slightly harms machine translation performance in the cascaded systems. Lastly, we show that Minimum Bayes Risk is effective in improving speech translation performance by combining the cascaded and end-to-end systems, bringing a consistent improvement of around 1 BLEU point.
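
The Minimum Bayes Risk combination mentioned above can be sketched roughly as follows: pool candidate translations from the cascaded and end-to-end systems and pick the one with the highest expected utility against the other candidates. The candidate strings below are invented, and sentence-level BLEU via sacrebleu is only one possible utility function; this is not necessarily the exact scoring used in the paper.

```python
# Hedged sketch of Minimum Bayes Risk (MBR) selection over candidates pooled
# from a cascaded and an end-to-end speech translation system.
import sacrebleu

candidates = [  # hypothetical outputs from both systems
    "the committee approved the budget yesterday",
    "the committee has approved the budget yesterday",
    "the commission approved the budget yesterday",
]

def expected_utility(hyp: str, pool: list[str]) -> float:
    # Average sentence BLEU of `hyp` against every other candidate,
    # treating the pool as samples from the model distribution.
    others = [c for c in pool if c is not hyp]
    return sum(sacrebleu.sentence_bleu(hyp, [c]).score for c in others) / len(others)

best = max(candidates, key=lambda h: expected_utility(h, candidates))
print("MBR choice:", best)
```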

2023

KIT’s Multilingual Speech Translation System for IWSLT 2023
Danni Liu | Thai Binh Nguyen | Sai Koneru | Enes Yavuz Ugan | Ngoc-Quan Pham | Tuan Nam Nguyen | Tu Anh Dinh | Carlos Mullov | Alexander Waibel | Jan Niehues
Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023)

Many existing speech translation benchmarks focus on native-English speech in high-quality recording conditions, which often do not match the conditions of real-life use cases. In this paper, we describe our speech translation system for the multilingual track of IWSLT 2023, which focuses on the translation of scientific conference talks. The test condition features accented input speech and terminology-dense content. The task requires translation into 10 languages with varying amounts of resources. In the absence of training data from the target domain, we use a retrieval-based approach (kNN-MT) for effective adaptation (+0.8 BLEU for speech translation). We also use adapters to easily integrate incremental training data from data augmentation, and show that this matches the performance of re-training. We observe that cascaded systems are more easily adaptable towards specific target domains due to their separate modules. Our cascaded speech system outperforms its end-to-end counterpart on scientific talk translation, although their performance remains similar on TED talks.
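
The retrieval-based adaptation (kNN-MT) mentioned above interpolates the MT model's next-token distribution with a distribution induced by nearest-neighbour search over a datastore of (decoder hidden state, target token) pairs. The sketch below illustrates only that interpolation step with random toy vectors; the dimensions, temperature, and mixing weight are arbitrary assumptions, not the paper's settings.

```python
# Toy sketch of the kNN-MT interpolation step with NumPy (no real MT model).
import numpy as np

rng = np.random.default_rng(0)
d, vocab, n_entries, k = 64, 1000, 5000, 8
temperature, lam = 10.0, 0.4  # assumed values, not the paper's

# Datastore: decoder hidden states (keys) and the target tokens they emitted.
keys = rng.normal(size=(n_entries, d)).astype(np.float32)
values = rng.integers(0, vocab, size=n_entries)

def knn_mt_distribution(query, p_model):
    # Retrieve the k nearest datastore entries by L2 distance.
    dists = np.sum((keys - query) ** 2, axis=1)
    nn = np.argsort(dists)[:k]
    # Turn negative distances into a distribution over the retrieved tokens.
    w = np.exp(-dists[nn] / temperature)
    w /= w.sum()
    p_knn = np.zeros(vocab)
    np.add.at(p_knn, values[nn], w)
    # Interpolate with the base model's distribution.
    return (1 - lam) * p_model + lam * p_knn

query = rng.normal(size=d).astype(np.float32)
p_model = np.full(vocab, 1.0 / vocab)  # stand-in for the MT softmax output
p = knn_mt_distribution(query, p_model)
print("next token:", int(p.argmax()))
```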