Leibny Paola Garcia
Also published as: Leibny Paola Garcia Perera
2026
CSPB: Conversational Speech Processing Benchmark for Self-supervised Speech Models
Zili Huang | Matthew Maciejewski | Leibny Paola Garcia Perera | Shinji Watanabe | Sanjeev Khudanpur
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advances in self-supervised learning (SSL) have led to powerful speech representation models, yet their robustness in real-world conversational settings remains largely untested. Most existing benchmarks focus on clean, single-speaker, single-channel audio, failing to reflect the complexities of natural human interaction—where background noise, reverberation, and overlapping speech are the norm. To bridge these critical gaps, we present the Conversational Speech Processing Benchmark (CSPB), a new benchmark designed to assess the robustness of SSL speech models in realistic conversational scenarios. CSPB is constructed from four multi-party datasets—AMI, AliMeeting, MMCSG, and DiPCo—and supports both single-channel and multi-channel evaluation. By releasing CSPB as an open-source toolkit, we aim to establish a unified framework for evaluating and advancing robust, spatially-aware self-supervised speech models.
2025
HENT-SRT: Hierarchical Efficient Neural Transducer with Self-Distillation for Joint Speech Recognition and Translation
Amir Hussein | Cihan Xiao | Matthew Wiesner | Dan Povey | Leibny Paola Garcia | Sanjeev Khudanpur
Proceedings of the 22nd International Conference on Spoken Language Translation (IWSLT 2025)
Neural transducers (NT) provide an effective framework for speech streaming, demonstrating strong performance in automatic speech recognition (ASR). However, the application of NT to speech translation (ST) remains challenging, as existing approaches struggle with word reordering and performance degradation when jointly modeling ASR and ST, resulting in a gap with attention-based encoder-decoder (AED) models. Existing NT-based ST approaches also suffer from high computational training costs. To address these issues, we propose HENT-SRT (Hierarchical Efficient Neural Transducer for Speech Recognition and Translation), a novel framework that factorizes the ASR and translation tasks to better handle reordering. To ensure robust ST while preserving ASR performance, we use self-distillation with CTC consistency regularization. Moreover, we improve computational efficiency by incorporating best practices from ASR transducers, including a down-sampled hierarchical encoder, a stateless predictor, and a pruned transducer loss to reduce training complexity. Finally, we introduce a blank penalty during decoding, reducing deletions and improving translation quality. Our approach is evaluated on three conversational datasets (Arabic, Spanish, and Mandarin), achieving new state-of-the-art performance among NT models and substantially narrowing the gap with AED-based systems.
2024
Where are you from? Geolocating Speech and Applications to Language Identification
Patrick Foley | Matthew Wiesner | Bismarck Bamfo Odoom | Leibny Paola Garcia | Kenton Murray | Philipp Koehn
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We train models to answer the question, Where are you from? and show how such models can be repurposed for language identification (LID). To our knowledge, this paper is the first to introduce data sources, methods, and models to tackle the task of geolocation of speech at a global scale, and the first to explore using geolocation as a proxy task for LID. Specifically, we explore whether radio broadcasts with known origin can be used to train regression- and classification-based models for geolocating speech. We build models on top of self-supervised pretrained models, using attention pooling to qualitatively verify that the model geolocates the speech itself, and not other channel artifacts. The best geolocation models localize speaker origin to within around 650 km. We confirm the value of speech geolocation as a proxy task by using speech geolocation models for zero-shot LID. Finally, we show that fine-tuning geolocation models for LID outperforms fine-tuning pretrained Wav2Vec2.0 models, and achieves state-of-the-art performance on the FLEURS benchmark.
Speaking in Wavelet Domain: A Simple and Efficient Approach to Speed up Speech Diffusion Model
Xiangyu Zhang | Daijiao Liu | Hexin Liu | Qiquan Zhang | Hanyu Meng | Leibny Paola Garcia | Eng Siong Chng | Lina Yao
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Recently, Denoising Diffusion Probabilistic Models (DDPMs) have attained leading performance across a diverse range of generative tasks. However, in the field of speech synthesis, although DDPMs exhibit impressive performance, their prolonged training duration and substantial inference costs hinder practical deployment. Existing approaches primarily focus on enhancing inference speed, while approaches to accelerate training, a key factor in the costs associated with adding or customizing voices, often necessitate complex modifications to the model, compromising their universal applicability. To address these challenges, we pose an inquiry: is it possible to enhance the training/inference speed and performance of DDPMs by modifying the speech signal itself? In this paper, we double the training and inference speed of speech DDPMs by simply redirecting the generative target to the wavelet domain. This method not only achieves comparable or superior performance to the original model in speech synthesis tasks but also demonstrates its versatility. By investigating and utilizing different wavelet bases, our approach proves effective not just in speech synthesis, but also in speech enhancement.
ConEC: Earnings Call Dataset with Real-world Contexts for Benchmarking Contextual Speech Recognition
Ruizhe Huang | Mahsa Yarmohammadi | Jan Trmal | Jing Liu | Desh Raj | Leibny Paola Garcia | Alexei Ivanov | Patrick Ehlen | Mingzhi Yu | Ariya Rastrow | Dan Povey | Sanjeev Khudanpur
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Knowing the particular context associated with a conversation can help improve the performance of an automatic speech recognition (ASR) system. For example, if we are provided with a list of in-context words or phrases during inference, such as the speaker's contacts or recent song playlists, we can bias the recognition process towards this list. There are many works addressing contextual ASR; however, there are few publicly available real-world benchmarks for evaluation, making it difficult to compare different solutions. To this end, we provide a corpus ("ConEC") and baselines to evaluate contextual ASR approaches, grounded in real-world applications. The ConEC corpus is based on public-domain earnings calls (ECs) and associated supplementary materials, such as presentation slides, earnings news releases, and a list of meeting participants' names and affiliations. We demonstrate that such real contexts are noisier than artificially synthesized contexts that contain the ground truth, yet they still leave great room for future improvement of contextual ASR technology.
2023
Co-authors
- Sanjeev Khudanpur 3
- Hexin Liu 2
- Dan Povey 2
- Matthew Wiesner 2
- Xiangyu Zhang 2
- Bismarck Bamfo Odoom 1
- Wenhan Chao 1
- Eng Siong Chng 1
- Patrick Ehlen 1
- Patrick Foley 1
- Zili Huang 1
- Ruizhe Huang 1
- Amir Hussein 1
- Alexei V. Ivanov 1
- Philipp Koehn 1
- Shuyue Stella Li 1
- Daijiao Liu 1
- Jing Liu 1
- Matthew Maciejewski 1
- Hanyu Meng 1
- Kenton Murray 1
- Desh Raj 1
- Ariya Rastrow 1
- Jan Trmal 1
- Shinji Watanabe 1
- Cihan Xiao 1
- Beining Xu 1
- Lina Yao 1
- Mahsa Yarmohammadi 1
- Mingzhi Yu 1
- Qiquan Zhang 1