Anastasios Drosou
2025
On-device System of Compositional Multi-tasking in Large Language Models
Ondrej Bohdal | Konstantinos Theodosiadis | Asterios Mpatziakas | Dimitrios Filippidis | Iro Spyrou | Christos Zonios | Anastasios Drosou | Dimosthenis Ioannidis | Kyenghun Lee | Jijoong Moon | Hyeonmok Ko | Mete Ozay | Umberto Michieli
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Large language models (LLMs) are commonly adapted for diverse downstream tasks via parameter-efficient fine-tuning techniques such as Low-Rank Adapters (LoRA). While adapters can be combined to handle multiple tasks separately, standard approaches struggle when targeting the simultaneous execution of complex tasks, such as generating a translated summary from a long conversation. To address this challenge, we propose a novel approach tailored specifically for compositional multi-tasking scenarios involving summarization and translation. Our technique involves adding a learnable projection layer on top of the combined summarization and translation adapters. This design enables effective integration while maintaining efficiency through reduced computational overhead compared to alternative strategies requiring extensive retraining or sequential processing. We demonstrate the practical viability of our method within an on-device environment by developing an Android app capable of executing compositional tasks seamlessly. Experimental results indicate our solution is both accurate and fast in cloud-based and on-device implementations, highlighting the potential benefits of adopting our framework in real-world applications demanding high-speed operation under resource constraints.
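A minimal PyTorch sketch of the idea described in this abstract: the outputs of two pretrained, frozen task-specific LoRA adapters are concatenated and passed through a small learnable projection, which is the only component trained for the compositional (translated-summary) task. The abstract does not specify the exact placement or training recipe, so all class names, dimensions, and the residual wiring below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """Standard low-rank adapter delta: scale * (x @ A^T @ B^T)."""
    def __init__(self, dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(dim, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * (x @ self.A.T @ self.B.T)

class CompositionalHead(nn.Module):
    """Hypothetical composition: frozen summarization and translation
    adapters combined by a learnable projection over their concatenated
    outputs, added back to the hidden states as a residual."""
    def __init__(self, dim: int):
        super().__init__()
        self.summ = LoRAAdapter(dim)    # pretrained, frozen
        self.trans = LoRAAdapter(dim)   # pretrained, frozen
        for p in list(self.summ.parameters()) + list(self.trans.parameters()):
            p.requires_grad = False
        self.proj = nn.Linear(2 * dim, dim)  # the only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        combined = torch.cat([self.summ(x), self.trans(x)], dim=-1)
        return x + self.proj(combined)

# Only `proj` receives gradients during compositional fine-tuning.
layer = CompositionalHead(dim=768)
out = layer(torch.randn(2, 32, 768))  # (batch, seq, hidden)
```

Training a single small projection, rather than retraining full adapters for every task pair, is what keeps the reported computational overhead low.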
Retrieval Augmented Generation based context discovery for ASR
Siskos Dimitrios | Stavros Papadopoulos | Pablo Peso Parada | Jisi Zhang | Karthikeyan Saravanan | Anastasios Drosou
Findings of the Association for Computational Linguistics: EMNLP 2025
This work investigates retrieval augmented generation as an efficient strategy for automatic context discovery in context-aware Automatic Speech Recognition (ASR) systems, in order to improve transcription accuracy in the presence of rare or out-of-vocabulary terms. Since identifying the right context automatically remains an open challenge, we propose an efficient embedding-based retrieval approach for automatic context discovery in ASR. To contextualize its effectiveness, two alternatives based on large language models (LLMs) are also evaluated: (1) LLM-based context generation via prompting, and (2) post-recognition transcript correction using LLMs. Experiments on TED-LIUMv3, Earnings21 and SPGISpeech demonstrate that the proposed approach reduces WER by up to 17% (percentage difference) relative to using no context, while oracle context yields a reduction of up to 24.1%.
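A minimal sketch of embedding-based context discovery as described in this abstract: candidate context entries (e.g., rare or in-domain terms) are embedded offline, and at inference the top-k entries nearest to a query embedding are retrieved and supplied as biasing context to the recognizer. The toy `embed` function and the commented-out `recognize` call are hypothetical placeholders, not the paper's models or API.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding; stands in for a real sentence encoder."""
    seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
    return np.random.default_rng(seed).standard_normal(dim)

def cosine_top_k(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> list[int]:
    """Indices of the k corpus rows most similar (by cosine) to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return list(np.argsort(c @ q)[::-1][:k])

# Offline: embed candidate context entries (rare terms, in-domain phrases).
entries = ["EBITDA", "wav2vec", "Earnings21", "amortization"]
corpus_emb = np.stack([embed(e) for e in entries])

# Online: embed a first-pass hypothesis (or metadata) and retrieve context.
idx = cosine_top_k(embed("the call covered quarterly earnings"), corpus_emb, k=2)
context = [entries[i] for i in idx]
# transcript = recognize(audio, biasing_context=context)  # hypothetical ASR call
```

Compared with the two LLM-based alternatives evaluated in the paper, retrieval of this kind needs only a single encoder forward pass per query, which is what makes it the efficient option.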