Kartik Audhkhasi


2025

LegoSLM: Connecting LLM with Speech Encoder using CTC Posteriors
Rao Ma | Tongzhou Chen | Kartik Audhkhasi | Bhuvana Ramabhadran
Findings of the Association for Computational Linguistics: EMNLP 2025

Large-scale pre-trained speech encoders and Large Language Models (LLMs) have recently been released, achieving state-of-the-art performance on a range of spoken language processing tasks, including Automatic Speech Recognition (ASR). Continuous speech prompts and ASR error correction have been adopted to combine the two model families for better performance, but these methods are prone to suboptimal results or are inflexible. In this paper, we propose a new paradigm, LegoSLM, that bridges speech encoders and LLMs using ASR posterior matrices. The speech encoder is trained to generate Connectionist Temporal Classification (CTC) posteriors over the LLM vocabulary, which are used to reconstruct pseudo-audio embeddings by computing a weighted sum of the LLM input embeddings. These embeddings are concatenated with text embeddings in the LLM input space. Using the well-performing USM and Gemma models as an example, we demonstrate that LegoSLM yields good performance on both ASR and speech translation tasks. By connecting USM with Gemma models, we obtain an average 49% WER reduction (WERR) over the USM-CTC baseline on eight MLS test sets. The trained model also exhibits modularity in a range of settings: after fine-tuning the Gemma model weights, the speech encoder can be swapped and combined with the LLM in a zero-shot fashion. Additionally, we propose to control the decode-time influence of the USM and LLM using a softmax temperature, which proves effective in domain adaptation.
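The core operation the abstract describes, turning per-frame CTC posteriors over the LLM vocabulary into pseudo-audio embeddings via a weighted sum of the LLM's input embedding table, with a softmax temperature controlling the speech encoder's decode-time influence, can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions, not the paper's implementation; the function and variable names are hypothetical.

```python
import numpy as np

def pseudo_audio_embeddings(ctc_logits, llm_embedding_table, temperature=1.0):
    """Map per-frame CTC logits over the LLM vocabulary to pseudo-audio
    embeddings via a posterior-weighted sum of LLM input embeddings.

    Hypothetical sketch of the mechanism described in the LegoSLM abstract,
    not the authors' code.

    ctc_logits:          (frames, vocab) unnormalized scores from the encoder
    llm_embedding_table: (vocab, dim) LLM input embedding matrix
    temperature:         softmax temperature; larger values flatten the
                         posteriors, reducing the speech encoder's influence
    """
    # Temperature-scaled softmax over the vocabulary axis,
    # with max-subtraction for numerical stability.
    scaled = ctc_logits / temperature
    scaled = scaled - scaled.max(axis=-1, keepdims=True)
    posteriors = np.exp(scaled)
    posteriors /= posteriors.sum(axis=-1, keepdims=True)
    # Weighted sum: (frames, vocab) @ (vocab, dim) -> (frames, dim).
    return posteriors @ llm_embedding_table
```

The resulting (frames, dim) matrix lives in the LLM input space, so it can be concatenated with ordinary text token embeddings before being fed to the LLM.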

2013

Which ASR should I choose for my dialogue system?
Fabrizio Morbini | Kartik Audhkhasi | Kenji Sagae | Ron Artstein | Doğan Can | Panayiotis Georgiou | Shri Narayanan | Anton Leuski | David Traum
Proceedings of the SIGDIAL 2013 Conference