Ankur Gandhe


2024

Multi-Modal Retrieval For Large Language Model Based Speech Recognition
Aditya Gourav | Jari Kolehmainen | Prashanth Shivakumar | Yile Gu | Grant Strimel | Ankur Gandhe | Ariya Rastrow | Ivan Bulyko
Findings of the Association for Computational Linguistics: ACL 2024

Retrieval is a widely adopted approach for improving language models by leveraging external information. As the field moves towards multi-modal large language models, it is important to extend pure text-based retrieval methods to incorporate other modalities, enabling applications across the wide spectrum of machine learning tasks and data types. In this work, we propose multi-modal retrieval with two approaches: kNN-LM and cross-attention techniques. We demonstrate the effectiveness of our retrieval approaches empirically by applying them to automatic speech recognition tasks with access to external information. Under this setting, we show that speech-based multi-modal retrieval outperforms text-based retrieval and improves word error rate over the multi-modal language model baseline. Furthermore, we achieve state-of-the-art recognition results on the Spoken-SQuAD question answering dataset.
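
As a concrete illustration of the kNN-LM side of this approach, below is a minimal sketch of the standard kNN-LM interpolation that such retrieval builds on, assuming a datastore of (context embedding, next token) pairs; in the paper's multi-modal setting the keys could come from audio rather than text encoders. The array names and the parameters `k`, `lam`, and `temp` are illustrative placeholders, not details taken from the paper.

```python
import numpy as np

def knn_lm_next_token_probs(query, datastore_keys, datastore_values,
                            lm_probs, vocab_size, k=8, lam=0.25, temp=1.0):
    """Interpolate the base LM's next-token distribution with a kNN
    distribution built from the k nearest datastore entries."""
    # L2 distance from the query context embedding to every stored key.
    dists = np.linalg.norm(datastore_keys - query, axis=1)
    nearest = np.argsort(dists)[:k]

    # Softmax over negative distances: closer neighbours weigh more.
    logits = -dists[nearest] / temp
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Accumulate each neighbour's weight onto its recorded next token.
    knn_probs = np.zeros(vocab_size)
    for w, tok in zip(weights, datastore_values[nearest]):
        knn_probs[tok] += w

    # p(y|x) = lam * p_kNN(y|x) + (1 - lam) * p_LM(y|x)
    return lam * knn_probs + (1.0 - lam) * lm_probs

# Toy usage: 100 stored contexts with 16-dim embeddings, vocab of 50.
rng = np.random.default_rng(0)
keys = rng.normal(size=(100, 16))
vals = rng.integers(0, 50, size=100)
lm = np.full(50, 1.0 / 50)  # uniform base LM, for illustration only
probs = knn_lm_next_token_probs(rng.normal(size=16), keys, vals, lm, 50)
assert np.isclose(probs.sum(), 1.0)
```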

2021

Attention-based Contextual Language Model Adaptation for Speech Recognition
Richard Diehl Martinez | Scott Novotney | Ivan Bulyko | Ariya Rastrow | Andreas Stolcke | Ankur Gandhe
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2019

Neural Text Normalization with Subword Units
Courtney Mansfield | Ming Sun | Yuzong Liu | Ankur Gandhe | Björn Hoffmeister
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Industry Papers)

Text normalization (TN) is an important step in conversational systems. It converts written text to its spoken form to facilitate speech recognition, natural language understanding, and text-to-speech synthesis. Finite state transducers (FSTs) are commonly used to build grammars that handle text normalization, but translating linguistic knowledge into grammars requires extensive effort. In this paper, we frame TN as a machine translation task and tackle it with sequence-to-sequence (seq2seq) models. Whereas previous research focuses on normalizing a word (or phrase) with the help of limited word-level context, our approach directly normalizes full sentences. We find that subword models with additional linguistic features yield the best performance (a word error rate of 0.17%).
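
To make the seq2seq framing concrete, here is a toy sketch of how written-form source sentences and spoken-form target sentences can both be segmented into subword units before training a standard encoder-decoder model. The greedy longest-match segmenter and the tiny vocabulary below are crude stand-ins for a learned BPE/sentencepiece vocabulary, not the paper's actual pipeline.

```python
def to_subwords(sentence, vocab):
    """Greedy longest-match segmentation into subword units,
    falling back to single characters (a crude BPE stand-in)."""
    units = []
    for word in sentence.split():
        i = 0
        while i < len(word):
            for j in range(len(word), i, -1):
                piece = word[i:j]
                if piece in vocab or j == i + 1:
                    # Mark word-internal continuation pieces with "##".
                    units.append(piece if i == 0 else "##" + piece)
                    i = j
                    break
    return units

# Hypothetical subword vocabulary shared by source and target sides.
vocab = {"Dr", ".", "15", "th", "St", "fif", "teen",
         "Smith", "lives", "at", "street", "doctor"}

written = "Dr. Smith lives at 15th St."
spoken  = "doctor Smith lives at fifteenth street"

src = to_subwords(written, vocab)  # fed to the encoder
tgt = to_subwords(spoken, vocab)   # the decoder learns to emit this
print(src)  # ['Dr', '##.', 'Smith', 'lives', 'at', '15', '##th', 'St', '##.']
print(tgt)  # ['doctor', 'Smith', 'lives', 'at', 'fif', '##teen', '##th', 'street']
```

The point of the subword units is visible in "fifteenth": the model can compose it from pieces ("fif", "##teen", "##th") it has seen in other numbers, rather than needing every spoken form in a closed word vocabulary.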

2013

Hypothesis Refinement Using Agreement Constraints in Machine Translation
Ankur Gandhe | Rashmi Gangadharaiah
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2011

A Word Reordering Model for Improved Machine Translation
Karthik Visweswariah | Rajakrishnan Rajkumar | Ankur Gandhe | Ananthakrishnan Ramanathan | Jiri Navratil
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

Handling verb phrase morphology in highly inflected Indian languages for Machine Translation
Ankur Gandhe | Rashmi Gangadharaiah | Karthik Visweswariah | Ananthakrishnan Ramanathan
Proceedings of the 5th International Joint Conference on Natural Language Processing

Clause-Based Reordering Constraints to Improve Statistical Machine Translation
Ananthakrishnan Ramanathan | Pushpak Bhattacharyya | Karthik Visweswariah | Kushal Ladha | Ankur Gandhe
Proceedings of the 5th International Joint Conference on Natural Language Processing