Paras Sharma
2025
Contextual ASR Error Handling with LLMs Augmentation for Goal-Oriented Conversational AI
Yuya Asano | Sabit Hassan | Paras Sharma | Anthony B. Sicilia | Katherine Atwell | Diane Litman | Malihe Alikhani
Proceedings of the 31st International Conference on Computational Linguistics: Industry Track
General-purpose automatic speech recognition (ASR) systems do not always perform well in goal-oriented dialogue. Existing ASR correction methods rely on prior user data or named entities. We extend correction to tasks that have no prior user data and exhibit linguistic flexibility such as lexical and syntactic variations. We propose a novel context augmentation with a large language model and a ranking strategy that incorporates contextual information from the dialogue states of a goal-oriented conversational AI and its tasks. Our method ranks (1) n-best ASR hypotheses by their lexical and semantic similarity with context and (2) context by phonetic correspondence with ASR hypotheses. Evaluated in home improvement and cooking domains with real-world users, our method improves recall and F1 of correction by 34% and 16%, respectively, while maintaining precision and false positive rate. Users rated 0.8-1 point (out of 5) higher when our correction method worked properly, with no decrease due to false positives.
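To illustrate the two-way ranking the abstract describes, here is a minimal sketch, not the authors' implementation: it scores n-best ASR hypotheses against context items from the dialogue state using a lexical similarity plus a caller-supplied semantic similarity, and ranks context items by a rough phonetic correspondence with the hypotheses. The `semantic_sim` callback, the equal weighting, and the consonant-skeleton phonetic proxy are all hypothetical stand-ins for whatever the paper actually uses.

```python
# Illustrative sketch only; standard library, no claim to match the paper's components.
from difflib import SequenceMatcher
from typing import Callable, Sequence


def lexical_sim(a: str, b: str) -> float:
    """Character-level similarity as a simple stand-in for lexical overlap."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def phonetic_sim(a: str, b: str) -> float:
    """Crude phonetic proxy: compare consonant skeletons of the two strings."""
    def skeleton(s: str) -> str:
        return "".join(c for c in s.lower() if c.isalpha() and c not in "aeiou")
    return SequenceMatcher(None, skeleton(a), skeleton(b)).ratio()


def rank_hypotheses(
    hypotheses: Sequence[str],
    context_items: Sequence[str],
    semantic_sim: Callable[[str, str], float],
    weights: tuple = (0.5, 0.5),
) -> list:
    """Rank ASR hypotheses by their best lexical + semantic match to any context item."""
    w_lex, w_sem = weights
    scored = []
    for hyp in hypotheses:
        best = max(
            w_lex * lexical_sim(hyp, ctx) + w_sem * semantic_sim(hyp, ctx)
            for ctx in context_items
        )
        scored.append((hyp, best))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


def rank_context(hypotheses: Sequence[str], context_items: Sequence[str]) -> list:
    """Rank context items by phonetic correspondence with the ASR hypotheses."""
    scored = [
        (ctx, max(phonetic_sim(hyp, ctx) for hyp in hypotheses))
        for ctx in context_items
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

In this sketch the semantic component is deliberately pluggable (e.g., cosine similarity over sentence embeddings), so the lexical/phonetic logic stays dependency-free while the semantic model can vary by deployment.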