Can Visual Context Improve Automatic Speech Recognition for an Embodied Agent?

Pradip Pramanick, Chayan Sarkar


Abstract
The usage of automatic speech recognition (ASR) systems is becoming omnipresent, ranging from personal assistants and chatbots to home and industrial automation systems. Modern robots are also equipped with ASR capabilities for interacting with humans, as speech is the most natural interaction modality. However, ASR in robots faces additional challenges compared to a personal assistant. Being an embodied agent, a robot must recognize the physical entities around it and therefore reliably recognize speech containing descriptions of such entities. However, current ASR systems are often unable to do so due to limitations in ASR training, such as generic datasets and open-vocabulary modeling. Moreover, adverse conditions during inference, such as noise, accented speech, and far-field speech, make the transcription inaccurate. In this work, we present a method to incorporate a robot's visual information into an ASR system and improve the recognition of a spoken utterance containing a visible entity. Specifically, we propose a new decoder biasing technique to incorporate the visual context while ensuring that the ASR output does not degrade for incorrect context. We achieve a 59% relative reduction in word error rate (WER) compared to an unmodified ASR system.
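The abstract describes biasing the ASR decoder toward entities the robot can currently see. As a rough illustration only, and not the paper's actual decoder-biasing method, the sketch below shows a generic shallow-fusion-style contextual biasing step for beam-search decoding, where hypotheses that extend a visible-entity phrase receive a score bonus. All names here (`visual_entities`, `bias_weight`, `biased_beam_step`) are hypothetical.

```python
# Hypothetical sketch of contextual (shallow-fusion-style) biasing during
# beam search. This is NOT the paper's method, only a generic illustration
# of boosting hypotheses that extend a phrase from a visual-context list.

def entity_prefixes(visual_entities):
    """All word-level prefixes of the visible-entity phrases,
    e.g. 'coffee mug' -> ('coffee',), ('coffee', 'mug')."""
    prefixes = set()
    for phrase in visual_entities:
        words = phrase.lower().split()
        for i in range(1, len(words) + 1):
            prefixes.add(tuple(words[:i]))
    return prefixes

def biased_beam_step(beams, step_logprobs, prefixes, bias_weight=1.5, beam_size=4):
    """Expand each hypothesis with this step's word log-probabilities and add
    a bonus when the extended hypothesis ends with a visible-entity prefix.
    `beams` is a list of (words, score) pairs; `step_logprobs` maps word -> log-prob."""
    candidates = []
    for words, score in beams:
        for word, logp in step_logprobs.items():
            new_words = words + [word]
            bonus = 0.0
            # Reward suffixes that match a visible-entity prefix; longer matches win.
            for k in range(1, min(len(new_words), 4) + 1):
                if tuple(new_words[-k:]) in prefixes:
                    bonus = bias_weight * k
            candidates.append((new_words, score + logp + bonus))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_size]

# Example: bias decoding toward objects the robot currently sees.
prefixes = entity_prefixes(["coffee mug", "red ball"])
beams = [(["bring", "the"], -2.1)]
step = {"coffee": -3.0, "copy": -2.8, "cup": -2.9}
print(biased_beam_step(beams, step, prefixes))
```

Note that a bonus-only scheme like this one can still hurt accuracy when the visual context is wrong; the paper's stated contribution includes ensuring the output does not degrade for incorrect context, which this sketch does not model.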
Anthology ID:
2022.emnlp-main.127
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
1946–1957
URL:
https://aclanthology.org/2022.emnlp-main.127
DOI:
10.18653/v1/2022.emnlp-main.127
Cite (ACL):
Pradip Pramanick and Chayan Sarkar. 2022. Can Visual Context Improve Automatic Speech Recognition for an Embodied Agent?. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 1946–1957, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Can Visual Context Improve Automatic Speech Recognition for an Embodied Agent? (Pramanick & Sarkar, EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.127.pdf