Masked Audio Text Encoders are Effective Multi-Modal Rescorers

Jinglun Cai, Monica Sunkara, Xilai Li, Anshu Bhatia, Xiao Pan, Sravan Bodapati


Abstract
Masked Language Models (MLMs) have proven to be effective for second-pass rescoring in Automatic Speech Recognition (ASR) systems. In this work, we propose the Masked Audio Text Encoder (MATE), a multi-modal masked language model rescorer that incorporates acoustic representations into the input space of the MLM. We adopt contrastive learning to align the modalities effectively by learning shared representations. We show that using a multi-modal rescorer is beneficial for domain generalization of the ASR system when target-domain data is unavailable. MATE reduces word error rate (WER) by 4%-16% on in-domain datasets and by 3%-7% on out-of-domain datasets over the text-only baseline. Additionally, with a very limited amount of training data (0.8 hours), MATE achieves a WER reduction of 8%-23% over the first-pass baseline.
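The abstract names two mechanisms: projecting acoustic representations into the MLM's input space, and contrastively aligning the audio and text modalities. The sketch below is a minimal, hypothetical PyTorch illustration of those two ideas only; the model sizes, the prepend-audio fusion, and the InfoNCE-style loss are assumptions for illustration, not the paper's implementation.

# Hypothetical sketch (not the authors' code): an MLM-style rescorer that
# consumes both acoustic features and hypothesis tokens, plus a contrastive
# loss that pulls paired audio/text representations together.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiModalMLMRescorer(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, audio_dim=512, layers=4, heads=8):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)
        # Project frame-level acoustic features into the MLM embedding space.
        self.audio_proj = nn.Linear(audio_dim, hidden)
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.mlm_head = nn.Linear(hidden, vocab_size)

    def forward(self, token_ids, audio_feats):
        # token_ids: (B, T_text), audio_feats: (B, T_audio, audio_dim)
        text = self.tok_emb(token_ids)
        audio = self.audio_proj(audio_feats)
        # Prepend audio embeddings so self-attention can condition the text
        # positions on acoustics (fusion choice assumed for this sketch).
        fused = self.encoder(torch.cat([audio, text], dim=1))
        text_states = fused[:, audio.size(1):]          # keep text positions only
        return self.mlm_head(text_states), audio.mean(1), text_states.mean(1)


def contrastive_alignment_loss(audio_vec, text_vec, temperature=0.07):
    # InfoNCE-style loss: matching audio/text pairs are positives (diagonal),
    # all other pairs in the batch are negatives.
    a = F.normalize(audio_vec, dim=-1)
    t = F.normalize(text_vec, dim=-1)
    logits = a @ t.T / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))


if __name__ == "__main__":
    model = MultiModalMLMRescorer()
    tokens = torch.randint(0, 30522, (2, 12))
    audio = torch.randn(2, 50, 512)
    logits, a_vec, t_vec = model(tokens, audio)
    # Toy objective: MLM-style token prediction plus the alignment term.
    mlm_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens.reshape(-1))
    loss = mlm_loss + contrastive_alignment_loss(a_vec, t_vec)
    loss.backward()
    print(loss.item())

At rescoring time, a model of this shape would score each n-best hypothesis against the utterance's acoustic features, with the masked-token pseudo-log-likelihood serving as the second-pass score; the exact scoring recipe here is assumed, not taken from the paper.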
Anthology ID:
2023.findings-acl.682
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10718–10730
URL:
https://aclanthology.org/2023.findings-acl.682
DOI:
10.18653/v1/2023.findings-acl.682
Cite (ACL):
Jinglun Cai, Monica Sunkara, Xilai Li, Anshu Bhatia, Xiao Pan, and Sravan Bodapati. 2023. Masked Audio Text Encoders are Effective Multi-Modal Rescorers. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10718–10730, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Masked Audio Text Encoders are Effective Multi-Modal Rescorers (Cai et al., Findings 2023)
PDF:
https://aclanthology.org/2023.findings-acl.682.pdf
Video:
https://aclanthology.org/2023.findings-acl.682.mp4