Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling

Zhuo Gong, Daisuke Saito, Sheng Li, Hisashi Kawai, Nobuaki Minematsu


Abstract
Language models (LMs) have played a crucial role in automatic speech recognition (ASR) by enhancing the performance of end-to-end (E2E) ASR systems. There are two categories of approaches: finding better ways to integrate LMs into ASR systems, and adapting LMs to the task domain. This article starts with a reflection on interpolation-based methods for combining E2E ASR scores with LM scores. We then focus on LM augmentation approaches based on the noisy channel model, motivated by insights obtained from that reflection. Our experiments show that an E2E ASR model based on the encoder-decoder architecture can be enhanced by pre-training its decoder with text data. This implies that the decoder of an E2E model can be treated as an LM, and it reveals the possibility of enhancing the E2E model without an external LM. Based on these ideas, we propose an implicit language model canceling method and further discuss the decoder part of an E2E ASR model. Experimental results on the TED-LIUM2 dataset show that our approach achieves a 3.4% relative WER reduction over the baseline system, and additional analytic experiments provide concrete experimental support for our assumptions.
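For context, the interpolation-based integration referred to above is commonly formulated as shallow fusion, and the implicit (internal) LM canceling idea as a subtraction of the decoder's own prior. The following is a minimal sketch in standard notation, with \lambda and \mu as illustrative interpolation weights, not necessarily the exact scoring rule used in the paper:

\hat{W} = \arg\max_{W} \left[ \log p_{\mathrm{E2E}}(W \mid X) + \lambda \log p_{\mathrm{LM}}(W) \right] \quad \text{(shallow fusion)}

\hat{W} = \arg\max_{W} \left[ \log p_{\mathrm{E2E}}(W \mid X) - \mu \log p_{\mathrm{ILM}}(W) + \lambda \log p_{\mathrm{LM}}(W) \right] \quad \text{(internal LM canceling)}

The noisy channel view underlying both is Bayes' rule, p(W \mid X) \propto p(X \mid W)\, p(W): an encoder-decoder E2E model implicitly learns a prior p_{\mathrm{ILM}}(W) over token sequences, which is what makes it plausible to pre-train the decoder on text or to cancel its internal score in favor of an external LM.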
Anthology ID:
2022.cai-1.6
Volume:
Proceedings of the Second Workshop on When Creative AI Meets Conversational AI
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Xianchao Wu, Peiying Ruan, Sheng Li, Yi Dong
Venue:
CAI
Publisher:
Association for Computational Linguistics
Pages:
42–47
URL:
https://aclanthology.org/2022.cai-1.6
Cite (ACL):
Zhuo Gong, Daisuke Saito, Sheng Li, Hisashi Kawai, and Nobuaki Minematsu. 2022. Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling. In Proceedings of the Second Workshop on When Creative AI Meets Conversational AI, pages 42–47, Gyeongju, Republic of Korea. Association for Computational Linguistics.
Cite (Informal):
Can We Train a Language Model Inside an End-to-End ASR Model? - Investigating Effective Implicit Language Modeling (Gong et al., CAI 2022)
PDF:
https://aclanthology.org/2022.cai-1.6.pdf