Factored recurrent neural network language model in TED lecture transcription

Youzheng Wu, Hitoshi Yamamoto, Xugang Lu, Shigeki Matsuda, Chiori Hori, Hideki Kashioka


Abstract
In this study, we extend recurrent neural network-based language models (RNNLMs) by explicitly integrating morphological and syntactic factors (or features). We call the resulting model a factored RNNLM, which is expected to enhance standard RNNLMs. A number of experiments carried out on top of a state-of-the-art LVCSR system show that the factored RNNLM improves performance as measured by perplexity and word error rate. On the IWSLT TED test data sets, absolute word error rate reductions over the RNNLM and n-gram LM are 0.4∼0.8 points.
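The abstract does not give implementation details, but the core idea can be sketched as follows: instead of feeding the RNN only the previous word, a factored RNNLM combines the word embedding with embeddings of its factors (e.g. a part-of-speech tag or morphological class) at each time step. The sketch below (Python/NumPy) is an assumed, minimal Elman-style illustration of that input factoring; all names, sizes, and the specific combination (summing embeddings) are hypothetical, not the authors' exact architecture.

```python
import numpy as np

# Hypothetical sketch of a factored RNNLM forward pass: the input at
# each step mixes the word embedding with a factor (e.g. POS) embedding.
rng = np.random.default_rng(0)

vocab_size, pos_size, hidden_size = 10, 4, 8

# Embedding weights for words and for one factor (POS tags).
W_word = rng.standard_normal((vocab_size, hidden_size)) * 0.1
W_pos = rng.standard_normal((pos_size, hidden_size)) * 0.1
# Recurrent and output weights, as in a plain Elman-style RNNLM.
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1
W_out = rng.standard_normal((hidden_size, vocab_size)) * 0.1

def step(word_id, pos_id, h_prev):
    """One time step: combine word and factor embeddings, update the
    hidden state, and return a next-word probability distribution."""
    x = W_word[word_id] + W_pos[pos_id]   # factored input vector
    h = np.tanh(x + h_prev @ W_hh)        # recurrent hidden update
    logits = h @ W_out
    p = np.exp(logits - logits.max())     # softmax over the vocabulary
    return h, p / p.sum()

# Run over a toy sequence of (word id, factor id) pairs.
h = np.zeros(hidden_size)
for w, t in [(1, 0), (3, 2), (7, 1)]:
    h, probs = step(w, t, h)

print(probs.shape)
```

A plain RNNLM would use only `W_word[word_id]` as the input; the factored variant lets morphological and syntactic information shape the hidden state directly.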
Anthology ID:
2012.iwslt-papers.11
Volume:
Proceedings of the 9th International Workshop on Spoken Language Translation: Papers
Month:
December 6-7
Year:
2012
Address:
Hong Kong
Venue:
IWSLT
SIG:
SIGSLT
Pages:
222–228
URL:
https://aclanthology.org/2012.iwslt-papers.11
Cite (ACL):
Youzheng Wu, Hitoshi Yamamoto, Xugang Lu, Shigeki Matsuda, Chiori Hori, and Hideki Kashioka. 2012. Factored recurrent neural network language model in TED lecture transcription. In Proceedings of the 9th International Workshop on Spoken Language Translation: Papers, pages 222–228, Hong Kong.
Cite (Informal):
Factored recurrent neural network language model in TED lecture transcription (Wu et al., IWSLT 2012)
PDF:
https://aclanthology.org/2012.iwslt-papers.11.pdf