Joint Audio/Text Training for Transformer Rescorer of Streaming Speech Recognition
Suyoun Kim | Ke Li | Lucas Kabela | Ron Huang | Jiedan Zhu | Ozlem Kalinli | Duc Le
Findings of the Association for Computational Linguistics: EMNLP 2022
Recently, there has been increasing interest in two-pass streaming end-to-end speech recognition (ASR), which adds a 2nd-pass rescoring model on top of a conventional 1st-pass streaming ASR model to improve recognition accuracy while keeping latency low. One of the latest 2nd-pass rescoring models, the Transformer Rescorer, takes the n-best initial outputs and audio embeddings from the 1st-pass model, and then chooses the best output by re-scoring the n-best initial outputs. However, training this Transformer Rescorer requires expensive paired audio-text training data because the model uses audio embeddings as input. In this work, we present our Joint Audio/Text training method for the Transformer Rescorer, which leverages unpaired text-only data that is much cheaper than paired audio-text data. We evaluate the Transformer Rescorer with our Joint Audio/Text training on the LibriSpeech dataset as well as our large-scale in-house dataset, and show that our training method significantly improves word error rate (WER) over the standard Transformer Rescorer without requiring any extra model parameters or latency.
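For intuition, here is a minimal sketch, not the authors' code, of how a 2nd-pass Transformer rescorer with a text-only training path might look. The class name, layer sizes, and the zero-memory fallback for unpaired text batches are illustrative assumptions rather than details from the paper:

```python
# Hypothetical sketch of a 2nd-pass Transformer rescorer: hypothesis
# tokens attend to 1st-pass audio embeddings via cross-attention. When
# audio_emb is None (unpaired text-only batch), a zero "memory" frame is
# substituted so the same decoder weights can still train on text alone.
import torch
import torch.nn as nn


class TransformerRescorer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, hyp_tokens, audio_emb=None):
        # hyp_tokens: (batch, seq) token ids of one n-best hypothesis
        # audio_emb:  (batch, frames, d_model) 1st-pass encoder output,
        #             or None for a text-only batch
        x = self.embed(hyp_tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(
            x.size(1)
        ).to(x.device)
        if audio_emb is None:
            # Text-only mode: cross-attend to a single all-zero frame,
            # effectively training the decoder as a language model.
            audio_emb = torch.zeros(x.size(0), 1, x.size(2), device=x.device)
        h = self.decoder(x, memory=audio_emb, tgt_mask=causal)
        return self.lm_head(h)  # per-token logits used for re-scoring
```

In such a setup, each n-best hypothesis would be scored by summing the log-probabilities of its tokens, and the highest-scoring hypothesis returned as the final output.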